Deedy
Nov 26, 2024 · 9 tweets
A single, fairly unknown Dutch company makes what may be the most expensive and complex non-military device ever built (~$200M each). It rests on 40 years of physics, and its monopoly underpins all of today's AI advancement.

Here's the story of ASML, the company powering Moore's Law...

1/9
ASML's extreme ultraviolet (EUV) machines are engineering marvels.

They shoot molten tin droplets 50,000 times per second with a 25kW laser, turning them into a plasma many times hotter than the sun's surface to create 13.5nm UV light, so energetic that even air absorbs it.
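
A quick back-of-envelope on why 13.5nm light is "so energetic" (my own arithmetic, not from the thread): photon energy is E = hc/λ, which works out to ~92 eV, roughly 40x a visible-light photon and well into the range that air molecules absorb.

```python
# Back-of-envelope: photon energy of 13.5nm EUV light vs. visible light.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """E = h*c/lambda, converted to electronvolts."""
    return h * c / wavelength_m / eV

euv = photon_energy_ev(13.5e-9)   # EUV
vis = photon_energy_ev(550e-9)    # green visible light
print(f"EUV photon: {euv:.0f} eV, visible photon: {vis:.1f} eV, ratio ~{euv/vis:.0f}x")
# -> EUV photon: 92 eV, visible photon: 2.3 eV, ratio ~41x
```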

2/9
Each $200M machine contains mirrors that are the smoothest objects humans have ever created.

They're built from alternating layers of molybdenum and silicon, each just a few atoms thick. If you scaled one to the size of Germany, its largest imperfection would be 1mm high.
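
Running that scaling claim backwards (a sketch with assumed sizes, not figures from the thread: Germany ~850 km across, a mirror ~30 cm):

```python
# Translate "scaled to Germany, biggest bump is 1mm" back to mirror scale.
germany_m = 850e3     # assumed extent of Germany, meters
bump_m    = 1e-3      # the 1mm imperfection at Germany scale
mirror_m  = 0.30      # assumed mirror diameter, meters

flatness_ratio = bump_m / germany_m          # ~1.2e-9 (dimensionless)
bump_on_mirror = flatness_ratio * mirror_m   # peak imperfection on the real mirror

print(f"Relative flatness: {flatness_ratio:.1e}")
print(f"Largest bump on a 30cm mirror: {bump_on_mirror*1e9:.2f} nm")
# -> ~0.35 nm, roughly the size of a single atom
```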

3/9
That light reflects off the mirrors onto 300mm silicon wafers moving at ~1 m/s, with positioning precision finer than the width of a SINGLE SILICON ATOM (~0.2nm).

That's like hitting a target in SF from NYC to within the width of a human hair.

4/9
TSMC's 4nm process for NVIDIA H100 needs ~15 EUV layers (+80 DUV layers).

Each layer must align with the others to within nanometers. One machine processes ~100 wafers/hr, which works out to about $150K worth of chips every hour.

No other technique can match that combination of quality, throughput, and cost.
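
Working those throughput figures through (using only the numbers above):

```python
# Back-of-envelope from the figures in this tweet.
wafers_per_hour   = 100
chip_value_per_hr = 150_000   # dollars of eventual chip value
euv_layers        = 15        # EUV passes per finished wafer

value_per_pass = chip_value_per_hr / wafers_per_hour
print(f"~${value_per_pass:,.0f} of chip value per wafer pass")

# A single machine running 24/7 for a year:
hours_per_year = 24 * 365
print(f"~${chip_value_per_hr * hours_per_year / 1e9:.1f}B of chips per machine-year")
# -> ~$1,500 per pass; ~$1.3B of chips per machine-year
```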

5/9
Why monopoly?

The supplier network:
— Zeiss (optics)
— Trumpf (lasers)
— VDL (frames)

40 years of co-development, 40,000 patents, 700+ suppliers. ASML even owns 24.9% of Zeiss's semiconductor division.

Replicating all of this would take decades and $100B+.

6/9
The complexity is astounding.

Each machine ships in 40 containers and takes 4 months to install. The supply chain spans 700+ companies. 100K+ parts per machine, 40K patents protecting it.

One missing component = global semiconductor disruption.

7/9
Only three companies can run cutting-edge EUV:
— TSMC (which fabs Nvidia's GPUs)
— Samsung
— Intel

ASML machines are the only way to make chips dense enough for modern AI. Each H100 has 80B transistors. The next gen will need >100B.

Impossible without EUV.
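
For a sense of the density involved, assuming the commonly cited ~814 mm² H100 die size (not stated in the thread):

```python
# Transistor density, assuming the commonly cited ~814 mm^2 H100 die.
transistors = 80e9
die_mm2 = 814                      # assumed die area, mm^2

density = transistors / die_mm2    # transistors per mm^2
print(f"~{density/1e6:.0f} million transistors per mm^2")
print(f"~{density/1e6:.0f} transistors per square micron")  # 1 mm^2 = 1e6 um^2
# -> ~98M/mm^2; the next generation's >100B budget pushes this even higher.
```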

8/9
Rich Sutton's "The Bitter Lesson" argues that general methods which leverage computation, and therefore Moore's Law, are the most effective for advancing AI research.

In the iceberg of AI technology, LLMs sit at the visible tip; ASML lies in the murky depths.

It has kept Moore's Law alive.

9/9

More from @deedydas

Aug 31
This new DeepMind research shows just how broken vector search is.

It turns out some docs in your index are theoretically incapable of being retrieved by vector search, given the embedding's dimension count.

Plain old BM25 from 1994 outperforms it on recall.
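
For reference, BM25 is simple enough to fit in a few lines; a minimal sketch of the 1994-era scoring function (a generic implementation, not the paper's code):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc (a list of tokens) against a query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [d.split() for d in ["tin droplet laser plasma",
                            "vector search embedding recall",
                            "bm25 lexical search recall"]]
print(bm25_scores("bm25 recall".split(), docs))  # last doc scores highest
```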

1/4
As a search nerd of more than a decade, this result gives me a lot of joy.

Haters will say that LIMIT, the dataset the authors created, is synthetic and unrealistic, but this matches my own experience building search systems at Google and Glean.

Source: alphaxiv.org/pdf/2508.21038

2/4
Vector search was popularized as an approachable drop-in option as OpenAI embeddings took off, but it has clear limitations in production settings.

Even setting aside this result showing it consistently misses certain docs, it:
– doesn't search for concepts well
– often retrieves similar but unrelated results
– doesn't account for non-content signals of similarity like recency and popularity (see the sketch below)
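
A common production answer is to blend lexical, vector, and non-content signals into one score; a hand-wavy sketch (every weight and field name here is made up):

```python
import math, time

def blended_score(doc, cosine_sim, bm25, now=None):
    """Toy blended ranker: vector similarity + lexical match + recency + popularity.
    All weights and doc fields are illustrative, not from any real system;
    in practice you'd normalize each signal to a comparable range first."""
    now = now or time.time()
    age_days = (now - doc["created_at"]) / 86400
    recency = math.exp(-age_days / 30)        # decays with ~1-month time constant
    popularity = math.log1p(doc["clicks"])    # dampen heavy-tailed click counts
    return 0.5 * cosine_sim + 0.3 * bm25 + 0.1 * recency + 0.1 * popularity

doc = {"created_at": time.time() - 5 * 86400, "clicks": 120}  # 5 days old, 120 clicks
print(blended_score(doc, cosine_sim=0.82, bm25=0.4))
```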

3/4
Aug 10
I'm using GPT5 Pro to find me the best stocks and startup investments.

Asked it to use modern portfolio theory and size investments.
—Top Privates [+9.7%]: Databricks, Stripe, Anthropic, SpaceX
—Top Publics [+14.2%]: Nvidia, TSMC, Microsoft, Meta

Just put $1000 into the stocks!
Prompt: "Check all public / private stock market companies and tell me what I should invest in from first principles reasoning. You have $1000.

Please do deep research and present rationale for each investment. Each one should have a target price and expected value. Use advanced math for trading. Draw research from authoritative sources like research and unbiased pundits. Size my bets properly and use everything you know about portfolio theory. Corroborate each decision with a list of predictions about those companies.

Your goal is to maximize expected value. Make minimum 5 investments. Write it in a table."
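
For the curious, the "portfolio theory" sizing the prompt asks for reduces to something like unconstrained mean-variance weights, w ∝ Σ⁻¹μ; a toy sketch with made-up numbers (not investment advice, and not what GPT-5 Pro actually did):

```python
import numpy as np

# Made-up expected annual returns and covariance for three hypothetical assets.
mu = np.array([0.15, 0.10, 0.08])           # expected returns
cov = np.array([[0.09, 0.02, 0.01],
                [0.02, 0.04, 0.01],
                [0.01, 0.01, 0.02]])        # return covariance

raw = np.linalg.solve(cov, mu)              # w ∝ Σ^{-1} μ (max-Sharpe direction)
weights = raw / raw.sum()                   # normalize to a fully invested portfolio
print({f"asset_{i}": round(w, 2) for i, w in enumerate(weights)})
# Ignores real-world constraints (no-shorting, position limits, estimation error).
```
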
This follows my previous experiment on Polymarket, which seemingly had ~2-4x the expected returns!
Aug 8
Ridiculous that OpenAI claimed 74.9% on SWE-Bench just to prove they were above Opus 4.1’s 74.5%…

By running it on 477 problems instead of the full 500.

Their system card only says 74% too.
And yes, I know they've always reported on the 477-problem denominator, but that's NOT "SWE-Bench Verified". It's an entirely different metric, "OpenAI's subset of SWE-Bench Verified", and that number can't be compared.
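
The arithmetic of why the denominator matters, assuming the 74.9% was computed over the 477-problem subset:

```python
# If 74.9% is over 477 problems, what does it become over the full 500?
solved = round(0.749 * 477)        # ~357 problems solved
full_bench = solved / 500
print(f"{solved} solved -> {full_bench:.1%} on the full 500")
# -> 357 solved -> 71.4% on the full 500, below Opus 4.1's 74.5%
```
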
Jul 31
🚨Anthropic is at $4.5B annualized revenue and is the fastest-growing software company in history!

They just overtook OpenAI to become the market leader in LLM API spend.

We just dropped this and more in our mid-year Enterprise AI report:

1/7
Enterprise LLM API spend has exploded from $3.5B to $8.4B by mid-year, and that number is already stale!

2/7
Enterprises and startups are choosing closed source models.

Only 11% of enterprises show high open source model usage.

3/7
Jul 31
Microsoft just leaked their official compensation bands for engineers.

We often forget that you can be a stable, high-performing engineer with great work-life balance, be a BigTech lifer, and comfortably retire with a net worth of ~$15M!
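
The ~$15M figure is plausible with compounding; a back-of-envelope where the savings rate and return are my assumptions, not Microsoft data:

```python
# Future value of saving a fixed amount each year at a fixed return.
def future_value(annual_savings, rate, years):
    return annual_savings * ((1 + rate) ** years - 1) / rate

# Assumed: a senior BigTech engineer banking $250K/yr for 25 years at 7%.
print(f"${future_value(250_000, 0.07, 25)/1e6:.1f}M")
# -> ~$15.8M
```
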
On the top chart:
Blue is base, purple is stock, green is bonus
Jul 22
The best open-source AI model just dropped a detailed report on how it was trained, a rare resource for students given that frontier labs no longer publish such details!

Kimi K2's estimated total training cost is ~$20-30M, roughly in line with its pricing: $0.6/M input and $2.5/M output tokens.

10 highlights:
1. Generating tokens by rewriting high-quality tokens with LLMs in pre-training
2. Mining 3,000+ MCPs and using LLM-generated personas to improve agentic tool calling
3. 10,000 parallel Kubernetes sandboxes to solve GitHub issues
4. New scaling laws for sparsity in MoE models
5. RL with verifiable rewards (RLVR) for math, coding, and safety, with a self-critique model and a long-reasoning penalty that yields direct, decisive answers (see the sketch after this list)
6. Training recipe of 4k sequences, then 32k, then 128k with YaRN
7. High temperature during initial RL training to promote exploration
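
A minimal guess at what a verifiable reward with a long-reasoning penalty (highlight 5) might look like; this is an illustrative sketch, not Kimi's actual recipe:

```python
# Toy RLVR-style reward: verifiable correctness minus a length penalty.
def verifiable_reward(response: str, gold_answer: str,
                      penalty_per_token: float = 0.001) -> float:
    """Reward 1.0 if the final answer matches the verifier's gold answer,
    minus a small per-token penalty to discourage rambling chains of thought.
    (Illustrative only; the real recipe is more involved.)"""
    # Assume answers are given on the last line, e.g. "Answer: 42".
    final_line = response.strip().splitlines()[-1]
    correct = gold_answer.strip() in final_line
    length_penalty = penalty_per_token * len(response.split())
    return (1.0 if correct else 0.0) - length_penalty

print(verifiable_reward("Let's compute step by step...\nAnswer: 42", "42"))
# -> close to 1.0: correct, with a tiny deduction for length
```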
