A single, little-known Dutch company makes perhaps the most expensive and complex non-military device in the world ($200M), built on 40 years of physics, and holds a monopoly that underpins all of today's AI advancement.
Here's the story of ASML, the company powering Moore's Law.
1/9
ASML's extreme ultraviolet (EUV) machines are engineering marvels.
They hit molten tin droplets 50,000 times a second with a 25 kW laser, turning each droplet into plasma hotter than the sun's surface to create 13.5 nm EUV light, so energetic it's absorbed by air itself.
2/9
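Quick sanity math on those numbers, a sketch using standard physical constants plus the tweet's own figures:

```python
# Energy of a 13.5 nm photon, and energy per laser pulse, from the tweet's numbers.
h, c, eV = 6.626e-34, 2.998e8, 1.602e-19   # Planck, speed of light, joules per eV

photon = h * c / 13.5e-9                   # ~1.5e-17 J per photon
print(f"{photon / eV:.0f} eV")             # ~92 eV, far above the ~12-15 eV needed to
                                           # ionize air molecules, hence "absorbed by air"

pulse = 25e3 / 50e3                        # 25 kW spread over 50,000 shots per second
print(f"{pulse * 1e3:.0f} mJ per droplet") # ~500 mJ
```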
Each $200M machine contains mirrors that are the smoothest objects humans have ever created.
They're made of alternating molybdenum and silicon layers, each just a few atoms thick. If you scaled one to the size of Germany, its largest imperfection would be 1mm high.
3/9
This light bounces off those mirrors onto moving 300mm silicon wafers (~1 m/s) with precision better than the width of a SINGLE SILICON ATOM (0.2nm).
That's like hitting a target in SF from NYC with the accuracy of a human hair.
4/9
TSMC's 4nm process for NVIDIA H100 needs ~15 EUV layers (+80 DUV layers).
Each layer must align within nanometers. One machine processes ~100 wafers/hr, about $150K worth of chips per hour.
No other technique comes close on quality + throughput + cost simultaneously.
5/9
40 years of co-development, 40,000 patents, 700+ suppliers. ASML even owns 24.9% of Zeiss's semiconductor division.
Replication would take decades + $100B+.
6/9
The complexity is astounding.
Each machine ships in 40 containers and takes 4 months to install. The supply chain spans 700+ companies. 100K+ parts per machine, 40K patents protecting it.
One missing component = global semiconductor disruption.
7/9
Only three companies can run cutting-edge EUV:
— TSMC (which makes Nvidia's GPUs)
— Samsung
— Intel.
ASML machines are the only way to make chips dense enough for modern AI. Each H100 has 80B transistors. The next gen will need >100B.
Impossible without EUV.
8/9
Rich Sutton's "The Bitter Lesson" is that general methods that leverage computation and Moore's Law are the most effective for advancing AI research.
In the iceberg of AI technology, while LLMs are at the top, ASML is at the murky depths.
It has kept Moore's Law alive.
9/9
This new DeepMind research shows just how broken vector search is.
Turns out some combinations of docs in your index are theoretically impossible for vector search to return, for any query, once the embedding dimension is fixed (toy demo below).
Plain old BM25 from 1994 outperforms it on recall.
1/4
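A toy numpy demo of the core idea, my own construction rather than the paper's LIMIT setup: fix a small embedding dimension and some top-k result sets become unreachable for any query vector.

```python
import itertools
import numpy as np

# With d-dimensional embeddings, only some top-k document sets are achievable.
# Here: 6 random unit vectors in d=2; sweep many random queries and record
# which top-2 sets ever occur under dot-product scoring.
rng = np.random.default_rng(0)
d, n_docs, k = 2, 6, 2
docs = rng.normal(size=(n_docs, d))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

achievable = set()
for _ in range(100_000):
    q = rng.normal(size=d)
    achievable.add(tuple(sorted(np.argsort(docs @ q)[-k:])))

all_sets = set(itertools.combinations(range(n_docs), k))
print(f"achievable top-{k} sets: {len(achievable)} of {len(all_sets)}")
print("unreachable for any query:", sorted(all_sets - achievable))
```

In 2-d, only angularly adjacent pairs of docs can ever be a top-2, so 9 of the 15 pairs can never be retrieved together no matter how the query is phrased. Real embedding dims are larger, but so are real corpora.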
As someone who's been a search nerd for over a decade, this result gives me a lot of joy.
Haters will say that LIMIT, the dataset the authors created, is synthetic and unrealistic, but it matches my observations building search systems at Google and Glean.
Vector search was popularized as an approachable drop-in search solution as OpenAI embeddings took off, but it has clear limitations in production settings.
2/4
Even aside from this result showing it consistently misses certain docs, vector search:
– doesn't search for concepts well
– often retrieves similar but unrelated results
– doesn't account for non-content signals of similarity (recency, popularity); see the hybrid sketch below
3/4
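For that last point, a minimal sketch of the kind of hybrid scoring production systems bolt on: BM25 blended with a recency boost, using the rank-bm25 package. The corpus, half-life, and weight are made-up illustrations.

```python
import time
import numpy as np
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Lexical relevance (BM25) blended with a non-content signal (recency).
docs = [
    {"text": "postmortem for the search outage", "ts": time.time() - 1 * 86400},
    {"text": "search relevance tuning guide",    "ts": time.time() - 30 * 86400},
    {"text": "unrelated onboarding checklist",   "ts": time.time() - 3600},
]
bm25 = BM25Okapi([d["text"].split() for d in docs])

def hybrid_scores(query, half_life_days=7.0, w_recency=0.3):
    lexical = np.asarray(bm25.get_scores(query.split()))
    age_days = (time.time() - np.array([d["ts"] for d in docs])) / 86400
    recency = 0.5 ** (age_days / half_life_days)      # exponential time decay
    return (1 - w_recency) * lexical + w_recency * recency

print(hybrid_scores("search outage"))  # recency nudges fresher docs up the ranking
```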
I'm using GPT-5 Pro to find me the best stocks and startup investments.
Asked it to use modern portfolio theory to size the investments.
—Top Privates [+9.7%]: Databricks, Stripe, Anthropic, SpaceX
—Top Publics [+14.2%]: Nvidia, TSMC, Microsoft, Meta
Just put $1000 into the stocks!
Prompt: "Check all public / private stock market companies and tell me what I should invest in from first principles reasoning. You have $1000.
Please do deep research and present rationale for each investment. Each one should have a target price and expected value. Use advanced math for trading. Draw research from authoritative sources like research and unbiased pundits. Size my bets properly and use everything you know about portfolio theory. Corroborate each decision with a list of predictions about those companies.
Your goal is to maximize expected value. Make minimum 5 investments. Write it in a table."
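For reference, the portfolio-theory sizing the prompt asks for boils down to something like this mean-variance (tangency) sketch. The expected returns, volatilities, and correlation below are placeholder assumptions, not GPT-5's numbers.

```python
import numpy as np

# Tangency portfolio direction w ~ inverse(cov) @ mu, made long-only, scaled to $1000.
tickers = ["NVDA", "TSM", "MSFT", "META"]
mu    = np.array([0.25, 0.18, 0.12, 0.15])      # assumed expected annual returns
sigma = np.array([0.45, 0.35, 0.25, 0.40])      # assumed annual volatilities
corr  = np.full((4, 4), 0.5) + 0.5 * np.eye(4)  # assumed uniform 0.5 correlation
cov   = np.outer(sigma, sigma) * corr

w = np.linalg.solve(cov, mu)   # solve cov @ w = mu for the tangency direction
w = np.clip(w, 0, None)        # no shorting
w /= w.sum()                   # normalize weights to the budget
for t, wi in zip(tickers, w):
    print(f"{t}: ${1000 * wi:,.0f}")
```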
This follows my previous experiment on Polymarket, which seemingly had ~2-4x the expected returns!
And yes, I know they've always reported on the 477-task denominator, but that's NOT "SWE-bench Verified". That's an entirely different metric, "OpenAI's subset of SWE-bench Verified", and the two numbers can't be compared.
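Made-up numbers to show why the two denominators aren't comparable:

```python
# Same raw count of resolved tasks, two different denominators.
resolved = 300
print(f"{resolved / 477:.1%} on OpenAI's 477-task subset")      # 62.9%
print(f"{resolved / 500:.1%} on full SWE-bench Verified (500)") # 60.0%
```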
Microsoft's official compensation bands for engineers just leaked.
We often forget that you can be a stable high-performing engineer with great work-life balance, be a BigTech lifer and comfortably retire with a net worth of ~$15M!
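The ~$15M checks out back-of-envelope; a sketch where the savings rate, return, and career length are my assumptions, not from the leak:

```python
# Future value of saving a fixed amount each year at a fixed return.
savings, rate, years = 250_000, 0.07, 25   # assumed $/yr saved, real return, career
net_worth = savings * ((1 + rate) ** years - 1) / rate
print(f"${net_worth:,.0f}")                # ~$15.8M
```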
The best open-source AI model just dropped a detailed report on how it was trained, a rare resource for students given that no frontier lab is publishing theirs!
Kimi K2's estimated total training cost is ~$20-30M, roughly in line with its pricing: $0.60/M input tokens, $2.50/M output tokens.
10 highlights:
1. Generating tokens by rewriting high-quality tokens with LLMs in pre-training
2. Mining 3,000+ MCPs and using LLM-generated personas to improve agentic tool calling
3. 10,000 parallel Kubernetes sandboxes to solve GitHub issues
4. New scaling laws for sparsity in MoE models
5. RL with verifiable rewards (RLVR) for math, coding, and safety, with a self-critique model and a long-reasoning penalty, encouraging direct, decisive answers
6. Training recipe of 4k sequences, then 32k, then 128k with YaRN
7. High temperature during initial RL training to promote exploration
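Point 5 as a toy sketch: a verifiable reward that exact-matches a boxed math answer and docks overlong reasoning. The answer format, target length, and penalty weight are my assumptions, not Kimi's recipe.

```python
import re

def rlvr_reward(response: str, gold: str,
                target_len: int = 512, len_weight: float = 0.1) -> float:
    # Verifiable part: exact match against the gold answer inside \boxed{...}.
    m = re.search(r"\\boxed\{([^}]*)\}", response)
    correct = 1.0 if m and m.group(1).strip() == gold else 0.0
    # Long-reasoning penalty: dock reward for tokens beyond a target budget.
    overflow = max(0, len(response.split()) - target_len)
    return correct - len_weight * overflow / target_len

print(rlvr_reward(r"... therefore the answer is \boxed{42}", "42"))  # 1.0
```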