Lennart Heim
managing the flop | prev @RANDcorporation @GovAIOrg @EpochAIResearch
Apr 29, 2025 12 tweets 3 min read
China's AI models are closing the gap and will continue to improve. But focusing on model capabilities alone misses America's strategic compute advantage.

In my new commentary, I argue that the TOTAL compute advantage is what export controls preserve and—if leveraged correctly—provides the real edge. 1/
You can read it here: chinatalk.media/p/chinas-model…
Below is a brief summary of my main points. 2/
Apr 15, 2025 7 tweets 3 min read
Taiwan's exports of likely AI chips to Malaysia are surging (h/t @kakashiii111). HS84718 shows patterns consistent with AI chips.

With US export controls coming May 15th, this could be a rush before the deadline—or just processing in Malaysia before shipping elsewhere. 1/
Most of the value is in HS84718: "Other units of automatic data processing machines."
This doesn't necessarily indicate smuggling. Could be data centers being built for remote access (the chips don't need to be in China to be used from there) or just processing before onward shipment. 2/
Mar 11, 2025 16 tweets 4 min read
Huawei's next AI accelerator—the Ascend 910C—is entering production. It's China's best AI chip.
Thanks to backdoor sourcing, we could easily see 1M H100-equiv this year.
Here’s what we know about its performance and strategic implications. Spoiler: selectively competitive. 1/
The 910C is basically two co-packaged Ascend 910Bs, China's best current-gen accelerator. But there's a twist: most (potentially all) of these chips weren't produced domestically—they were illicitly procured from TSMC despite export controls. 2/
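A rough way to sanity-check a figure like "1M H100-equiv": multiply chip count by a per-chip performance ratio. Both numbers below are illustrative assumptions, not confirmed specs.

```python
# Back-of-envelope: how many H100-equivalents do N accelerators represent?
# The chip count and performance ratio are hypothetical, for illustration only.

def h100_equivalents(num_chips: int, perf_ratio: float) -> float:
    """perf_ratio: one chip's effective throughput as a fraction of one H100's."""
    return num_chips * perf_ratio

# Assumption: ~1.25M 910Cs at ~0.8 H100 each would land at ~1M H100-equivalents.
print(h100_equivalents(1_250_000, 0.8))  # 1000000.0
```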
Jan 14, 2025 16 tweets 4 min read
In a new perspective, I explain and analyze the AI Diffusion Framework—what it does, how it works, its rationale, why it was needed, why China can't easily fill the void, and some thoughts on model weight controls.
1/ Full paper here: rand.org/content/dam/ra…
This table gives the best overview: the framework applies rules based on company HQ location and export destination—covering both advanced AI chips and certain model weights. 2/
Dec 2, 2024 15 tweets 3 min read
Yearly export control update just dropped, restricting high-bandwidth memory (HBM). HBM is critical for advanced AI accelerators, especially for deployment workloads with long context windows.
The goal? Stop the PRC from equipping their AI accelerators with HBM. 1/
Quick HBM primer: HBM is the most advanced high-performance memory. It’s made by stacking DRAM dies. Only SK Hynix, Micron, and Samsung currently produce it at scale. All current leading data center AI chips use HBM. 2/
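For intuition on why HBM matters: per-stack bandwidth follows directly from interface width and per-pin data rate. The HBM3-style figures below are public ballpark numbers, used here purely for illustration.

```python
# HBM stack bandwidth = interface width (bits) * per-pin data rate (Gbit/s) / 8.

def stack_bandwidth_gbps(width_bits: int, pin_rate_gbit_s: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return width_bits * pin_rate_gbit_s / 8

hbm3 = stack_bandwidth_gbps(1024, 6.4)  # one 1024-bit HBM3-class stack
print(hbm3)       # 819.2 GB/s per stack
print(hbm3 * 5)   # a hypothetical 5-stack accelerator: 4096.0 GB/s
```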
Jan 2, 2024 34 tweets 4 min read
Some personal musings about AI Governance and Policy until I run out ... First, AI training compute is still doubling every 6 months.
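Doubling every 6 months compounds quickly; a minimal sketch of the implied growth factor:

```python
# Doubling every `doubling_months` implies a growth factor of 2**(months / doubling_months).

def compute_growth(months: float, doubling_months: float = 6.0) -> float:
    """Multiplicative growth in training compute after `months` months."""
    return 2 ** (months / doubling_months)

print(compute_growth(12))  # 4.0  (one year => 4x)
print(compute_growth(36))  # 64.0 (three years => 64x)
```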
Oct 17, 2023 8 tweets 2 min read
The US just published its revised export controls on AI chips, moving away from the 'chip-to-chip' interconnect bandwidth threshold to a threshold on computational performance (OP/s), including its derived performance density (OP/s per mm²).
1/ As I've highlighted before, there were loopholes in the initial controls. At first glance, these new measures seem to address those. The prior 'escape/scaling path' allowed chipmakers to keep scaling computational performance while staying under the interconnect threshold. 2/
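A hedged sketch of how a two-pronged threshold (raw performance plus performance density) might be checked. The threshold values and the example chip below are placeholders, not the actual BIS numbers.

```python
# The revised controls key on computational performance (OP/s) and
# performance density (OP/s per mm^2). Illustrative logic only.

def performance_density(total_tops: float, die_area_mm2: float) -> float:
    """Performance density in TOPS per mm^2."""
    return total_tops / die_area_mm2

def is_controlled(total_tops: float, die_area_mm2: float,
                  tops_threshold: float, density_threshold: float) -> bool:
    # A chip is caught if it exceeds either the raw-performance threshold
    # or the density threshold (placeholder rule, not the real regulation).
    return (total_tops >= tops_threshold or
            performance_density(total_tops, die_area_mm2) >= density_threshold)

# Hypothetical chip: 4000 TOPS on an 800 mm^2 die => density 5 TOPS/mm^2.
print(is_controlled(4000, 800, tops_threshold=4800, density_threshold=4.0))  # True
```

This captures why a density criterion closes the scaling path: shrinking the die while holding performance constant raises density and keeps the chip in scope.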
Dec 27, 2022 7 tweets 3 min read
Currently doing my yearly review. Such a fun and useful thing to spend your time on between the holidays. Can be done in a couple of hours (or up to days if you like), alone or with friends. Some tips + resources that I like:
🧵⬇️ First, pick what works for you. I use parts of all the resources linked below and created my own template. Roughly divided into (1) Personal review and planning, and (2) Career review and planning.
Also, I'm a huge fan of themes (seasonal though):
Oct 8, 2022 7 tweets 3 min read
The 🇺🇸US just announced new tech export restrictions against China 🇨🇳. We're talking about billions of $ in trade.
It affects all types of integrated circuits (ICs) and semiconductor manufacturing equipment (SME). The motivation explicitly includes AI and supercomputing.
🧵⬇️ It includes chips fabbed outside the US (looking at you, Taiwan's TSMC).
- No 5-year-old NVIDIA V100s,
- Extended SME ban: anything below 16nm,
- No more than 600GB/s of interconnect bandwidth for ICs,
- ...
This will put China years behind the cutting edge.
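The bandwidth criterion above can be sketched as a simple check (illustrative only; the real rule combines several parameters, not bandwidth alone):

```python
# Sketch of the 2022 interconnect-bandwidth criterion from the thread:
# ICs above 600 GB/s of chip-to-chip bandwidth fall under the restriction.

BANDWIDTH_CAP_GB_S = 600

def exceeds_interconnect_cap(bandwidth_gb_s: float) -> bool:
    return bandwidth_gb_s > BANDWIDTH_CAP_GB_S

print(exceeds_interconnect_cap(900))  # True  (e.g., H100-class NVLink)
print(exceeds_interconnect_cap(400))  # False
```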
Apr 11, 2022 15 tweets 6 min read
Our blog post "Compute Funds and Pre-Trained Models" is out. We argue that the National AI Research Resource (+ other compute funds) should provide structured access to models, not just data and compute.
It's on the @GovAI_ blog: governance.ai/post/compute-f…
🧵⬇️ Authored by @Manderljung, @TShevlane, and me.

We've seen that private AI labs are producing an increasing share of high-compute SOTA AI models — leading many to worry about a growing compute divide between academia and the private sector.
2/
Mar 30, 2022 11 tweets 6 min read
🇨🇳 Part of China's Roadmap for Big Model is a *Large Scale Intelligent Computing System (LSICS)*!
"LSICS is to an intelligent society what water conservancy and transportation are to an agricultural society"🧐
Here are the highlights from Section 4: 🧵⬇️ arxiv.org/pdf/2203.14101…
1/ They discuss the main differences between LSICS and traditional high-performance computing:
- specialized hardware (such as GPU, TPU, or NPU)
- with reduced precision (FP16 for training; INT8 for inference)
- local high-performance storage
- high throughput interconnect
2/
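The reduced-precision point is easy to quantify: halving bytes per parameter halves weight memory. A minimal sketch (weights only, ignoring activations and optimizer state):

```python
# Memory per parameter by numeric format: FP32 = 4 bytes, FP16 = 2, INT8 = 1.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def model_memory_gb(num_params: float, dtype: str) -> float:
    """Weight memory in GB for a model of `num_params` parameters."""
    return num_params * BYTES_PER_PARAM[dtype] / 1e9

# A hypothetical 10B-parameter model:
print(model_memory_gb(10e9, "fp32"))  # 40.0 GB
print(model_memory_gb(10e9, "fp16"))  # 20.0 GB (training at reduced precision)
print(model_memory_gb(10e9, "int8"))  # 10.0 GB (quantized inference)
```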
Feb 15, 2022 13 tweets 12 min read
**ML training compute has been doubling every 6 months since 2010!**
Our preprint "Compute Trends Across Three Eras of Machine Learning" is out. arxiv.org/abs/2202.05924
🧵 Thread below ↓
1/ 1) We have curated a dataset of 123 milestone ML models.
2) We frame the trends in compute in three eras.
3) We discuss various interpretations of this trend.
Work by @Jsevillamol @ansonwhho @tamaybes @MariusHobbhahn and Pablo Villalobos.
+ more who helped to curate the dataset!
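The headline doubling time can be recovered from any two compute measurements; the data points below are hypothetical, not from the paper's dataset:

```python
import math

# Doubling time implied by two measurements (t in years, c in training FLOP).
def doubling_time_months(t1: float, c1: float, t2: float, c2: float) -> float:
    growth_per_year = (c2 / c1) ** (1 / (t2 - t1))
    return 12 * math.log(2) / math.log(growth_per_year)

# Hypothetical: compute grows 16x over 2 years => doubles every ~6 months.
print(doubling_time_months(2010.0, 1e18, 2012.0, 16e18))  # ~6.0
```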