Epoch AI
Apr 6 · 6 tweets
Compute may be the most important input to AI. So who owns the world’s AI compute?

Introducing our new AI Chip Owners explorer, showing our analysis of how leading AI chips are distributed among hyperscalers and other major players, broken down by chip type over time.
To estimate global compute ownership, we build on our previous estimates of overall AI chip sales. We then use earnings commentary from chipmakers and hyperscalers, as well as media reports and industry researcher estimates, to allocate chips across owners.
We estimate that over 60% of global AI compute is owned by the top US hyperscalers, led by Google with the equivalent of roughly 5 million Nvidia H100 GPUs!

Unlike the other hyperscalers, which rely primarily on Nvidia, Google’s fleet is dominated by its custom TPU chips.
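The "H100-equivalents" unit above normalizes a mixed fleet by per-chip throughput. Here is a minimal sketch of how such a conversion might work; the H100 and A100 figures are commonly cited dense FP16/BF16 specs, while the TPU spec and all fleet counts are illustrative placeholders, not Epoch's actual estimates.

```python
# Sketch: normalizing a mixed chip fleet into "H100-equivalents" by
# peak dense FP16/BF16 throughput. Counts below are made up.

H100_TFLOPS = 989  # commonly cited H100 dense FP16 spec

fleet = {
    # chip: (peak dense TFLOP/s, count) -- hypothetical numbers
    "TPU v5p": (459, 100_000),
    "H100":    (989, 50_000),
    "A100":    (312, 80_000),
}

h100_equivalents = sum(
    tflops / H100_TFLOPS * count for tflops, count in fleet.values()
)
print(f"{h100_equivalents:,.0f} H100-equivalents")
```

Real accounting would also weigh memory bandwidth and interconnect, but a single FLOP/s ratio is the simplest version of the idea.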
Chinese companies collectively own just over 5% of global AI compute — less than any single top US hyperscaler, and decreasing over time due to export controls.

This excludes smuggled chips, which reporting suggests are significant but unlikely to meaningfully close the gap.
Notably, Nvidia’s official exports to China were largely paused in early 2025 due to export controls, slowing China’s compute purchases. Huawei has now overtaken Nvidia as the leading source of AI computing power in China, at least in terms of aggregate compute specs on paper.
We plan to expand our coverage of AI chip owners and global compute over time. See detailed breakdowns by company, chip family, and model, plus a full methodology, in our AI Chip Owners explorer!

epoch.ai/data/ai-chip-o…


More from @EpochAIResearch

Feb 26
Developing more powerful AI isn’t just about scaling compute. It’s also about improving algorithms and data quality, which let you build better models with the same compute.

We call this “AI software progress” — here’s everything you need to know about it: 🧵
There are many ways to improve algorithms and data. For example, you could change model architectures, build better RL environments, and improve training recipes.

But how do you concretize what makes some AI software better than others?
One way is to say that better AI software reduces the compute needed to reach the same capability.

For example, imagine a curve relating a measure of capabilities to log(training compute). After making an algorithmic innovation, the curve shifts to the left, saving compute.
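A toy model of that leftward shift, assuming an illustrative capability curve in log(compute) and a 3× efficiency gain (both made up for the sketch):

```python
import math

# Sketch: algorithmic progress as a leftward shift of the
# capability-vs-log(compute) curve. The curve shape and the 3x
# efficiency multiplier are illustrative, not fitted values.

def capability(compute, efficiency=1.0):
    # Toy monotone curve in log10(effective compute).
    return math.log10(efficiency * compute)

before = capability(1e24)                       # baseline recipe
after = capability(1e24 / 3, efficiency=3.0)    # better algorithms, 1/3 compute
print(before, after)  # same capability reached with 3x less compute
```

The key move is treating an innovation as a multiplier on "effective compute", which in log space is exactly a horizontal shift of the curve.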
Feb 26
In 2024, @EpochAIResearch estimated the rate of software progress in language models. We found that training compute efficiency was improving at ~3× per year.

But this estimate was for pre-training, and is now outdated — so @ansonwhho took a new look at the numbers. 🧵
Almost all existing estimates suggest very fast progress, on the order of several times per year, though the uncertainty intervals are really wide.

Still, it’s very possible that training efficiency improves much faster than 3× per year. Even 10× per year seems possible!
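To see what these rates imply, here is a back-of-the-envelope sketch of the compute needed to reach a fixed capability after a few years of progress; the 3× and 10× rates come from the thread, while the baseline run size is a hypothetical placeholder.

```python
# Sketch: what a 3x/year vs. 10x/year training-efficiency trend implies
# for compute needed to hit a fixed capability. Baseline is made up.

def compute_needed(baseline_flop, rate_per_year, years):
    # Efficiency compounds: each year the same capability costs
    # 1/rate as much training compute.
    return baseline_flop / rate_per_year ** years

baseline = 1e25  # hypothetical training run today, in FLOP
for rate in (3, 10):
    print(rate, compute_needed(baseline, rate, years=2))
```

At 3×/year a run shrinks ~9× over two years; at 10×/year it shrinks 100×, which is why the uncertainty between these rates matters so much.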
The numbers are very uncertain for two reasons.

1. They’re based on limited data, because we lack long-run time series with both model performance and training compute, which we need to derive estimates of software progress.
Feb 26
AI training compute efficiency has improved extremely fast: each year, you need several times less training compute to reach the same capability.

But AI architectures/algorithms haven’t changed *that* much in recent years.

So where do these efficiency improvements come from? 🧵
One explanation is that these improvements came not from better algorithms, but better data.

For example, training has shifted from uncurated web data to heavily processed (and often synthetic) data. AI companies are also spending billions on data, like RL environments.
Another explanation is that measured efficiency gains came from innovations that depend on training compute scale.

Here’s the idea: most existing estimates assume that innovations are scale-independent. This means shifting scaling curves in parallel to the left…
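The distinction can be sketched numerically; both functional forms below are illustrative stand-ins, not fitted models.

```python
import math

# Sketch: scale-independent vs. scale-dependent efficiency gains.
# Scale-independent: a constant multiplier r on effective compute at
# every scale, i.e. a parallel leftward shift in log-compute.
# Scale-dependent: the multiplier itself grows with compute.

def effective_compute_independent(c, r=3.0):
    return r * c  # same multiplier at every scale

def effective_compute_dependent(c, alpha=0.1):
    # Hypothetical form: multiplier r(c) = c**alpha grows with scale.
    return c ** alpha * c

# In log space the scale-independent gain is a constant shift:
shift_small = math.log10(effective_compute_independent(1e20)) - 20
shift_large = math.log10(effective_compute_independent(1e24)) - 24
print(shift_small, shift_large)  # both ~log10(3)

# The scale-dependent gain shifts larger runs further:
dshift_small = math.log10(effective_compute_dependent(1e20)) - 20
dshift_large = math.log10(effective_compute_dependent(1e24)) - 24
print(dshift_small, dshift_large)  # grows with scale (~2.0 vs ~2.4)
```

If gains are scale-dependent, efficiency measured at frontier scale overstates what the same innovations deliver at small scale.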
Jan 28
Was serving GPT-5 profitable?

According to @Jsevillamol, @exponentialview’s Hannah Petrovic, and @ansonwhho, it depends. Gross margins were around 45%, making inference look profitable.

But after accounting for the cost of operations, OpenAI likely incurred a loss. 🧵
Even the gross profits from running models weren’t enough to recoup R&D costs.

Gross profits running GPT-5 were less than OpenAI's R&D costs in the four months before launch. And the true R&D cost was likely higher than that.
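The margin arithmetic described above can be sketched as follows; the ~45% gross margin comes from the thread, while the revenue, operating-cost, and R&D figures are hypothetical placeholders, not OpenAI's actual numbers.

```python
# Sketch: gross margin vs. full-cost profitability. All dollar figures
# are hypothetical; only the 45% gross margin is from the thread.

revenue = 2_000_000_000          # inference revenue over model lifetime ($)
gross_margin = 0.45              # thread's estimate for GPT-5 inference
gross_profit = revenue * gross_margin

operating_costs = 1_200_000_000  # salaries, sales, overhead (hypothetical)
r_and_d = 1_500_000_000          # model R&D before launch (hypothetical)

operating_profit = gross_profit - operating_costs  # negative: loss
lifecycle_profit = gross_profit - r_and_d          # also negative
print(gross_profit, operating_profit, lifecycle_profit)
```

The structure is what matters: a healthy gross margin on inference can coexist with a loss once fixed operating and R&D costs are charged against a short model lifecycle.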
The core problem: AI R&D is expensive, and model lifecycles are too short to get enough revenue.

So even if it’s profitable to run models, the full lifecycle is likely loss-making — as long as GPT-5 is representative of other models.
Jan 8
Global AI compute capacity now totals over 15 million H100-equivalents.

Our new AI Chip Sales data explorer tracks where this compute comes from across Nvidia, Google, Amazon, AMD, and Huawei, making it the most comprehensive public dataset available.
Nvidia’s B300 GPU now accounts for the majority of its revenue from AI chips, while H100s make up under 10%.

We estimate chip-level spending using earnings reports, company disclosures, and analyst and media coverage.
These chips present massive resource demands.

Even before the power overheads of servers and data centers, this many chips would draw over 10 GW of power, around twice the average power consumption of New York City.
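The power figure follows from simple per-chip arithmetic; the 700 W value is the commonly cited H100 SXM TDP, used here as a stand-in for the whole mixed fleet.

```python
# Sketch: back-of-the-envelope power draw for the installed base,
# before server and data-center overheads.

chips = 15_000_000     # H100-equivalents, from the thread
watts_per_chip = 700   # H100 SXM TDP, commonly cited spec
total_gw = chips * watts_per_chip / 1e9
print(f"{total_gw:.1f} GW")  # ~10.5 GW
```

With typical data-center overheads (cooling, networking, PUE > 1), the facility-level draw would be meaningfully higher still.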
Dec 12, 2025
GPT-5.2 scores 152 on the Epoch Capabilities Index (ECI), our tool for aggregating benchmark scores. This puts it second only to Gemini 3 Pro.

🧵 with individual scores.
GPT-5.2 ranks first or second on most of the benchmarks we run ourselves, including a top score on FrontierMath Tiers 1–3 and our new chess puzzles benchmark. The exception is SimpleQA Verified, where it scores notably worse than even previous GPT-5 series models.
Our AIME variant, OTIS Mock AIME 2024-2025, is nearly saturated. There remains a single problem no model has solved. The diagram is given to the model in the Asymptote vector graphics language.
