Epoch AI
Jul 17
How fast has society been adopting AI?

Back in 2022, ChatGPT arguably became the fastest-growing consumer app ever, hitting 100M users in just 2 months. But the field of AI has transformed since then, and it’s time to take a new look at the numbers. 🧵
Historically, technology adoption took decades. For example, telephones took 60 years to reach 70% of US households. But tech diffuses faster and faster over time, and we should expect AI to continue this trend.
But even if we account for this trend, AI adoption seems incredibly fast. ~10% of the US population used ChatGPT weekly within just 2 years of its release, and ~30% in under 2.5 years.
It’s not just ChatGPT. OpenAI, Anthropic, and DeepMind revenues have collectively grown by >$10B since ChatGPT’s release. Furthermore, roughly 40% of US businesses now pay for AI tools, and on the current trajectory this would reach ~80% by 2028.
These numbers suggest that AI systems reached their current user numbers incredibly quickly, faster than almost any previous technology.
Besides the number of users, to understand the rate of AI diffusion we also need to look at how, and how much, AI systems are being used. Are people using frontier models more? Are they using them more intensively?
First, ~95% of ChatGPT users are on the free tier, with limited access to frontier AI. In contrast, paying users quickly adopt the best models: on OpenRouter, nearly all token usage of Claude models shifts to the latest models within 2 months of release.
But despite rapid total user growth, the fraction of paid ChatGPT users hasn’t grown. If anything, it’s been declining: paid users grew ~3.3x from Jan 2024 to Apr 2025, but total users increased ~4.5x. That’s evidence against increased usage intensity.
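A quick back-of-the-envelope check of that claim, using only the two growth multiples quoted above (a minimal sketch in Python):

```python
# Implied change in the paid share of ChatGPT users, Jan 2024 -> Apr 2025,
# using only the growth multiples quoted in the tweet above.
paid_growth = 3.3    # growth factor for paid users
total_growth = 4.5   # growth factor for total users

paid_share_change = paid_growth / total_growth
print(f"Paid share changed by a factor of ~{paid_share_change:.2f}")  # ~0.73, i.e. down roughly a quarter
```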
Survey data gives mixed evidence. A Pew survey found no changes in AI interaction frequency between 2022 and 2024, whereas a Gallup poll found frequent use nearly doubled from 11% to 19% (2023-2025), though mostly among white-collar workers.
On the other hand, token usage per user has likely grown a lot. Sam Altman reported a 50x increase in OpenAI’s token volume between Nov 2023 and Oct 2024. Adjusting for user growth, that could mean up to ~20x more tokens per user.
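For intuition, the adjustment works roughly as follows. This is a sketch: the weekly-active-user figures are assumptions broadly in line with OpenAI’s public statements, not numbers taken from the thread.

```python
# Rough tokens-per-user estimate. User counts below are assumptions
# (approximate weekly active users), not figures reported in the thread.
token_growth = 50        # reported growth in OpenAI token volume, Nov 2023 -> Oct 2024
users_start = 100e6      # assumed weekly active users, Nov 2023
users_end = 250e6        # assumed weekly active users, Oct 2024

user_growth = users_end / users_start                 # ~2.5x
tokens_per_user_growth = token_growth / user_growth   # ~20x
print(f"Implied tokens per user grew ~{tokens_per_user_growth:.0f}x")
```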
Taking everything into account, there have likely also been substantial increases in how much individuals use AI since ChatGPT’s release, though the evidence is somewhat tricky to interpret.
This week’s Gradient Update was coauthored by @ardenaberg and @ansonwhho. You can find the full post here: epochai.substack.com/p/after-the-ch…


More from @EpochAIResearch

Jul 17
We have graded the results of @OpenAI's evaluation on FrontierMath Tier 1–3 questions and found a performance of 27% (±3%). ChatGPT agent is a new model fine-tuned for agentic tasks, equipped with text/GUI browser tools and native terminal access. 🧵
This evaluation is not directly comparable to those on Epoch AI’s benchmarking hub, as it uses a different scaffold. First, we did not run the model ourselves—we only graded the outputs provided by OpenAI and don’t have access to their code to run the model. Second, ChatGPT agent has access to tools not available to other models we've assessed—most notably browser tools, which may have helped on questions related to recent research papers. Finally, the evaluation allowed up to 128K tokens per question, compared to our standard 100K; this difference is unlikely to have significantly affected results.
OpenAI has exclusive access to all FrontierMath problem statements and 237 of the 290 Tier 1–3 solutions. Epoch AI holds out the remaining solutions. We found no statistically significant performance difference between the held-out and non-held-out sets.
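For readers curious what such a comparison might look like, here is a hypothetical sketch of a two-proportion z-test. The 237/53 split comes from the tweet above, but the per-set accuracies are made-up placeholders, not Epoch AI's actual results or method.

```python
# Hypothetical two-proportion z-test: held-out vs. non-held-out accuracy.
# Set sizes come from the thread; the accuracies are illustrative placeholders.
from math import sqrt

n_shared, n_heldout = 237, 53          # problems with solutions shared with OpenAI vs. held out
acc_shared, acc_heldout = 0.28, 0.25   # placeholder accuracies

# Pooled proportion and standard error under the null of equal accuracy
successes = acc_shared * n_shared + acc_heldout * n_heldout
p_pool = successes / (n_shared + n_heldout)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_shared + 1 / n_heldout))

z = (acc_shared - acc_heldout) / se
print(f"z = {z:.2f}")  # |z| < 1.96 -> no significant difference at the 5% level
```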
Jul 9
The IMO is next week. What will it tell us about AI?

@GregHBurnham argues that an AI gold medal could be a non-event or could be an important breakthrough—it depends on whether the AI system exhibits creative problem-solving. How to tell the difference? Read on!
It will be tempting to focus on whether an AI system gets a gold medal. Formal proof systems like Google’s AlphaProof are quite close to this, and even general-purpose LLMs have a fighting chance. But that's not the outcome to pay the most attention to.
Rather, the big thing to watch for is qualitative: can AI systems solve problems that require a lot of creativity?
Jul 3
What would a Manhattan Project for AI look like?

@ansonwhho and @ardenaberg argue that an AI Manhattan Project, if it reached the scale of previous national projects, could result in a ~1000x compute scaleup by 2027.
A national AI project has become more and more of a possibility over the last year; one was the top recommendation of a US-China congressional commission.
At their peaks, previous national projects spent a fraction of GDP equivalent to $120B-$250B today. The authors find that such a budget could centralize most NVIDIA compute in the US.
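To see how a "fraction of GDP" translates into today's dollars, here is a minimal sketch; the peak-spending fractions and the GDP figure are rough assumptions for illustration, not numbers from the post.

```python
# Hedged illustration: converting peak project spending as a share of GDP
# into today's dollars. All inputs are approximate assumptions.
us_gdp_today = 29e12                                   # assumed current US GDP, ~$29T
peak_fraction_low, peak_fraction_high = 0.004, 0.008   # assumed ~0.4%-0.8% of GDP at peak

low = peak_fraction_low * us_gdp_today / 1e9
high = peak_fraction_high * us_gdp_today / 1e9
print(f"~${low:.0f}B to ~${high:.0f}B per year in today's terms")  # on the order of the $120B-$250B range above
```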
Jul 2
The state of large-scale AI models, July 2025:

- The number of large-scale model releases is growing rapidly (418 models over 10^23 FLOP)
- The UK has fallen behind, China has caught up (9 vs 151 models)
- There are far more of the largest models (33 models over 10^25 FLOP)
First, the number of large-scale model releases is growing rapidly.

In 2020, there were 4 models trained with more than 10^23 FLOP.
By the end of 2024, there were 327 such models in our dataset.
Most large-scale models — those trained on over 10^23 FLOP — are language models.

Of the 418 large-scale models in our data, 326 are language models, of which 86 are vision-language (like GPT-4).
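As a hypothetical illustration of how counts like these could be tabulated from a model database (the file name and column names are assumptions, not Epoch AI's actual dataset schema):

```python
# Hypothetical tabulation of models by training compute threshold.
# "ai_models.csv" and its columns are placeholders for illustration.
import pandas as pd

models = pd.read_csv("ai_models.csv")  # one row per model, with estimated training compute

large = models[models["training_compute_flop"] > 1e23]
frontier = models[models["training_compute_flop"] > 1e25]

print(len(large), "models over 1e23 FLOP")
print(len(frontier), "models over 1e25 FLOP")
print(large["domain"].value_counts())  # e.g. how many large-scale models are language models
```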
Jun 27
LLM context windows have grown, but can models really use all this context?

We find signs of recent, rapid progress in their ability to do so. Read on to learn more!
From Claude 2.0’s 100k tokens in 2023 to Llama 4 Maverick’s 10M earlier this year, there’s no doubt that context windows are getting longer. On a set of models from Artificial Analysis, we find that the longest available context windows have grown at about 30x/year.
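For reference, an annualized growth multiple between two endpoints is just a geometric rate. The sketch below uses the two models named above purely as illustrative endpoints; the ~30x/year figure itself comes from a fit across many models on Artificial Analysis, so the two numbers need not match.

```python
# Annualized growth multiple from two illustrative endpoints.
# The release gap is approximate; the thread's 30x/year comes from a broader fit.
start_window = 100_000       # Claude 2.0 context window, 2023
end_window = 10_000_000      # Llama 4 Maverick context window, 2025
years = 1.75                 # approximate time between the two releases

growth_per_year = (end_window / start_window) ** (1 / years)
print(f"~{growth_per_year:.0f}x per year from these two points alone")
```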
But how effectively can models use these longer windows? We measured the input lengths at which models score above 80% on two moderately challenging long-context benchmarks, Fiction.liveBench and MRCR (2-needle).
Jun 20
The bottlenecks to >10% GDP growth are weaker than expected, and the existing $500B investment in Stargate may be tiny relative to optimal AI investment

In this week’s Gradient Update, @APotlogea and @ansonwhho explain how their work on the economics of AI brought them to this view
Skepticism around explosive AI growth often hinges on "Baumol effects"—bottlenecks from human-dependent tasks. But to their surprise, the most comprehensive integrated assessment model of AI to date suggests these constraints are weaker than expected
Contrary to their expectations, even very partial AI automation—just 30% of tasks—can lead to growth rates above 20% under best-guess parameters. Achieving explosive growth (>30%) requires around 50-70% automation, still well below full automation
