Benjamin Todd
Apr 4, 2025
AGI by 2027?

I spent weeks writing this new in-depth primer on the best arguments for and against.

Starting with the case for... 🧵
1. Company leaders think AGI is 2-5 years away.

They're probably too optimistic, but they shouldn't be dismissed entirely – they have the most visibility into the next generation of models.
2. The four recent drivers of progress don't run into bottlenecks until at least 2028.

And with investment in compute and algorithms continuing to increase, new drivers are likely to be discovered.
3. Benchmark extrapolation suggests in 2028 we'll see systems with superhuman coding and reasoning that can autonomously complete multi-week tasks.

All the major benchmarks show the same trend.
4. Expert forecasts have consistently moved earlier.

AI and forecasting experts now place significant probability on AGI-level capabilities pre-2030.

I remember when 2045 was considered optimistic.

80000hours.org/2025/03/when-d…
5. By 2030, AI training compute will far surpass estimates for the human brain.

If algorithms approach even a fraction of human learning efficiency, we'd expect human-level capabilities in at least some domains.

cold-takes.com/forecasting-tr…
6. While real-world deployment faces many hurdles, AI is already very useful in virtual and verifiable domains:

• Software engineering & startups
• Scientific research
• AI development itself

These alone could drive massive economic impact and accelerate AI progress.
The strongest counterargument?

Current AI methods might plateau on ill-defined, contextual, long-horizon tasks—which happens to be most knowledge work.

Without continuous breakthroughs, profit margins fall and investment dries up.

Basically, the debate comes down to whether this trend will continue.
Other meaningful arguments against:

• GPT-5/6 disappoint due to diminishing data quality
• Answering questions → novel insights could be a huge gap
• Persistent perceptual limitations restrict computer use (Moravec's paradox)
• Benchmarks mislead due to data contamination & difficulty capturing real-world tasks
• Economic crisis, Taiwan conflict, or regulatory crackdowns delay progress
• Unknown bottlenecks (planning fallacy)
My take: It's remarkably difficult to rule out AGI before 2030.

Not saying it's certain—just that it could happen with only an extension of current trends.

Full analysis here:
80000hours.org/agi/guide/when…
