Benjamin Todd
Apr 4 · 11 tweets
AGI by 2027?

I spent weeks writing this new in-depth primer on the best arguments for and against.

Starting with the case for...🧵
1. Company leaders think AGI is 2-5 years away.

They’re probably too optimistic, but shouldn't be totally ignored – they have the most visibility into the next generation of models.
2. The four recent drivers of progress don't run into bottlenecks until at least 2028.

And with investment in compute and algorithms continuing to increase, new drivers are likely to be discovered.
3. Benchmark extrapolation suggests in 2028 we'll see systems with superhuman coding and reasoning that can autonomously complete multi-week tasks.

All the major benchmarks show the same trend.
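To make the extrapolation concrete, here is a minimal sketch. The starting horizon (~1 hour in early 2025) and the two doubling times (~7 months for the longer-run trend, ~4 months for the more recent one) are illustrative assumptions in the spirit of METR-style task-length measurements, not figures from the thread:

```python
# Exponential extrapolation of the "task horizon" a frontier system can complete.
# Illustrative assumptions (not figures from the thread): ~1 hour horizon in early 2025,
# and a doubling time of either ~7 months (longer-run trend) or ~4 months (recent trend).

HORIZON_HOURS_2025 = 1.0   # assumed task horizon at the start of 2025
WORK_WEEK_HOURS = 40.0

def horizon(months: float, doubling_months: float) -> float:
    """Projected task horizon in hours after `months`, given a doubling time."""
    return HORIZON_HOURS_2025 * 2 ** (months / doubling_months)

for doubling in (7.0, 4.0):
    hours_2028 = horizon(36, doubling)  # 36 months from start of 2025 to start of 2028
    weeks = hours_2028 / WORK_WEEK_HOURS
    print(f"Doubling every {doubling:.0f} months -> ~{hours_2028:.0f} h (~{weeks:.1f} work weeks) by 2028")
```

On these assumptions the slower trend gives roughly a working week by 2028 and the faster recent trend gives a multi-week horizon, which is why so much hinges on whether the recent pace holds.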
4. Expert forecasts have consistently moved earlier.

AI and forecasting experts now place significant probability on AGI-level capabilities pre-2030.

I remember when 2045 was considered optimistic.

80000hours.org/2025/03/when-d…
5. By 2030, AI training compute will far surpass estimates for the human brain.

If algorithms approach even a fraction of human learning efficiency, we'd expect human-level capabilities in at least some domains.

cold-takes.com/forecasting-tr…
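A rough back-of-envelope version of this argument, with illustrative numbers that are my assumptions rather than figures from the thread (frontier training runs around 10^26 FLOP today, training compute growing roughly 4x per year, and lifetime human-brain learning estimates on the order of 10^24 FLOP):

```python
# Back-of-envelope sketch of the compute argument (all numbers are illustrative assumptions).

FRONTIER_FLOP_2025 = 1e26    # assumed size of a current frontier training run
ANNUAL_GROWTH = 4.0          # assumed yearly growth factor in training compute
BRAIN_LIFETIME_FLOP = 1e24   # assumed estimate of compute used over a human lifetime of learning

for year in range(2025, 2031):
    flop = FRONTIER_FLOP_2025 * ANNUAL_GROWTH ** (year - 2025)
    ratio = flop / BRAIN_LIFETIME_FLOP
    print(f"{year}: ~{flop:.1e} FLOP, ~{ratio:,.0f}x the lifetime-brain estimate")
```

By 2030 the largest runs on these assumptions exceed the lifetime-brain estimate by about five orders of magnitude, so algorithms only a small fraction as learning-efficient as humans would still have plenty of headroom in at least some domains.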
6. While real-world deployment faces many hurdles, AI is already very useful in virtual and verifiable domains:

• Software engineering & startups
• Scientific research
• AI development itself

These alone could drive massive economic impact and accelerate AI progress.

The strongest counterargument?

Current AI methods might plateau on ill-defined, contextual, long-horizon tasks—which happens to be most knowledge work.

Without continuous breakthroughs, profit margins fall and investment dries up.

Basically, the debate comes down to whether the recent trend of progress on long-horizon tasks continues.
Other meaningful arguments against:

• GPT-5/6 disappoint due to diminishing data quality
• Answering questions → novel insights could be a huge gap
• Persistent perceptual limitations hamper computer use (Moravec's paradox)
• Benchmarks mislead due to data contamination & difficulty capturing real-world tasks
• Economic crisis, Taiwan conflict, or regulatory crackdowns delay progress
• Unknown bottlenecks (planning fallacy)
My take: It's remarkably difficult to rule out AGI before 2030.

Not saying it's certain—just that it could happen with only an extension of current trends.

Full analysis here:
80000hours.org/agi/guide/when…

More from @ben_j_todd

Feb 3
1/ Most AI risk discussion focuses on sudden takeover by super capable systems.

But when I imagine the future, I see a gradual erosion of human influence in an economy of trillions of AIs.

So I'm glad to see a new paper about those risks🧵
2/ We could soon be in a world with millions of AI agents, growing 10x per year. After 10 years, there are 1,000 AIs per person, thinking 100x faster (arithmetic check below).

In that world, competitive pressure means firms are run more & more by AI.
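A quick sanity check on that growth arithmetic, with the starting population, growth rate, and human population as illustrative assumptions (the tweet only specifies "millions" and "10x per year"):

```python
# Sanity check on the "1000 AIs per person within 10 years" arithmetic.
# Assumptions (illustrative): 1 million agents today, 10x growth per year, ~8 billion humans.

agents = 1e6                       # assumed starting number of AI agents
target = 1000 * 8e9                # 1000 agents per person for ~8 billion people

years = 0
while agents < target:
    agents *= 10                   # 10x growth per year
    years += 1

print(f"Reaches 1000 AIs per person after ~{years} years ({agents:.0e} agents)")
```

At 10x per year from a million agents, the 1,000-per-person threshold is crossed after about 7 years, so the 10-year figure follows comfortably from the stated growth rate.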
3/ A military without AI defences could be disabled almost immediately by cyberattacks and the like.

So humans are gradually taken out of the loop on more and more decisions.

What happens?
Jan 21
People are saying you shouldn't use ChatGPT due to statistics like:

* A ChatGPT search emits 10x as much as a Google search
* It uses 200 Olympic swimming pools' worth of water per day
* Training an AI model emits as much as 200 plane flights from NY to SF

These are bad reasons to not use GPT...🧵
1/ First, we need to compare ChatGPT to other online activities.

It turns out its energy & water consumption is tiny compared to things like streaming video.

Rather than quit GPT, you should quit Netflix & Zoom.
2/ Second, our online activities use a relatively tiny amount of energy – the virtual world is far more energy efficient than the real one.

If you want to cut your individual emissions, focusing on flights, insulation, electric cars, buying fewer things etc. will achieve 100x more.
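To see the shape of this comparison, here is a rough per-day sketch. All the per-activity figures are assumptions based on commonly cited estimates (roughly 3 Wh per ChatGPT query, 0.3 Wh per Google search, around 80 Wh per hour of HD streaming), not numbers from the thread:

```python
# Rough daily energy comparison of the kind made in this thread.
# All per-activity figures are illustrative assumptions, not numbers from the thread.

WH_PER_CHATGPT_QUERY = 3.0      # assumed
WH_PER_GOOGLE_SEARCH = 0.3      # assumed
WH_PER_STREAMING_HOUR = 80.0    # assumed (device + network + data centre)

daily_use = {
    "20 ChatGPT queries": 20 * WH_PER_CHATGPT_QUERY,    # 60 Wh
    "20 Google searches": 20 * WH_PER_GOOGLE_SEARCH,    # 6 Wh
    "2 hours of streaming": 2 * WH_PER_STREAMING_HOUR,  # 160 Wh
}

for activity, wh in daily_use.items():
    print(f"{activity}: ~{wh:.0f} Wh/day")
```

On these assumptions, even a heavy day of ChatGPT use sits well below a couple of hours of streaming, and all of it is dwarfed by a single flight, which is the point of the comparison above.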
Dec 22, 2024
The AI safety community has grown rapidly since the ChatGPT wake-up, but available funding doesn’t seem to have kept pace.

What's more, there’s a more recent dynamic that’s created even better funding opportunities, which I witnessed in a recent grantmaking round.
1/ Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures.

But they’ve recently stopped funding several categories of work:

a. Republican think tanks
b. Post-alignment work like digital sentience
c. The rationality community
d. High school outreach
2/ They're also not fully funding:

e. Technical safety non-profits
f. Many non-US think tanks
g. Political campaigns (which foundations legally can't fund)
h. Nuclear security
i. Other organisations they've decided are below their funding bar
Dec 22, 2024
How can you personally prepare for AGI?

Well maybe we all die. Then all you can do is try to enjoy your remaining years.

But let’s suppose we don’t. How can you maximise your chances of surviving and flourishing in whatever happens after?

The best ideas I've heard so far: 🧵
1/ Seek out people who have some clue what's going on.

Imagine we're about to enter a period like COVID – life is upended, and every week there are confusing new developments. Except it lasts a decade. And things never return to normal.

In COVID, it was really helpful to follow people who were ahead of the curve and could reason under uncertainty. Find the same but for AI.
2/ Save as much money as you can.

AGI probably causes wages to increase initially, but eventually they collapse. Once AI models can deploy energy and other capital more efficiently to do useful things, there’s no reason to employ most humans any more.

You'll then need to live off whatever you've saved for the rest of your life.

The good news is you have one last chance to make bank in the upcoming boom.
Dec 1, 2024
Just returned to China after 8 years away (after visiting a lot 2008-2016). Here are some changes I saw in tier 1/2 cities 🇨🇳
1/ Much more politeness: people actually queue, there's less spitting, and I was only barged once or twice.

But Beijing still has doorless public bathrooms without soap.
2/ Many street vendors have been cleared out. Of the 30 clubs that used to exist in a tower block in Chengdu, only 1 survives. It's more similar to other rich countries.
Nov 29, 2024
10 points about AI in China (from my recent 2-week visit) 🇨🇳

And why calls for a Manhattan project for AI could be self-defeating.
1/ China's AI bottleneck isn't compute – it's government funding. Despite export controls, labs can access both legal NVIDIA A800s and black-market H100s. Cloud costs are similar to the West.

ft.com/content/10aacf…
2/ The real constraint is funding:

* Chinese VC is 20-40% of Western size
* Tencent/Alibaba profits ≈ 1/5 of Google/Microsoft
* Government hasn't made big AI allocations (yet)
