Tom Davidson
Aug 5
So, exactly how big will the intelligence explosion be?

…Ten years of AI progress in a year? In a month?

Our new paper tackles this question head-on.

I've researched AI takeoff speeds for many years. This is my best stab at an answer. 🧵
An intelligence explosion is where AI makes smarter AI, which quickly makes even smarter AI, etc.

Our scenario: AI fully replaces humans at improving AI “software” (algorithms and data).

(We conservatively assume that the amount of compute remains constant.)
Our model has three main parameters:
1) Initial speed-up in software progress from AI automating AI research
2) After the initial speed-up, does progress accelerate or decelerate?
3) How far can AI software improve before hitting fundamental limits on compute efficiency?
We estimate these three parameters through a mix of empirical evidence and guesswork.
One interesting insight:

Compressing >10 years of total AI progress into <1 year via software improvements is tough.

It would require >10 OOMs of efficiency improvements. (Effective compute for developing AI has recently risen by 10x/year.)

That's a 10 billion-fold increase!
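A quick sanity check of that arithmetic in Python, using only the figures above:

```python
# Sanity check: ~10x/year growth = 1 OOM/year,
# so >10 years of progress from software alone needs >10 OOMs.
years_of_progress = 10
ooms_per_year = 1                                   # 10x/year = 1 order of magnitude per year
total_ooms = years_of_progress * ooms_per_year
print(f"{total_ooms} OOMs = {10**total_ooms:,}x")   # -> 10 OOMs = 10,000,000,000x
```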
We put probability distributions over the params and run a Monte Carlo simulation.

The model spits out the probability of compressing multiple years of total AI progress into a few months from software improvements alone.
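For intuition, here's a minimal sketch of that kind of Monte Carlo. The distributions and the toy dynamics are placeholder assumptions of mine, not the paper's actual estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Placeholder distributions over the three parameters -- illustrative only,
# NOT the estimates used in the paper.
initial_speedup = rng.lognormal(np.log(5), 0.5, N)  # 1) initial speed-up from automation
accel = rng.lognormal(0.0, 0.3, N)                  # 2) >1 accelerates, <1 decelerates
ceiling = rng.uniform(5, 15, N)                     # 3) years-equivalent of progress
                                                    #    left before efficiency limits

def progress_in_one_year(v0, r, cap, dt=0.01):
    """Years of normal-pace AI progress achieved in one calendar year.

    The rate starts at v0 and multiplies by r for every year-equivalent
    of progress made, stopping at the efficiency ceiling."""
    done, t = 0.0, 0.0
    while t < 1.0 and done < cap:
        done += v0 * (r ** done) * dt  # feedback: progress so far boosts the rate
        t += dt
    return min(done, cap)

results = np.array([progress_in_one_year(v, r, c)
                    for v, r, c in zip(initial_speedup, accel, ceiling)])
print(f"P(>3 years of progress in 1 year) ~= {(results > 3).mean():.2f}")
```

The real model's parameter estimates and dynamics differ; the point is just the shape of the calculation.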
I roughly estimate that our model assigns a ~20% probability to takeoff being faster than in AI-2027 (@DKokotajlo @eli_lifland).

How scary would this be?

6 years of progress might take us from 30,000 expert-level AIs thinking at 30x human speed to 30 million superintelligent AIs thinking at 120x human speed (h/t @Ryan)
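Multiplying those figures together (a rough check that ignores the quality jump from expert-level to superintelligent):

```python
before = 30_000 * 30        # expert-level AIs x speed multiple
after = 30_000_000 * 120    # superintelligent AIs x speed multiple
print(after / before)       # -> 4000.0: a ~4,000x jump in raw cognitive output
```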

If that happens in <1 year, that's scarily fast, arriving just when we most need to proceed cautiously.
It goes without saying: the model is very basic and has many big limitations.

E.g. we assume AI progress will follow smooth trends.

But if there’s a big paradigm shift, AI progress could be much more dramatic. Alternatively, the current paradigm could fizzle out.
See the full paper for more:
forethought.org/research/how-q…
This builds on previous work by @daniel_271828 and me.

More from @TomDavidsonX

Apr 16
New paper on AI-enabled coups.

When AI gets smarter than humans, a few leaders could direct insane amounts of cognitive labor towards seizing power.

In the extreme, an autonomous AI military could be made secretly (or not so secretly!) loyal to one person.

What can be done? 🧵
Coup mechanism #1: Singularly loyal AI

Today, even dictators must rely on others to maintain power.

Sufficiently advanced AI removes this constraint.

A leader could replace humans with singularly loyal AIs and become unaccountable to the law, the public, or even former allies.
Consider a national security crisis where the military rapidly deploys AI-controlled robots that can fully replace human soldiers.

Meanwhile, the head of state pushes hard for the robots to prioritise their commands, despite nominal legal constraints—enabling a coup.
Mar 26
📄New paper!

Once we automate AI R&D, there could be an intelligence explosion, even without labs getting more hardware.

Empirical evidence suggests the positive feedback loop of AI improving AI could overcome diminishing returns.

See 🧵.
A software intelligence explosion is where AI improves in a runaway feedback loop: AI makes smarter AI, which makes even smarter AI, etc.

AND this happens just via better AI software – algorithms, data, post-training, etc. – without needing more hardware.

Could that happen?
Each doubling of AI software efficiency will take more effort than the last – that’s diminishing returns.

But how much more effort?

If it takes >2x the effort, there's no software intelligence explosion.
If it takes <2x the effort, there is one.
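To see why 2x is the threshold, here's a toy calculation. The numbers, and the assumption that research effort scales with current efficiency, are mine for illustration:

```python
def doubling_times(effort_ratio, n_doublings=20, first_cost=1.0):
    """Wall-clock time for each successive doubling of software efficiency.

    Toy assumption: research effort supplied per unit time scales with
    current efficiency (2**d after d doublings), while the effort *needed*
    for the next doubling grows by `effort_ratio` each time."""
    times = []
    for d in range(n_doublings):
        cost = first_cost * effort_ratio ** d  # effort the d-th doubling needs
        rate = 2.0 ** d                        # effort supplied per unit time
        times.append(cost / rate)
    return times

# effort_ratio < 2: doublings get faster and faster -> explosion in finite time
print(sum(doubling_times(1.5)))    # total time converges (~4.0 here)
# effort_ratio > 2: doublings get slower and slower -> progress fizzles
print(doubling_times(2.5)[:5])     # per-doubling times grow without bound
```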