Leopold Aschenbrenner
Mar 23, 2023 · 17 tweets
Best thing I’ve read on GPT-4’s capabilities. You should read it.

Impressive qualitative jump over ChatGPT. It’s definitely not just memorizing, it’s learning to think and reason.

Probably the most important thing happening in the world right now.

Thread with some highlights:
Step change over ChatGPT.

GPT-4 can comprehend complex ideas, reason abstractly, solve problems and learn from interactive feedback and experience, and exhibits common sense.
Common sense:
Text-only GPT-4 (version *not* trained on images, *only* text) learned what things look like! Not just memorization; it can draw a unicorn, manipulate drawings, etc.

Again, it learned to see… from just learning to predict text.
It’s visualizing a map!
Qualitatively much better output than ChatGPT on interdisciplinary tasks. Feels much less like generic regurgitation and more like what a creative human would produce.
GPT-4 is excellent at coding. Probably better than the average software engineer.

It’s using common sense, working interactively, and reasoning through nontrivial problems.
More coding.

Watching very closely how good these models get at deep learning research tasks… (when do feedback loops start?)
Math:

GPT-4 does better than Minerva (the state-of-the-art math-specific model).

Of the ones GPT-4 gets wrong, the large majority seem to be simple arithmetic errors…

(rather than getting approach/reasoning fundamentally wrong, which was more often the case with ChatGPT).
(They check this to make sure it’s not just memorization.)
I find GPT-4’s reasoning on novel math problems pretty impressive here. Qualitative jump from ChatGPT.

Next-word prediction (-> linear thinking) still constrains the model, though, so it can get off track.
More examples of impressive mathematical reasoning:
GPT-4 is getting the hang of Fermi estimates
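For context, a Fermi estimate means decomposing an unknown quantity into rough factors you can guess at. A minimal sketch of the classic example (piano tuners in Chicago), with all numbers being the usual order-of-magnitude guesses, not measured data:

```python
# Classic Fermi estimate: how many piano tuners are in Chicago?
# Every number below is a rough order-of-magnitude guess.
population = 3_000_000                  # people in Chicago, roughly
people_per_household = 2
households = population // people_per_household

piano_fraction = 1 / 20                 # ~1 in 20 households owns a piano
pianos = households * piano_fraction    # ~75,000 pianos
tunings_needed = pianos * 1             # each tuned ~once per year

tunings_per_day = 4                     # one tuner's daily throughput
working_days = 250
tunings_per_tuner = tunings_per_day * working_days  # ~1,000 per year

tuners = tunings_needed / tunings_per_tuner
print(round(tuners))                    # prints 75: order of magnitude ~100
```

The point is not the exact answer but that multiplying a chain of defensible rough factors lands within a factor of a few of reality — the skill the paper probes GPT-4 on.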
GPT-4 doing some simple hacking via the command line
Seems to be pretty flexible at getting the hang of tool use. Huge capabilities overhang here for startups to build really capable products on.
GPT-4 getting much better at reasoning about theory of mind and social situations.
It’s incredible how much GPT-4 can do.

Fundamentally, these models are still really gimped though. Mostly just trained to predict the next word.

No memory, no scratchpad, no planning, can’t circle back and revise, etc.

What happens when we ungimp these models?
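One way to "ungimp" a pure next-token predictor is to wrap it in an outer loop with an external scratchpad it can re-read and extend, so revision and planning happen outside the single forward pass. A minimal sketch, where `model` is a hypothetical stand-in for an LLM API call (not any real library):

```python
def model(prompt: str) -> str:
    """Hypothetical stand-in for a next-token LLM call.

    A real system would call an inference API here; this stub just
    returns a canned reasoning step so the loop structure is runnable.
    """
    return "step"

def solve_with_scratchpad(task: str, max_steps: int = 3) -> list[str]:
    scratchpad: list[str] = []  # external memory the model can revisit
    for _ in range(max_steps):
        # Each call sees the task plus everything written so far,
        # so the model can circle back and revise earlier reasoning
        # instead of being locked into one left-to-right pass.
        prompt = task + "\n" + "\n".join(scratchpad)
        scratchpad.append(model(prompt))
    return scratchpad
```

The design choice is that memory, iteration, and the chance to revise live in the scaffolding, not the weights — which is why even simple loops like this can unlock capabilities the raw predictor appears to lack.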

More from @leopoldasch

Mar 29, 2023
Encore post today: Want to win the AGI race? Solve alignment.

Look, I really don't want Xi Jinping Thought to rule the world.

But, practically, society cares about safety, a lot. To deploy your AGI systems, people will demand confidence that it's safe.
Don't underestimate the endogenous societal response. Things will get crazy, and people will pay attention.

AI risk/AI safety is already going mainstream. People have been primed by sci-fi; all the CEOs have secretly believed in it for years.
Yes, the discourse will be incredibly dumb, and the societal response will be a dumpster-fire.

But it will be an *intense* societal response. That could be a big barrier to deploying your AGI—unless you have a convincing solution to (scalable) alignment.
Mar 29, 2023
New post: Nobody's on the ball on AGI alignment

With all the talk about AI risk, you'd think there's a crack team on it. There's not.
- There are far fewer people working on it than you might think
- The research is very much not on track

(But it's a solvable problem, if we tried!)
There are ~300 alignment researchers in the world (counting generously).

There were 30,000 attendees at ICML alone (a conference for ML researchers).

OpenAI has ~7 people on its scalable alignment team.

There just aren't many great researchers out there focused on this.
But much more than the numbers, what made this visceral to me was ... looking at the research.

There's very little research that feels like it's getting at the core of the problem—and is on track for actually solving it in <5 years.

I go on a quick, stylized, incomplete tour:
Mar 29, 2023
Fwiw, I think this is a bad idea.

Models aren’t actually dangerous yet. This risks “crying wolf.” Keep the powder dry for if/when we face real x-risk.
And I’m not sure what a 6-month pause would accomplish.

Capabilities overhang/compensating efforts would undo most of the timeline slowdown after the 6 months.

A pause would be much more useful in crunchtime, when we can do alignment research with more powerful AI systems.