Miles Brundage
Dec 29
RL on chain of thought (i.e. the o1/o3 series of models from OpenAI and similar ones elsewhere) is relevant to non-math, non-coding problems.

It’s nice to have an infinite source of ground truth (“is this the exact right number,” “is this proof valid”) but not essential. 🧵
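(For concreteness, here's a minimal Python sketch of what that kind of ground-truth signal looks like. The "Answer:" convention and extract_final_answer are hypothetical stand-ins, not any lab's actual grader.)

# A toy version of the "infinite source of ground truth" available in
# math/coding: an exact-match check on a final answer.

def extract_final_answer(completion: str) -> str:
    """Stand-in parser: take whatever follows the last 'Answer:' marker."""
    return completion.rsplit("Answer:", 1)[-1].strip()

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """1.0 if the final answer exactly matches ground truth, else 0.0.
    Cheap and unambiguous, which is why math/code are easy RL targets."""
    return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

print(verifiable_reward("Let me check my work... Answer: 42", "42"))  # 1.0
print(verifiable_reward("Hmm, probably. Answer: 41", "42"))           # 0.0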
1. RL on chain of thought leads to generally useful tactics like problem decomposition and backtracking that can improve peak problem-solving ability and reliability in other domains.
2. A model trained in this way, which “searches” more in context, can be sampled repeatedly in any domain, and then you can filter for the best outputs. This isn’t arbitrarily scalable without a perfect source of ground truth, but even a weak quality signal can probably help somewhat (see the sketch after this list).
3. There are many ways of creating signals for output quality in non-math, non-coding domains. OpenAI has said this is a data-efficient technique: you don’t necessarily need millions of examples, maybe hundreds, as with their new RFT (reinforcement fine-tuning) service. And you can make up for imperfection with diversity.
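To make points 2 and 3 concrete, here's a minimal best-of-n sketch in Python. generate_candidate and weak_score are hypothetical stand-ins (a real setup would sample a reasoning model and score with a learned reward model or rubric grader); the point is only that filtering diverse samples with an imperfect signal can beat taking a single sample.

import random

def generate_candidate(prompt: str, rng: random.Random) -> str:
    """Stand-in for sampling one chain-of-thought completion."""
    return f"candidate #{rng.randint(0, 999)} for: {prompt}"

def weak_score(output: str) -> float:
    """Stand-in for an imperfect quality signal (e.g. a small reward
    model or rubric grader). Here: a toy length heuristic."""
    return float(len(output))

def best_of_n(prompt: str, n: int = 8, seed: int = 0) -> str:
    """Sample n diverse candidates; keep the one the weak signal ranks
    highest. No perfect verifier required."""
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=weak_score)

print(best_of_n("Summarize the contract's termination clause."))

The scorer need not be a perfect verifier: as long as its errors aren't perfectly correlated across the diverse candidates, the filter adds signal, which is why this isn't arbitrarily scalable but can still help.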
Why do I mention this?

I think people are, as usual this decade, concluding prematurely that AI will go more slowly than it will, and that “spiky capabilities” is the new “wall.”
Math/code will fall a bit sooner than law/medicine, but only kinda because of the ground-truth thing; those domains are also more familiar to the companies, the data’s in a good format, there are fewer compliance issues, etc.

Do not mistake small timing differences for a grand truth of the universe.

More from @Miles_Brundage

Dec 27
Glad to see OpenAI sharing their latest thinking re: corporate structure, and a lot of the explanation makes sense. However, there are some red flags here that need to be urgently addressed + better explained publicly before the transition goes through.🧵
First, just noting that I agree that AI is capital intensive in a way that was less clear at the time of OpenAI’s founding, and that a pure non-profit didn’t work given that. And given the current confusing bespoke structure, some simplification is very reasonable to consider.
Beyond those points, though, I have serious disagreements/concerns.

First, there is surprisingly little discussion of actual governance details, despite this arguably being the key issue. What will the vote/share split be between different constituencies + considerations?
Dec 20
I've been saying recently that completely superhuman AI math and coding by end of 2025 was plausible - 50/50 or so.

Now I'd say it's much more likely than not (o3 is already better than almost all humans).
Does this mean humans will be able to add 0 value in these areas? Not necessarily: knowing the problem to solve requires insight/data from other domains, and it may be like chess/Go, where there's a centaur period in which humans can *occasionally* help even if weaker head-to-head.
But it does mean there will be big implications for these domains + for many other aspects of life/the economy, and it will mean a shift from humans doing most of the work to humans doing the high-level vision/management + being in the loop for responsibility/safety purposes.
Sep 12
My team + many others across OpenAI have been thinking for a while about the implications of AIs that can think for a while.

More to say later (and do read all the o1 blog posts, system card, testimonials etc. and ofc try it), but for now, 3 personal reflections.
1. Test-time compute as a new scaling paradigm has implications for governance/safety. Models can (if deployers want) have more "self-control" via thinking things through first, the capabilities attainable with a given base model are greater, + inference compute matters more (a toy illustration follows after this list).
2. The economic potential of models that are more reliable and approach/exceed human expert levels of performance on various STEM tasks could be substantial. How substantial is too early to say; we'll learn more as people experiment. But I expect acceleration in productivity.
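A toy illustration of the test-time-compute point: self-consistency (majority voting over independently sampled answers) is one simple way to trade inference compute for reliability. sample_answer below is a hypothetical stand-in for a model that reasons before answering, not any particular product.

from collections import Counter
import random

def sample_answer(question: str, rng: random.Random) -> str:
    """Stand-in: a noisy solver that's right 60% of the time."""
    return "42" if rng.random() < 0.6 else str(rng.randint(0, 9))

def self_consistency(question: str, n_samples: int, seed: int = 0) -> str:
    """More samples = more inference compute = a more reliable majority."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

for n in (1, 8, 64):
    print(n, self_consistency("What is 6 * 7?", n))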
May 31, 2023
"They're just saying that their technology might kill everyone in order to fend off regulation" seems like a very implausible claim to me, and yet many believe it.

What am I missing? Are there precedents for that tactic? How would regulating *less* help with the killing thing?
I certainly can't rule out that there are some galaxy brain execs out there, but this seems pretty obviously not the reason why e.g. there was a statement spearheaded by...academics, and many people who have talked about this stuff long before they had products to regulate.
BTW, I realize there are more sophisticated versions of this view (e.g. I've seen them from @rcalo @j2bryson @rajiinio etc.); my point is not to comprehensively debunk it--hard to "prove a negative." My point is just to gesture at the prima facie implausibility.
May 22, 2023
The cluster of issues around:

- Use of AI in influence operations + scams
- Watermarking/provenance for AI outputs
- Online proof of personhood

is definitely on the harder end of the spectrum as far as AI policy issues go.

Lots of collective action problems + tradeoffs.
It's also among the more underestimated issues: it's *starting* to get semi-mainstream, but the full severity + linkages to other issues (how do we solve the other stuff if no one knows what's going on + democracy is breaking?) + lack of silver bullets are not widely appreciated.
Among other collective action issues:
- aligning on standards on these things across AI providers, social media platforms, artistic/productivity applications, etc.
- implementing them when users don’t like them or when doing so disproportionately affects a subset
- social norms re:…
May 6, 2023
A lot of people are wondering if the current moment of interest in AI regulation is just a passing thing or the new normal.

I think it's almost certainly the new normal. The reason it's happening is widespread AI adoption, and *that* is only going to massively increase.
(unless, that is, there is significant regulation to prevent it, so... 🤷‍♂️ )
For transparency, I have been off here before, e.g. this tweet; I may have been literally correct, but I set the bar too low and, in my head, expected more.

Things are super different now w.r.t. capabilities, adoption, impact, and public awareness.
