Nat McAleese Profile picture
Jan 23 · 6 tweets · 1 min read
Epoch AI are going to publish more details, but on the OpenAI side for those interested: we did not use FrontierMath data to guide the development of o1 or o3, at all. (1/n)
We didn't train on any FM-derived data, any FM-inspired data, or any data targeting FrontierMath in particular (3/n)
I'm extremely confident, because we only downloaded FrontierMath for our evals *long* after the training data was frozen, and only looked at o3 FrontierMath results after the final announcement checkpoint was already picked 😅 (4/n)
We did partner with EpochAI to build FrontierMath — hard uncontaminated benchmarks are incredibly valuable and we build them somewhat often, though we don't usually share results on them. (5/n)
Our agreement with Epoch means that they can evaluate other frontier models and we can evaluate models internally pre-release, as we do on many other datasets (6/n)
I'm sad there was confusion about this, as o3 is an incredible achievement and FrontierMath is a great eval. We're hard at work on a release-ready o3 & hopefully release will settle any concerns about the quality of the model! (7/7)

More from @__nmca__

Dec 20, 2024
o3 represents enormous progress in general-domain reasoning with RL — excited that we were able to announce some results today! Here’s a summary of what we shared about o3 in the livestream (1/n)
o1 was the first large reasoning model — as we outlined in the original “Learning to Reason” blog, it’s “just” an LLM trained with RL. o3 is powered by further scaling up RL beyond o1, and the strength of the resulting model is very, very impressive. (2/n)
Firstly and most importantly: we tested on recent, unseen programming competitions and found that the model would rank amongst some of the best competitive programmers in the world, with an estimated CodeForces rating over 2700. (3/n)
Nov 4, 2022
Learn your classification task with 2x less data & better final accuracy via active learning in our new paper: arxiv.org/abs/2211.01568. How does it work? (1/n)
Models should use what they have learned in the past to pick the most informative things to learn in the future. This has proved surprisingly tricky so far with naive exploration common in RL, and many AL methods failing to make the most of pre-trained models. (2/n)
How do we learn what will be informative? It helps to separate aleatoric & epistemic uncertainty. Ian argues you can do this with the joint distribution of your labels — and has a key paper on it, introducing EpiNets arxiv.org/abs/2107.08924 (3/n)
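To make the aleatoric/epistemic split concrete, here is a minimal sketch of one common way to estimate it: a BALD-style acquisition score computed from an ensemble's predictions. This is a generic illustration of the idea, not the EpiNet method from the paper — the function name `bald_scores` and the toy numbers are made up for the example.

```python
import numpy as np

def bald_scores(member_probs):
    """BALD-style acquisition: mutual information between the label and
    the model, estimated from an ensemble of predictive distributions.

    member_probs: array of shape (n_members, n_points, n_classes)
    Returns an epistemic-uncertainty score per point, shape (n_points,).
    """
    eps = 1e-12
    # Entropy of the marginal (ensemble-averaged) predictive distribution:
    # total uncertainty = aleatoric + epistemic.
    mean_probs = member_probs.mean(axis=0)
    total = -(mean_probs * np.log(mean_probs + eps)).sum(axis=-1)
    # Average entropy of each member's own prediction: aleatoric part.
    per_member = -(member_probs * np.log(member_probs + eps)).sum(axis=-1)
    aleatoric = per_member.mean(axis=0)
    # What remains is disagreement between members: the epistemic part.
    return total - aleatoric

# Toy example: 3 ensemble members, 2 unlabeled points, 2 classes.
# Point 0: members agree on a 50/50 split -> uncertainty is aleatoric.
# Point 1: members confidently disagree  -> uncertainty is epistemic.
probs = np.array([
    [[0.5, 0.5], [0.99, 0.01]],
    [[0.5, 0.5], [0.01, 0.99]],
    [[0.5, 0.5], [0.99, 0.01]],
])
scores = bald_scores(probs)
```

An active learner would request a label for the point with the highest epistemic score (point 1 here): the ensemble agrees that point 0 is irreducibly noisy, so labeling it teaches the model nothing, while disagreement on point 1 signals missing knowledge.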