Epoch AI are going to publish more details, but on the OpenAI side for those interested: we did not use FrontierMath data to guide the development of o1 or o3, at all. (1/n)
We didn't train on any FM-derived data, any FM-inspired data, or any data targeting FrontierMath in particular (3/n)
I'm extremely confident, because we only downloaded FrontierMath for our evals *long* after the training data was frozen, and only looked at o3's FrontierMath results after the final announcement checkpoint was already picked 😅 (4/n)
We did partner with EpochAI to build FrontierMath — hard uncontaminated benchmarks are incredibly valuable and we build them somewhat often, though we don't usually share results on them. (5/n)
Our agreement with Epoch means that they can evaluate other frontier models and we can evaluate models internally pre-release, as we do on many other datasets (6/n)
I'm sad there was confusion about this, as o3 is an incredible achievement and FrontierMath is a great eval. We're hard at work on a release-ready o3 & hopefully its release will settle any concerns about the quality of the model! (7/7)
o3 represents enormous progress in general-domain reasoning with RL — excited that we were able to announce some results today! Here’s a summary of what we shared about o3 in the livestream (1/n)
o1 was the first large reasoning model — as we outlined in the original “Learning to Reason” blog, it’s “just” an LLM trained with RL. o3 is powered by further scaling up RL beyond o1, and the strength of the resulting model is very, very impressive. (2/n)
Firstly and most importantly: we tested on recent unseen programming competitions and found that the model would rank amongst some of the best competitive programmers in the world, with an estimated CodeForces rating over 2700. (3/n)
Learn your classification task with 2x less data & better final accuracy via active learning in our new paper: arxiv.org/abs/2211.01568. How does it work? (1/n)
Models should use what they have learned in the past to pick the most informative things to learn in the future. This has proved surprisingly tricky so far: naive exploration is still common in RL, and many AL methods fail to make the most of pre-trained models. (2/n)
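For orientation, here is a minimal sketch of the pool-based active-learning loop being discussed — not the paper's method, just the naive entropy-based acquisition it improves on. The `model.fit`/`model.predict_proba` API and the `oracle` labelling function are assumptions for illustration.

```python
# Minimal pool-based active-learning loop (illustrative sketch, not the
# paper's method). Assumes a scikit-learn-style model and an `oracle`
# callable that returns labels for queried points.
import numpy as np

def entropy(probs):
    """Predictive entropy of class probabilities, shape (N, C) -> (N,)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def active_learning_loop(model, labelled, pool, oracle, rounds=10, batch=32):
    X_lab, y_lab = labelled
    for _ in range(rounds):
        model.fit(X_lab, y_lab)
        scores = entropy(model.predict_proba(pool))   # acquisition score
        idx = np.argsort(-scores)[:batch]             # most uncertain points
        X_new, y_new = pool[idx], oracle(pool[idx])   # query the oracle for labels
        X_lab = np.concatenate([X_lab, X_new])
        y_lab = np.concatenate([y_lab, y_new])
        pool = np.delete(pool, idx, axis=0)           # shrink the unlabelled pool
    return model
```

Plain predictive entropy conflates label noise with model ignorance, which is exactly the issue the next tweet addresses.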
How do we learn what will be informative? It helps to separate aleatoric & epistemic uncertainty. Ian argues you can do this with the joint distribution of your labels - and has a key paper on it, introducing EpiNets arxiv.org/abs/2107.08924 (3/n)
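As a concrete illustration of that split, below is a small sketch using the standard BALD-style decomposition over joint predictions. It uses an ensemble as a simple stand-in for an epistemic neural network — EpiNets are one way to produce such sampled predictors, but this ensemble substitute and the function names are assumptions, not the paper's implementation.

```python
# Sketch: split predictive uncertainty into aleatoric and epistemic parts
# from K sampled predictors (e.g. ensemble members or epinet index samples).
import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

def decompose_uncertainty(member_probs):
    """member_probs: array of shape (K, N, C) with class probabilities.

    BALD-style decomposition:
      total     = H[ mean_k p_k ]          (entropy of the averaged prediction)
      aleatoric = mean_k H[ p_k ]          (average per-member entropy)
      epistemic = total - aleatoric        (mutual information: member disagreement)
    """
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)
    aleatoric = entropy(member_probs).mean(axis=0)
    return total - aleatoric, aleatoric

def select_batch(member_probs, batch=32):
    # Label the pool points with the highest *epistemic* uncertainty:
    # points that are merely noisy (high aleatoric) teach the model little.
    epistemic, _ = decompose_uncertainty(member_probs)
    return np.argsort(-epistemic)[:batch]
```

The design choice here is that acquisition ranks by epistemic uncertainty alone, so the learner spends its labelling budget where its predictors disagree rather than where the labels are inherently noisy.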