Nat McAleese
Research @OpenAI. Previously @DeepMind. Views my own.
Jan 23 · 6 tweets
Epoch AI are going to publish more details, but on the OpenAI side, for those interested: we did not use FrontierMath data to guide the development of o1 or o3, at all. (1/n)

We didn't train on any FM-derived data, any FM-inspired data, or any data targeting FrontierMath in particular. (3/n)
Dec 20, 2024 · 14 tweets
o3 represents enormous progress in general-domain reasoning with RL — excited that we were able to announce some results today! Here's a summary of what we shared about o3 in the livestream. (1/n)

o1 was the first large reasoning model — as we outlined in the original "Learning to Reason" blog, it's "just" an LLM trained with RL. o3 is powered by further scaling up RL beyond o1, and the strength of the resulting model is very, very impressive. (2/n)
Nov 4, 2022 · 9 tweets
Learn your classification task with 2x less data & better final accuracy via active learning in our new paper: arxiv.org/abs/2211.01568. How does it work? (1/n)

Models should use what they have learned in the past to pick the most informative things to learn in the future. This has proved surprisingly tricky so far, with naive exploration common in RL and many AL methods failing to make the most of pre-trained models. (2/n)
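The idea of a model picking "the most informative things to learn" can be illustrated with a minimal sketch of pool-based active learning via uncertainty sampling — a standard acquisition rule, not necessarily the paper's actual method; the function names and toy data here are illustrative assumptions:

```python
# Minimal sketch: pool-based active learning with entropy-based
# uncertainty sampling (illustrative; the paper's method may differ).
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy per example; higher means more uncertain."""
    eps = 1e-12  # avoid log(0)
    return -(probs * np.log(probs + eps)).sum(axis=1)

def select_batch(probs: np.ndarray, k: int) -> np.ndarray:
    """Pick the k unlabeled examples the model is least sure about."""
    scores = entropy(probs)
    return np.argsort(-scores)[:k]  # descending by uncertainty

# Toy pool of 4 unlabeled examples: model's class probabilities.
pool_probs = np.array([
    [0.98, 0.02],  # confident
    [0.55, 0.45],  # uncertain
    [0.90, 0.10],
    [0.50, 0.50],  # maximally uncertain
])
print(select_batch(pool_probs, 2))  # → [3 1]
```

The selected examples would then be labeled and added to the training set, and the loop repeats — the "using what it has learned" part is that the acquisition scores come from the current (e.g. pre-trained) model's predictions.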