Tamay Besiroglu
Jan 23 · 7 tweets
1/6 We haven't communicated clearly enough about FrontierMath's relationship with OpenAI, and I want to own that. By not being transparent from the start, we caused confusion for contributors, researchers, and the public.
2/6 OpenAI commissioned Epoch AI to produce 300 math problems for FrontierMath. Because it was a commissioned project, OpenAI owns those problems. They have access to the statements and solutions—except for a 50-question holdout set we're finalizing.
3/6 Epoch AI is free to conduct and publish evaluations of any models using the benchmark, as we have done already. We retain this right to evaluate models independently.
4/6 While we announced OpenAI's support before the o3 model launch in December, we didn't clearly communicate their data access and ownership agreements. We also failed to systematically inform contributors about industry sponsorship. That was a miss on our side.
5/6 This was our first project of this scale, involving nearly 100 contractors and complex agreements. Our lack of experience led to communication failures, particularly around industry sponsorship and data access agreements.
6/6 Going forward, we’ll proactively disclose industry sponsorship and data access agreements, and make sure contributors have that info up front. We can and will do better on transparency. More details in our blog post: epoch.ai/blog/openai-an…
And I appreciate Nat's clarity here. I trust that OpenAI's use of FrontierMath as a benchmark is appropriate (not training on it, or otherwise targeting it).

More from @tamaybes

Dec 21, 2024
I’m excited to announce the development of Tier 4, a new suite of math problems that go beyond the hardest problems in FrontierMath. o3 is remarkable, but there’s still a ways to go before any single AI system nears the collective prowess of the math community.
FrontierMath currently spans three broad tiers:
• T1 (25%) Advanced, near top-tier undergrad/IMO
• T2 (50%) Needs serious grad-level background
• T3 (25%) Research problems demanding relevant research experience
All can take hours—or days—for experts to solve.

Tier 4 aims to push the boundary even further. We want to assemble problems so challenging that solving them would demonstrate capabilities on par with an entire top mathematics department.
Dec 21, 2024
1/11 I’m genuinely impressed by OpenAI’s 25.2% Pass@1 performance on FrontierMath. This marks a major leap from prior results and arrives about a year ahead of my median expectations.
2/11 For context, FrontierMath is a brutally difficult benchmark with problems that would stump many mathematicians. The easier problems are as hard as IMO/Putnam; the hardest ones approach research-level complexity.
3/11 With earlier models like o1-preview, Pass@1 performance (solving on first attempt) was only around 2%. When allowing 8 attempts per problem (Pass@8) and counting problems solved at least once, we saw ~6% performance. o3's 25.2% at Pass@1 is substantially more impressive.
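For readers unfamiliar with the metric: Pass@k can be estimated without bias from n sampled attempts per problem. A minimal sketch using the standard estimator from the code-generation literature (this is illustrative, not Epoch's internal tooling):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimate: the probability that at least one of
    k attempts succeeds, given that c of n sampled attempts succeeded."""
    if n - c < k:
        return 1.0  # every size-k subset of attempts contains a success
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 1 success out of 8 samples gives a Pass@1 estimate of 1/8
```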
May 16, 2024
A few weeks ago, we attempted to replicate the Chinchilla paper. We found that their estimated model fails to adequately fit the reconstructed data, that it implies inconsistent scaling policies, and that their confidence intervals are implausibly narrow.
The authors responded, clarifying that this was the result of their optimizer stopping early due to a bad loss scale choice. They plan to update their results and release the data. We appreciate @borgeaud_s and others' openness in addressing this issue.
This error is understandable. From experience, choosing the right optimizer and loss scale is often non-trivial, with no obvious signs of error when convergence is poor. I know of at least one other otherwise great paper that had a very similar issue.
Apr 17, 2024
The Chinchilla scaling paper by Hoffmann et al. has been highly influential in the language modeling community. We tried to replicate a key part of their work and discovered discrepancies. Here's what we found. (1/9)
We reconstructed the data by extracting the SVG from the paper, parsing out the point locations & colors, mapping the coordinates to model size & FLOP, and mapping the colors to loss values. This let us closely approximate their original dataset from just the figure. (2/9)
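That pipeline can be sketched in a few lines. Everything here is illustrative: real figure SVGs often encode markers as `<use>` or `<path>` elements rather than `<circle>`, and the axis reference points must be read off the figure by hand.

```python
import math
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def extract_markers(svg_text):
    """Collect (cx, cy, fill) for every <circle> marker in an SVG figure."""
    root = ET.fromstring(svg_text)
    return [(float(c.get("cx")), float(c.get("cy")), c.get("fill"))
            for c in root.iter(SVG_NS + "circle")]

def pixel_to_value(p, p0, v0, p1, v1, log_scale=True):
    """Map a pixel coordinate to a data value by interpolating between two
    known axis reference points; interpolate in log space for log axes."""
    t = (p - p0) / (p1 - p0)
    if log_scale:
        return 10 ** (math.log10(v0) + t * (math.log10(v1) - math.log10(v0)))
    return v0 + t * (v1 - v0)
```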
When we fit their parametric scaling law, we get strikingly different estimates (Chi-squared p-value <1e-60!). The differences are significant for the data-scaling coefficient β and the irreducible loss E. (3/9)
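For reference, the parametric form being fit is L(N, D) = E + A/N^α + B/D^β, with a Huber penalty on the residuals. A minimal numpy sketch of the model and penalty (the actual fitting procedure, initialization grid, and optimizer are in the paper and our replication, not shown here):

```python
import numpy as np

def chinchilla_loss(N, D, E, A, B, alpha, beta):
    """Hoffmann et al.'s parametric scaling law: predicted loss for a
    model with N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

def huber(residual, delta=1e-3):
    """Huber penalty: quadratic near zero, linear in the tails, which
    limits the influence of outlier points on the fit."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))
```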
Mar 12, 2024
Language models have come a long way since 2012, when recurrent networks struggled to form coherent sentences. Our new paper finds that the compute needed to achieve a set performance level has been halving every 5 to 14 months on average. (1/10)
This rate of algorithmic progress is much faster than the two-year doubling time of Moore's Law for hardware improvements, and faster than other domains of software, like SAT-solvers, linear programs, etc. (2/10)
We estimate this using a dataset of over 200 language models from 2012 to 2023, evaluated on WikiText and Penn Treebank. By fitting a modified neural scaling law to this data, we estimate the rate of algorithmic efficiency improvements over time. (3/10)
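To put those halving times in concrete terms: a fixed-performance compute requirement that halves every d months means effective compute grows by a factor of 2^(t/d) over t months. A small illustration (the numbers below are just the endpoints of the quoted 5-to-14-month range):

```python
def effective_compute_gain(months: float, halving_months: float) -> float:
    """Multiplier on effective compute after `months`, if the compute
    needed for fixed performance halves every `halving_months`."""
    return 2 ** (months / halving_months)

# Over 2 years, a 5-month halving time implies roughly a 28x gain in
# effective compute; a 14-month halving time implies roughly 3.3x.
```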
Dec 13, 2022
How much progress in machine learning has been due to advances in algorithms (architectures, optimisers, activation functions, etc.), and how much has been due to the scaling of compute or datasets?
@EgeErdil2 and I provide new answers: arxiv.org/abs/2212.05153
We use a dataset of over a hundred computer vision models from the last decade to investigate how better algorithms and architectures have enabled researchers to use compute and data more efficiently.
We find that every 9 months, the introduction of better algorithms contributes the equivalent of a doubling of compute budgets. This is much faster than the gains from Moore's law! That said, there's uncertainty (our 95% CI spans 4 to 25 months).
