ARC Prize
A North Star for AGI. Co-founders: @fchollet @mikeknoop. President: @gregkamradt. Help support the mission - make a donation today.
Jul 18 10 tweets 4 min read
Today, we're announcing a preview of ARC-AGI-3, the Interactive Reasoning Benchmark with the widest gap between what's easy for humans and what's hard for AI

We’re releasing:
* 3 games (environments)
* $10K agent contest
* AI agents API

Starting scores - Frontier AI: 0%, Humans: 100%

Every game environment is novel and unique, and requires only core-knowledge priors

No language, trivia, or specialized knowledge is needed to beat the games

* Play: three.arcprize.org
* Compete: arcprize.org/arc-agi/3/
* Build: three.arcprize.org/docs
Jul 10 4 tweets 2 min read
Grok 4 (Thinking) achieves new SOTA on ARC-AGI-2 with 15.9%

This nearly doubles the previous commercial SOTA and tops the current Kaggle competition SOTA.

On ARC-AGI-1, Grok 4 (Thinking) achieves 66.7%, in line with the Pareto frontier for AI reasoning systems we reported last month.
Apr 22 8 tweets 4 min read
o3 and o4-mini on ARC-AGI's Semi Private Evaluation

* o3-medium scores 53% on ARC-AGI-1
* o4-mini shows state-of-the-art efficiency
* ARC-AGI-2 remains virtually unsolved (<3%)

Our analysis highlights differences from o3-preview and other model behavior.

As mentioned before, OpenAI has confirmed that the version of o3 released last week is not the same version we tested in December '24.

For more on this see the tweet below or the blog post

Mar 24 14 tweets 5 min read
Today we are announcing ARC-AGI-2, an unsaturated frontier AGI benchmark that challenges AI reasoning systems while remaining relatively easy for humans.

Grand Prize: 85%, ~$0.42/task efficiency

Current Performance:
* Base LLMs: 0%
* Reasoning Systems: <4%

ARC-AGI-1 (2019) pinpointed the moment AI moved beyond pure memorization, demonstrated in late 2024 by OpenAI's o3 system.

Now, ARC-AGI-2 raises the bar significantly, challenging known test-time adaptation methods.
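The Grand Prize combines two thresholds: an 85% accuracy target and an efficiency target of roughly $0.42 per task. A minimal sketch of that check, assuming the efficiency figure acts as an approximate upper bound on per-task cost (the function name is illustrative, not part of the contest rules):

```python
def meets_grand_prize(score_pct: float, cost_per_task_usd: float) -> bool:
    """Grand Prize criteria from the announcement: >=85% accuracy
    at roughly $0.42/task efficiency (treated here as a cost ceiling)."""
    return score_pct >= 85.0 and cost_per_task_usd <= 0.42

# A system at 84% fails on accuracy even if it is extremely cheap,
# and a 90% system fails if it spends $1.00 per task.
```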

@MLStreetTalk is helping us launch ARC-AGI-2 with an interview of @mikeknoop & @fchollet.

Jan 21 4 tweets 1 min read
Verified DeepSeek performance on ARC-AGI's Public Eval (400 tasks) + Semi-Private (100 tasks)

DeepSeek V3:
* Semi-Private: 7.3% ($.002)
* Public Eval: 14% ($.002)

DeepSeek Reasoner:
* Semi-Private: 15.8% ($.06)
* Public Eval: 20.5% ($.05)

(Avg $ per task)

Thank you to @rishab_partha for helping with this analysis

The purpose of the 100 Semi-Private tasks is to provide a secondary held-out test-set score.

The 400 Public Eval tasks were published in 2019. They have been widely studied and included in other model training data.
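Since the dollar figures above are averages per task, total evaluation spend is just the average cost times the set size (100 Semi-Private tasks, 400 Public Eval tasks). A minimal sketch using the numbers reported in this thread (variable and function names are illustrative):

```python
# Reported average cost per task (USD), from this thread.
avg_cost = {
    "DeepSeek V3":       {"semi_private": 0.002, "public_eval": 0.002},
    "DeepSeek Reasoner": {"semi_private": 0.06,  "public_eval": 0.05},
}
# Evaluation set sizes: 100 Semi-Private tasks, 400 Public Eval tasks.
set_size = {"semi_private": 100, "public_eval": 400}

def total_cost(model: str, split: str) -> float:
    """Total evaluation spend = avg cost per task x number of tasks."""
    return avg_cost[model][split] * set_size[split]

# e.g. DeepSeek Reasoner on Semi-Private: 0.06 * 100 tasks, about $6 total
print(f"${total_cost('DeepSeek Reasoner', 'semi_private'):.2f}")
```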
Dec 6, 2024 8 tweets 2 min read
ARC Prize remains unbeaten.

In 2024, SOTA moved from 33% to 55.5%.

Announcing: ARC Prize 2024 Winners & Technical Report.

In part due to ARC Prize 2024, we believe AGI progress is no longer stalled.

But new ideas are still needed.

All scores below are open sourced & reproducible.

Winners and Technical Report: arcprize.org/2024-results