Jun 17
Announcing MicroEvals 🧩: the fastest way to vibe check models for your use case

Every time we benchmark a model, we want to go beyond the numbers and see how it feels. It’s surprisingly hard to do that and it shouldn’t be!

Today, we’re announcing MicroEvals on Artificial Analysis. It’s the fastest way to test a bunch of prompts across models.

MicroEvals are free to create and can be built in under a minute: simply add prompts and select models. Currently, all MicroEvals are public, and we provide comparison-friendly views including HTML rendering, SVG support (credit to @simonw's famous pelican-on-a-bike SVG), p5.js animations and more. Careful, playing the games is addictive!
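
Under the hood, a MicroEval is essentially the cross product of your prompts and your chosen models. For readers who want to replicate the idea outside the site, here is a rough Python sketch of that loop against an OpenAI-compatible chat API - the endpoint, model names and prompts are placeholders, and this is not how the product itself is implemented:

```python
# Sketch only: run the same prompts across several models and collect the
# completions side by side - roughly what a MicroEval displays.
# The endpoint URL, API key, model names and prompts are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://example-gateway/v1", api_key="YOUR_KEY")

prompts = [
    "Generate an SVG of a pelican riding a bicycle",
    "Classify this support ticket as billing, bug or feature request: ...",
]
models = ["model-a", "model-b", "model-c"]

results = {}
for prompt in prompts:
    for model in models:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        results[(prompt, model)] = resp.choices[0].message.content

# Simple side-by-side view per prompt
for prompt in prompts:
    print(f"\n=== {prompt} ===")
    for model in models:
        print(f"--- {model} ---\n{results[(prompt, model)][:500]}")
```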

We can’t wait to see what MicroEvals the community will create!

Here are a few examples of questions that MicroEvals might help answer:
➤ Is Claude’s writing style really still better than GPT’s?
➤ Are any <3B models smart enough for my classification task?
➤ Can o3 or Claude 4 solve my questions that no other model gets right?
➤ Which model bounces balls in a spinning hexagon best?
➤ How do different versions of my prompt change the way different models answer?

See below for a link to try it out! 👇
One of our favorites: "Create a p5.js animation of an astronaut dropping a feather and a hammer simultaneously on the surface of the moon (the Apollo 15 lunar module should be visible in the background)."
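
For reference, the physics a correct animation should reproduce: on the Moon both objects accelerate at the same rate (surface gravity ≈ 1.62 m/s²) regardless of mass, so the feather and the hammer land at the same moment. A quick sanity check of the fall time - the drop height here is an arbitrary illustration value:

```python
# Fall time on the Moon is independent of mass, so the feather and the
# hammer released together land together. Drop height is illustrative.
import math

g_moon = 1.62       # m/s^2, lunar surface gravity
drop_height = 1.2   # m, arbitrary example release height

fall_time = math.sqrt(2 * drop_height / g_moon)   # t = sqrt(2h / g)
print(f"Both objects land after ~{fall_time:.2f} s")   # ~1.22 s
```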

Link to MicroEval: artificialanalysis.ai/microevals/p5j…


Link to MicroEvals: artificialanalysis.ai/microevals

Creating your own is free and takes less than a minute 🚀


More from @ArtificialAnlys

Jun 12
Google is firing on all cylinders across AI - Gemini 2.5 Pro is equal #2 in intelligence, Veo 3 and Imagen 4 are amongst the leaders in media generation, and with TPUs they're the only vertically integrated player

🧠 Google is now equal #2 in the Artificial Analysis Intelligence Index with the recent release of Gemini 2.5 Pro (June 2025), rivaling models from OpenAI, DeepSeek and xAI

📽️ Google Veo 3 now ranks second in the Artificial Analysis Video Arena Leaderboard, behind only ByteDance’s new Seedance 1.0 model

🖼️ Google Imagen 4 now occupies 2 out of the top 5 positions on the Artificial Analysis Image Arena Leaderboard

👨‍🏭 Google has a full-stack AI offering spanning the application layer, models, cloud inference and hardware (TPUs)
Google has consistently been shipping intelligence increases in its Gemini Pro series
Google Veo 3 now occupies second place in the Artificial Analysis Video Arena, after originally debuting in first place. It is still a significant leap over Google Veo 2!
May 29
DeepSeek’s R1 leaps over xAI, Meta and Anthropic to be tied as the world’s #2 AI Lab and the undisputed open-weights leader

DeepSeek R1 0528 has jumped from 60 to 68 in the Artificial Analysis Intelligence Index, our index of 7 leading evaluations that we run independently across all leading models. That’s the same magnitude of increase as the difference between OpenAI’s o1 and o3 (62 to 70).

This positions DeepSeek R1 as higher in intelligence than xAI’s Grok 3 mini (high), NVIDIA’s Llama Nemotron Ultra, Meta’s Llama 4 Maverick and Alibaba’s Qwen3 235B, and as equal to Google’s Gemini 2.5 Pro.

Breakdown of the model’s improvement:
🧠 Intelligence increases across the board: Biggest jumps seen in AIME 2024 (Competition Math, +21 points), LiveCodeBench (Code generation, +15 points), GPQA Diamond (Scientific Reasoning, +10 points) and Humanity’s Last Exam (Reasoning & Knowledge, +6 points)

🏠 No change to architecture: R1-0528 is a post-training update with no change to the V3/R1 architecture - it remains a large 671B model with 37B active parameters

🧑‍💻 Significant leap in coding skills: R1 is now matching Gemini 2.5 Pro in the Artificial Analysis Coding Index and is behind only o4-mini (high) and o3

🗯️ Increased token usage: R1-0528 used 99 million tokens to complete the evals in the Artificial Analysis Intelligence Index, 40% more than the original R1’s 71 million tokens - i.e. the new R1 thinks for longer than the original R1. This is still not the highest token usage we have seen: Gemini 2.5 Pro uses 30% more tokens than R1-0528

Takeaways for AI:
👐 The gap between open and closed models is smaller than ever: open-weights models have continued to post intelligence gains in line with proprietary models. DeepSeek’s R1 release in January was the first time an open-weights model achieved the #2 position, and today’s R1 update brings it back to that position

🇨🇳 China remains neck and neck with the US: models from China-based AI labs have all but completely caught up to their US counterparts, and this release continues that trend. As of today, DeepSeek leads US-based AI labs including Anthropic and Meta in the Artificial Analysis Intelligence Index

🔄 Improvements driven by reinforcement learning: DeepSeek has shown substantial intelligence improvements with the same architecture and pre-training as the original DeepSeek R1 release. This highlights the continually increasing importance of post-training, particularly for reasoning models trained with reinforcement learning (RL) techniques. OpenAI disclosed a 10x scaling of RL compute between o1 and o3 - DeepSeek has just demonstrated that, so far, it can keep up with OpenAI’s RL compute scaling. Scaling RL demands less compute than scaling pre-training and offers an efficient way of achieving intelligence gains, supporting AI labs with fewer GPUs

See further analysis below 👇
DeepSeek has maintained its status among the AI labs leading in frontier AI intelligence.
Today’s DeepSeek R1 update is substantially more verbose in its responses (including reasoning tokens) than the January release. DeepSeek R1 (May) used 99M tokens to run the 7 evaluations in our Intelligence Index, 40% more than the prior release.
May 8
Google’s Gemini 2.5 Flash costs 150x more than Gemini 2.0 Flash to run the Artificial Analysis Intelligence Index

The increase is driven by:
➤ 9x more expensive output tokens - $3.5 per million with reasoning on ($0.6 with reasoning off) vs $0.4 for Gemini 2.0 Flash
➤ 17x higher token usage across our evals due to adding reasoning - the greatest volume of tokens used in reasoning that we have observed for any model to date (see the arithmetic check below)
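
Those two drivers compound: roughly 9 × 17 ≈ 150, which is where the headline figure comes from. A back-of-the-envelope check using the round numbers above (this ignores input tokens and exact eval totals, so it is only an approximation):

```python
# Back-of-the-envelope: eval cost scales with (output price) x (tokens generated),
# so the price factor and the usage factor multiply.
# Figures are the round numbers quoted above, not exact eval totals.
price_factor = 3.5 / 0.4   # ~9x: $3.5 vs $0.4 per million output tokens
usage_factor = 17          # ~17x more tokens generated with reasoning on

print(f"Price factor:   {price_factor:.1f}x")
print(f"Usage factor:   {usage_factor:.0f}x")
print(f"Combined cost: ~{price_factor * usage_factor:.0f}x")   # ~150x
```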

This doesn’t mean Gemini 2.5 Flash is not a compelling value proposition - its 12-point bump in the Artificial Analysis Intelligence Index makes it suitable for a range of use cases that may not perform sufficiently well on Gemini 2.0 Flash. With per-token pricing still slightly below OpenAI’s o4-mini, Gemini 2.5 Flash may still be a cost-effective option for certain use cases.

It does mean that Gemini 2.5 Flash with Reasoning may not be a clear upgrade for everyone - for many use cases, developers may want to stay with 2.0 Flash or use 2.5 Flash with reasoning off.
Breakdown of token usage, pricing and end-to-end latency.
See further details and other comparisons: artificialanalysis.ai/models?models=…
Mar 28
Today’s GPT-4o update is actually big - it leapfrogs Claude 3.7 Sonnet (non-reasoning) and Gemini 2.0 Flash in our Intelligence Index and is now the leading non-reasoning model for coding

This makes GPT-4o the second highest-scoring non-reasoning model (excluding o3-mini, Gemini 2.5 Pro, etc.), coming in just behind DeepSeek’s V3 0324 release earlier this week.

Key benchmarking results:
➤ Significant jump in the Artificial Analysis Intelligence Index from 41 to 50, putting GPT-4o (March 2025) ahead of Claude 3.7 Sonnet
➤ Now the leading non-reasoning model for coding: 🥇#1 in the Artificial Analysis Coding Index and in LiveCodeBench, surpassing DeepSeek V3 (March 2025) and Claude 3.7 Sonnet

@OpenAI has committed an all-new AI model naming sin of simply refusing to name the model at all, so we will be referring to it as GPT-4o (March 2025).

This update has also been released in a fairly confusing way - the March 2025 version of GPT-4o is currently available:
➤ In ChatGPT, when users select GPT-4o in the model selector
➤ Via API on the chatgpt-4o-latest endpoint - a non-dated endpoint that OpenAI described at launch as intended for research use only, with developers encouraged to use the dated snapshot versions of GPT-4o for most API use cases

As of today, this means that the chatgpt-4o-latest endpoint is serving a significantly better model than the proper API versions of GPT-4o (i.e. the August 2024 and November 2024 snapshots).

We recommend some caution for developers considering moving workloads to the chatgpt-4o-latest endpoint given OpenAI’s previous guidance, and note that OpenAI will likely release a dated API snapshot soon. We also note that OpenAI prices the chatgpt-4o-latest endpoint at $5/$15 per million input/output tokens, whereas the API snapshots are priced at $2.5/$10.
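
To illustrate that pricing gap, here is the cost of a hypothetical workload at the two price points - the request count and token sizes below are made up for the example, not a benchmark result:

```python
# Hypothetical workload cost: chatgpt-4o-latest vs the dated GPT-4o API snapshots.
# Prices are per million tokens as quoted above; the workload itself is invented.
def workload_cost(input_price, output_price, requests, in_tok, out_tok):
    """Dollar cost for `requests` calls of `in_tok` input / `out_tok` output tokens."""
    return requests * (in_tok * input_price + out_tok * output_price) / 1_000_000

requests, in_tok, out_tok = 100_000, 1_000, 500   # example workload

latest = workload_cost(5.0, 15.0, requests, in_tok, out_tok)     # chatgpt-4o-latest
snapshot = workload_cost(2.5, 10.0, requests, in_tok, out_tok)   # dated snapshots

print(f"chatgpt-4o-latest: ${latest:,.0f}")    # $1,250
print(f"dated snapshot:    ${snapshot:,.0f}")  # $750
```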

See below for further analysis 👇
GPT-4o (March 2025) is now the leading non-reasoning coding model, surpassing DeepSeek V3 (March 2025) and Claude 3.7 Sonnet in the Artificial Analysis Coding Index (made up of LiveCodeBench and SciCode) and is #1 in LiveCodeBench
GPT-4o (March 2025) still lags behind reasoning models, though these can be considered separately given their higher latency and typically higher cost.
Mar 25
DeepSeek takes the lead: DeepSeek V3-0324 is now the highest scoring non-reasoning model

This is the first time an open-weights model has been the leading non-reasoning model - a milestone for open source.

DeepSeek V3-0324 has jumped forward 7 points in the Artificial Analysis Intelligence Index and now sits ahead of all other non-reasoning models. It sits behind DeepSeek’s own R1 in the Intelligence Index, as well as other reasoning models from OpenAI, Anthropic and Alibaba, but this does not take away from the impressiveness of this accomplishment. Non-reasoning models answer immediately without taking time to ‘think’, making them useful in latency-sensitive use cases.

Three months ago, DeepSeek released V3 and we wrote that there was a new leader in open source AI - noting that V3 came close to leading proprietary models from Anthropic and Google but did not surpass them.

Today, DeepSeek is not just releasing the best open source model - it is now driving the frontier of open-weights non-reasoning models and eclipsing proprietary non-reasoning models including Gemini 2.0 Pro and Claude 3.7 Sonnet, as well as Llama 3.3 70B. This release is arguably even more impressive than R1 - and potentially indicates that R2 will be another significant leap forward.

Most other details are identical to the December 2024 version of DeepSeek V3, including:
➤ Context window: 128k (limited to 64k on DeepSeek’s first-party API)
➤ Total parameters: 671B (requires >700GB of GPU memory to run in native FP8 precision - still not something you can run at home!)
➤ Active parameters: 37B
➤ Native FP8 precision
➤ Text only - no multimodal inputs or outputs
➤ MIT License
DeepSeek V3-0324 marks the first time an open weights model has been the leading non-reasoning model.
Compared to leading reasoning models, including DeepSeek’s own R1, DeepSeek V3-0324 remains behind - but for many uses, the increased latency associated with letting reasoning models ‘think’ before answering makes them unusable.
Feb 13
Announcing Artificial Analysis Intelligence Index V2 - the biggest upgrade to our eval suite yet

Summary of Intelligence Index V2:
➤ Harder evals: MMLU-Pro, HLE (Humanity's Last Exam), GPQA Diamond, MATH-500, AIME 2024, SciCode, and LiveCodeBench - see below for a description of each evaluation.
➤ Independent: As always, Artificial Analysis has independently run every eval on every model - no inconsistent lab-claim results anywhere to be seen
➤ Standardized: We evaluate models under identical conditions with consistent prompting, temperature settings and answer extraction techniques
➤ Extensive sensitivity testing: We’ve run every eval in Index V2 dozens of times in our pre-launch assessment phase to understand variability, and set the number of repeats we use to achieve our target confidence intervals (see the sketch after this list)
➤ More robust software stack: This one is a little inside baseball but is actually a pretty big deal - we’re running tens of thousands of queries on hundreds of models so our entire benchmarking stack has to be extremely robust, and allow our team to monitor evals for errors and anomalies so we can have confidence in every number published
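
As a rough illustration of the sensitivity-testing point above: the half-width of a 95% confidence interval on a mean score shrinks as 1.96·s/√n, so a target interval width implies a minimum number of repeats. The per-run standard deviation and target below are made-up illustration values, not the settings used in Index V2:

```python
# Illustration: repeats needed so a 95% confidence interval on the mean eval
# score has at most the target half-width. Inputs are illustrative values only.
import math

per_run_std = 2.0        # score points of run-to-run variation (made up)
target_half_width = 1.0  # desired 95% CI half-width in points (made up)
z = 1.96                 # 95% normal critical value

# Solve z * s / sqrt(n) <= target for n
repeats_needed = math.ceil((z * per_run_std / target_half_width) ** 2)
print(f"Repeats needed: {repeats_needed}")   # 16 with these numbers
```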

Artificial Analysis has independently run thousands of evals across hundreds of models to support this launch - today, we already have Intelligence Index scores for all leading models published on our updated website.

For further information regarding how models perform, the evals we have chosen to include and our methodology, see below.
Deep-dive into the evals included in Intelligence Index V2

On the Artificial Analysis website we report all eval scores individually, allowing you to see the individual components of the index and understand model strengths and weaknesses. A sketch of how the category weightings roll up into a single score follows the eval list below.

Reasoning and Knowledge (50% weighting):
➤ MMLU Pro: Comprehensive evaluation of advanced knowledge across domains, adapted from original MMLU but focusing on harder questions and using a 10 option multi-choice format
➤ Humanity's Last Exam: Recent frontier academic benchmark from the Center for AI Safety (led by Dan Hendrycks, @ai_risks)
➤ GPQA Diamond: Scientific knowledge and reasoning benchmark

Mathematical Reasoning (25% weighting):
➤ MATH-500: Mathematical problem-solving across various difficulty levels; a 500-question subset of Hendrycks' 2021 MATH dataset, created by OpenAI after it trained on ~90% of the original 5,000 MATH questions during reinforcement learning for the o1-series models
➤ AIME 2024: Advanced mathematical problem-solving dataset from the 2024 American Invitational Mathematics Examination

Code Generation and Comprehension (25% weighting):
➤ SciCode: Python programming to solve scientific computing tasks; we test with scientist-annotated background information included in the prompt and report the sub-problem score
➤ LiveCodeBench: Python programming to solve programming scenarios derived from LeetCode, AtCoder, and Codeforces; we test 315 problems from the 1 July 2024 to 1 Jan 2025 subset from release_v5
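
To make the weightings concrete, here is a minimal sketch of how per-eval scores could roll up into a single index number under the 50/25/25 category weights; the scores are invented, and the equal within-category averaging is an assumption rather than Artificial Analysis's documented aggregation:

```python
# Minimal sketch of a weighted index: average the evals within each category,
# then combine categories with the 50/25/25 weights described above.
# Scores are invented; equal within-category averaging is an assumption.
category_weights = {
    "reasoning_knowledge": 0.50,   # MMLU-Pro, Humanity's Last Exam, GPQA Diamond
    "math": 0.25,                  # MATH-500, AIME 2024
    "code": 0.25,                  # SciCode, LiveCodeBench
}

eval_scores = {
    "reasoning_knowledge": {"MMLU-Pro": 78.0, "HLE": 8.5, "GPQA Diamond": 62.0},
    "math": {"MATH-500": 92.0, "AIME 2024": 45.0},
    "code": {"SciCode": 35.0, "LiveCodeBench": 58.0},
}

index = 0.0
for category, weight in category_weights.items():
    scores = list(eval_scores[category].values())
    index += weight * (sum(scores) / len(scores))

print(f"Intelligence Index (sketch): {index:.1f}")
```
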
Artificial Analysis Intelligence Index runs on both reasoning and non-reasoning models. Our Intelligence Index clearly shows reasoning models outperforming non-reasoning models, and DeepSeek R1 rivaling OpenAI’s o1 and o3-mini.
