Hao AI Lab · Feb 28
Claude-3.7 was tested on Pokémon Red, but what about more real-time games like Super Mario 🍄🌟?

We threw AI gaming agents into LIVE Super Mario games and found that Claude-3.7, using only simple heuristics, outperformed the other models. 🤯

Claude-3.5 is also strong, but less capable of planning complex maneuvers. Gemini-1.5-pro and GPT-4o perform worse.
We built gaming agents that play platformers and puzzle video games in real time. Check out our demos and try the repo yourself to customize your own gaming agent! 🎮



In addition to Super Mario Bros, we also support 2048 and Tetris. More games are coming soon! 👾 github.com/lmgame-org/Gam…
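To give a flavor of what a real-time agent loop involves, here is a minimal sketch. This is our own illustration, not the repo's actual API: the prompt, key set, and model name are placeholders. It captures a frame, asks a vision model for one key press, and executes it while the game keeps running.

```python
# Minimal real-time game-agent loop (illustrative sketch, not the repo's API).
# Assumes `mss` for screen capture, `pyautogui` for key presses, and an
# OpenAI-compatible vision model. VALID_KEYS is a placeholder action set.
import base64
import io
import time

import mss
import pyautogui
from openai import OpenAI
from PIL import Image

client = OpenAI()
VALID_KEYS = {"left", "right", "up", "down", "a", "b"}  # placeholder controls

def capture_screen_b64() -> str:
    """Grab the primary monitor and return it as a base64-encoded PNG."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])
        img = Image.frombytes("RGB", shot.size, shot.rgb)
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()

def choose_action(frame_b64: str) -> str:
    """Ask the model for a single key press given the current frame."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # swap in any vision-capable model you have access to
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "You are playing Super Mario. Reply with exactly one "
                         "key from: left, right, up, down, a, b."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{frame_b64}"}},
            ],
        }],
        max_tokens=5,
    )
    return resp.choices[0].message.content.strip().lower()

while True:  # the game keeps running while the model thinks
    action = choose_action(capture_screen_b64())
    if action in VALID_KEYS:
        pyautogui.keyDown(action)
        time.sleep(0.2)  # hold the key briefly, then release
        pyautogui.keyUp(action)
```

Because the game does not pause while the model deliberates, inference latency directly limits how well any model can play; that is what makes the real-time setting harder than turn-based games like Pokémon Red.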
In addition to the classics, our LMGames team also designs and hosts computer games for AI evaluations.

Our mission is to study new perspectives for AI evaluations and the evolving roles humans play in them.

We believe games provide challenging and dynamic environments for testing LLM agents.

Check out our released Roblox game as well as the leaderboard: lmgame.org/#/blog/ai_spac…

More information at: lmgame.org

More from @haoailab, Feb 17:
Reasoning models often waste tokens self-doubting.

Dynasor saves up to 81% of tokens while still arriving at the correct answer! 🧠✂️
- Probe the model mid-reasoning to estimate its certainty
- Use that certainty to decide when to stop reasoning
- 100% training-free, plug-and-play

🎮 Demo: hao-ai-lab.github.io/demo/dynasor-c…
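As a rough illustration of the idea (this is not Dynasor's actual code; the probe string, model name, and thresholds below are all assumptions), the loop decodes in chunks, appends a short probe that forces the model to commit to an answer, and exits early once consecutive probes agree:

```python
# Hedged sketch of certainty-probing for early exit (illustrative, not the
# actual Dynasor implementation; probe text and thresholds are guesses).
from collections import deque

from openai import OpenAI

client = OpenAI()  # point at any OpenAI-compatible server, e.g. vLLM

PROBE = "\n**Final Answer:** "  # nudge the model to commit to an answer now
CHUNK = 256   # decode this many tokens between probes
AGREE = 3     # stop once this many consecutive probes give the same answer

def generate(prompt: str, max_tokens: int) -> str:
    """One incremental decoding step via the completions endpoint."""
    resp = client.completions.create(
        model="my-reasoning-model",  # hypothetical name; use whatever you serve
        prompt=prompt,
        max_tokens=max_tokens,
    )
    return resp.choices[0].text

def extract_answer(text: str) -> str:
    """Toy extraction: take the first line the probe produced."""
    return text.strip().splitlines()[0] if text.strip() else ""

def reason_with_early_exit(question: str, max_chunks: int = 16) -> str:
    trace = question
    recent: deque[str] = deque(maxlen=AGREE)
    for _ in range(max_chunks):
        trace += generate(trace, max_tokens=CHUNK)  # keep reasoning
        recent.append(extract_answer(generate(trace + PROBE, max_tokens=20)))
        # If the last AGREE probes all committed to the same answer, the
        # model is certain: skip the remaining self-doubt and return early.
        if len(recent) == AGREE and len(set(recent)) == 1 and recent[0]:
            return recent[0]
    return extract_answer(generate(trace + PROBE, max_tokens=20))
```

The key design point is that probes are cheap (a few dozen tokens each) relative to the reasoning chunks they can cut short, so the savings compound on problems the model has effectively already solved.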
[2/n] Observation: reasoning models (🟠) use far more tokens than needed compared to traditional models (🔵).
Although reasoning models achieve higher accuracy, they consume many more tokens; users who could accept slightly lower accuracy end up wasting a lot more money. 💰💰

Why?
[3/n] 🐢 Reasoning models often self-doubt.
In one trace, the model spends only 300 tokens arriving at the right answer, but then spends an extra 990 tokens, roughly 77% of the 1,290 total, on meaningless verification loops that make no progress at all!
➡️ "Wait, is 2+2 really 4? Let me check..."
