Big news: we've figured out how to make a *universal* reward function that lets you apply RL to any agent with:
- no labeled data
- no hand-crafted reward functions
- no human feedback!
A 🧵 on RULER
First, our results: small models trained with RULER+GRPO are more reliable than o3 on 4/4 tasks, despite being 1/20th the cost. Surprisingly, they even beat models trained with hand-crafted reward functions on 3/4 tasks.
Why is this a big deal? RL is fantastic at making agents more reliable, but until now every task has required either labeled data or a hand-crafted reward function. Each training pipeline was unique, expensive, and error-prone.
RULER relaxes this requirement, making RL more plug-and-play.
RULER is simple and self-contained. It's a single file that can be dropped into any RL training pipeline. It also comes pre-integrated in our open-source RL framework, ART. (Stars appreciated! 🙏🤩) github.com/OpenPipe/ART/b…
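Roughly, using it looks like this inside a rollout-scoring step (the import path and call signature below are assumptions for illustration; check the repo for the real API):

```python
# Rough sketch of dropping RULER into a GRPO-style training loop.
# The import path and signature are assumptions -- see the ART repo for the actual API.
from art.rewards import ruler_score_group  # assumed import path

async def score_rollouts(trajectory_group):
    # One LLM-as-judge call scores the whole group relative to itself,
    # so no labels or hand-written reward function are required.
    return await ruler_score_group(
        trajectory_group,
        judge_model="openai/o3",  # any strong judge model works here
    )
```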
RULER is based on an LLM-as-judge that takes multiple candidate solutions and ranks them _relative to each other_. This is an easier problem than scoring each solution in isolation.
💡 Thanks to GRPO math, we don't need to worry about whether RULER scores are calibrated _across_ different groups, only whether they're calibrated _within_ each group. Because the judge sees a full group at a time, it can effectively self-calibrate in practice.
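Concretely, GRPO normalizes rewards within each group before computing advantages, so only the relative ordering and spacing inside a group matter. A quick sketch of that step (standard GRPO math, not code from RULER itself):

```python
import statistics

def group_advantages(rewards: list[float]) -> list[float]:
    # GRPO normalizes rewards within the group, so any consistent
    # within-group ranking produces the same training signal.
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against identical scores
    return [(r - mean) / std for r in rewards]

# e.g. judge scores [0.9, 0.6, 0.3, 0.2] and [0.5, 0.2, -0.1, -0.2]
# yield identical advantages: calibration *across* groups doesn't matter.
```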
RULER comes with a default rubric that works well for many tasks (all the results up-thread were with the default rubric, zero customization!). But you can pass your own rubric to the judge if you want to customize it for your task. The default rubric is simple and general.
Lots of future work. I'm personally excited about:
1. Using RULER as a test-time compute method to improve performance on the fly
2. Using RULER with real user inputs in production to continuously improve performance (online learning)
Hot RL summer continues: we just released Summary-RL, an RL-trained summarization model that reaches SOTA on ServiceNow's RepLiQA summarization benchmark!
Why did we do this? LLMs are already good at generating summaries, but they don't always focus on the information you care about. RL lets you customize a model to focus specifically on the types of data you want to preserve.
By directly optimizing for the number of questions that could be successfully answered from the summary, we taught Summary-RL what kinds of information to include. Within 30 training steps it had already reached SOTA ($22 to train)!
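In other words, the reward is roughly the fraction of benchmark questions that can be answered correctly from the summary alone. A minimal sketch of that idea using an OpenAI-style grader (the model choice and prompt are illustrative, not the actual Summary-RL code):

```python
from openai import OpenAI

client = OpenAI()

def summary_reward(summary: str, qa_pairs: list[tuple[str, str]]) -> float:
    """Reward = fraction of questions an LLM grader can answer from the summary alone."""
    correct = 0
    for question, reference in qa_pairs:
        verdict = client.chat.completions.create(
            model="gpt-4o-mini",  # any capable grader model
            messages=[{
                "role": "user",
                "content": (
                    f"Summary:\n{summary}\n\nQuestion: {question}\n"
                    f"Reference answer: {reference}\n\n"
                    "Can the question be answered correctly using ONLY the summary? "
                    "Reply YES or NO."
                ),
            }],
        )
        if verdict.choices[0].message.content.strip().upper().startswith("YES"):
            correct += 1
    return correct / len(qa_pairs)
```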
🚀 Meet ART·E—our open-source RL-trained email research agent that searches your inbox and answers questions more accurately, faster, and cheaper than o3. Let's go deeper on how we built it. 🧵
We were inspired by OpenAI’s Deep Research, which showed how effective RL can be at teaching an agent a research task. Our goal with ART·E was to replicate similar performance wins using open data and code!
The results exceeded expectations: ART·E surpasses o3 on accuracy, slashes latency 5×, and cuts costs 64×. Turns out RL works really well!
Sharing an important lesson learned from working with hundreds of customers: there’s a big difference in the right way to evaluate and fine-tune LLMs depending on whether your task has one right answer or many. RFT, DPO, RLHF, evals… all downstream of this! 🧵
I’ll call tasks with one correct answer (or just a few) “deterministic.” They include things like:
- Classification
- Structured extraction
- Copilot flows that produce a single action
They’re tasks where you can quickly check if an output is objectively correct.
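For these, evaluation can be as simple as a normalized comparison against the label, for example:

```python
def score_deterministic(output: str, label: str) -> bool:
    # Deterministic tasks: normalize and compare directly against the single correct answer.
    return output.strip().lower() == label.strip().lower()

assert score_deterministic(" Refund_Request ", "refund_request")
```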
On the other hand, “freeform” tasks have infinitely many correct outputs—think:
- Summaries
- Email drafts
- Chatbots
Here, correctness is more subjective. There’s no single “right” answer, and that affects how we measure success.
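One common way to measure success on freeform tasks is a preference win rate: have a judge (an LLM or a human rater) compare your model's output against a baseline's for each prompt and count how often yours wins. A minimal, judge-agnostic sketch:

```python
from typing import Callable

def win_rate(prompts: list[str],
             candidate: Callable[[str], str],
             baseline: Callable[[str], str],
             judge: Callable[[str, str, str], bool]) -> float:
    # Freeform eval: there's no single right answer, so measure how often a judge
    # prefers the candidate's output over the baseline's.
    wins = sum(
        judge(p, candidate(p), baseline(p))  # True if candidate output is preferred
        for p in prompts
    )
    return wins / len(prompts)
```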
A few weeks ago, OpenAI announced Reinforcement Fine-Tuning (RFT)—a new way to adapt LLMs to complex tasks with very little training data. Here’s a quick rundown of how it works, why it’s a big deal, and when you should use it. 🧵
RFT helps a reasoning model (like o1) learn from just a few dozen examples. It's much more data-efficient than standard supervised fine-tuning (SFT), since it teaches the model both the correct answer and how to reason its way to it.
Why does this matter? Because collecting tons of labeled data is still a bottleneck. Cutting that requirement by an order of magnitude (or more) means we can handle complex tasks—even with very small datasets—without drowning in labeling work.
Just launched agent.exe, a free, open-source Mac/Windows/Linux app that lets you use Claude 3.5 Sonnet to control your computer!
This was a fun little project to explore the API and see what the model can do. Computer use is really cool—I expect 2025 will be the year of agents.
Here's agent.exe booking travel on Google Flights. ✈️Claude 3.5 definitely isn't perfect—note that it confidently chooses the wrong dates!
All the code, along with a (still minimal) README for running the app, is available here under an open-source Apache 2 license. This is definitely still research-project quality, but I'd love to see more development happening on top!
Super excited to announce our Mixture of Agents model+FT pipeline: beats GPT-4, but 25x cheaper!
- Humans prefer MoA outputs vs GPT-4 59% of the time
- New SOTA on both Arena-Hard (84.8) and AlpacaEval (LC 68.4)
- Optimized for synthetic data generation for fine-tuning 🧵
The MoA architecture is simple: generate 3 initial GPT-4 completions, have GPT-4 reflect on them, and then have GPT-4 produce a final output based on its deliberations.
This works remarkably well—in practice, the MoA pipeline seems to follow instructions much more consistently and completely than a single-shot GPT-4 prompt.
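For reference, here's a hedged sketch of that three-step flow against the OpenAI chat API (prompts simplified; the real pipeline is more elaborate):

```python
from openai import OpenAI

client = OpenAI()

def mixture_of_agents(prompt: str, n_drafts: int = 3) -> str:
    # Step 1: sample several independent GPT-4 drafts.
    drafts = [
        client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        ).choices[0].message.content
        for _ in range(n_drafts)
    ]

    # Step 2: have GPT-4 critique the candidate drafts.
    critique = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Task: {prompt}\n\nCandidate responses:\n\n"
            + "\n\n---\n\n".join(drafts)
            + "\n\nCritique each candidate's strengths and weaknesses.",
        }],
    ).choices[0].message.content

    # Step 3: produce a final answer informed by the drafts and the critique.
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Task: {prompt}\n\nCandidates:\n\n"
            + "\n\n---\n\n".join(drafts)
            + f"\n\nCritique:\n{critique}\n\nWrite the best possible final response to the task.",
        }],
    ).choices[0].message.content
```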