Kyle Corbitt
Jul 11 · 11 tweets · 4 min read
Big news: we've figured out how to make a *universal* reward function that lets you apply RL to any agent with:
- no labeled data
- no hand-crafted reward functions
- no human feedback!

A 🧵 on RULER
First, our results: small models trained with RULER+GRPO are more reliable than o3 on 4/4 tasks, despite being 1/20th the cost. Surprisingly, they even beat models trained with hand-crafted reward functions on 3/4 tasks.
Why is this a big deal? RL is fantastic at making agents more reliable, but until now every task has required either labeled data or a hand-crafted reward function. Each training pipeline was unique, expensive, and error-prone.

RULER relaxes this requirement, making RL more plug-and-play.
RULER is simple and self-contained. It's a single file that can be dropped into any RL training pipeline. It also comes pre-integrated in our open-source RL framework, ART. (Stars appreciated! 🙏🤩) github.com/OpenPipe/ART/b…
RULER is based on an LLM-as-judge that takes multiple candidate solutions and ranks them _relative to each other_. This is an easier problem than scoring each solution in isolation.
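To make that concrete, here's a minimal sketch of a relative-ranking judge (illustrative only, not RULER's actual code: the prompt wording, judge model, and `judge_group` helper are assumptions):

```python
# Minimal sketch of an LLM judge that scores a *group* of candidate
# trajectories relative to each other. Illustrative only; not RULER's
# actual prompt or implementation.
import json
from openai import OpenAI

client = OpenAI()

DEFAULT_RUBRIC = (
    "Score each candidate solution from 0 to 1 based on how well it "
    "accomplishes the user's goal. Better solutions in this group must "
    "receive higher scores than worse ones."
)

def judge_group(task: str, candidates: list[str], rubric: str = DEFAULT_RUBRIC) -> list[float]:
    """Show the judge every candidate at once and get back one score per candidate."""
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    response = client.chat.completions.create(
        model="gpt-4o",  # any strong judge model
        messages=[
            {"role": "system", "content": rubric},
            {
                "role": "user",
                "content": (
                    f"Task:\n{task}\n\nCandidate solutions:\n{numbered}\n\n"
                    'Reply with JSON: {"scores": [<one float per candidate>]}'
                ),
            },
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)["scores"]
```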
💡 Thanks to GRPO math, we don't need to worry about whether RULER scores are calibrated _across_ different groups, only whether they're calibrated _within_ each group. By showing the judge a full group at a time, it can effectively self-calibrate in practice.
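Roughly why that works: GRPO converts raw scores into advantages by normalizing *within* each group, so a judge that is consistently harsher on one group and more generous on another produces the same training signal (simplified sketch; real GRPO implementations differ in details):

```python
import numpy as np

def grpo_advantages(scores: list[float]) -> np.ndarray:
    """Group-relative advantages: normalize judge scores within one group."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-6)

# A judge that is "harsh" on one group and "generous" on another still
# produces identical advantages, as long as it ranks candidates
# consistently *within* each group:
print(grpo_advantages([0.2, 0.4, 0.6]))  # harsh group
print(grpo_advantages([0.6, 0.8, 1.0]))  # generous group -> same advantages
```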
RULER comes with a default rubric that works well for many tasks (all the results up-thread were with the default rubric, zero customization!). But you can pass your own rubric to the judge if you want to customize it for your task. The default rubric is simple and general.
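For example, a custom rubric for an email-research agent might look something like this (a made-up example, not RULER's default rubric):

```python
# A hypothetical task-specific rubric (illustrative only).
CUSTOM_RUBRIC = """
Rank the candidate answers relative to each other. Prefer answers that:
- cite the specific email(s) the answer came from,
- answer the user's question directly, and
- avoid fabricating senders, dates, or message contents.
"""
```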
Super easy to incorporate. Fully documented here: art.openpipe.ai/fundamentals/r…
Lots of future work. I'm personally excited about:

1. Using RULER as a test-time compute method to improve performance on the fly
2. Using RULER with real user inputs in production to continuously improve performance (online learning)
TONS more info in the launch post, check it out! openpipe.ai/blog/ruler
And if you think this is cool, one more time, please star the repo 🤩 github.com/OpenPipe/ART

More from @corbtt

Jun 25
Hot RL summer continues: we just released Summary-RL, an RL-trained summarization model that reaches SOTA on ServiceNow's RepLiQA summarization benchmark!
Why did we do this? LLMs are already good at generating summaries, but they don't always focus on the information you care about. RL lets you customize a model to focus specifically on the types of data you want to preserve.
By directly optimizing on the number of questions that could be successfully answered from the summary, we taught Summary-RL what kinds of information to include. Within 30 training steps it already reached SOTA! ($22 to train)
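In sketch form, that reward might look something like this (hypothetical helper names; not the actual Summary-RL implementation):

```python
# Illustrative reward: the fraction of benchmark questions a QA model can
# answer correctly using *only* the generated summary.
def summary_reward(summary: str, questions: list[str], answers: list[str],
                   answer_from_summary) -> float:
    """answer_from_summary(summary, question) -> str is any QA model/LLM call."""
    correct = 0
    for question, gold in zip(questions, answers):
        predicted = answer_from_summary(summary, question)
        if gold.strip().lower() in predicted.strip().lower():
            correct += 1
    return correct / max(len(questions), 1)
```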
Apr 29
🚀 Meet ART·E—our open-source RL-trained email research agent that searches your inbox and answers questions more accurately, faster, and cheaper than o3. Let's go deeper on how we built it. 🧵
We were inspired by OpenAI’s Deep Research, which showed how effective RL can be to teach an agent a research task. Our goal with ART·E was to replicate similar performance wins using open data and code!
The results exceeded expectations: ART·E surpasses o3 on accuracy, slashes latency 5×, and cuts costs 64×. Turns out RL works really well!
Jan 15
Sharing an important lesson learned from working with hundreds of customers: there’s a big difference in the right way to evaluate and fine-tune LLMs depending on whether your task has one right answer or many. RFT, DPO, RLHF, evals… all downstream of this! 🧵
I’ll call tasks with one correct answer (or just a few) “deterministic.” They include things like:
- Classification
- Structured extraction
- Copilot flows that produce a single action

They’re tasks where you can quickly check if an output is objectively correct.
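For these, the eval can be as simple as an exact-match check (illustrative sketch):

```python
# Illustrative eval for a "deterministic" task: structured extraction
# with a single correct answer per example.
def exact_match_accuracy(predictions: list[dict], labels: list[dict]) -> float:
    """Fraction of outputs that exactly match the labeled answer."""
    correct = sum(pred == label for pred, label in zip(predictions, labels))
    return correct / max(len(labels), 1)

# Example: extracting fields from an invoice
labels = [{"vendor": "Acme", "total": "42.00"}]
predictions = [{"vendor": "Acme", "total": "42.00"}]
print(exact_match_accuracy(predictions, labels))  # 1.0
```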
On the other hand, “freeform” tasks have infinitely many correct outputs—think:
- Summaries
- Email drafts
- Chatbots

Here, correctness is more subjective. There’s no single “right” answer, and that affects how we measure success.
Dec 30, 2024
A few weeks ago, OpenAI announced Reinforcement Fine-Tuning (RFT)—a new way to adapt LLMs to complex tasks with very little training data. Here’s a quick rundown of how it works, why it’s a big deal, and when you should use it. 🧵
RFT helps a reasoning model (like o1) learn from just a few dozen examples. It's much more data efficient than standard supervised fine-tuning (SFT), since it teaches the model both the correct answer and how to reason its way to it.
Why does this matter? Because collecting tons of labeled data is still a bottleneck. Cutting that requirement by an order of magnitude (or more) means we can handle complex tasks—even with very small datasets—without drowning in labeling work.
Oct 23, 2024
Just launched agent.exe, a free, open-source Mac/Windows/Linux app that lets you use Claude 3.5 Sonnet to control your computer!

This was a fun little project to explore the API and see what the model can do. Computer use is really cool—I expect 2025 will be the year of agents.
Here's agent.exe booking travel on Google Flights. ✈️ Claude 3.5 definitely isn't perfect—note that it confidently chooses the wrong dates!
All the code, as well as a (still minimal) README for running the app, is available here under an open-source Apache 2 license. This is definitely still research-project quality, but I would love to see more development happening on top!

github.com/corbt/agent.exe
Jun 20, 2024
Super excited to announce our Mixture of Agents model+FT pipeline: beats GPT-4, but 25x cheaper!
- Humans prefer MoA outputs vs GPT-4 59% of the time
- New SOTA on both Arena-Hard (84.8) and Alpaca Eval (LC 68.4)
- Optimized for synthetic data generation for fine-tuning 🧵
The MoA architecture is simple: generate 3 initial GPT-4 completions, have GPT-4 reflect on them, and then have GPT-4 produce a final output based on its deliberations.
This works remarkably well—in practice, the MoA model seems to follow instructions much more consistently and completely than just a single-shot prompt.
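Here's a rough sketch of that flow (model name, prompts, and helper functions are illustrative, not the exact pipeline):

```python
# Minimal sketch of the Mixture-of-Agents flow described above:
# 1) sample several initial completions, 2) have the model critique them,
# 3) synthesize a final answer from the drafts + critique.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # illustrative

def chat(messages: list[dict], **kwargs) -> str:
    response = client.chat.completions.create(model=MODEL, messages=messages, **kwargs)
    return response.choices[0].message.content

def mixture_of_agents(prompt: str, n_drafts: int = 3) -> str:
    drafts = [chat([{"role": "user", "content": prompt}], temperature=1.0)
              for _ in range(n_drafts)]
    numbered = "\n\n".join(f"Draft {i + 1}:\n{d}" for i, d in enumerate(drafts))
    critique = chat([{"role": "user", "content":
                      f"Task:\n{prompt}\n\n{numbered}\n\n"
                      "Critique each draft: what is good, what is missing or wrong?"}])
    return chat([{"role": "user", "content":
                  f"Task:\n{prompt}\n\n{numbered}\n\nCritique:\n{critique}\n\n"
                  "Write the best possible final answer, using the drafts and critique."}])
```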