Kyle Corbitt
Currently building @OpenPipeAI. Formerly @ycombinator, @google. I am always down to go on a quest.
Jul 11 · 11 tweets · 4 min read
Big news: we've figured out how to make a *universal* reward function that lets you apply RL to any agent with:
- no labeled data
- no hand-crafted reward functions
- no human feedback!

A 🧵 on RULER

First, our results: small models trained with RULER+GRPO are more reliable than o3 on 4/4 tasks, despite being 1/20th the cost. Surprisingly, they even beat models trained with hand-crafted reward functions on 3/4 tasks.
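The thread doesn't spell out the mechanism here, but the core idea (an LLM judge scores a group of rollouts relative to each other, and those relative scores drive GRPO) can be sketched. A minimal sketch, assuming an OpenAI-style judge model and an illustrative prompt and JSON format; this is not the actual RULER implementation.

```python
# Sketch of a judge-based, labeled-data-free reward feeding GRPO.
# The prompt, JSON format, and judge model are illustrative, not RULER's internals.
import json
from openai import OpenAI

client = OpenAI()

def score_trajectory_group(task: str, trajectories: list[str]) -> list[float]:
    """Ask an LLM judge to score all rollouts for one prompt relative to each other."""
    numbered = "\n\n".join(f"[{i}] {t}" for i, t in enumerate(trajectories))
    response = client.chat.completions.create(
        model="gpt-4o",  # any strong judge model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You grade candidate agent trajectories for the same task. "
                'Return JSON: {"scores": [...]} with one 0-1 score per candidate, '
                "rewarding correctness, efficiency, and instruction-following."
            )},
            {"role": "user", "content": f"Task:\n{task}\n\nCandidates:\n{numbered}"},
        ],
    )
    return json.loads(response.choices[0].message.content)["scores"]

def grpo_advantages(scores: list[float]) -> list[float]:
    """GRPO-style advantage: each rollout's score relative to its group."""
    mean = sum(scores) / len(scores)
    std = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5 or 1.0
    return [(s - mean) / std for s in scores]
```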
Jun 25 · 4 tweets · 2 min read
Hot RL summer continues: we just released Summary-RL, an RL-trained summarization model that reaches SOTA on ServiceNow's Repliqa summarization benchmark!

Why did we do this? LLMs are already good at generating summaries, but they don't always focus on the information you care about. RL lets you customize a model to focus specifically on the types of data you want to preserve.
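One way to make that concrete: reward a summary by whether a downstream reader can still answer known questions from it, which fits a QA-style benchmark like Repliqa. A hedged sketch; the models, prompts, and reward shape below are assumptions, not necessarily what Summary-RL uses.

```python
# Sketch: reward = fraction of reference questions still answerable from the summary.
# Models and prompts are placeholders, not Summary-RL's actual reward.
from openai import OpenAI

client = OpenAI()

def summary_reward(summary: str, qa_pairs: list[tuple[str, str]]) -> float:
    correct = 0
    for question, reference in qa_pairs:
        # Answer the question using only the candidate summary.
        answer = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Answer using ONLY the provided summary."},
                {"role": "user", "content": f"Summary:\n{summary}\n\nQuestion: {question}"},
            ],
        ).choices[0].message.content
        # Let a judge decide whether the answer matches the reference.
        verdict = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": (
                f"Reference answer: {reference}\nCandidate answer: {answer}\n"
                "Reply with exactly CORRECT or INCORRECT."
            )}],
        ).choices[0].message.content
        correct += verdict.strip().upper().startswith("CORRECT")
    return correct / len(qa_pairs)
```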
Apr 29 · 11 tweets · 5 min read
🚀 Meet ART·E—our open-source RL-trained email research agent that searches your inbox and answers questions more accurately, faster, and cheaper than o3. Let's go deeper on how we built it. 🧵

We were inspired by OpenAI’s Deep Research, which showed how effective RL can be at teaching an agent a research task. Our goal with ART·E was to replicate similar performance wins using open data and code!
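For context on what an email research agent does at inference time, here is a bare-bones tool-calling loop: search the inbox, read results, answer. The tool schema, model, and search backend are illustrative assumptions, not ART·E's actual interface.

```python
# Bare-bones inference loop for an inbox-QA agent. Tool schema, model, and the
# search backend are illustrative assumptions, not ART·E's actual interface.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_inbox",
        "description": "Keyword-search the user's inbox; returns matching email snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def search_inbox(query: str) -> str:
    # Placeholder: a real agent would hit the mailbox (IMAP, SQLite full-text search, ...).
    return f"(no mailbox attached; query was: {query!r})"

def answer_question(question: str, max_turns: int = 6) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        msg = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS
        ).choices[0].message
        messages.append(msg)
        if not msg.tool_calls:          # no tool call -> the model answered directly
            return msg.content
        for call in msg.tool_calls:     # run each requested search, feed results back
            result = search_inbox(**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    return "No answer found within the turn budget."
```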
Jan 15 · 14 tweets · 4 min read
Sharing an important lesson learned from working with hundreds of customers: there’s a big difference in the right way to evaluate and fine-tune LLMs depending on whether your task has one right answer or many. RFT, DPO, RLHF, evals… all downstream of this! 🧵

I’ll call tasks with one correct answer (or just a few) “deterministic.” They include things like:
- Classification
- Structured extraction
- Copilot flows that produce a single action

They’re tasks where you can quickly check if an output is objectively correct.
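For deterministic tasks, the eval can be as simple as normalized exact match against a single gold label. A small sketch (the labels are hypothetical):

```python
# Evaluating a "deterministic" task: normalized exact match against one gold label.
def exact_match_accuracy(predictions: list[str], golds: list[str]) -> float:
    def normalize(s: str) -> str:
        return " ".join(s.lower().split())
    hits = sum(normalize(p) == normalize(g) for p, g in zip(predictions, golds))
    return hits / len(golds)

# Example: a 3-way support-ticket classifier.
preds = ["refund", "Refund ", "cancel"]
golds = ["refund", "refund", "escalate"]
print(exact_match_accuracy(preds, golds))  # 0.666...
```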
Dec 30, 2024 · 13 tweets · 3 min read
A few weeks ago, OpenAI announced Reinforcement Fine-Tuning (RFT)—a new way to adapt LLMs to complex tasks with very little training data. Here’s a quick rundown of how it works, why it’s a big deal, and when you should use it. 🧵

RFT helps a reasoning model (like o1) learn from just a few dozen examples. It's much more data-efficient than standard supervised fine-tuning (SFT), since it teaches the model both the correct answer and how to reason about it.
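The key ingredient RFT asks you to supply is a grader that maps each model output to a score in [0, 1]. Here is a toy grader for a structured-extraction task (field-level F1); it illustrates the scoring idea only, since the grader you actually register with OpenAI is configured through their API rather than as a plain Python function like this.

```python
# Toy grader for a structured-extraction task: field-level F1, returned in [0, 1].
def grade_extraction(output: dict, reference: dict) -> float:
    predicted, expected = set(output.items()), set(reference.items())
    if not predicted or not expected:
        return 0.0
    overlap = len(predicted & expected)
    precision, recall = overlap / len(predicted), overlap / len(expected)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

print(grade_extraction(
    {"name": "Acme Corp", "amount": "1200", "currency": "USD"},
    {"name": "Acme Corp", "amount": "1200", "currency": "EUR"},
))  # ≈ 0.67: two of three fields match
```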
Oct 23, 2024 · 5 tweets · 2 min read
Just launched agent.exe, a free, open-source Mac/Windows/Linux app that lets you use Claude 3.5 Sonnet to control your computer!

This was a fun little project to explore the API and see what the model can do. Computer use is really cool—I expect 2025 will be the year of agents.

Here's agent.exe booking travel on Google Flights. ✈️ Claude 3.5 definitely isn't perfect—note that it confidently chooses the wrong dates!
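The shape of a computer-use loop like this is roughly: screenshot in, proposed action out, execute, repeat. A rough sketch against the Anthropic SDK; the tool type and beta identifiers below follow the October 2024 computer-use beta and may have changed, so check the current docs before relying on them.

```python
# Skeleton of a Claude computer-use call (October 2024 beta identifiers).
import anthropic

client = anthropic.Anthropic()

def step(messages: list[dict]):
    return client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=[{
            "type": "computer_20241022",   # built-in computer-use tool
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }],
        messages=messages,
        betas=["computer-use-2024-10-22"],
    )

messages = [{"role": "user", "content": "Open Google Flights and search SFO to JFK next Friday."}]
response = step(messages)
for block in response.content:
    if block.type == "tool_use":
        # The app would execute this action (screenshot, click, type, ...) and send the
        # result back as a tool_result block, looping until Claude says it's done.
        print("Claude wants to:", block.input)
```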
Jun 20, 2024 · 9 tweets · 4 min read
Super excited to announce our Mixture of Agents model+FT pipeline: beats GPT-4, but 25x cheaper!
- Humans prefer MoA outputs vs GPT-4 59% of the time
- New SOTA on both Arena-Hard (84.8) and Alpaca Eval (LC 68.4)
- Optimized for synthetic data generation for fine-tuning 🧵

The MoA architecture is simple: generate 3 initial GPT-4 completions, have GPT-4 reflect on them, and then have GPT-4 produce a final output based on its deliberations.
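That three-step flow translates almost directly into code. A compressed sketch with illustrative prompts and sampling settings (the real pipeline's prompts aren't shown in the thread):

```python
# Compressed sketch of the described MoA flow: 3 candidates -> critique -> final answer.
from openai import OpenAI

client = OpenAI()

def chat(content: str, temperature: float = 0.0) -> str:
    return client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,
        messages=[{"role": "user", "content": content}],
    ).choices[0].message.content

def mixture_of_agents(prompt: str) -> str:
    # 1. Three independent candidate completions (nonzero temperature for diversity).
    candidates = [chat(prompt, temperature=1.0) for _ in range(3)]
    joined = "\n\n".join(f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates))
    # 2. Reflection: critique the candidates' strengths and weaknesses.
    critique = chat(f"Task:\n{prompt}\n\n{joined}\n\nBriefly critique each candidate.")
    # 3. Synthesis: produce the final output based on the deliberations.
    return chat(
        f"Task:\n{prompt}\n\n{joined}\n\nCritique:\n{critique}\n\n"
        "Write the best possible final response."
    )
```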