Kyle Corbitt
Aug 6 · 4 tweets · 2 min read
Announcing MCP•RL: teach your model how to use any MCP server automatically using reinforcement learning!

Just connect any MCP server, and your model will start playing with it and (using RL) "learn from experience" how to use its tools most effectively!
How does it work? When you connect a server, MCP•RL:

1. Queries the server to get a list of its tools
2. Uses a strong model to brainstorm tasks the tools might be useful for
3. Attempts those tasks using the tools
4. Scores the attempts with RULER and improves via RL

In practice, it trains great!
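
For intuition, here's a rough sketch of that loop in Python. This is not ART's actual API: the MCP calls use the official `mcp` Python SDK, while `brainstorm_tasks`, `attempt_task`, `ruler_score`, and `rl_update` are hypothetical placeholders standing in for steps 2–4, and the server command at the bottom is made up.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

def brainstorm_tasks(tools):
    # Hypothetical: in MCP•RL a strong model is prompted with the tool schemas
    # to invent realistic tasks; here we just fabricate one task per tool.
    return [f"Accomplish something useful with the '{t.name}' tool" for t in tools]

async def attempt_task(task, session):
    # Hypothetical: the model being trained would decide which tools to call
    # via session.call_tool(name, arguments) and build up a transcript.
    return {"task": task, "transcript": []}

def ruler_score(rollouts):
    # Hypothetical: RULER has an LLM judge score the rollouts relative to each other.
    return [0.0 for _ in rollouts]

def rl_update(rollouts, rewards):
    # Hypothetical: a GRPO-style policy update on the scored rollouts.
    pass

async def mcp_rl_loop(server_params: StdioServerParameters, steps: int = 10):
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = (await session.list_tools()).tools   # 1. list the server's tools
            tasks = brainstorm_tasks(tools)               # 2. brainstorm tasks
            for _ in range(steps):
                rollouts = [await attempt_task(t, session) for t in tasks]  # 3. attempt them
                rl_update(rollouts, ruler_score(rollouts))  # 4. score with RULER, improve

if __name__ == "__main__":
    # "my-mcp-server" is a hypothetical server command; substitute your own.
    params = StdioServerParameters(command="npx", args=["-y", "my-mcp-server"])
    asyncio.run(mcp_rl_loop(params))
```

In the real project the scoring and policy updates are handled by RULER and ART's trainer rather than these stubs.
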
MCP•RL is fully open source and is released as part of the Agent Reinforcement Trainer (ART) project.

We have an example notebook training Qwen2.5 to use an MCP server here! github.com/OpenPipe/ART?t…
All credit to @dvdcrbt, this was his project 🙂

More from @corbtt

Jul 11
Big news: we've figured out how to make a *universal* reward function that lets you apply RL to any agent with:
- no labeled data
- no hand-crafted reward functions
- no human feedback!

A 🧵 on RULER
First, our results: small models trained with RULER+GRPO are more reliable than o3 on 4/4 tasks, despite being 1/20th the cost. Surprisingly, they even beat models trained with hand-crafted reward functions on 3/4 tasks.
Why is this a big deal? RL is fantastic at making agents more reliable, but until now every task has required either labeled data or a hand-crafted reward function. Each training pipeline was unique, expensive, and error-prone.

RULER relaxes this requirement, making RL more plug-and-play.
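
To make that concrete, here's a minimal sketch of the core idea (not the actual RULER prompt or ART code): an LLM judge scores a group of candidate trajectories for the same task against each other, and those relative scores become the rewards for a GRPO-style update. The judge model and prompt wording here are arbitrary choices for illustration.

```python
import json
from openai import OpenAI

client = OpenAI()

def ruler_style_rewards(task: str, trajectories: list[str], judge_model: str = "gpt-4o") -> list[float]:
    """Score a group of trajectories for the same task relative to each other."""
    numbered = "\n\n".join(f"Trajectory {i + 1}:\n{t}" for i, t in enumerate(trajectories))
    prompt = (
        f"Task: {task}\n\n{numbered}\n\n"
        "Judge how well each trajectory accomplishes the task, comparing them against "
        "each other. Return only a JSON list of floats between 0 and 1, one per trajectory."
    )
    response = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the judge returns clean JSON; real code would validate and retry.
    return json.loads(response.choices[0].message.content)
```

Because the scores only need to be consistent within the group, a GRPO-style trainer can normalize them into advantages without labeled data or task-specific reward code.
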
Jun 25
Hot RL summer continues: we just released Summary-RL, an RL-trained summarization model that reaches SOTA on ServiceNow's RepLiQA summarization benchmark!
Why did we do this? LLMs are already good at generating summaries, but they don't always focus on the information you care about. RL lets you customize a model to focus specifically on the types of data you want to preserve.
By directly optimizing for the number of questions that could be successfully answered from the summary, we taught Summary-RL what kinds of information to include. Within 30 training steps it had already reached SOTA! ($22 to train)
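
A hedged sketch of that reward signal (illustration only, not the actual Summary-RL code): the reward for a candidate summary is the fraction of the document's benchmark questions that a QA model answers correctly from the summary alone. The QA model choice and the containment check below are stand-ins.

```python
from openai import OpenAI

client = OpenAI()

def answer_from_summary(summary: str, question: str, qa_model: str = "gpt-4o") -> str:
    # qa_model is an arbitrary choice for illustration.
    response = client.chat.completions.create(
        model=qa_model,
        messages=[{
            "role": "user",
            "content": (
                "Answer the question using only the summary below.\n\n"
                f"Summary:\n{summary}\n\nQuestion: {question}"
            ),
        }],
    )
    return response.choices[0].message.content

def summary_reward(summary: str, qa_pairs: list[tuple[str, str]]) -> float:
    """Fraction of reference answers recoverable from the summary alone."""
    correct = 0
    for question, reference in qa_pairs:
        prediction = answer_from_summary(summary, question)
        # Crude containment check as a stand-in for a proper answer grader.
        if reference.lower() in prediction.lower():
            correct += 1
    return correct / len(qa_pairs) if qa_pairs else 0.0
```
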
Apr 29
🚀 Meet ART·E—our open-source RL-trained email research agent that searches your inbox and answers questions more accurately, faster, and cheaper than o3. Let's go deeper on how we built it. 🧵
We were inspired by OpenAI’s Deep Research, which showed how effective RL can be to teach an agent a research task. Our goal with ART·E was to replicate similar performance wins using open data and code!
The results exceeded expectations: ART·E surpasses o3 on accuracy, slashes latency 5×, and cuts costs 64×. Turns out RL works really well!
Jan 15
Sharing an important lesson learned from working with hundreds of customers: there’s a big difference in the right way to evaluate and fine-tune LLMs depending on whether your task has one right answer or many. RFT, DPO, RLHF, evals… all downstream of this! 🧵
I’ll call tasks with one correct answer (or just a few) “deterministic.” They include things like:
- Classification
- Structured extraction
- Copilot flows that produce a single action

They’re tasks where you can quickly check if an output is objectively correct.
On the other hand, “freeform” tasks have infinitely many correct outputs—think:
- Summaries
- Email drafts
- Chatbots

Here, correctness is more subjective. There’s no single “right” answer, and that affects how we measure success.
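
To make the distinction concrete, here's a minimal sketch (not from the original thread): a deterministic task can be graded by direct comparison, while a freeform task typically needs a rubric and an LLM judge. The judge model and prompt are arbitrary choices for illustration.

```python
from openai import OpenAI

client = OpenAI()

def grade_deterministic(prediction: str, reference: str) -> float:
    """Classification / extraction / single-action flows: the output is right or wrong."""
    return 1.0 if prediction.strip().lower() == reference.strip().lower() else 0.0

def grade_freeform(output: str, rubric: str, judge_model: str = "gpt-4o") -> float:
    """Summaries / drafts / chat: score against a rubric with a judge model."""
    response = client.chat.completions.create(
        model=judge_model,
        messages=[{
            "role": "user",
            "content": (
                f"Rubric:\n{rubric}\n\nResponse:\n{output}\n\n"
                "Score the response from 0 to 1 against the rubric. Reply with just the number."
            ),
        }],
    )
    return float(response.choices[0].message.content.strip())
```

That difference cascades into everything downstream: deterministic graders give you cheap, exact rewards for RFT-style training, while freeform tasks push you toward preference data or judge-based rewards.
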
Dec 30, 2024
A few weeks ago, OpenAI announced Reinforcement Fine-Tuning (RFT)—a new way to adapt LLMs to complex tasks with very little training data. Here’s a quick rundown of how it works, why it’s a big deal, and when you should use it. 🧵
RFT helps a reasoning model (like o1) learn from just a few dozen examples. It's much more data-efficient than standard supervised fine-tuning (SFT), since it teaches the model both the correct answer and how to reason about it.
Why does this matter? Because collecting tons of labeled data is still a bottleneck. Cutting that requirement by an order of magnitude (or more) means we can handle complex tasks—even with very small datasets—without drowning in labeling work.
Oct 23, 2024
Just launched agent.exe, a free, open-source Mac/Windows/Linux app that lets you use Claude 3.5 Sonnet to control your computer!

This was a fun little project to explore the API and see what the model can do. Computer use is really cool—I expect 2025 will be the year of agents.
Here's agent.exe booking travel on Google Flights. ✈️ Claude 3.5 definitely isn't perfect—note that it confidently chooses the wrong dates!
All the code, along with a (still minimal) README for running the app, is available here under an open-source Apache 2 license. This is definitely still research-project quality, but I'd love to see more development happening on top!

github.com/corbt/agent.exe
