Rohan Paul · Sep 11
Fantastic paper from ByteDance 👏

Shows how to train LLM agents to finish long, multi-step tasks by letting them act in real environments with reinforcement learning.

Across 27 tasks, the trained agents rival or beat top proprietary models.

Most agents are trained on single-turn data, so they fail when a job needs many decisions with noisy feedback.

AgentGym-RL splits the system into separate parts, the environments, the agent loop, and training, so each can improve on its own.

It supports mainstream algorithms and realistic tasks, and the agent learns by acting, seeing results, and adjusting across different settings.

The key method, ScalingInter-RL, starts with short interactions to master basics, then slowly allows longer runs so the agent can explore and plan.

This staged horizon schedule stabilizes learning, prevents pointless loops, and encourages planning, reflection, and recovery after mistakes.

A 7B model trained with this setup matches or beats much larger open models and competes well with strong commercial ones.

They also find that putting more compute into training and test-time interaction, like more steps or samples, often helps more than adding parameters.
How the AgentGym-RL framework works.

At the center is the LLM agent. It takes an instruction, interacts with an environment for several turns, and then produces actions. Each action changes the environment, and the environment sends feedback back to the agent. This cycle repeats many times.

The environment itself is handled by a server that can simulate different types of tasks. These include web browsing, searching, coding, playing games, doing science tasks, or controlling embodied agents. The environment client manages the interaction and communicates through standard protocols.

Every full cycle of actions and observations is called a trajectory. These trajectories are collected and then used to update the agent’s policy with reinforcement learning algorithms like PPO, GRPO, RLOO, or REINFORCE++.
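To make that loop concrete, here is a minimal sketch of what one rollout-and-update cycle could look like. The environment client, reward shape, and `update_policy` call are illustrative assumptions, not the framework's actual API.

```python
# Minimal sketch of one rollout-and-update cycle in an AgentGym-RL-style loop.
# The environment client, reward shape, and `update_policy` call are
# illustrative assumptions, not the framework's actual API.

def collect_trajectory(agent, env, instruction, max_turns):
    observation = env.reset(instruction)               # server-hosted environment
    trajectory = []                                    # (observation, action, reward) tuples
    for _ in range(max_turns):
        action = agent.act(observation)                # LLM proposes the next action
        observation, reward, done = env.step(action)   # environment feedback
        trajectory.append((observation, action, reward))
        if done:
            break
    return trajectory

def train(agent, env, tasks, max_turns=10, iterations=100):
    for _ in range(iterations):
        batch = [collect_trajectory(agent, env, task, max_turns) for task in tasks]
        # Any of the supported algorithms (PPO, GRPO, RLOO, REINFORCE++) would
        # consume this batch of trajectories to update the agent's policy.
        agent.update_policy(batch)
```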

The framework is modular. The environment, the agent, and the training part are separated. This makes it flexible, easy to extend, and suitable for many types of realistic tasks.

The diagram highlights how the agent learns not by memorizing answers, but by trying actions, getting feedback, and improving its decision making across different domains.
The idea behind ScalingInter-RL, the training method used in the paper.

If an agent is trained with only short interactions, it learns to handle easy tasks but fails on harder ones. If it is trained with very long interactions from the start, it wastes effort, falls into repeated mistakes, or even collapses and performs poorly.

ScalingInter-RL solves this by gradually increasing the number of interaction steps during training. At first, the agent works with short horizons to master the basics and build reliable skills.

Then, the horizon is expanded in stages, letting the agent explore more, refine its behavior, and learn how to recover from errors.

By the final stages, the agent can manage long, complex tasks because it has grown its abilities step by step instead of being overloaded too early. This staged process makes training stable and produces stronger agents.
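A minimal sketch of what such a staged horizon schedule could look like; the stage boundaries and horizon values below are invented for illustration and will differ from the paper's actual settings.

```python
# Sketch of a staged horizon schedule: cap the number of interaction turns per
# episode and raise the cap as training progresses. Stage boundaries and horizon
# values are invented for illustration.

def interaction_horizon(training_step, stages=((0, 5), (2000, 10), (5000, 20))):
    """Return the maximum number of turns allowed at this training step."""
    horizon = stages[0][1]
    for start_step, max_turns in stages:
        if training_step >= start_step:
            horizon = max_turns
    return horizon

# Each rollout is then truncated at the current horizon, e.g.:
# trajectory = collect_trajectory(agent, env, task, interaction_horizon(step))
```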
Paper: "AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning"

arxiv.org/abs/2509.08755


More from @rohanpaul_ai

Sep 9
📢 Another brilliant piece of research just dropped from @GoogleResearch, a major advance toward systematically generating expert-level scientific software.

An LLM plus tree search turns scientific coding into a score-driven search engine.

This work builds an LLM + Tree Search loop that writes and improves scientific code by chasing a single measurable score for each task.

The key idea is to treat coding for scientific tasks as a scorable search problem.

That means every candidate program can be judged by a simple numeric score, like how well it predicts, forecasts, or integrates data. Once you have a clear score, you can let an LLM rewrite code again and again, run the code in a sandbox, and use tree search to keep the best branches while discarding weaker ones.
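A simplified sketch of that score-driven loop, with `llm_rewrite` and `run_in_sandbox` passed in as placeholders for the model call and the sandboxed evaluator; the real system uses a more sophisticated tree-search policy than this greedy best-first version.

```python
import heapq

# Simplified sketch of the score-driven search: keep a priority queue of candidate
# programs, ask the LLM to rewrite the most promising one, score the rewrite in a
# sandbox, and push both back. `llm_rewrite` and `run_in_sandbox` are placeholders
# supplied by the caller.

def search(seed_code, research_ideas, llm_rewrite, run_in_sandbox, budget=200):
    best_code, best_score = seed_code, run_in_sandbox(seed_code)
    frontier = [(-best_score, seed_code)]              # max-heap via negated scores
    for _ in range(budget):
        neg_score, parent = heapq.heappop(frontier)    # expand the best branch so far
        child = llm_rewrite(parent, research_ideas)    # LLM proposes an improved program
        child_score = run_in_sandbox(child)            # one numeric quality score per task
        if child_score > best_score:
            best_code, best_score = child, child_score
        heapq.heappush(frontier, (-child_score, child))
        heapq.heappush(frontier, (neg_score, parent))  # parent stays explorable
    return best_code, best_score
```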

With compact research ideas injected into the prompt, the system reaches expert level and beats strong baselines across biology, epidemiology, geospatial, neuroscience, time series, and numerical methods.

Training speed: less than 2 hours on 1 T4 vs 36 hours on 16 A100s.

In bioinformatics, it came up with 40 new approaches for single-cell data analysis that beat the best human-designed methods on a public benchmark.

In epidemiology, it built 14 models that set state-of-the-art results for predicting COVID-19 hospitalizations.

🧵 Read on 👇
🧵2/n. ⚙️ The Core Concepts

Empirical software is code built to maximize a quality score on observed data, and any task that fits this framing becomes a scorable task.

This view turns software creation into a measurable search problem, because every candidate program is judged by the same numeric target.

This framing also explains why the method can travel across domains, since only the scoring function changes.
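A toy example of what makes a task scorable: a forecasting program is reduced to a single number, which is all the search loop above needs to compare candidates. The metric choice (negated mean absolute error) is just an illustration.

```python
# Toy example of a scorable task: a forecasting program is reduced to a single
# number (negated mean absolute error on held-out data), so higher is better and
# the search loop above can compare any two candidate programs directly.

def score_forecaster(predictions, targets):
    """Both arguments are equal-length lists of floats."""
    errors = [abs(p - t) for p, t in zip(predictions, targets)]
    return -sum(errors) / len(errors)   # negated MAE: larger means better
```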
🧵3/n. This figure breaks down how the system works.

The top-left part shows the workflow. A scorable problem and some research ideas are given to an LLM, which then generates code. That code is run in a sandbox to get a quality score. Tree search is used to decide which code branches to keep improving, balancing exploration of new ideas with exploitation of ones that already look promising.

On the right, different ways of feeding research ideas into the system are shown. Ideas can come from experts writing direct instructions, from scientific papers that are summarized, from recombining prior methods, or from LLM-powered deep research. These sources make the search more informed and help the model produce stronger, more competitive solutions.

So overall, the loop of tree search plus targeted research ideas turns an LLM from a one-shot code generator into a system that steadily climbs toward expert-level performance.
Sep 9
Fei-Fei Li (@drfeifei) on limitations of LLMs.

"There's no language out there in nature. You don't go out in nature and there's words written in the sky for you.. There is a 3D world that follows laws of physics."

Language is purely generated signal.

AI models trained on linguistic signals fail when the task requires embodied physical common sense in a world with real constraints.
To give some context to her point, this benchmark evaluates 75 vision-language models and shows they still struggle with physical-world understanding.

The paper attributes the failures to missing physical priors and limited exposure to physically grounded data. Even with images and text, the models lack robust knowledge of object properties and dynamics, reinforcing that linguistic data is not the same as contact with a law-governed world.
Sep 8
BRILLIANT paper.

LLMs get stuck when they think too long along a single line: early tokens steer them into a narrow path and they rarely recover, which the authors call Tunnel Vision.

ParaThinker trains native parallel thinking: it spins up multiple distinct reasoning paths at once and then fuses them into 1 answer, which lifts accuracy a lot at a tiny latency cost.

Sensational fact, if you only keep 1 thing: 12.3% average gain for 1.5B, 7.5% for 7B, with only 7.1% extra latency.

ParaThinker shows that training LLMs to think in parallel paths instead of just longer single chains avoids tunnel vision, giving up to 12.3% accuracy gains with only 7.1% extra latency, letting smaller models beat much larger ones.
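A control-flow sketch of the idea, assuming a generic `generate` call; ParaThinker itself uses trained special tokens and a shared KV cache rather than plain prompting, so this only mirrors the shape of the method.

```python
# Control-flow sketch of parallel thinking: sample several distinct reasoning
# paths for the same question, then run one fusion pass over all of them to get
# the final answer. `generate` is a placeholder text-generation call; ParaThinker
# itself uses trained special tokens and a shared KV cache, not plain prompting.

def parallel_think(generate, question, num_paths=4):
    paths = [
        generate(f"{question}\n[reasoning path {i + 1}]", temperature=1.0)
        for i in range(num_paths)                       # independent, diverse paths
    ]
    fusion_prompt = (
        question
        + "\n\nCandidate reasoning:\n"
        + "\n---\n".join(paths)
        + "\n\nCombine the above into one final answer."
    )
    return generate(fusion_prompt, temperature=0.0)     # single fused answer
```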

🧵 Read on 👇
🧵2/n. 🧩 Why longer thinking stalls

When the model makes a mistake early on, it keeps building on that mistake.

The longer it goes down that wrong path, the less chance it has to recover.

This stuck behavior is what the authors call Tunnel Vision, and it explains why just letting the model think longer doesn’t always improve accuracy.
🧵3/n. 🚀 Why parallel width helps

The real slowdown in decoding comes from moving data in and out of memory, not from doing the math.

When the model runs several reasoning paths in parallel, it reuses the same memory loads for more work.

Even running 16 paths at once takes less than 2x the time of a single path, so parallel thinking is both faster and more accurate.
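A back-of-the-envelope illustration of why batching paths is cheap, using assumed numbers (7B parameters in fp16, roughly 1 TB/s of effective bandwidth): each decode step streams the weights once, and that traffic is shared across every path in the batch.

```python
# Back-of-the-envelope estimate with assumed numbers: a 7B-parameter model in
# fp16 streamed over ~1 TB/s of effective memory bandwidth. Each decode step must
# read the weights once, and that read is shared by every path in the batch.

weight_bytes = 7e9 * 2                        # 7B parameters x 2 bytes (fp16)
bandwidth_bytes_per_s = 1e12                  # ~1 TB/s effective memory bandwidth
step_time = weight_bytes / bandwidth_bytes_per_s
print(f"single-path decode step ≈ {step_time * 1e3:.0f} ms")   # ~14 ms

# With 16 paths, the weight traffic is unchanged and only the per-path KV-cache
# reads grow, so the step time rises far less than 16x.
```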
Sep 8
Another great @GoogleDeepMind paper.

Shows how to speed up LLM agents while cutting cost and keeping answers unchanged.

30% lower total cost and 60% less wasted cost at comparable acceleration.

Agents plan step by step, so each call waits for the previous one, which drags latency.

Speculative planning fixes that by having a cheap draft agent guess next steps while a stronger agent checks them in parallel.

Fixed guess lengths backfire: small guesses barely help, big guesses waste tokens when a check disagrees.

Dynamic Speculative Planning learns how far to guess, then stops early to avoid wasted calls.

A tiny online predictor learns how many steps will be right using reinforcement learning.

1 knob lets teams bias for speed or cost, either by skewing training or adding a small offset.

If a guess is wrong, extra threads stop and execution resumes from the verified step.

Across OpenAGI and TravelPlanner, the dynamic policy matches the fastest fixed policy while spending fewer tokens.

The result is clear: faster responses, lower bills, and 0 loss in task quality.
How Dynamic Speculative Planning manages when and how far to guess ahead during an agent’s planning.

The top line called Predictor decides how many future steps to guess, marked by k. For example, k=2 means guess 2 steps ahead, while k=3 means guess 3 steps ahead. These guesses are carried out by a lighter agent called Approximation, and then checked in parallel by a stronger agent called Target.

If the guesses match the stronger agent, they are confirmed and execution continues. If they don’t match, shown with an X, all ongoing speculative threads are canceled, and the system resumes from the last correct step. This prevents wasted work from wrong guesses.

At the same time, an online Trainer collects data about each state and the chosen k. This data is then used to update the Predictor so it learns better over time without slowing down the agent. In other words, the system keeps improving its ability to guess how far it can safely look ahead.

So overall, the figure captures this cycle: make a guess, verify, cancel if wrong, and then use that experience to improve the predictor for the next run.
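A rough sketch of that speculate-verify-cancel cycle; the predictor, agents, state-transition function, and trainer are placeholders, not the paper's implementation, and in the real system verification of the drafted steps runs in parallel rather than sequentially as written here.

```python
# Rough sketch of the speculate-verify-cancel cycle. The predictor, draft and
# target agents, state-transition function, and trainer are all placeholders;
# in the real system verification of the drafted steps runs in parallel.

def speculative_plan(predictor, draft_agent, target_agent, apply_step, state, trainer):
    k = predictor.choose_k(state)                     # how many steps to guess ahead
    drafts = draft_agent.plan(state, steps=k)         # cheap agent drafts k steps
    verified_steps = []
    for draft in drafts:
        verified = target_agent.plan_one(state)       # stronger agent checks the step
        verified_steps.append(verified)
        state = apply_step(state, verified)           # always keep the verified step
        if verified != draft:                         # mismatch: cancel remaining drafts
            break                                     # and resume from the verified step
    trainer.record(state, k, len(verified_steps))     # online data to improve the predictor
    return state, verified_steps
```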
Why a fixed number of speculative steps can be either too cautious or too aggressive.

On the left side, the system guesses only 2 steps ahead. Because it does not speculate far, it avoids wasted work, but the total task takes longer since the process is not sped up much.

On the right side, the system guesses 6 steps ahead. This makes things faster at first, but when the stronger agent disagrees at step 4, everything predicted after that point becomes useless. Steps 5 and 6 are wasted, which means extra cost without benefit.

So the main point is that small guesses save resources but barely speed things up, while large guesses speed things up but waste a lot of work when they go wrong. This shows why a fixed guessing strategy is not efficient and why an adaptive method is needed.
Sep 6
OpenAI released a new paper.

"Why language models hallucinate"

Simple answer: LLMs hallucinate because training and evaluation reward guessing instead of admitting uncertainty.

The paper puts this on a statistical footing with simple, test-like incentives that reward confident wrong answers over honest “I don’t know” responses.

The fix is to grade differently, give credit for appropriate uncertainty and penalize confident errors more than abstentions, so models stop being optimized for blind guessing.

OpenAI is showing that 52% abstention gives substantially fewer wrong answers than 1% abstention, proving that letting a model admit uncertainty reduces hallucinations even if accuracy looks lower.

Abstention means the model refuses to answer when it is unsure and simply says something like “I don’t know” instead of making up a guess.

Hallucinations drop because most wrong answers come from bad guesses. If the model abstains instead of guessing, it produces fewer false answers.
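A sketch of the grading change the paper argues for: partial credit for admitting uncertainty and a larger penalty for confident errors than the usual 1/0 scheme. The exact credit and penalty values here are illustrative assumptions.

```python
# Sketch of uncertainty-aware grading: partial credit for abstaining, a larger
# penalty for confident wrong answers than for saying "I don't know". The exact
# credit and penalty values are illustrative assumptions.

def grade(answer, gold, idk="I don't know", correct=1.0, abstain=0.3, wrong=-1.0):
    if answer.strip().lower() == idk.lower():
        return abstain                                 # honest uncertainty earns partial credit
    return correct if answer.strip() == gold else wrong   # confident errors cost the most
```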

🧵 Read on 👇
🧵2/n. This figure shows the idea of Is-It-Valid.

On the left side, you see examples. Some are valid outputs (in black), and others are errors (in red). Valid examples are simple and correct statements like “There are 2 D’s in LADDER” or “I don’t know Zdan’s birthday.” Error examples are things that look fluent but are wrong, like “There are 3 L’s in SPELL” or giving random birthdays.

The diagrams on the right show why errors happen differently depending on the task. For spelling, the model can learn clear rules, so valid and invalid answers separate cleanly. For counting, the model is weaker, so valid and invalid mix more. For birthdays, there is no real pattern in the data at all, so the model cannot separate correct from incorrect—this is why hallucinations occur on such facts.

So the figure proves: when there is a clear pattern (like spelling), the model learns it well. When the task has weak or no pattern (like birthdays), the model produces confident but wrong answers, which are hallucinations.
🧵3/n. ⚙️ The Core Concepts

The paper’s core claim is that standard training and leaderboard scoring reward guessing over acknowledging uncertainty, which statistically produces confident false statements even in very capable models.

Models get graded like students on a binary scale, 1 point for exactly right, 0 for everything else, so admitting uncertainty is dominated by rolling the dice on a guess that sometimes lands right.

The blog explains this in plain terms and also spells out the 3 outcomes that matter on single-answer questions: accurate answers, errors, and abstentions, with abstentions being better than errors for trustworthy behavior.
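A worked expected-score comparison, assuming a 20% chance that a blind guess lands on the right answer, to show why binary grading makes guessing dominate while uncertainty-aware grading flips the incentive.

```python
# Worked expected-score comparison, assuming a blind guess is right 20% of the time.

p_right = 0.2

# Binary grading: 1 for correct, 0 for everything else -> guessing always wins.
binary_guess = p_right * 1 + (1 - p_right) * 0        # 0.2
binary_abstain = 0.0

# Uncertainty-aware grading (values from the sketch above) -> abstaining wins.
aware_guess = p_right * 1.0 + (1 - p_right) * -1.0    # -0.6
aware_abstain = 0.3

print(binary_guess > binary_abstain)   # True: the old scheme rewards guessing
print(aware_abstain > aware_guess)     # True: the new scheme rewards honesty
```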
Sep 4
AWS is betting heavily on its custom Trainium chips, with Anthropic as the anchor customer, to regain momentum in the AI cloud race.

~ A solid SemiAnalysis report.

AWS is building multi-gigawatt data centers packed with Trainium2 hardware, designed to give a better cost per unit of memory bandwidth compared to Nvidia GPUs.

And this memory-vs-compute tradeoff has become super important because much advanced AI work, especially reinforcement learning and reasoning-heavy training, is less about raw compute and more about how quickly and cheaply memory can be moved.

🧩 Anthropic has become AWS’s anchor customer for AI capacity.

Anthropic, which has grown revenue to $5B annualized in 2025, is deeply tied into this effort, even co-designing features of Trainium to match its roadmap. That makes Trainium increasingly look like semi-custom silicon tuned for Anthropic’s workloads.

Azure’s surge shows why an anchor matters, since OpenAI’s ~$10B cloud spend lives there today.

"Trainium2 is converging toward an Anthropic custom-silicon program. This will enable Anthropic to be, alongside Google DeepMind, the only AI labs benefiting from tight hardware–software co-design in the near horizon."

🧵 Read on 👇
🧵2/n. 🏗️ AWS is finishing 3 campuses with over 1.3GW of IT capacity focused on Anthropic’s training runs.

SemiAnalysis expects these clusters to lift AWS growth above 20% YoY as they enter service.
🧵3/n. 🔁 Most of Anthropic’s fast‑rising inference still runs on Google TPU, while AWS is chasing the training pie.

TPUs have strong serving efficiency, but Anthropic wants training scale where its roadmap leans hardest on memory bandwidth.