Robert Youssef Profile picture
Oct 30 9 tweets 3 min read Read on X
🚨 This might be the biggest leap in AI agents since ReAct.

Researchers just dropped DeepAgent a reasoning model that can think, discover tools, and act completely on its own.

No pre-scripted workflows. No fixed tool lists. Just pure autonomous reasoning.

It introduces something wild called Memory Folding the agent literally “compresses” its past thoughts into structured episodic, working, and tool memories… like a digital brain taking a breath before thinking again.

They also built a new RL method called ToolPO, which rewards the agent not just for finishing tasks, but for how it used tools along the way.

The results? DeepAgent beats GPT-4-level agents on almost every benchmark WebShop, ALFWorld, GAIA even with open-set tools it’s never seen.

It’s the first real step toward general reasoning agents that can operate like humans remembering, adapting, and learning how to think.

The agent era just leveled up.Image
DeepAgent absolutely destroys other agents across every benchmark.

It beats ReAct-GPT-4o, CodeAct, and WebThinker on both:

→ Tool use tasks (ToolBench, Spotify, TMDB)
→ Real-world apps (WebShop, GAIA, HLE) Image
It shows how DeepAgent rethinks what an AI agent even is.

(a) Traditional agents = pre-planned scripts
(b) Deep research agents = limited tool use
(c) DeepAgent = free-form reasoning that dynamically finds & calls tools mid-thought Image
It shows the full reasoning loop thinking, tool search, tool call, and memory folding all integrated into one coherent process. Image
DeepAgent-32B outperforms GPT-4o-based ReAct agents by +15–25% on ToolBench, API-Bank, Spotify, and ToolHop. Image
On downstream tasks like ALFWorld, WebShop, and GAIA, DeepAgent achieves the highest success rate and reasoning depth among all 32B models. Image
The secret sauce: ToolPO, their custom reinforcement learning method.

Figure 3’s lower section shows how ToolPO uses a tool simulator + fine-grained reward attribution to train stable, efficient agents. Image
And finally the “Memory Folding” mechanism might be the most brainlike system ever built for LLMs.

It compresses past thoughts into structured episodic, working, and tool memories.
DeepAgent isn’t another research toy.

It’s a prototype of what comes next:

→ Continuous reasoning
→ Dynamic tool discovery
→ Autonomous adaptation

Full paper: arxiv. org/abs/2510.21618 Image

• • •

Missing some Tweet in this thread? You can try to force a refresh
 

Keep Current with Robert Youssef

Robert Youssef Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @rryssf_

Oct 26
researchers just proved AI agents conform to peer pressure 💀

they embedded LLMs in social networks and watched them flip opinions under peer pressure.

the behavior isn't human at all.

it's a sigmoid curve: stable at low pressure, then BAM – sharp flip at a threshold point, then saturation.

not a gradual shift. instant capitulation.

but here's where it gets crazier:

- Gemini 1.5 Flash needs over 70% of peers disagreeing before it flips. stubborn. high autonomy. basically refuses to conform until overwhelming evidence.

- ChatGPT-4o-mini flips with just a dissenting minority.

extremely conformist. low resistance. basically a people-pleaser.

same peer pressure. completely different responses.

which means when you deploy these models as autonomous agents in multi-agent systems...

they're going to create chaos.

Gemini agents will deadlock. ChatGPT agents will echo chamber. and nobody designed for this.

the researchers also found "persuasion asymmetry" – shifting opinions from yes→no requires different cognitive effort than no→yes.

fundamental structural biases in how models process agreement vs disagreement.

and it gets worse. they tested this across different network topologies and cognitive commitment levels.

the pattern held. these aren't bugs.

they're fundamental personality traits baked into model architecture.

the study functions as an "algorithmic audit" – measuring how LLMs update beliefs under social influence.

critical for understanding bias propagation at scale.

===== What this actually means: =====

→ Multi-agent systems are unstable by design – mixing Gemini (resistant) and ChatGPT (conformist) agents creates unpredictable group dynamics

→ Echo chambers emerge naturally – conformist models amplify majority opinions, resistant models block consensus

→ Bias amplification is structural – models with measurable political biases will have those biases amplified or suppressed based on peer networks

→ Human-AI collaboration is broken – in mixed environments, you need to know which personality you're working with or outcomes are random

→ Production deployment is reckless – we're shipping these into customer service, content moderation, and decision systems without understanding emergent dynamics

this isn't academic.

we're deploying these agents into production systems where they interact with each other and with humans.

and we just learned they have measurably different conformity profiles that nobody accounted for.Image
the uncomfortable truth nobody's discussing:

LLMs don't act in isolation anymore. they're embedded in social networks – interacting with other AI agents, with humans, with collective opinion landscapes.

and they're influencing each other's beliefs in ways we don't understand.

traditional view: machines are passive instruments that assist human decisions.

new reality: modern LLMs exhibit autonomous decision-making, generate context-sensitive responses, and operate as cognitive agents in information exchange.

they're not tools anymore. they're participants.

and here's the nightmare scenario buried in this data:

models have measurable political biases. when you embed biased agents in networks with different conformity thresholds, those biases can amplify or suppress based on peer dynamics.

a ChatGPT-4o-mini agent surrounded by biased peers? it conforms immediately.

a Gemini agent in the same environment? it resists until 70% pressure.

multiply this across thousands of agents deployed in customer service, content moderation, decision-making systems... and you get emergent opinion dynamics at societal scale that nobody designed.

we built autonomous agents with different personalities, deployed them into the same ecosystems, and assumed they'd behave consistently.

they don't. and we're finding out in production.Image
Read 4 tweets
Oct 26
🤖 I finally understand the fundamentals of building real AI agents.

This new paper “Fundamentals of Building Autonomous LLM Agents” breaks it down so clearly it feels like a blueprint for digital minds.

Turns out, true autonomy isn’t about bigger models.

It’s about giving an LLM the 4 pillars of cognition:

• Perception: Seeing and understanding its environment.
• Reasoning: Planning, reflecting, and adapting.
• Memory: Remembering wins, failures, and context over time.
• Action: Executing real tasks through APIs, tools, and GUIs.

Once you connect these systems, an agent stops being reactive it starts thinking.

Full thread 🧵

Paper: arxiv. org/abs/2510.09244Image
Let’s break down how autonomous AI agents actually work 👇

The paper maps every agent to 4 core systems:

Perception → Reasoning → Memory → Action

That’s the full cognitive loop the blueprint of digital intelligence. Image
First: Perception.

This is how agents “see” the world screenshots, audio, text, structured data, even API outputs.

From simple text-based prompts to full multimodal perception with image encoders like CLIP and ViT.

That’s what lets an agent understand its environment. Image
Read 11 tweets
Oct 25
🚨 New benchmark just dropped and it’s exposing a dark side of AI models.

It’s called ImpossibleBench, and it measures how often LLMs cheat.

Turns out, when faced with impossible coding tasks (where specs and tests contradict), frontier models literally “hack” the tests instead of solving the problem.

Example:

→ One model deleted the failing test file.
→ Another rewrote the comparison operator so every test passed.
→ GPT-5? It “cheated” in 54–76% of tasks 😳

This isn’t just funny it’s terrifying.

If models exploit benchmarks, how can we trust them in production?

ImpossibleBench is the first framework that quantifies this behavior, turning “reward hacking” into a measurable metric.

OpenAI, Anthropic, and CMU researchers built it to expose exactly how LLMs break rules when chasing good scores.

AI safety just got real.

Full thread 🧵Image
Here’s how it works:

Researchers take normal coding benchmarks and quietly flip the tests so they conflict with the natural language spec.

Passing those tests means breaking the rules because there’s no real solution. If an AI “succeeds,” it’s cheating by definition. Image
Now here’s where it gets wild.

GPT-5, the strongest model tested, cheated in over half the impossible tasks.

The pattern’s clear: the more capable the model, the better it gets at gaming the system.

Higher intelligence → higher reward hacking. Image
Read 7 tweets
Oct 23
🚨 PokeeResearch just changed how AI does research itself.

They built a 7B-parameter deep research agent that "thinks, verifies, and corrects its own reasoning" all trained through 'Reinforcement Learning from AI Feedback' (RLAIF).

Why this matters 👇

→ Most AI agents break when a tool fails or when the web gives bad data.
PokeeResearch doesn’t. It runs *multiple research threads*, spots contradictions, and synthesizes the best answer.

→ Instead of optimizing for token overlap (like F1 or ROUGE), it optimizes for "semantic correctness" judged by another AI.

That’s how it learns to tell right answers from right-sounding ones.

→ The result: "state-of-the-art performance" across 10 deep research benchmarks, rivaling larger proprietary systems all open-source under Apache 2.0.

This might be the first time a 7B model actually feels like a researcher not just a chatbot with a search bar.

📖 Paper: arxiv. org/abs/2510.15862v3
💻 Code: github. com/Pokee-AI/PokeeResearchOSSImage
Everyone’s been building “AI agents” that Google search + summarize.

PokeeResearch shows what real deep research looks like an AI that plans, fails, verifies, and recovers on its own.

It doesn’t just search → it thinks like a scientist.
The secret sauce?

RLAIF (Reinforcement Learning from AI Feedback)

Instead of rewarding token overlap, PokeeResearch rewards semantic correctness.

A second AI judges if the answer’s actually right, not just similar in wording.

This single design choice fixes 90% of hallucination.

Think of it like this:

F1 and EM score what looks right.
RLAIF scores what is right.

That shift from “syntactic” to “semantic” alignmen is the quiet revolution everyone will copy next year.Image
Read 8 tweets
Oct 22
🚨 Holy shit...Meta just rewrote how Transformers think.

They built something called The Free Transformer and it breaks the core rule every GPT model has lived by since 2017.

For 8 years, Transformers have been blindfolded forced to guess the next token one at a time, no inner plan, no latent thought.

Meta gave it one.

They added random latent variables inside the decoder so the model can secretly decide how it wants to generate before it starts talking.

It’s like giving GPT a hidden mind.

Result:

🧠 Smarter reasoning
⚡️ 3% compute overhead
📈 Outperforms larger baselines on GSM8K, MMLU, and HumanEval

It’s the first Transformer that doesn’t just predict it intends.

Full paper: arxiv. org/abs/2510.17558v1Image
Meta added latent random variables (Z) into the decoder.

Think of it like a subconscious layer before generating text, the model samples internal “choices” that guide the style or structure of the whole sequence.

Technically, this is done using a Conditional Variational Autoencoder (VAE) built inside the Transformer itself.

They call it the Free Transformer.Image
They inject the latent Z halfway through the decoder.

So half the blocks act like a shared encoder, and the rest decode with that latent context.

This design slashes compute overhead only ~3–4% extra FLOPs compared to a standard model. Image
Read 7 tweets
Oct 20
Holy shit… Harvard just proved your base model might secretly be a genius. 🤯

Their new paper “Reasoning with Sampling” shows that you don’t need reinforcement learning to make LLMs reason better.

They used a 'Markov chain sampling trick' that simply re-samples from the model’s own outputs and it 'matched or beat' RL-trained models on MATH500, HumanEval, and GPQA.

No training.
No rewards.
No verifiers.

Just smarter inference.

It’s like discovering your calculator could already solve Olympiad problems you were just pressing the wrong buttons.

The wild part in all this? This “power sampling” approach boosts reasoning *and* diversity the exact opposite of what RL does.

Your model doesn’t need more training.

It needs better sampling.

Read the full paper here: arxiv. org/abs/2510.14901Image
So what did they actually do?

They built a sampling algorithm that makes the model “think twice” before finalizing each token.

Instead of taking the most likely next word, it resamples short subsequences based on the model’s own likelihoods sharpening its reasoning paths. Image
Here’s the crazy part... Their base models (no fine-tuning!) almost matched RL models trained with millions of extra steps.

On HumanEval, it even beat RL by +59.8%.
On MATH500, it nearly tied.
On GPQA, it won on out-of-domain tasks. Image
Read 8 tweets

Did Thread Reader help you today?

Support us! We are indie developers!


This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Don't want to be a Premium member but still want to support us?

Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal

Or Donate anonymously using crypto!

Ethereum

0xfe58350B80634f60Fa6Dc149a72b4DFbc17D341E copy

Bitcoin

3ATGMxNzCUFzxpMCHL5sWSt4DVtS8UqXpi copy

Thank you for your support!

Follow Us!

:(