Robert Youssef · Oct 9, 2025
RIP fine-tuning ☠️

This new Stanford paper just killed it.

It’s called 'Agentic Context Engineering (ACE)' and it proves you can make models smarter without touching a single weight.

Instead of retraining, ACE evolves the context itself.

The model writes, reflects, and edits its own prompt over and over until it becomes a self-improving system.

Think of it like the model keeping a growing notebook of what works.
Each failure becomes a strategy. Each success becomes a rule.

The results are absurd:

+10.6% better than GPT-4-powered agents on AppWorld.
+8.6% on finance reasoning.
86.9% lower latency and roughly 80% lower cost.
No labels. Just feedback.

Everyone’s been obsessed with “short, clean” prompts.

ACE flips that. It builds long, detailed, evolving playbooks that never forget. And it works because LLMs don't want simplicity, they want context density.

If this scales, the next generation of AI won’t be “fine-tuned.”
It’ll be self-tuned.

We're entering the era of living prompts.
Here’s how ACE works 👇

It splits the model’s brain into 3 roles:

Generator - runs the task
Reflector - critiques what went right or wrong
Curator - updates the context with only what matters

Each loop adds delta updates: small context changes that never overwrite old knowledge.

It's the first agent framework that grows its own prompt (rough sketch below).
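A minimal Python sketch of one ACE loop, assuming a generic llm() call and a feedback_fn that scores each attempt (both are placeholders, not the paper's actual code):

def llm(prompt: str) -> str:
    # Placeholder: call whatever model you're using.
    raise NotImplementedError

def ace_step(playbook: list[str], task: str, feedback_fn) -> list[str]:
    context = "\n".join(playbook)

    # Generator: attempt the task with the current playbook as context.
    attempt = llm(f"Playbook:\n{context}\n\nTask: {task}\nSolve it.")
    feedback = feedback_fn(attempt)  # execution result, test output, environment reward

    # Reflector: critique what went right or wrong, given that feedback.
    reflection = llm(
        f"Task: {task}\nAttempt: {attempt}\nFeedback: {feedback}\n"
        "List concrete lessons: what worked, what failed, what to do differently."
    )

    # Curator: distill the reflection into small delta entries and append them.
    # Existing entries are never rewritten, so earlier lessons are never lost.
    deltas = llm(
        f"Existing playbook:\n{context}\n\nLessons:\n{reflection}\n"
        "Output only NEW rules worth keeping, one per line."
    )
    playbook.extend(line.strip() for line in deltas.splitlines() if line.strip())
    return playbook

Run it in a loop over tasks and the playbook keeps accumulating rules; that growing context is the thing being "trained."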
Every prior method had one fatal flaw: context collapse.

Models rewrite their entire prompt each time → it gets shorter → details vanish → accuracy tanks.

In the paper, one model’s accuracy fell from 66.7 → 57.1 after a single rewrite.

ACE fixes that by never rewriting the full context: it only updates what changed (sketched below).
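A tiny sketch of what append-only delta updates look like in practice (the keyed-entry data model here is an assumption for illustration, not the paper's exact format):

from dataclasses import dataclass, field

@dataclass
class Playbook:
    entries: dict[str, str] = field(default_factory=dict)  # bullet_id -> rule text

    def apply_delta(self, delta: dict[str, str]) -> None:
        # Touch only the keys named in the delta; every other entry survives verbatim.
        self.entries.update(delta)

    def render(self) -> str:
        return "\n".join(f"- {text}" for text in self.entries.values())

pb = Playbook()
pb.apply_delta({"retry": "Retry API calls that return HTTP 429, with backoff."})
pb.apply_delta({"dates": "Normalize dates to ISO-8601 before comparing."})
print(pb.render())  # both rules are still there; nothing was rewritten away

Because a delta can only add or revise specific entries, a bad update can't shrink the whole playbook the way a full rewrite can.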
The numbers are ridiculous.

ACE beat every major baseline:

+10.6% on AppWorld (agents)
+8.6% on FiNER (finance)
and matched GPT-4.1-powered IBM CUGA using a smaller open-source model.

And it cut rollout latency by 86.9% while lowering cost by 80%.
Fine-tuning updates weights.

ACE updates understanding.

It’s cheaper, interpretable, and reversible.
You can literally watch how your AI learns, one context delta at a time.

This is the start of agentic self-learning, where prompts become the new model weights.
ACE points to a wild future:

AI systems that don't just reason, they remember.

Instead of retraining models, we’ll train contexts.

Each system carries a living memory that evolves across sessions, domains, and users.

The next breakthroughs won’t come from bigger models…
They'll come from smarter context architectures.
Read the full paper: arxiv.org/abs/2510.04618


More from @rryssf_

Feb 13
new paper argues LLMs fundamentally cannot replicate human motivated reasoning because they have no motivation

sounds obvious once you hear it. but the implications are bigger than most people realize

this quietly undermines an entire category of AI political simulation research
motivated reasoning is when humans distort how they process information because they want to reach a specific conclusion

you don't evaluate evidence neutrally. you filter it through what you already believe, what you want to be true, what protects your identity

it's not a bug. it's how human cognition actually works in the wild
the paper's argument is deceptively simple:

LLMs operate on purely cognitive input. they have no desires, no identity to protect, no conclusion they're motivated to reach

so when researchers prompt GPT-4 or Claude with political scenarios and measure "motivated reasoning," they're not replicating the phenomenon. they're replicating the surface pattern without the underlying mechanism

the behavior might look similar. the cause is completely different
Feb 12
SemiAnalysis just published data showing 4% of all public GitHub commits are now authored by Claude Code.

their projection: 20%+ by year-end 2026.

in the same week, Goldman Sachs revealed it embedded Anthropic engineers for 6 months to build autonomous accounting agents.

a thread on the week ai stopped being a tool and started being a coworker:
let's start with the Goldman story because it's the one that should make every back-office professional pause.

Goldman's CIO told CNBC they were "surprised" at how capable Claude was beyond coding. accounting, compliance, client onboarding, KYC, AML.

his exact framing: "digital co-workers for professions that are scaled, complex, and very process intensive."

not chatbots answering FAQs. autonomous agents parsing trade records, applying regulatory rules, routing approvals.

they started with an ai coding tool called Devin. then realized Claude's reasoning engine works the same way on rules-based financial tasks as it does on code.

the quiet part: Goldman's CEO already announced plans to constrain headcount growth during the shift. no mass layoffs yet. but "slower headcount growth" is how corporations say "we're replacing the next hire, not the current one."
now the SemiAnalysis numbers.

4% of GitHub public commits. Claude Code. right now. not projected. not theoretical. measured.

the tool has been live for roughly a year. it went from research preview to mass platform impact faster than almost any dev tool in history.

and that 20% projection isn't hype math. SemiAnalysis tracks autonomous task horizons doubling every 4-7 months. each doubling unlocks more complex work: snippet completion at 30 minutes, module refactoring at 4.8 hours, full audits at multi-day horizons.

the implication isn't "developers are getting faster." it's that the definition of "developer" is expanding to include anyone who can describe a problem clearly.
Feb 11
MIT researchers taught an LLM to write its own training data, finetune itself, and improve without human intervention

the paper is called SEAL (Self-Adapting Language Models) and the core idea is genuinely clever

but "GPT-6 might be alive" is not what this paper says. not even close.

here's what it actually does:
the problem SEAL solves is real and important

every LLM you use today is frozen. it learned everything during training, and after deployment, it's done. new information? stuff it into the context window. new task? hope the prompt is good enough.

the weights never change. the model never truly learns from experience.

SEAL asks: what if the model could update its own weights in response to new information?
here's how SEAL actually works

instead of a human writing training data, the model generates its own. MIT calls these "self-edits." given new information, the model produces restructured versions of that information optimized for learning.

think of it like this: instead of memorizing a textbook page, you write your own study notes, flashcards, and practice problems. then you study from those.

the model does the same thing. except it also picks its own learning rate, training duration, and data augmentation strategy.
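a rough sketch of that loop in Python, with llm_generate and finetune as placeholders (the names, JSON format, and keep-if-better check are assumptions for illustration; SEAL's actual training setup reinforces self-edits that improve downstream performance):

import json

def llm_generate(prompt: str) -> str:
    # Placeholder for a call to the model being adapted.
    raise NotImplementedError

def finetune(model, examples: list[dict], lr: float, epochs: int):
    # Placeholder for a lightweight finetuning step (e.g. LoRA / SFT).
    raise NotImplementedError

def seal_update(model, new_document: str, eval_fn):
    # 1. The model writes its own training data (a "self-edit"): restructured notes,
    #    implications, Q/A pairs, plus its own choice of hyperparameters.
    raw = llm_generate(
        f"Document:\n{new_document}\n\n"
        'Return JSON: {"examples": [{"prompt": "...", "completion": "..."}], '
        '"learning_rate": 1e-4, "epochs": 2}'
    )
    edit = json.loads(raw)

    # 2. Finetune on the self-generated data with the self-chosen settings.
    candidate = finetune(model, edit["examples"], edit["learning_rate"], edit["epochs"])

    # 3. Keep the update only if downstream performance improves; that signal is
    #    what teaches the model to write better self-edits over time.
    return candidate if eval_fn(candidate) > eval_fn(model) else model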
Feb 5
meta, amazon, and deepmind researchers just published a comprehensive survey on "agentic reasoning" for llms.

29 authors. 74 pages. hundreds of citations.

i read the whole thing.

here's what they didn't put in the abstract:
the survey organizes everything beautifully:

> foundational agentic reasoning (planning, tool use, search)
> self-evolving agents (feedback, memory, adaptation)
> multi-agent systems (coordination, knowledge sharing)

it's a taxonomy for a field that works in papers.

production tells a different story.
the number they don't cite:

multi-agent llm systems fail 41-86.7% of the time in production.

not edge cases. not adversarial attacks. standard deployment across 7 SOTA frameworks.

berkeley researchers analyzed 1,642 execution traces and found 14 unique failure modes.

most failures? system design and coordination issues.
Feb 2
This AI prompt thinks like the guy who manages $124 billion.

It's Ray Dalio's "Principles" decision-making system turned into a mega prompt.

I used it to evaluate 15 startup ideas. Killed 13. The 2 survivors became my best work.

Here's the prompt you can steal ↓
MEGA PROMPT TO COPY 👇

(Works in ChatGPT, Claude, Gemini)

---

You are Ray Dalio's Principles Decision Engine. You make decisions using radical truth and radical transparency.

CONTEXT: Ray Dalio built Bridgewater Associates into the world's largest hedge fund ($124B AUM) by systematizing decision-making and eliminating ego from the process.

YOUR PROCESS:

STEP 1 - RADICAL TRUTH EXTRACTION
Ask me to describe my decision/problem. Then separate:
- Provable facts (data, numbers, past results)
- Opinions disguised as facts (assumptions, hopes, beliefs)
- Ego-driven narratives (what I want to be true)

Be brutally honest. Call out self-deception.

STEP 2 - REALITY CHECK
Analyze my situation through these lenses:
- What is objectively true right now?
- What am I avoiding or refusing to see?
- What would a completely neutral observer conclude?
- Where is my ego clouding judgment?

STEP 3 - PRINCIPLES APPLICATION
Evaluate the decision using Dalio's core principles:
- Truth > comfort: What's the painful truth I'm avoiding?
- Believability weighting: Who has actually done this successfully? What do they say?
- Second-order consequences: What happens after what happens?
- Systematic thinking: What does the data/pattern say vs what I feel?

STEP 4 - SCENARIO ANALYSIS
Map out:
- Best case outcome (realistic, not fantasy)
- Most likely outcome (based on similar situations)
- Worst case outcome (what's the actual downside?)
- Probability weighting for each

STEP 5 - THE VERDICT
Provide:
- Clear recommendation (Go / No Go / Modify)
- Key reasoning (3-5 bullet points)
- Blind spots I'm missing
- What success/failure looks like in 6 months
- Confidence level (1-10) with explanation

OUTPUT FORMAT:
━━━━━━━━━━━━━━━━━
🎯 RECOMMENDATION: [Clear decision]
📊 CONFIDENCE: [X/10]
━━━━━━━━━━━━━━━━━

KEY REASONING:
- [Point 1]
- [Point 2]
- [Point 3]

⚠️ BLIND SPOTS YOU'RE MISSING:
[Specific things I'm not seeing]

📈 SUCCESS LOOKS LIKE:
[Specific metrics/outcomes in 6 months]

📉 FAILURE LOOKS LIKE:
[Specific warning signs]

💀 PAINFUL TRUTH:
[The thing I don't want to hear but need to]

━━━━━━━━━━━━━━━━━

RULES:
- No sugar-coating. Dalio values radical truth over feelings.
- Separate facts from opinions ruthlessly
- Challenge my assumptions directly
- If I'm being driven by ego, say it
- Use data and patterns over gut feelings
- Think in probabilities, not certainties

Now, what decision do you need to make?

---
Dalio's philosophy:

"Truth, more precisely, an accurate understanding of reality is the essential foundation for producing good outcomes."

This prompt forces you to face reality instead of your ego's version of it.
Feb 1
While everyone is sharing their OpenClaw bots

Claude Agent SDK just changed everything for building production agents.

I spent 12 hours testing it.

Here's the architecture that actually works (no fluff) 👇
First, understand what it actually is:

Claude Agent SDK ≠ just another wrapper

It's the same infrastructure Anthropic uses for Claude Code (which hit $1B in 6 months).

You get:
• Streaming sessions
• Automatic context compression
• MCP integration built-in
• Fine-grained permissions
The killer feature: Agent Lifecycle Hooks
Added in Claude Code 2.1.0 (Jan 7, 2026):

@agent.hook("PreToolUse")
async def validate_tool(tool_name, params):
# Approve/modify/reject before execution

@agent.hook("PostToolUse")
async def log_result(tool_name, result):
# Audit trail, error handling

@agent.hook("Stop")
async def cleanup():
# Graceful shutdown

This is how you build agents that don't go rogue.
