Robert Youssef
Mar 19 · 3 tweets · 3 min read
Your AI has been quietly forgetting everything you told it.

Not randomly. Not loudly. Systematically. Starting with the decisions that matter most.

The constraint you set three months ago: "never use Redis, the client vetoed it after a production incident." Gone. The GDPR deployment region restriction. Gone. The retry limit you tested empirically after the cascade failure. Gone.

The model never told you. It just started using defaults.

This is called context rot. And Cambridge and independent researchers just quantified exactly how bad it is.

Every production AI system that runs long enough will eventually compress its context to make room for new information. That compression is catastrophically lossy. They tested it directly: 2,000 facts compressed at 36.7× left 60% of the knowledge base permanently irrecoverable. Not hallucinated. Not wrong. Just gone. The model honestly reported it didn't have the information anymore.

Then they tested something worse. They embedded 20 real project constraints into an 88-turn conversation, the kind of constraints that emerge naturally in any long-running project, then applied cascading compression exactly like production systems do. After one round: 91% preserved. After two rounds: 62%. After three rounds: 46%.

The model kept working with full confidence the entire time, generating outputs that violated the forgotten constraints. No error signal. No warning. Just silent reversion to reasonable defaults that happened to be wrong for your specific situation.

They tested this across four frontier models: Claude Sonnet 4.5, Claude Sonnet 4.6, Opus, GPT-5.4. Every single one collapsed under compression. This isn't a model problem. It's architectural.

→ 60% of facts permanently lost after single compression pass
→ 54% of project constraints gone after three rounds of cascading compression
→ GPT-5.4 dropped to 0% accuracy at just 2× compression
→ Even Opus retained only 5% of facts at 20× compression
→ In-context memory costs $14,201/year at 7,000 facts vs $56/year for the alternative

The AI labs know this. Their solution is bigger context windows. A 10M-token window is a larger bucket. It's still a bucket. Compaction is inevitable for any long-running system. The window size only determines when the forgetting starts, not whether it happens.
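The mechanics of that compounding loss can be sketched with a toy simulation. Purely illustrative: the 77% per-pass retention rate below is an assumption chosen to mimic a cascade, not a number from the study.

```python
import random

def compact(facts: set, keep_ratio: float, rng: random.Random) -> set:
    """Simulate one lossy compaction pass: keep a random subset, drop the rest."""
    keep_n = int(len(facts) * keep_ratio)
    return set(rng.sample(sorted(facts), keep_n))

rng = random.Random(0)
constraints = {f"constraint-{i}" for i in range(20)}  # e.g. "never use Redis"
surviving = set(constraints)

# Cascading compression: each pass compacts the already-compacted context.
for round_num in range(1, 4):
    surviving = compact(surviving, keep_ratio=0.77, rng=rng)
    print(f"round {round_num}: {len(surviving) / len(constraints):.0%} retained")
```

With these toy numbers retention falls 75% → 55% → 40%. The point is the multiplicative decay: each round compounds on the last, which is the same shape as the paper's 91% → 62% → 46% cascade.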

More from @rryssf_

Mar 6
Google DeepMind just taught an AI to do something most AI models are terrible at: actually learn from being told it's wrong.

the technique is called Social Meta-Learning. it's borrowed from developmental psychology, not machine learning.

and it transfers across domains. train it on math correction, it gets better at learning from coding feedback too.

here's what they did:
here's the uncomfortable truth about every chatbot you use right now.

current LLMs are trained almost entirely for single-turn performance. give a prompt, get an answer. one shot.

this means they're actually bad at the thing conversations are supposed to be for: learning through back-and-forth.

you correct them, they don't really integrate the correction. you give feedback, they acknowledge it but don't fundamentally shift their approach. the dialogue feels static because it is.

the researchers say post-training might actually make this worse.
the DeepMind team reframed the problem completely.

instead of asking "how do we make models give better single-turn answers?"
they asked: "how do we teach a model to learn from being taught?"

they borrowed a concept from developmental psychology called social meta-learning. it's how children learn to learn from other people. not just absorbing information, but learning the skill of extracting useful information from social interaction.

the insight: learning from feedback is itself a trainable skill. not an emergent property. a skill.
Feb 28
researchers put heavy TikTok users under EEG monitoring and found something unsettling.

their frontal lobe activity was reduced during focus tasks.

the weird part: their behavioral performance looked normal. the damage only showed up in the brain scans.

here's what's actually happening:
the study measured "theta power" in the prefrontal cortex during attention tasks.

theta waves are the neural signature of executive control. the thing that lets you focus, ignore distractions, and finish what you started.

heavy short-form video users showed significantly reduced theta activity in the frontal region.

even after controlling for anxiety, depression, age, and gender.
here's the disturbing part:

the behavioral tests looked fine. participants could still complete the tasks.

but the neural machinery underneath was working harder and firing weaker.

this is what early-stage cognitive decline looks like. function stays normal while the infrastructure degrades.
Feb 26
Google DeepMind just used AlphaEvolve to breed entirely new game-theory algorithms that outperform ones humans spent years designing.

the discovered algorithms use mechanisms so non-intuitive that no human researcher would have tried them.

here's what actually happened and why it matters:
first, the framing matters.

this isn't "ask ChatGPT to write an algorithm." this is AlphaEvolve, Google's evolutionary coding agent powered by Gemini 2.5 Pro.

it treats algorithm source code as a genome. the LLM acts as a genetic operator, rewriting logic, injecting new control flows, mutating symbolic operations.

then it evaluates the offspring against game-theoretic benchmarks and evolves the next generation.

it's not prompting. it's natural selection over code.
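that loop can be sketched in toy form. everything below is a stand-in: the `mutate` function plays the role of the LLM genetic operator, and `fitness` plays the role of the game-theoretic benchmarks. it illustrates the shape of the search, not AlphaEvolve itself.

```python
import random

rng = random.Random(42)
TARGET = "x * (1 - x)"   # stand-in "genome" we want evolution to discover
CHARSET = "x*(-+) 01"

def mutate(genome: str) -> str:
    """Stand-in for the LLM genetic operator: rewrite one position of the genome."""
    i = rng.randrange(len(genome))
    return genome[:i] + rng.choice(CHARSET) + genome[i + 1:]

def fitness(genome: str) -> int:
    """Stand-in for benchmark evaluation: character-level similarity to TARGET."""
    return sum(a == b for a, b in zip(genome, TARGET))

# evolve: mutate each genome, then keep the fittest of parents + offspring
population = ["x" * len(TARGET)] * 8
for generation in range(200):
    offspring = [mutate(g) for g in population]
    population = sorted(population + offspring, key=fitness, reverse=True)[:8]

best = population[0]
print(best, fitness(best))
```

the selection step is the whole trick: offspring only survive if the evaluator scores them higher, so the population climbs toward better "algorithms" without anyone specifying how.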
the target: two foundational families in multi-agent reinforcement learning.

counterfactual regret minimization (CFR) and policy space response oracles (PSRO).

these are the algorithms behind things like superhuman poker AI. they find Nash equilibria in imperfect-information games.

the problem: designing effective variants of these algorithms has been a manual, intuition-driven process for nearly two decades. each new game setting demands its own specialized tweaks.

DeepMind asked: what if you let evolution find the tweaks instead?
Feb 26
Google DeepMind just published something that isn't a benchmark or a new model.

it's a governance framework for when AI agents start hiring other AI agents.

sounds abstract. it's not. this is the missing infrastructure layer for the "agentic web."

here's why it matters:
current multi-agent systems treat delegation as task splitting.

"break this into subtasks, assign them to tools."

DeepMind's argument: that's not delegation. that's just decomposition.

real delegation transfers authority, responsibility, and accountability. current systems transfer none of these.
when an agent delegates to another agent today, you get:

> no clear authority boundaries
> no verification that work was actually done correctly
> no accountability chain when things fail
> no trust calibration based on track record

the whole thing runs on hope and well-structured prompts.
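a minimal sketch of what those missing pieces could look like as a data structure. the names and fields here are hypothetical, invented for illustration; they are not DeepMind's framework.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Delegation:
    """Hypothetical record making authority, verification and accountability explicit."""
    delegator: str                     # who transfers authority
    delegate: str                      # who receives it
    scope: set[str]                    # explicit authority boundary
    verify: Callable[[str], bool]      # check the work was actually done
    trust: float = 0.5                 # calibrated from track record, not hope
    chain: list[str] = field(default_factory=list)  # accountability trail

    def complete(self, result: str) -> bool:
        ok = self.verify(result)
        self.chain.append(f"{self.delegate}: {'verified' if ok else 'failed'}")
        # trust moves with outcomes instead of staying implicit in a prompt
        self.trust = min(1.0, self.trust + 0.1) if ok else max(0.0, self.trust - 0.2)
        return ok

d = Delegation("planner-agent", "coder-agent",
               scope={"write_code"}, verify=lambda r: "def " in r)
print(d.complete("def handler(): ..."), round(d.trust, 2))
```

each of the four gaps maps to a field: `scope` is the authority boundary, `verify` is the completion check, `chain` is the accountability record, and `trust` is the track-record calibration.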
Feb 24
Google Research just proved you can boost llm accuracy by up to 76 percentage points with zero extra output tokens, zero latency increase, and zero fine-tuning 🤯

the technique: paste your prompt twice.

that's it. that's the paper.

but WHY it works reveals something important about how every llm you use actually reads your input:
every major llm processes text left to right. each token can only attend to tokens that came before it. never forward.

this means when you write a prompt like:

[long context] → [question at the end]

the context tokens were processed without any awareness of what question was coming.

the model reads your setup blind, then answers with whatever representations it already locked in.

your question arrives too late to reshape how the context was understood.
the paper's solution is almost absurdly simple.

instead of sending [context][question], send [context][question][context][question].

when the model hits the second copy, every token now attends to the full first copy. the question has already been seen. the context gets reprocessed with complete awareness.

you're essentially giving a unidirectional model a form of bidirectional attention. without changing the architecture. without any new training. just by repeating yourself.
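the trick amounts to a one-line prompt transform. a sketch, assuming a plain-text prompt; the paper's exact duplication format may differ:

```python
def double_prompt(context: str, question: str) -> str:
    """Repeat the full prompt so the second copy is read with the question in view."""
    single = f"{context}\n\nQuestion: {question}"
    # on the second copy, every context token can attend back to the question
    return f"{single}\n\n{single}"

prompt = double_prompt("Alice deployed to eu-west-1 on Friday.",
                       "Where did Alice deploy?")
print(prompt.count("Where did Alice deploy?"))  # the question appears twice
```

note the cost asymmetry this exploits: input tokens are cheap to process in one prefill pass, so doubling them adds no output tokens at all.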
Feb 23
Deepseek just broke the one rule every transformer has followed for a decade 🤯

x + f(x). the residual connection.

if you don't know what that means, here's the simple version: every time a neural network processes your input through a layer, it keeps a copy of the original and adds it back at the end. like a safety net. if the layer screws up, the original signal survives.

gpt-4 uses it. claude uses it. gemini uses it. every major model since 2015 treats this as sacred. nobody touches it.

Deepseek touched it.

instead of 1 stream carrying your data forward, they split it into 4 parallel streams. each stream carries different aspects of the information. and learned mixing matrices decide how those streams talk to each other at every layer.

more lanes on the highway. smarter traffic control. same computational cost.

sounds perfect on paper. here's where it breaks:
ByteDance actually tried this first. they published "hyper-connections" (HC) and it looked incredible on small models. faster convergence. better benchmarks. the theory was sound.

then they tried to scale it.

at 27B parameters, things went wrong. the mixing matrices that control how the 4 streams blend together have no guardrails. nothing stops them from amplifying signals.

imagine a game of telephone, but instead of the message getting quieter, it gets louder at every step. by the time it passes through 60 layers, the signal has been amplified ~3000x.

that's not a slow degradation. that's an explosion.

Deepseek saw it happen in real time: a loss spike at training step 12,000. gradient norms shot through the roof. the model wasn't learning anymore. it was screaming.

most teams would have abandoned the idea. Deepseek asked a different question.
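the explosion is just compounding gain across layers. a toy numpy illustration: the 1.14-per-layer factor below is chosen so that 60 layers reproduce the ~3000x figure, it is not a value from the paper, and a diagonal matrix stands in for a real mixing matrix whose eigenvalues amplify the same way.

```python
import numpy as np

d, n_streams, n_layers = 16, 4, 60
streams = np.ones((n_streams, d))   # 4 parallel copies of the signal

# an unconstrained mixing matrix with a little gain: nothing stops this
M = 1.14 * np.eye(n_streams)        # each layer scales every stream by 14%

norm0 = np.linalg.norm(streams)
for _ in range(n_layers):
    streams = M @ streams           # mixing applied at every layer
gain = np.linalg.norm(streams) / norm0
print(f"after {n_layers} layers the signal is ~{gain:.0f}x larger")
```

a 14% per-layer gain sounds harmless. raised to the 60th power it's roughly 2600x, which is exactly the telephone-game-that-gets-louder failure mode.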
their insight was clean:

the problem isn't giving the model multiple streams. the problem is nobody told the streams how to behave.

unconstrained mixing means any matrix value is fair game. positive, negative, huge, tiny. multiply those across 60 layers and you get chaos.

Deepseek's fix: force every mixing matrix to follow a strict rule.

it's called the Birkhoff polytope. fancy name, simple idea:

> every row must sum to 1
> every column must sum to 1
> every entry must be zero or positive

in plain english: information can be redistributed between streams, but it cannot be created or destroyed.

the analogy that clicks: imagine 4 glasses of water. you can pour between them however you want. any combination, any amount. but the total water across all 4 glasses must stay exactly the same.

no glass overflows. no glass runs dry. the system stays balanced no matter what you do.

that's the constraint. and it changes everything.
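rows and columns both summing to 1 with nonnegative entries is the definition of a doubly stochastic matrix, and the Birkhoff polytope is the set of all of them. one standard way to produce such a matrix from unconstrained parameters is Sinkhorn normalization. this is a sketch of the general technique; whether Deepseek parameterizes it this way is an assumption, not confirmed by the thread.

```python
import numpy as np

def sinkhorn(logits: np.ndarray, n_iters: int = 50) -> np.ndarray:
    """Map unconstrained logits onto the Birkhoff polytope by alternately
    normalizing the rows and columns of their elementwise exponential."""
    M = np.exp(logits)                     # entries become strictly positive
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)  # every row sums to 1
        M /= M.sum(axis=0, keepdims=True)  # every column sums to 1
    return M

rng = np.random.default_rng(0)
M = sinkhorn(rng.normal(size=(4, 4)))
print(M.sum(axis=1).round(4))  # rows sum to 1
print(M.sum(axis=0).round(4))  # columns sum to 1
```

the water-glasses analogy falls out of the column constraint: multiplying the 4 streams by M redistributes mass between them, but the total across streams is preserved, so nothing can amplify 3000x over 60 layers.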