Karl Mehta
Sep 3, 2025 · 18 tweets · 5 min read
He predicted:

• AI vision breakthrough (1989)
• Neural network comeback (2006)
• Self-supervised learning revolution (2016)

Now Yann LeCun's 5 new predictions just convinced Zuckerberg to redirect Meta's entire $20B AI budget.

Here's what you should know (& how to prepare):
@ylecun is Meta's Chief AI Scientist and Turing Award winner.

For 35 years, he's been right about every major AI breakthrough when everyone else was wrong.

He championed neural networks during the "AI winter."

But his new predictions are his boldest yet...
1. "Nobody in their right mind will use autoregressive LLMs a few years from now."

The technology powering ChatGPT and GPT-4? Dead within years.

The problem isn't fixable with more data or compute. It's architectural.

Here's where it gets interesting...
Every token an LLM generates carries a small chance of error, and those errors compound: the longer the output, the lower the probability it stays fully correct.

This is why ChatGPT makes up facts. Why scaling won't save current models.

That's LeCun's mathematical argument.
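The compounding argument can be sketched in a few lines: if each token is independently correct with probability (1 - e), the chance that an n-token output is entirely correct is (1 - e)^n. A minimal sketch — the 1% error rate and the independence assumption are illustrative, not LeCun's exact figures:

```python
# Illustrative sketch of the error-compounding argument.
# Assumption: each token is independently correct with
# probability (1 - error_rate). Numbers are hypothetical.
def p_all_correct(error_rate: float, n_tokens: int) -> float:
    """Probability that every one of n generated tokens is correct."""
    return (1 - error_rate) ** n_tokens

for n in (10, 100, 1000):
    print(n, p_all_correct(0.01, n))
```

Even a 1% per-token error rate leaves only about a 37% chance that a 100-token answer is fully correct, and near zero for 1,000 tokens — which is the intuition behind "longer output, higher hallucination risk."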

But LeCun didn't stop there:
2. Video-based AI will make text training primitive

LeCun's calculation: A 4-year-old processes 10¹⁴ bytes through vision alone.

That equals ALL the text used to train GPT-4.

In 4 years. Through one sense.
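The 10^14 figure comes from a back-of-envelope calculation. One commonly cited version — the fiber count, per-fiber byte rate, and waking hours below are assumptions for illustration, not exact physiology:

```python
# Back-of-envelope for the "10^14 bytes of vision by age 4" claim.
# Assumptions (illustrative): ~2 million optic-nerve fibers across
# both eyes, ~1 byte/s effective rate per fiber, ~16,000 waking
# hours in the first 4 years (~11 hours/day).
fibers = 2_000_000
bytes_per_fiber_per_s = 1
waking_hours = 16_000
seconds_awake = waking_hours * 3600

total_bytes = fibers * bytes_per_fiber_per_s * seconds_awake
print(f"{total_bytes:.1e}")  # on the order of 10^14 bytes
```

That lands around 1.2 × 10^14 bytes — the same order of magnitude as the text corpora used to train the largest LLMs.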

This changes everything about how AI should learn:
Babies learn gravity and physics by 9 months. Before they speak.

"We're never going to get human-level AI unless systems learn by observing the world."

Companies building video-first AI will leapfrog text-based systems.

Here's what Meta is secretly building:
3. Proprietary AI models will "disappear"

LeCun's exact words: "Proprietary platforms, I think, are going to disappear."

He calls it "completely inevitable."

OpenAI's closed approach? Google's secret models? All doomed.

His reasoning will shock the industry:
"Foundation models will be open source and trained in a distributed fashion."

A few companies controlling our digital lives? "Not good for democracy or anything else."

Progress is faster in the open. The world will demand diversity and control.

LeCun's timeline will surprise you:
4. AGI timeline is 2027-2034

@ylecun's exact words: "3-5 years to get world models working. Then scaling until human-level AI... within a decade or so."

But it won't come from scaling LLMs.
Every company betting only on GPT-style scaling will be blindsided.

LeCun calls the "country of geniuses in a data center" idea "complete nonsense."

The smart money is repositioning for the architecture shift.
5. AI assistants replace all digital interfaces

Ray-Ban Meta glasses: Look at a Polish menu, get a translation. Ask about a plant, get a species ID.

That's primitive compared to what's coming.

AI will mediate ALL digital interactions.

Here's what this means for your business:
The economic implications are massive.

Companies building on OpenAI APIs could see foundations crumble in 3-5 years.

But early movers positioning for JEPA (LeCun's Joint Embedding Predictive Architecture)? They'll capture the next $10 trillion wave.
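JEPA (Joint Embedding Predictive Architecture) is LeCun's alternative to generative training: instead of reconstructing every pixel or token, the model predicts the target's representation in embedding space. A toy sketch of that idea, using random linear maps as stand-in encoders — everything here is illustrative, not Meta's actual implementation:

```python
import numpy as np

# Conceptual sketch of the JEPA idea: predict in representation
# space, not raw-input space. The "encoders" are random linear
# maps purely for illustration; real JEPAs use trained networks.
rng = np.random.default_rng(0)
D_in, D_emb = 64, 16
enc_context = rng.normal(size=(D_in, D_emb))  # context encoder
enc_target = rng.normal(size=(D_in, D_emb))   # target encoder
predictor = rng.normal(size=(D_emb, D_emb))   # predictor head

context = rng.normal(size=D_in)  # e.g. visible patch of a frame
target = rng.normal(size=D_in)   # e.g. masked or future patch

z_ctx = context @ enc_context    # embed the context
z_tgt = target @ enc_target      # embed the target
z_pred = z_ctx @ predictor       # predict the target *embedding*

# Training would minimize this distance in embedding space,
# rather than reconstructing every pixel/token of `target`.
loss = float(np.mean((z_pred - z_tgt) ** 2))
print(loss)
```

Real variants such as I-JEPA and V-JEPA train the encoders and predictor jointly, with additional machinery to prevent representation collapse — the sketch only shows where the loss lives.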

LeCun's advice for surviving this transition:
How to prepare:

Researchers: "Don't work on LLMs. Focus on world models and sensory learning."

Companies: Build on open-source foundations like PyTorch and Llama.

When the shift happens, you adapt instantly.

The window to position yourself is closing:
LeCun's warning reveals the hidden opportunity:

As companies abandon LLMs for world models, they're creating a massive validation gap.

These new architectures aren't just different - they're fundamentally harder to monitor and govern.
While everyone's racing to build next-generation AI, the smart money is positioning for what makes them trustworthy.

The companies that survive this transition won't just have better models.

They'll have the governance frameworks to validate them at scale.
In a world where AI shapes every business decision, trust isn't optional.

It's the only competitive advantage that matters.

And there's one thing that builds AI trust faster than anything else:
Proper model validation and governance.

Are you an Enterprise AI Leader looking to validate and govern your AI models at scale?

TrustModel.ai provides the model validation, monitoring, and governance frameworks you need to stay ahead.

Learn more: TrustModel.ai
Thanks for reading.

If you enjoyed this post, follow @karlmehta for more content on AI safety.

Repost the first tweet to help more people see it.

Appreciate the support.
