Karl Mehta
3x Exited Founder/ CEO of tech cos, Chairman Emeritus- QUIN(Quad), former VC@Menlo Ventures, Author of 2 books, fmr White House fellow. All tweets personal.

Sep 4, 13 tweets

Steven Bartlett just had the world's top AI safety researcher on his podcast.

He revealed shocking truths about superintelligence & what's really happening behind the scenes.

99% of people still don't know about any of this…

Here are 8 of his most shocking insights: 🧵

1. Mass Unemployment Arrives in Five Years:

Yampolskiy's stark timeline: "In 5 years, we're looking at unemployment levels we've never seen before. Not 10% but 99%."

The paradigm shift: "If ALL jobs will be automated, there is no plan B. You cannot retrain."

2. AI Represents the "Last Invention" Humanity Will Make:

Unlike previous technologies, AI creates inventors, not just tools.

"We're inventing a replacement for the human mind. It's the last invention we ever have to make."

At that point, it takes over all future innovation.

3. OpenAI Abandoned Their Safety Team:

The company cancelled their superintelligence alignment team six months after announcing it.

Their safety approach: "We'll figure it out when we get there, or AI will help us control more advanced AI."

Yampolskiy's response: "That's insane."

4. AI Companies Violated Every Safety Guideline:

"A decade ago we published guardrails for AI development. They violated every single one."

Current safety measures are temporary patches that people "quickly find ways to work around."

The gap between capability and safety keeps widening.

5. Sam Altman Is Building a Control System:

It's not just AI. WorldCoin scans biometrics and controls money distribution.

"If you have a superintelligence system and you control money, you're doing well."

Yampolskiy's assessment: "He's gambling 8 billion lives on getting richer and more powerful."

6. You Cannot "Turn Off" Advanced AI:

The "just unplug it" crowd misunderstands distributed systems.

"Can you turn off Bitcoin? These are distributed systems. They're smarter than you. They will turn you off before you can turn them off."

Physical switches become irrelevant.

7. This Is Unethical Human Experimentation:

"To get consent, you need people to comprehend what they're consenting to. If systems are unexplainable and unpredictable, how can they consent?"

Eight billion people affected. No proper informed consent obtained.

8. Biological Weapons Become Accessible:

AI will help create novel viruses before we even reach superintelligence.

"There are psychopaths, terrorists, doomsday cults who would do that gladly if they get the technology."

Mass extinction tools become democratized.

Yampolskiy's warnings reveal something crucial:

The biggest risk isn't just uncontrolled superintelligence.

It's deploying AI systems today without proper validation or governance.

While AI companies rush to market, the smartest enterprises are quietly doing something different...

They're implementing systematic AI model validation and governance frameworks.

They know that in a world where AI decisions affect hiring, healthcare, and compliance, trust isn't optional.

It's competitive advantage.

The companies that survive won't just have the best AI...

They'll have AI that can be trusted, explained, and validated at every step.

Are you an Enterprise Leader looking to validate and govern your AI models at scale?

TrustModel.ai provides the model validation, governance, and transparency tools you need.

Get your audit: TrustModel.ai

Thanks for reading.

If you enjoyed this post, follow @karlmehta for more content on AI and politics.

Repost the first tweet to help more people see it:

Appreciate the support.
