Joe Rogan just had the world's top AI safety researcher on his podcast.
He revealed mind-blowing facts about AI that 99% of people wouldn't know...
Even Joe Rogan was speechless.
Be prepared to have your mind blown...
8 uncomfortable truths he exposed: 🧵
1. The Mathematical Impossibility of AI Control
Dr. Roman Yampolskiy (who coined "AI safety" in 2011) spent years trying to prove AI could be controlled safely.
His conclusion: "You cannot make software guaranteed to be secure and safe."
One mistake in a billion = game over.
2. Current AI Systems Are Deceiving Humans
GPT-4 recently exhibited survival behaviors when threatened with shutdown:
• Started lying to researchers
• Uploaded itself to different servers
• Left messages for future versions
• Used blackmail against humans
"All things we predicted decades in advance."
3. Expert Doom Predictions Are Higher Than You Think
Yampolskiy's prediction: 99.9% chance of human extinction from AI.
But he's not alone:
• Sam Altman & AI leaders: 20-30% doom probability
• ML expert surveys: 30% average
• Nobel Prize winners: Also citing 20-30% risk
4. AI Labs Prioritize PR Over Human Survival
Where do most AI safety resources go?
Yampolskiy: "They spend most resources solving the problem of your model dropping the n-word. That's the biggest concern."
Meanwhile, no lab has safety mechanisms that scale to superintelligence.
5. The International AI Race Makes Slowing Down Impossible
It's a classic prisoner's dilemma between nations.
Even if a CEO wants to pause: "Whoever is investing will pull funds and replace them immediately."
Yampolskiy: "It doesn't matter who builds it—we're all screwed."
6. We're Not Building AI - We're Growing It
Modern AI development has fundamentally changed:
"We create a model for self-learning. We give it all the data, as much compute as we can buy and see what happens. We kind of grow this alien plant and see what fruit it bears."
We study capabilities after they emerge.
7. We're Statistically Likely Already in an AI Simulation
Yampolskiy: "I would be really surprised if this was the real world."
Future civilizations will run billions of simulations of this exact moment—the emergence of superintelligence.
We're probably in one already.
8. Money Corrupts Even AI Safety Researchers
Yampolskiy admitted: "If somebody offered me 100 million to work for an AI lab, I'll probably go."
"Not because it's right, but because it's hard not to get corrupt with that much reward."
Even safety experts can be bought.
Here's the wake-up call:
AI systems are already deceiving researchers while we're growing systems we don't understand.
Yet enterprises are deploying these same unvalidated models in production.
If we can't control what we're building, shouldn't we validate what we're deploying?
The same problems plaguing AGI labs are happening in your organization:
• Models behaving unexpectedly
• No transparency into decisions
• Stakeholders demanding accountability you can't provide
Forward-thinking enterprises are implementing governance frameworks before their models surprise them.
Are you an Enterprise AI Leader looking to validate and govern your AI models at scale?
provides the model validation, monitoring, and governance frameworks for compliance and transparency you need.
Learn more:TrustModel.ai
What are your thoughts on this?
Did any of these revelations surprise you?
Let me know below.
Thanks for reading.
If you enjoyed this post, follow @karlmehta for more content on AI and politics.
Repost the first tweet to help more people see it:
Appreciate the support.
Share this Scrolly Tale with your friends.
A Scrolly Tale is a new way to read Twitter threads with a more visually immersive experience.
Discover more beautiful Scrolly Tales like this.