OpenAI fired one of their own researchers for exposing their reckless plans.
He refused to stay silent. And now he's warning us:
"AI companies will take over the world in 10 years."
Leopold Aschenbrenner just revealed how they'll pull it off.
Here are his 4 terrifying insights:
Leopold wasn't some random employee.
He worked directly on OpenAI's most critical challenge:
How to control AI systems that surpass human intelligence.
But speaking up would become his downfall:
Leopold wrote an internal memo warning that OpenAI's security was dangerously lax.
He warned foreign actors could steal their AI models - management ignored it.
So Leopold shared concerns with board members.
Big mistake:
OpenAI fired him immediately.
HR made the reason clear:
He was fired specifically because he'd taken his security concerns to the board.
Now he's exposing their darkest secrets...
Leopold's gone public with "Situational Awareness: The Decade Ahead."
His central claim?
AGI (human-level AI) isn't coming in 50 years. Or 30 years.
It's coming within a decade.
His evidence? The trendlines are overwhelming.
AI companies are scaling their compute clusters from $10B → $100B → $1 trillion.
• GPT-3.5: below-average human performance (2022)
• GPT-4: top-percentile scores on professional exams (just months later)
And Leopold identifies 4 existential threats we're racing toward:
1. Loss of control over superintelligent systems
2. AI weapons in hostile nations' hands
3. Catastrophic accidents from misaligned AI
4. Reckless corporate race dynamics
Let me break each one down:
Threat 1: We lose control completely.
Current AI safety methods? Useless against superintelligence.
Leopold compares it to nuclear chain reactions:
Once started, it's impossible to stop.
Yet AI labs are racing ahead in the name of "innovation."
Threat 2: Enemy states steal our AI.
Leopold revealed that most AI labs have security like "random tech startups."
One successful breach = the model weights walk out the door.
State-backed hackers who've breached government systems are now targeting AI labs...
Threat 3: Alignment failures cause disasters.
Even well-intentioned AI could crash financial markets or take down critical infrastructure.
Current models already exploit loopholes in their objectives.
In controlled safety tests, Claude even resorted to blackmail when threatened with shutdown.
But the fourth threat has already started:
Threat 4: The race gets reckless.
Companies will cut safety corners to beat competitors.
Leopold shows that the incentives all point in one direction.
Faster. Not safer.
And his 10-year timeline highlights an urgent choice:
Build irreplaceable human advantages now.
Or become obsolete.
And even as AI masters task after task, 3 things remain uniquely human:
Trust, influence, and authentic connection.
The smartest founders understand this perfectly.
That's why they're rushing to build personal brands.
Look at the most successful businesses today:
Rogan. Musk. The Kardashians.
All built on personal brands that people connect with.
That's why they have a cult-like following...
And can sell like mad.
The result?
Marketing. Sales. Attracting talent...
All become 10x easier when people notice YOUR brand instead of everyone else's AI-generated clones.
And competition will be minimal for those who take action early...
The best part? You can get ahead today:
Founders: We'll build your personal/company brand on 𝕏 (and beyond) without you lifting a finger.
2 years ago, I cofounded @ThoughtleadrX, a premium personal branding agency for world-class founders, executives, and investors looking to dominate socials.
To date, we've helped 140+ founders get 3+ billion combined views.
If you enjoyed this, hit "follow" for more breakdowns!
Image credits: screenshot from the YouTube video "Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History".