Karl Mehta
Sep 19 · 18 tweets · 5 min read
In 2024, OpenAI fired its youngest Superalignment researcher for asking too many questions.

He was 23 years old with 3 degrees and a terrifying secret about AI.

6 months later, his $1.5B fund is crushing Wall Street with a 47% return.

A thread about what he saw that others didn't... 🧵
Leopold Aschenbrenner entered Columbia University at age 15.

He graduated valedictorian at 19 with dual degrees in economics and mathematics-statistics.

OpenAI recruited him for their most sensitive project: controlling superintelligent AI systems.
The Superalignment team had one mission: solve AI alignment before machines surpass human intelligence.

Aschenbrenner discovered this timeline was shorter than anyone publicly acknowledged.

His internal analysis pointed to AGI by 2027.
April 2023: An external hacker breached OpenAI's internal systems.

The company kept the incident private.

Aschenbrenner recognized the security implications and wrote a classified memo to the board.
His memo detailed "egregiously insufficient" security protocols against state-actor threats.

OpenAI's model weights and core algorithmic secrets were vulnerable to foreign espionage.

The board received his analysis. Leadership did not appreciate the assessment.
HR issued an official warning within days.

They characterized his concerns about Chinese intelligence operations as inappropriate.

Meanwhile, Aschenbrenner's timeline projections showed accelerating development toward AGI.
His mathematical analysis was straightforward:

• Computing power: +0.5 orders of magnitude annually
• Algorithmic efficiency: +0.5 orders of magnitude annually
• Capability improvements: continuous advancement from chatbot to agent functionality
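The arithmetic behind those figures can be sketched in a few lines. This is a rough illustration using only the growth rates quoted above; the baseline year and the `effective_compute_multiplier` helper are assumptions for the example, not from the thread.

```python
# Growth rates cited in the thread, in orders of magnitude (OOM) per year.
COMPUTE_OOM_PER_YEAR = 0.5   # hardware scale-up
ALGO_OOM_PER_YEAR = 0.5      # algorithmic efficiency gains

def effective_compute_multiplier(years: float) -> float:
    """Combined gain: OOMs compound additively in log space,
    so total effective compute grows by 10^(1.0 * years)."""
    total_oom = (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR) * years
    return 10 ** total_oom

# From a 2023 GPT-4-class baseline to 2027 is 4 years:
print(effective_compute_multiplier(4))  # 10000.0, i.e. a ~10,000x jump
```

At 1 combined OOM per year, four years buys four orders of magnitude of effective compute, which is the core of the 2027 projection.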
The implications were stark.

Current alignment methods like RLHF cannot scale to superintelligent systems.

When AI generates millions of lines of code, human oversight becomes impossible.
Aschenbrenner attempted to secure resources for safety research.

The Superalignment team was allocated 20% of compute resources in principle.

In practice, they received minimal allocation. Safety research remained deprioritized.
April 2024: OpenAI terminated Aschenbrenner's employment.

Official reason: sharing internal documents with external researchers.

The Superalignment team dissolved one month later when Ilya Sutskever and Jan Leike departed.
Aschenbrenner published "Situational Awareness: The Decade Ahead" - a 165-page technical analysis.

His thesis: AGI development will trigger unprecedented infrastructure investment.

Trillions in capital will flow toward AI supply chains by 2027.
June 2024: He launched the Situational Awareness Fund.

Initial investors included Stripe founders Patrick and John Collison, former GitHub CEO Nat Friedman, and Daniel Gross.

Strategy: Long AI infrastructure, short disrupted industries.
Assets under management reached $1.5 billion within 12 months.

First half 2025 performance:

• Situational Awareness Fund: +47%
• S&P 500 Index: +6%
• Technology hedge fund average: +7%
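As a quick sanity check, the outperformance implied by the figures above (all taken from the thread) works out directly; the dictionary layout here is just for illustration.

```python
# H1 2025 returns cited in the thread, as fractions.
returns = {
    "Situational Awareness Fund": 0.47,
    "S&P 500 Index": 0.06,
    "Technology hedge fund average": 0.07,
}

benchmark = returns["S&P 500 Index"]
for name, r in returns.items():
    # Outperformance in percentage points relative to the S&P 500.
    print(f"{name}: {r:+.0%} ({(r - benchmark) * 100:+.0f} pts vs S&P 500)")
```

That is 41 percentage points over the index, and 40 over the average tech hedge fund, in six months.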
Aschenbrenner's investment thesis was built on insider knowledge of AI development trajectories.

He positioned capital ahead of market recognition of the AGI timeline.

The strategy has generated substantial alpha for sophisticated investors.
The case illustrates a pattern: AI safety researchers often possess the most accurate timeline assessments.

When these individuals transition to capital markets, their performance suggests unique informational advantages.

Market pricing may not reflect insider consensus on development speed.
This raises strategic questions for institutional investors and policymakers.

If Aschenbrenner's timeline proves accurate, current AI infrastructure valuations may be significantly underpriced.

The implications extend beyond financial markets to national security planning.
P.S. Are you an Enterprise AI Leader looking to validate and govern your AI models at scale?

TrustModel.ai provides the model validation, monitoring, and governance frameworks you need to deploy AI with confidence.

Learn more: TrustModel.ai
Thanks for reading.

If you enjoyed this post, follow @karlmehta for more content on AI and politics.

Repost the first tweet to help more people see it.

Appreciate the support.
