Alex Vacca
Jun 18 · 13 tweets
BREAKING: MIT just completed the first brain scan study of ChatGPT users & the results are terrifying.

Turns out, AI isn't making us more productive. It's making us cognitively bankrupt.

Here's what 4 months of data revealed:

(hint: we've been measuring productivity all wrong)
83.3% of ChatGPT users couldn't quote from essays they wrote minutes earlier.

Let that sink in.

You write something, hit save, and your brain has already forgotten it because ChatGPT did the thinking.
Brain scans revealed the damage: neural connections collapsed from 79 to just 42.

That's a 47% reduction in brain connectivity.

If your computer lost half its processing power, you'd call it broken. That's what's happening to ChatGPT users' brains.
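The arithmetic checks out. A minimal sanity check (the connection counts 79 and 42 are the thread's own figures):

```python
# Verify the claimed ~47% drop in neural connectivity.
# Figures (79 -> 42 connections) are taken from the thread above.
before, after = 79, 42
reduction_pct = (before - after) / before * 100
print(f"{reduction_pct:.1f}% reduction")  # 46.8%, i.e. roughly 47%
```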
Teachers didn't know which essays used AI, but they could feel something was wrong.

"Soulless."
"Empty with regard to content."
"Close to perfect language while failing to give personal insights."

The human brain can detect cognitive debt even when it can't name it.
Here's the terrifying part: When researchers forced ChatGPT users to write without AI, they performed worse than people who never used AI at all.

It's not just dependency. It's cognitive atrophy.

Like a muscle that's forgotten how to work.
The MIT team recorded EEG data from 54 participants over 4 months.

They tracked alpha waves (creative processing), beta waves (active thinking), and neural connectivity patterns.

This isn't opinion. It's a measurable decline in neural engagement from AI overuse.
The productivity paradox nobody talks about:

Yes, ChatGPT makes you 60% faster at completing tasks.

But it reduces the "germane cognitive load" needed for actual learning by 32%.

You're trading long-term brain capacity for short-term speed.
Companies celebrating AI productivity gains are unknowingly creating cognitively weaker teams.

Employees become dependent on tools they can't live without, and less capable of independent thinking.

Many recent studies underscore the same problem, including one by Microsoft.
MIT researchers call this "cognitive debt" - like technical debt, but for your brain.

Every shortcut you take with AI creates interest payments in lost thinking ability.

And just like financial debt, the bill comes due eventually.

But there's good news...
Because session 4 of the study revealed something interesting:

People with strong cognitive baselines showed HIGHER neural connectivity when using AI than chronic users.

But chronic AI users forced to work without it? They performed worse than people who never used AI at all.
The solution isn't to ban AI. It's to use it strategically.

The choice is yours:
Build cognitive debt and become an AI dependent.
Or build cognitive strength and become an AI multiplier.

The first brain scan study of AI users just showed us the stakes.

Choose wisely.
Thanks for reading!

I'm Alex, COO at ColdIQ. Built a $4.5M ARR business in under 2 years.

Started with two founders doing everything.

Now we're a remote team across 10 countries, helping 200+ businesses scale through outbound systems.
RT the first tweet if you found this thread valuable.

Follow me @itsalexvacca for more threads on outbound and GTM strategy, AI-powered sales systems, and how to build profitable businesses that don't depend on you.

I share what worked (and what didn't) in real time.
