Alex Vacca · Jun 18
BREAKING: MIT just completed the first brain scan study of ChatGPT users & the results are terrifying.

Turns out, AI isn't making us more productive. It's making us cognitively bankrupt.

Here's what 4 months of data revealed:

(hint: we've been measuring productivity all wrong)
83.3% of ChatGPT users couldn't quote from essays they wrote minutes earlier.

Let that sink in.

You write something, hit save, and your brain has already forgotten it because ChatGPT did the thinking.
Brain scans revealed the gap: writers working without AI showed 79 significant neural connections, while ChatGPT users showed just 42.

That's a 47% reduction in brain connectivity.

If your computer lost half its processing power, you'd call it broken. That's roughly what the scans show in ChatGPT users' brains.
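(Quick math check for the skeptics: a two-line Python sketch using the thread's own figures, 79 connections without AI versus 42 with ChatGPT, taken as reported rather than re-derived from the paper.)

# Reported connection counts: writing unaided vs. writing with ChatGPT
without_ai = 79
with_chatgpt = 42
reduction = (without_ai - with_chatgpt) / without_ai
print(f"{reduction:.1%}")  # prints 46.8%, which rounds to the ~47% quoted above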
Teachers didn't know which essays used AI, but they could feel something was wrong.

"Soulless."
"Empty with regard to content."
"Close to perfect language while failing to give personal insights."

The human brain can detect cognitive debt even when it can't name it.
Here's the terrifying part: When researchers forced ChatGPT users to write without AI, they performed worse than people who never used AI at all.

It's not just dependency. It's cognitive atrophy.

Like a muscle that's forgotten how to work.
The MIT team used EEG brain scans on 54 participants for 4 months.

They tracked alpha waves (creative processing), beta waves (active thinking), and neural connectivity patterns.

This isn't opinion. It's a measurable drop in brain engagement from AI overuse.
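(For readers wondering what "tracking alpha and beta waves" actually involves: the study's connectivity analysis is far more involved than this, but a generic sketch of quantifying EEG band power with SciPy, run on a synthetic signal rather than the study's data, looks roughly like the following.)

import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

# Synthetic stand-in for one EEG channel: 60 seconds sampled at 256 Hz.
# Illustration only; not the MIT team's data or pipeline.
fs = 256
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)

# Power spectral density via Welch's method
freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)

def band_power(freqs, psd, lo, hi):
    # Integrate the PSD over one frequency band, e.g. alpha (8-12 Hz) or beta (13-30 Hz)
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

alpha = band_power(freqs, psd, 8, 12)   # alpha band, which the thread glosses as creative processing
beta = band_power(freqs, psd, 13, 30)   # beta band, glossed as active thinking
print(f"alpha power: {alpha:.3f}, beta power: {beta:.3f}")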
The productivity paradox nobody talks about:

Yes, ChatGPT makes you 60% faster at completing tasks.

But it reduces the "germane cognitive load" needed for actual learning by 32%.

You're trading long-term brain capacity for short-term speed.
Companies celebrating AI productivity gains are unknowingly creating cognitively weaker teams.

Employees become dependent on tools they can't live without, and less capable of independent thinking.

Several recent studies underscore the same problem, including one from Microsoft Research.
MIT researchers call this "cognitive debt" - like technical debt, but for your brain.

Every shortcut you take with AI creates interest payments in lost thinking ability.

And just like financial debt, the bill comes due eventually.

But there's good news...
Because session 4 of the study revealed something interesting:

Participants who first built the skill without AI showed HIGHER neural connectivity when they finally used the tool than long-time ChatGPT users ever did.

But the long-time users, cut off from the tool, under-engaged those same networks and lagged behind the group that had never leaned on AI.
The solution isn't to ban AI. It's to use it strategically.

The choice is yours:
Build cognitive debt and become AI-dependent.
Or build cognitive strength and become an AI multiplier.

The first brain scan study of AI users just showed us the stakes.

Choose wisely.
Thanks for reading!

I'm Alex, COO at ColdIQ. Built a $4.5M ARR business in under 2 years.

Started with two founders doing everything.

Now we're a remote team across 10 countries, helping 200+ businesses scale through outbound systems.
RT the first tweet if you found this thread valuable.

Follow me @itsalexvacca for more threads on outbound and GTM strategy, AI-powered sales systems, and how to build profitable businesses that don't depend on you.

I share what worked (and what didn't) in real time.
