Apple has just published a paper with a devastating title: *The Illusion of Thinking*. And it's not a metaphor. What it demonstrates is that the AI models we use every day - yes, including ChatGPT - don't think. Not even a little. They just imitate thinking.
Let me explain: 🧵👇
The paper argues that those models, no matter how brilliant they may seem, do not understand what they are doing. They do not solve problems. They do not reason. They merely generate text word by word, trying to sound coherent. Real thought: zero.
To demonstrate this, Apple designed a series of experiments with logic puzzles: Tower of Hanoi, the river-crossing problem, stacked blocks, etc.
The same kind we use to test whether a human, even a child, can reason step by step.
In the first one, for example, they had the AI solve the Tower of Hanoi. With 3 disks, it solves it perfectly. But as soon as you add more disks, more difficulty, the model starts to get confused. It repeats moves. It skips steps. It contradicts itself. It fails.
Was the solution too difficult?
No. Because in many cases, the researchers gave it *the correct algorithm* step by step, as a helping hand.
And you know what happened? It still couldn't follow it, not even when all it had to do was copy the homework.
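For reference, the "correct algorithm" for the Tower of Hanoi is a three-line textbook recursion. A minimal Python sketch (my illustration, not the paper's exact prompt):

```python
def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)  # 7 moves; in general 2**n - 1, so 10 disks take 1023
```

Follow this mechanically and you never fail. That's what makes the result striking: the models had the recipe and still lost the thread.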
Second example: the classic river-crossing problem. You have to ferry a wolf, a goat, and a cabbage across, never leaving two of them alone when one would eat the other.
The AI handles it well… until you add one more constraint. That's when it starts doing exactly what it shouldn't.
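Part of why this puzzle makes a good benchmark is that it's mechanically checkable: a blind brute-force search solves it in milliseconds. A rough sketch of the standard BFS approach, assuming the classic three-item version:

```python
from collections import deque

# State = the set of things still on the starting bank ("F" is the farmer/boat).
ITEMS = frozenset({"F", "wolf", "goat", "cabbage"})
UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]  # pairs that can't be left alone

def safe(bank):
    # A bank is safe if the farmer is there, or no unsafe pair is together.
    return "F" in bank or not any(pair <= bank for pair in UNSAFE)

def solve():
    queue, seen = deque([(ITEMS, [])]), {ITEMS}
    while queue:
        bank, path = queue.popleft()
        if not bank:                      # starting bank empty: everyone crossed
            return path
        side = bank if "F" in bank else ITEMS - bank  # wherever the farmer is
        for cargo in [None] + [x for x in side if x != "F"]:
            crossing = {"F"} if cargo is None else {"F", cargo}
            nxt = bank ^ crossing         # symmetric difference flips their bank
            if safe(nxt) and safe(ITEMS - nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [cargo or "(nothing)"]))

print(solve())  # 7 crossings: goat over, back empty, wolf/cabbage, goat back, ...
```

A search with zero understanding solves it every time. The models, given one extra constraint, don't.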
But the most unsettling thing isn't that it makes mistakes. It's that when the problem becomes more complex… the AI "thinks" less.
Literally: it uses fewer tokens, takes fewer steps, explores fewer solutions. As if it were silently giving up.
Apple measured how many tokens the model dedicated to reasoning.
It found a very pronounced pattern: once the problem gets hard enough, the model starts to generate *less* reasoning.
Exactly the opposite of what a human would do.
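The shape of that measurement is simple to reproduce. A sketch, assuming a hypothetical `ask_model()` in place of a real LLM API (the paper counted the models' actual thinking tokens; this only shows the idea):

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; returns the
    model's chain-of-thought text. Wire up your provider here."""
    raise NotImplementedError

# Scale the difficulty and watch how much "thinking" the model emits.
for n in range(3, 13):
    trace = ask_model(f"Solve the Tower of Hanoi with {n} disks. Show your reasoning.")
    print(n, len(trace.split()))  # crude proxy: whitespace token count
```

If the paper's finding holds, the counts fall past a certain n instead of rising.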
Why does this happen?
Because the AI doesn't know if it's doing well or poorly.
It has no sense of an objective.
It doesn't correct. It doesn't compare. It doesn't evaluate.
It just completes text, as if it were writing without knowing what for.
This breaks a very widespread idea:
“If we keep giving it more data, more parameters, and more power, AI will become superintelligent.”
Apple's paper says: probably not.
Because *there is no real thinking to scale*.
What these models do is seem intelligent.
And that’s the most dangerous thing.
Because when they sound convincing, we believe they understand.
When they reason out loud, we believe they’re thinking.
But it’s pure theater.
What you see as reasoning is just an act.
The AI says: “first I do this, then that other thing…”
but it doesn’t *understand* the logic behind it.
It’s only imitating structures it saw in its training.
And when it doesn’t recognize them, it improvises poorly.
This does not mean that AI is useless.
But it does mean that we cannot treat it as if it had human capabilities:
it does not plan, it does not get frustrated, it does not improve its strategy.
It has no will, nor purpose, nor even awareness of error.
The real risk is not that it thinks too much.
It’s that it thinks *nothing*… and yet we still give it power.
Because the more convincing it sounds, the more likely we are to mistake it for something it’s not.
So the next time ChatGPT, Claude, or Gemini says to you:
“Let me think…”
Stop.
And remember:
they’re not thinking.
They’re guessing.