Guri Singh (@heygurisingh)
Feb 21 · 16 tweets · 3 min read
Apple has just published a paper with a devastating title: *The Illusion of Thinking*. And it's not a metaphor. What it demonstrates is that the AI models we use every day - yes, ones like ChatGPT - don't think. Not one bit. They just imitate doing so.

Let me explain: 🧵👇
The paper argues that those models, no matter how brilliant they may seem, do not understand what they are doing. They do not solve problems. They do not reason. They merely generate text word by word, trying to sound coherent. Real thought: zero.
To demonstrate this, Apple designed a series of experiments with logic puzzles: Tower of Hanoi, the river-crossing problem, stacked blocks, etc.

The same ones we use to see if a human or even a child can reason in steps.
In the first one, for example, they had the AI solve the Tower of Hanoi. With 3 disks, it solves it perfectly. But as soon as you add more disks, the model starts to get confused. It repeats moves. It skips steps. It contradicts itself. It fails.
Was the solution too difficult?
No. Because in many cases, the researchers gave it *the correct algorithm* step by step, as a helping hand.
And you know what happened? It still couldn't follow it, not even by copying the homework.
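For context, that "correct algorithm" is genuinely short - a textbook recursion that fits in a few lines. A minimal Python sketch (my illustration, not the paper's code):

```python
def hanoi(n, source, target, spare, moves):
    """Append the optimal Tower of Hanoi move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks out of the way
    moves.append((source, target))              # move the largest disk to its goal peg
    hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 7: the optimum is 2**n - 1 moves
```

The procedure is trivial to state, but the move list grows as 2^n - 1, so executing it faithfully for 10+ disks means producing over a thousand exact steps in order - and that mechanical execution is exactly where the models fell apart.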
Second example: the classic river-crossing problem. You have to ferry a wolf, a goat, and a cabbage across a river, without ever leaving two of them alone when one would eat the other.

The AI does it well… until you add one more restriction. That's when it starts doing exactly what it shouldn't do.
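To see how undemanding this puzzle really is, here is a toy solver I wrote for illustration (not the paper's test harness): a plain breadth-first search over states, where a state is just "which items are on the left bank, and is the farmer there too".

```python
from collections import deque

ITEMS = frozenset({"wolf", "goat", "cabbage"})
UNSAFE = [("wolf", "goat"), ("goat", "cabbage")]  # pairs that can't be left alone

def safe(left, farmer_left):
    """No unsafe pair may share a bank the farmer isn't watching."""
    right = ITEMS - left
    for bank, watched in ((left, farmer_left), (right, not farmer_left)):
        if not watched and any(a in bank and b in bank for a, b in UNSAFE):
            return False
    return True

def solve():
    """Breadth-first search over (items on left bank, farmer on left?) states."""
    start, goal = (ITEMS, True), (frozenset(), False)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (left, farmer_left), path = queue.popleft()
        if (left, farmer_left) == goal:
            return path
        here = left if farmer_left else ITEMS - left
        for cargo in sorted(here) + [None]:  # carry one item across, or row alone
            new_left = set(left)
            if cargo is not None:
                new_left.symmetric_difference_update({cargo})  # toggle cargo's bank
            state = (frozenset(new_left), not farmer_left)
            if safe(*state) and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo]))

plan = solve()
print(len(plan))  # 7 crossings; the goat goes first and last
```

The point of showing this: a short, ordinary search tracks state explicitly and never "forgets" a constraint. The models, by contrast, hold no state at all - they only predict the next word - so one extra rule is enough to derail them.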
But the most unsettling thing isn't that it makes mistakes. It's that when the problem becomes more complex… the AI "thinks" less.

Literally: it uses fewer tokens, takes fewer steps, explores fewer solutions. As if it were silently giving up.
Apple measured how many tokens the model dedicated to reasoning.

It found a striking pattern: once the problem gets difficult, the model starts to generate *less* reasoning.

Exactly the opposite of what a human would do.
Why does this happen?

Because the AI doesn't know if it's doing well or poorly.
It has no sense of an objective.
It doesn't correct. It doesn't compare. It doesn't evaluate.

It just completes text, as if it were writing without knowing what for.
This breaks a very widespread idea:
“If we keep giving it more data, more parameters, and more power, AI will become superintelligent.”

Apple's paper says: probably not.
Because *there is no real thinking to scale*.
What these models do is seem intelligent.

And that’s the most dangerous thing.

Because when they sound convincing, we believe they understand.
When they reason out loud, we believe they’re thinking.
But it’s pure theater.
What you see as reasoning is just an act.

The AI says: “first I do this, then that…”
but it doesn’t *understand* the logic behind it.
It’s only imitating structures it saw in its training.
And when it doesn’t recognize them, it improvises poorly.
This does not mean that AI is useless.

But it does mean that we cannot treat it as if it had human capabilities:
it does not plan, it does not get frustrated, it does not improve its strategy.

It has no will, nor purpose, nor even awareness of error.
The real risk is not that it thinks too much.
It’s that it thinks *nothing*… and yet we still give it power.

Because the more convincing it sounds, the more likely we are to mistake it for something it’s not.
So the next time ChatGPT, Claude, or Gemini say to you:
“Let me think…”

Stop.

And remember:
they’re not thinking.
They’re guessing.

Source: ml-site.cdn-apple.com/papers/the-ill…
I hope you've found this thread helpful.

Follow me @heygurisingh for more.

Like/Repost if you can:



Did Thread Reader help you today?

Support us! We are indie developers!


This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Don't want to be a Premium member but still want to support us?

Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal

Or Donate anonymously using crypto!

Ethereum

0xfe58350B80634f60Fa6Dc149a72b4DFbc17D341E copy

Bitcoin

3ATGMxNzCUFzxpMCHL5sWSt4DVtS8UqXpi copy

Thank you for your support!

Follow Us!

:(