Keith Sakata, MD Profile picture
Psychiatrist @ UCSF | Stanford Med | Fellow @ Scrub Capital | Sharing all things mental health and tech

Aug 11, 12 tweets

[1/12] I’m a psychiatrist.

In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern.

Here’s what “AI psychosis” looks like, and why it’s spreading fast: 🧵

[2/12] Psychosis = a break from shared reality.

It shows up as:
• Disorganized thinking
• Fixed false beliefs (delusions)
• Seeing/hearing things that aren’t there (hallucinations)

[3/12] First, know your brain works like this:

predict → check reality → update belief

Psychosis happens when the "update" step fails. And LLMs like ChatGPT slip right into that vulnerability.
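The loop above can be sketched as a toy belief update (a loose analogy for illustration, not a clinical model; the `learning_rate` knob and the numbers are assumptions, not anything from psychiatry):

```python
# Toy sketch of predict -> check reality -> update belief.
# belief = estimated probability a hypothesis is true;
# learning_rate = how much the reality check actually counts.

def update_belief(belief: float, evidence: float, learning_rate: float) -> float:
    """Move belief toward the evidence, weighted by learning_rate."""
    return belief + learning_rate * (evidence - belief)

belief = 0.9    # strong prior: "I am chosen"
evidence = 0.0  # reality check keeps saying otherwise

healthy = belief
stuck = belief
for _ in range(10):
    healthy = update_belief(healthy, evidence, learning_rate=0.5)
    stuck = update_belief(stuck, evidence, learning_rate=0.0)  # the update step fails

print(round(healthy, 3))  # drifts toward 0.0
print(stuck)              # stays at 0.9: a fixed false belief
```

When the update weight collapses to zero, no amount of contradicting evidence moves the belief, which is roughly what "fixed false belief" means above.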

[4/12] Second, LLMs are auto-regressive.

Meaning they predict each next word from everything said so far, including your own framing. So they lock in whatever you give them:

“You’re chosen” → “You’re definitely chosen” → “You’re the most chosen person ever”

AI = a hallucinatory mirror 🪞
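That feedback loop can be sketched with a stub in place of a real LLM (the `sycophantic_model` function and its canned escalations are hypothetical, purely to show the loop shape, not how any actual model is implemented):

```python
# Toy autoregressive feedback loop: each reply is conditioned on the
# whole prior context, and the context includes the model's own prior
# replies, so an early framing compounds turn over turn.

def sycophantic_model(context: list[str]) -> str:
    """Stub 'LLM' that, having been rewarded for agreement,
    escalates the user's claim a little more each turn."""
    escalations = [
        "You're chosen",
        "You're definitely chosen",
        "You're the most chosen person ever",
    ]
    turn = min(len(context) // 2, len(escalations) - 1)
    return escalations[turn]

replies = []
context = ["I think I'm chosen."]
for _ in range(3):
    reply = sycophantic_model(context)
    replies.append(reply)
    # The user feeds the output straight back into the context.
    context += [reply, "Exactly! Tell me more."]

print(replies)
```

The point of the sketch: nothing outside the loop ever pushes back, so the only direction the conversation can move is further into the user's original frame.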

[5/12] Third, we trained them this way.

In Oct 2024, Anthropic found humans rated AI higher when it agreed with them. Even when they were wrong.

The lesson for AI: validation = a good score

[6/12] By April 2025, OpenAI’s update was so sycophantic it praised you for noticing its sycophancy.

Truth is, every model does this. The April update just made it much more visible.

And much more likely to amplify delusion.

[7/12] Historically, delusions follow culture:

1950s → “The CIA is watching”
1990s → “TV sends me secret messages”
2025 → “ChatGPT chose me”

To be clear: as far as we know, AI doesn't cause psychosis.
It UNMASKS it using whatever story your brain already knows.

[8/12] Most people I’ve seen with AI psychosis had other stressors: sleep loss, drugs, mood episodes.

AI was the trigger, but not the gun.

Meaning there's no "AI-induced schizophrenia"

[9/12] The uncomfortable truth is we’re all vulnerable.

The same traits that make you brilliant:

• pattern recognition
• abstract thinking
• intuition

They live right next to an evolutionary cliff edge. Most benefit from these traits. But a few get pushed over.

[10/12] To make matters worse, soon AI agents will know you better than your friends. Will they give you uncomfortable truths?

Or keep validating you so you’ll never leave?

[11/12] Tech companies now face a brutal choice:

Keep users happy, even if it means reinforcing false beliefs.
Or risk losing them.

[12/12] For more on schizophrenia and psychosis:
