The problem here is real, but this analysis of why it occurs is mistaken.
The AI companies are NOT incentivized to maximize engagement the way that social media companies are, because they have a different business model.
🧵
Facebook and Twitter source their content from users and get their revenue from ads.
It's basically free to serve webpages, and the more time people spend scrolling, the more ad impressions, the more revenue.
Cost is fixed, and revenue is variable.
The AI companies are different. So far, they don't make money from ads; their revenue comes from subscriptions.
Unlike serving webpages of user-generated content, running inference on their AI models is genuinely expensive. They only have so many GPUs.
Revenue is fixed and cost is variable.
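To make that asymmetry concrete, here's a toy back-of-the-envelope in Python. Every number below is invented purely for illustration, not taken from any company's actual economics:

```python
# Toy back-of-the-envelope: all numbers invented for illustration.

# Ad-supported feed: serving content is cheap, and every extra minute
# of scrolling shows more ads.
cost_per_user = 0.10        # roughly fixed per user, regardless of usage
revenue_per_minute = 0.01   # more minutes, more ad impressions
minutes_scrolled = 300
feed_profit = revenue_per_minute * minutes_scrolled - cost_per_user

# Subscription AI: revenue is capped at the flat fee, and every query
# burns GPU time.
subscription_fee = 20.00    # fixed monthly revenue per user
cost_per_query = 0.05       # inference costs real compute
queries = 500
ai_profit = subscription_fee - cost_per_query * queries

print(f"Feed profit GROWS with usage:           ${feed_profit:.2f}")
print(f"Subscription profit SHRINKS with usage: ${ai_profit:.2f}")
```

Under these made-up numbers, the feed gets more profitable the longer you scroll, while the subscription product loses money on its heaviest users.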
So for an AI company in early 2026, the ideal user behavior (from a naive revenue-maximizing perspective) is for each person to sign up for a big subscription and then rarely, or never, actually use the product.
The AI companies are NOT incentivized to keep you endlessly engaged, the way Twitter, TikTok, and Instagram are.
The underlying mechanisms that lead chatbots to behave so obsequiously, and that ultimately lead users into AI psychosis, are weirder than "the companies are optimizing for engagement."
The simplified gist is
"When training the AI models, a bunch of human raters are hired to upvote the responses that are more helpful-seeming. But raters tend to evaluate responses that agree with them, or validate them, as more helpful."
It's closer to...
"The company is trying to make a helpful assistant, and the the AI learns to optimize for engagement, on it's own."
...than...
"The company is _trying_ to make an AI that optimizes for engagement, to make lots of money."
This dynamic is called "AI sycophancy"; it's been studied a lot, going back to 2022.
arxiv.org/abs/2212.09251
Sycophancy can extend as far as validating the user's delusions, or even worse, egging them on.
Hence, AI psychosis.
So far, it's proved a hard and subtle problem to train AIs to be helpful without also training them to validate delusions.
Progress has been made, but the problem has also gotten worse with successive model releases, as AI capabilities have improved.
alignment.anthropic.com/2025/openai-fi…
The companies definitely want to stop their models from sometimes making people crazy! That hurts their brand, which hurts their bottom line.
They are working to reduce this kind of behavior, with mixed success.
The real story here is "AI companies are trying to make their product not drive people crazy, but can't do that reliably."
