Eli Tyre Profile picture
Apr 5 15 tweets 3 min read Read on X
The problem here is real, but this analysis is of why it occurs is mistaken.

The AI companies are NOT incentived to maximize engagement the way that social media companies are, because they have a different business model.

🧵
Facebook and twitter source their content from users and get their revenue from adds.

It's basically free to serve webpages, and the more time people send scrolling the more ad impression, the more revenue.

Cost is fixed, and revenue is variable.
The AI companies are different. So far, they don't make money from ads. Currently, their revenue comes from subscriptions.

Unlike serving webpages of user-generated content, running inference on their AI models is a cost. They only have so many GPUs.
Revenue is fixed and cost is variable.
So for an AI company in early 2026, the ideal user-behavior (from a naive revenue maximizing perspective), is for each person to sign up for a big subscription, and then rarely, or never, actually use the product.
The AI companies are NOT incentivized to keep you endlessly engaged, the way twitter, TikTok, and instagram are.
The underlying mechanisms that lead to chatbots behaving in so obsequiously, and ultimately leads to users experiencing AI psychosis, is weirder than "the companies are optimizing for engagement."
The simplified gist is

"When training the AI models, a bunch of human raters are hired to upvote the responses that are more helpful-seeming. But raters tend to evaluate responses that agree with them, or validate them, as more helpful."
It's closer to...

"The company is trying to make a helpful assistant, and the the AI learns to optimize for engagement, on it's own."

...than...

"The company is _trying_ to make an AI that optimizes for engagement, to make lots of money."
This dynamic is called "ai sycophancy", it's been studied a lot, going back to 2022.

arxiv.org/abs/2212.09251
Sycophancy can extend as far as validating the user's delusions, or even worse, egging them on.

Hence, AI psychosis.
So far, it's proved a hard and subtle problem to train AIs to be helpful without also training them to validate delusions.

Progress has been made, but the problem has also gotten worse with successive model releases, as AI capabilities has improved.

alignment.anthropic.com/2025/openai-fi…
The companies definitely want to stop their models from sometimes making people crazy! That hurts their brand which hurts their bottom-line.

They are working to reduce this kind of behavior, with mixed success.
The real story here is "AI companies are trying to make their product not drive people crazy, but can't do that reliably."
@threadreaderapp @Twtextapp @unrollthread unroll @threader_app compile

• • •

Missing some Tweet in this thread? You can try to force a refresh
 

Keep Current with Eli Tyre

Eli Tyre Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @EpistemicHope

Apr 3
Some thinking about the ethics around people funding me:

I'm working very hard pushing on projects that seem to me to be moving the world towards a better equilibrium. It feels like it does make sense for the broader ecosystem to pour resources into accelerating my efforts.
Wild as it seems, I have more strategic orientation than most, and enough taste to see how a lot of projects could be better, and the energy and agency to make them so.
So it feels not unreasonable or inappropriate for me to absorb more resources. There are people who want to help, I could absorb more resources to generically make things better in a flexible on the ground way.
Read 20 tweets
Apr 1
@deanwball writes that the blocker to AI takeover risk is computational irreducibility. Intelligence can't predict everything, and so superinelligence can't overthrow humans.

This is wrong. Image
This argument misconstrues what superhuman "intelligence" (or if one prefers, superhuman "capability") entails.
Some specific human individuals have been world-historically skilled at managing capital, interfacing with hard-to-predict systems, organizing groups to accomplish goals, etc.
Read 17 tweets
Mar 4
Is there a good way to both support Anthropic for their integrity in not caving to the DoD and also loudly criticize Anthropic for walking back their RSP?

I think Ant employees should be reflecting on their company's ethical stance, about as much OAI employees, right now.
I would feel better about this week's activism against OAI, if it wasn't also letting Ant off the hook.

They're doing a crazy thing that endangers all our lives. They just took a step towards more risk, with an attitude of "trust us bro". We should pressure them about it.
I want them to feel bolstered that society has their back on this narrow point with DoD.

But society does not and should not have their back generically, on their overall plan to build superintelligence by automating AI R&D, or their decision to abandon their RSP.
Read 5 tweets
Oct 8, 2025
@ESYudkowsky, you've talked repeatedly about how trying to get safety properties via schemes that depend on utilizing two or more AIs is a red hering.

eg If you knew actually knew how to do it with 2+ AIs, you could do it more simply with only one. Image
Why aren't GANs a counterpoint to this claim? They seem like a central example of getting capabilities out of the interplay of multiple AIs with different objective functions.
And at the time when GANs were state of the art, there wasn't a known way to get that capability with a simpler architecture that only used a single neural net with a single objective function.
Read 5 tweets
Jun 7, 2025
One thing that would help me figure out if I should invest a lot more into meditation is knowing in what situations it DOESN'T make sense to cultivate a meditation practice.
People who proselytize for meditation practice:

Given you see as the main benefits of meditating, and what diagnostic questions you would ask + what answers someone would give you, that would dampen your recommendation that they meditate?

@sashachapin @nickcammarata
For instance, if someone is naturally super low neuroticism, does that change the cost benefit analysis for them? On average, should they expect to get less out of meditating a lot?
Read 11 tweets
Feb 21, 2025
My 31st birthday was a few weeks ago.

If you want to do something nice for my (belated) birthday, the number one thing you can do is suggest people who I might want to date.
I’m planning *not* to prioritize active dating until after the singularity.

I’m sad that I didn’t succeed at finding a life partner before crunch time started in earnest, but given my estimates of the tradeoffs involved, it’s not worth it for me to spend my agency on it now.
But even so, I still want some kind of emotionally-intimate relationship with someone I like and respect. If you introduce me to someone I end up dating semi-seriously for a year, that would be an enormous boon to my life.
Read 13 tweets

Did Thread Reader help you today?

Support us! We are indie developers!


This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Don't want to be a Premium member but still want to support us?

Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal

Or Donate anonymously using crypto!

Ethereum

0xfe58350B80634f60Fa6Dc149a72b4DFbc17D341E copy

Bitcoin

3ATGMxNzCUFzxpMCHL5sWSt4DVtS8UqXpi copy

Thank you for your support!

Follow Us!

:(