Luiza Jarovsky, PhD
Sep 17
OpenAI has recently released the paper "How People Use ChatGPT," the largest study to date of consumer ChatGPT usage.

Some of the paper's methodological choices obfuscate risky AI chatbot use cases and complicate legal oversight. Further research is needed to scrutinize them.

A thread🧵:
The paper's goal is to showcase the chatbot's economic benefits, as the company's blog post makes clear:
An unstated goal of this paper is to downplay risky AI use cases, especially therapy and companionship, which have led to suicides, murder, AI-induced psychosis, spiritual delusions, emotional dependence, unhealthy attachment, and more.

These use cases create bad headlines for OpenAI, and they significantly increase the company's legal risks:
First, from an EU data protection perspective, to benefit from the "legitimate interest" provision in the GDPR, a company must demonstrate, among other things, that the data processing will not cause harm to the data subjects.

Suicides and psychoses would complicate this claim.
Second, these ChatGPT-related harms create a major liability problem for OpenAI, which is already being sued by the family of one of the victims (Raine v. OpenAI).

If the risky use cases are mainstream (and not exceptions), judges are likely to side with the victims, given the company's lack of safety provisions.
Third, last week, the U.S. Federal Trade Commission issued 6(b) orders against various AI chatbot developers, including OpenAI, initiating major inquiries to understand what steps these companies have taken to prevent the negative impacts that AI chatbots can have on children.

OpenAI is interested in downplaying therapy/companionship to avoid further scrutiny and enforcement.
Fourth, Illinois has recently enacted a law banning AI for mental health or therapeutic decision-making without oversight by licensed clinicians. Other U.S. states and countries are considering similar laws.

OpenAI wants to downplay these use cases to avoid bans and further scrutiny.
Fifth, risk-based legal frameworks, such as the EU AI Act, treat AI systems like ChatGPT as general-purpose AI systems, which are generally outside of any specific high-risk category.

If it becomes clear that therapy/companionship is actually the most popular use case, it could lead to risk reassessments and increase the company's compliance burden.
Now, back to the paper.

Here's how it expressly tries to minimize AI therapy and companionship:

First, it strategically names the category "Relationships and Personal Reflection," which, according to the paper's methodology, accounts for only 1.9% of ChatGPT messages:
"(...) we find the share of messages related to companionship or social-emotional issues is fairly small: only 1.9% of ChatGPT messages are on the topic of Relationships and Personal Reflection (...) In contrast, Zao-Sanders (2025) estimates that Therapy /Companionship is the most prevalent use case for generative AI."
The problem is that therapy/companionship often manifests itself in the type of language used and in usage intensity (e.g., consulting ChatGPT multiple times a day, asking it for information on various specific topics throughout the day).
In a horrifying example, Adam Raine, before committing suicide, asked ChatGPT if the noose he was building could hang a human (screenshot from the family's lawsuit below):
This extremely risky interaction, which should never have happened, would likely have fallen under either "specific info" or "how-to advice" in the paper's terminology, obfuscating the fact that it is part of a broader "companionship" pattern.
OpenAI's paper fails to show that a significant share of ChatGPT interactions happen in a context of emotional attachment, intensive usage, and personal dependence, which have been at the core of recent AI chatbot-related tragedies.
To properly govern AI chatbots, we need more studies (preferably NOT written by OpenAI or other AI chatbot developers) showing the extent to which highly anthropomorphic AI systems negatively impact people, including through emotional manipulation.
As I wrote above, this paper was written with the goal of showing investors, lawmakers, policymakers, regulators, and the public how 'democratizing' and 'economically valuable' ChatGPT is.
We need more studies that analyze how people can be negatively affected by AI chatbots through various usage patterns, with a focus on making AI chatbots safer and protecting users from AI-related harm.
If you are interested in the legal and ethical challenges of AI, including AI chatbot governance, join my newsletter's 78,000+ subscribers: luizasnewsletter.com


