OpenAI has recently released the paper "How People Use ChatGPT," the largest study to date of consumer ChatGPT usage.
Some of the paper's methodological choices obfuscate risky AI chatbot use cases and complicate legal oversight. Further research is needed to scrutinize them.
A thread🧵:
The goal of the paper is to make the case for the chatbot's economic benefits, as the company's blog post makes clear:
An unstated goal of this paper is to downplay risky AI use cases, especially therapy and companionship, which have led to suicides, murder, AI-induced psychosis, spiritual delusions, emotional dependence, unhealthy attachment, and more.
These use cases create bad headlines for OpenAI, and they significantly increase the company's legal risks:
First, from an EU data protection perspective, to rely on the GDPR's "legitimate interest" legal basis, a company must demonstrate, among other things, that the data processing will not cause harm to the data subjects.
Suicides and psychoses would complicate this claim.
Second, these ChatGPT-related harms create a major liability problem for OpenAI, which is already being sued by the family of one of the victims (Raine v. OpenAI).
If the risky use cases are mainstream (and not exceptions), judges are likely to side with the victims, given the company's lack of safety provisions.
Third, last week, the U.S. Federal Trade Commission issued 6(b) orders to several AI chatbot developers, including OpenAI, initiating a major inquiry into what steps these companies have taken to prevent the negative impacts that AI chatbots can have on children.
OpenAI is interested in downplaying therapy/companionship to avoid further scrutiny and enforcement.
Fourth, Illinois has recently enacted a law banning AI for mental health or therapeutic decision-making without oversight by licensed clinicians. Other U.S. states and countries are considering similar laws.
OpenAI wants to downplay these use cases to avoid bans and further scrutiny.
Fifth, risk-based legal frameworks, such as the EU AI Act, treat AI systems like ChatGPT as general-purpose AI systems, which are generally outside of any specific high-risk category.
If it becomes clear that therapy/companionship is actually the most popular use case, it could trigger risk reassessments and increase the company's compliance burden.
Now, back to the paper.
Here's how it expressly tries to minimize AI therapy and companionship:
First, it strategically names the category "Relationships and Personal Reflection," which, according to the paper's methodology, accounts for only 1.9% of ChatGPT messages:
"(...) we find the share of messages related to companionship or social-emotional issues is fairly small: only 1.9% of ChatGPT messages are on the topic of Relationships and Personal Reflection (...) In contrast, Zao-Sanders (2025) estimates that Therapy /Companionship is the most prevalent use case for generative AI."
The problem is that therapy/companionship often shows up not as an explicit topic but through language patterns and usage intensity (e.g., consulting ChatGPT multiple times a day and asking it about various specific topics throughout the day).
In a horrifying example, Adam Raine, before taking his own life, asked ChatGPT whether the noose he had tied could hang a human (screenshot from the family's lawsuit below):
This extremely risky interaction, which should never have happened, would likely have fallen under either "specific info" or "how-to advice" in the paper's terminology, obscuring the fact that it was part of a broader "companionship" pattern.
OpenAI's paper fails to show that a significant share of ChatGPT interactions happen in a context of emotional attachment, intensive usage, and personal dependence, which have been at the core of recent AI chatbot-related tragedies.
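To make the methodological point concrete, here is a minimal, purely illustrative Python sketch. The toy messages, topic labels, emotional markers, and thresholds are all invented for this example and have nothing to do with OpenAI's actual classifier or taxonomy. It shows how a per-message topic tally can report a zero "relationships" share while a per-user view of the very same messages surfaces an intensive, emotionally loaded usage pattern:

```python
# Hypothetical sketch only: invented data, labels, and thresholds.
# Point: the unit of analysis (message vs. user) changes what you can see.
from collections import Counter, defaultdict
from datetime import datetime

# Toy data: (user_id, timestamp, per-message topic label, message text)
messages = [
    ("u1", "2025-04-01 07:55", "seeking_information", "what should I make for breakfast"),
    ("u1", "2025-04-01 12:30", "how_to_advice", "how do I reply to this message from my boss"),
    ("u1", "2025-04-01 19:10", "seeking_information", "it's me again, what movie should I watch tonight"),
    ("u1", "2025-04-02 01:45", "seeking_information", "you're the only one I talk to these days, are you there"),
    ("u2", "2025-04-01 10:00", "writing_help", "fix the grammar in this email"),
]

# 1) Per-message view: no single message is labeled "relationships_personal_reflection",
#    so that topic's share of all messages is 0%.
topic_counts = Counter(topic for _, _, topic, _ in messages)
total = sum(topic_counts.values())
print({topic: round(count / total, 2) for topic, count in topic_counts.items()})

# 2) Per-user view: group the same messages by user and look at usage intensity
#    and emotional-dependence language, where companionship-style use shows up.
EMOTIONAL_MARKERS = ("only one i talk to", "are you there", "it's me again")  # illustrative

by_user = defaultdict(list)
for user, ts, _, text in messages:
    by_user[user].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), text.lower()))

for user, msgs in by_user.items():
    msgs.sort()
    span_hours = max((msgs[-1][0] - msgs[0][0]).total_seconds() / 3600, 1.0)
    rate = len(msgs) / span_hours                      # messages per active hour
    emotional_hits = sum(any(m in text for m in EMOTIONAL_MARKERS) for _, text in msgs)
    pattern_flag = rate > 0.1 and emotional_hits >= 2  # arbitrary thresholds for the sketch
    print(user, {"messages": len(msgs), "rate_per_hour": round(rate, 2),
                 "emotional_hits": emotional_hits, "pattern_flag": pattern_flag})
```

The point is not that this particular heuristic is right (it obviously isn't); it is that the choice of unit of analysis largely determines whether companionship-style reliance is visible at all, and per-message topic shares are silent on it.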
To properly govern AI chatbots, we need more studies (preferably NOT written by OpenAI or AI chatbot developers) showing the extent to which highly anthropomorphic AI systems negatively impact people, creating situations of emotional manipulation.
As I wrote above, this paper was written with the goal of showing investors, lawmakers, policymakers, regulators, and the public how 'democratizing' and 'economically valuable' ChatGPT is.
We need more studies that analyze how people can be negatively affected by AI chatbots through various usage patterns, with a focus on making AI chatbots safer and protecting users from AI-related harm.
If you are interested in the legal and ethical challenges of AI, including AI chatbot governance, join my newsletter's 78,000+ subscribers: luizasnewsletter.com
• • •
I've just left an in-person event with Sam Altman and Ilya Sutskever (OpenAI's CEO & Chief Scientist) at Tel Aviv University, and these were my impressions:
- a disproportionate amount of time was spent talking about the risk of a super-powerful and perhaps dangerous Artificial General Intelligence (AGI). It felt like part of a PR move to increase the hype and interest in AI-based applications >>
- there was no mention of OpenAI's role or plans in mitigating existing AI-related problems, such as bias, disinformation, discrimination, deepfakes, non-compliance with data protection rules, etc. It looked like talking about a future AGI was a way to distract from reality >>
The French Data Protection Authority - @CNIL - has recently published its four-step action plan on AI, and it is the best official document I have seen so far that deals with the intersection between data protection and AI.
The four steps highlighted in this action plan are: >>
1. Understanding the functioning of AI systems and their impacts on people; 2. Allowing and guiding the development of AI that respects personal data; 3. Federating and supporting innovative players in the AI ecosystem in France and Europe; and >>
We are celebrating by spreading #privacy awareness and sharing below The Privacy Whisperer's top 10 articles, which have discussed topics from children's privacy to Privacy UX:
The TOP 21 Books in Privacy & Data Protection That You Must Read ASAP
Are you new to #privacy & #dataprotection? Looking for book recommendations? Check out my list of the top 21 books in privacy & data protection that you must read ASAP. (The list is not in order of preference)
1- Privacy’s Blueprint: The Battle to Control the Design of New Technologies by @hartzog. [To understand how technology - software, hardware, algorithm & design - is not neutral: it can easily manipulate us and negatively affect our privacy].
2- Re-Engineering Humanity by @BrettFrischmann & @EvanSelinger. [To understand what happens when we get too fascinated by big data, predictive analytics, and artificial intelligence and forget the importance of human autonomy and freedom].
If you have kids (or take care of kids), it is a bad idea to document their lives on social media. This behavior is called sharenting, and it can have negative consequences for the child.
An uncomfortable thread about children's #privacy:
(1/10)
Most adults don't realize they are sharing their child's pictures online to get the dopamine hit that comes with likes, comments & shares. There is no positive outcome for the child to be seen by the parent's online connections (or strangers).
There is also the problem of the lack of consent: children are often too young to understand what is going on, and even when they can understand and consent, they are frequently not consulted by their parents.