OpenAI has recently released the paper "How People Use ChatGPT," the largest study to date of consumer ChatGPT usage.
Some of the paper's methodological choices obfuscate risky AI chatbot use cases and complicate legal oversight. Further research is needed to scrutinize them.
A thread🧵:
The stated goal of the paper is to demonstrate the chatbot's economic benefits, as the company's blog post makes clear:
An unstated goal of this paper is to downplay risky AI use cases, especially therapy and companionship, which have led to suicides, murder, AI-induced psychosis, spiritual delusions, emotional dependence, unhealthy attachment, and more.
These use cases create bad headlines for OpenAI, and they significantly increase the company's legal risks:
I've just left an in-person event with Sam Altman and Ilya Sutskever (OpenAI's CEO & Chief Scientist) at Tel Aviv University, and these were my impressions:
- a disproportionate amount of time was spent talking about the risk of a superpowerful and perhaps dangerous Artificial General Intelligence (AGI). It felt like part of a PR move to increase the hype around and interest in AI-based applications >>
- there was no mention of OpenAI's role or plans in mitigating existing AI-related problems, such as bias, disinformation, discrimination, deepfakes, and non-compliance with data protection rules. It looked like talking about a future AGI was a way to distract from reality >>
🚫 Say NO to sharenting - protect children's privacy
Why? Read this:
If you have kids (or take care of kids), it is a bad idea to document their lives on social media. This behavior is called sharenting, and it can have negative consequences for the child. >>
Most adults don't realize they are sharing their child's pictures online to get the dopamine hit that comes with likes, comments & shares. There is no positive outcome for the child in being seen by the parent's online connections (or by strangers). >>
The French Data Protection Authority - @CNIL - has recently published its four-step action plan on AI, and it is the best official document I have seen so far that deals with the intersection between data protection and AI.
The four steps highlighted in this action plan are: >>
1. Understanding the functioning of AI systems and their impacts on people; 2. Allowing and guiding the development of AI that respects personal data; 3. Federating and supporting innovative players in the AI ecosystem in France and Europe; and >>
We are celebrating by spreading #privacy awareness and sharing below The Privacy Whisperer's top 10 articles, which have discussed topics from children's privacy to Privacy UX:
The TOP 21 Books in Privacy & Data Protection That You Must Read ASAP
Are you new to #privacy & #dataprotection? Looking for book recommendations? Check out my list with the top 21 books in privacy & data protection that you must read ASAP. (The list is not in order of preference)
1- Privacy’s Blueprint: The Battle to Control the Design of New Technologies by @hartzog. [To understand how technology - software, hardware, algorithm & design - is not neutral: it can easily manipulate us and negatively affect our privacy].
2- Re-Engineering Humanity by @BrettFrischmann & @EvanSelinger. [To understand what happens when we get too fascinated by big data, predictive analytics, and artificial intelligence and forget the importance of human autonomy and freedom].