⚡️Breaking news from the EU & #GDPR land: the Italian Data Protection Authority #GarantePrivacy issued an order today against OpenAI, effectively blocking #ChatGPT in Italy (ordering it not to use personal data of Italians). Here is a deep dive into the short order - 1/
What did the Italian DPA #GarantePrivacy find problematic with ChatGPT4?
- Lack of transparency
- Absence of a lawful ground for processing
- Not respecting accuracy
- Lack of age verification
- Overall breach of Data Protection by Design
Why was each of these a problem? A short explainer:
2/
A) Lack of transparency
Following its verification, the #GarantePrivacy found that no information is provided to users of the service, nor more generally to the people whose data have been collected by OpenAI & processed through the ChatGPT service.
=> a breach of Art. 13 #GDPR
3/
So what does this mean?
It means that the Garante was looking to see whether OpenAI informed users, and the people whose personal data are processed through #ChatGPT4, about what personal data it processes, for what purpose, on what legal basis, and whether it transfers the data outside the EU. 4/
This level of transparency is required by the #GDPR. If the data is not collected directly from users, the sources the data comes from must also be disclosed in a data protection notice. The Garante found there was no transparency for Italian users in this sense. 5/
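For the more technically minded: a purely hypothetical Python sketch of that Art 13/14 checklist (field names and structure are my own invention, not from the order or from anything OpenAI publishes), just to show what the required transparency items look like as a concrete artifact:

```python
# Hypothetical checklist of the Art 13/14 items mentioned above, modelled as a
# structure a service could validate its own privacy notice against.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PrivacyNotice:
    data_categories: list[str] = field(default_factory=list)  # what personal data is processed
    purposes: list[str] = field(default_factory=list)         # why it is processed
    legal_basis: str | None = None                             # the Art 6 ground relied on
    non_eu_transfers_disclosed: bool = False                   # transfers outside the EU
    sources_disclosed: bool = False                            # needed when data isn't collected from the person

def missing_transparency_items(notice: PrivacyNotice, collected_directly: bool) -> list[str]:
    """Return the Art 13/14 items the notice still fails to cover."""
    gaps = []
    if not notice.data_categories:
        gaps.append("categories of personal data")
    if not notice.purposes:
        gaps.append("purposes of processing")
    if notice.legal_basis is None:
        gaps.append("legal basis (Art 6)")
    if not notice.non_eu_transfers_disclosed:
        gaps.append("transfers outside the EU")
    if not collected_directly and not notice.sources_disclosed:
        gaps.append("sources of the data (Art 14)")
    return gaps
```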
B) Absence of a lawful ground for processing
No appropriate legal basis was identified for "the collection of personal data and their processing *for the purpose of training the algorithms underlying the operation of ChatGPT*"
=> a breach of Art 6 GDPR 6/🧵
This means that the Garante looked into whether OpenAI asked for the consent of the people whose personal data it used to train the algorithms behind #ChatGPT, or whether it relied on Legitimate Interests, or perhaps contract - otherwise processing the data would be illegal per Art 6 #GDPR 7/🧵
C) Not respecting the principle of accuracy
#GarantePrivacy noted that the processing of personal data does not respect the accuracy principle, as the information provided by #ChatGPT does not always correspond to accurate personal data
=> breach of Art 5 GDPR 8/🧵
Now this is fascinating, and it goes back to one of the core issues data protection law was created to tackle from its very beginnings in the '70s: keeping computerized personal files about people accurate and up to date.
Did you notice the half-true bios of people coming out of #ChatGPT? 9/🧵
Well, the #GDPR has a principle of accuracy which mandates that personal data must be accurate & kept up to date + every reasonable step must be taken to ensure that inaccurate data are erased or rectified without delay.
Pretty big deal I would say. 10/🧵
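To make the accuracy principle tangible, here is a purely hypothetical sketch (invented record and field names, nothing from the decision) of what "rectified or erased without delay" means operationally:

```python
# Hypothetical sketch of the Art 5(1)(d) accuracy idea: when a person flags an
# inaccurate record, it is rectified or, failing that, erased without delay.
from datetime import datetime, timezone

# Invented example store of profile data keyed by a subject identifier.
profiles = {"jane.doe": {"bio": "CEO of ExampleCorp since 2015", "last_verified": None}}

def handle_rectification_request(subject_id: str, corrected_bio=None) -> None:
    """Rectify the record if a correction is supplied, otherwise erase it."""
    if subject_id not in profiles:
        return
    if corrected_bio is None:
        del profiles[subject_id]                      # erase the inaccurate data
    else:
        profiles[subject_id]["bio"] = corrected_bio   # rectify it
        profiles[subject_id]["last_verified"] = datetime.now(timezone.utc)
```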
D) Lack of age verification
Finally, the #Garante also took issue w "the absence of any verification of the age of users in relation to the ChatGPT which, according to the terms published by OpenAI, is reserved for subjects who are at least 13 years old" 11/🧵
#GarantePrivacy further noted that "the absence of filters for children under the age of 13 exposes them to absolutely unsuitable answers with respect to the degree of development and self-awareness".
Garante refers to a breach of Art 8 #GDPR 12/🧵
There is no age verification mandate per se in the #GDPR. However, Art 8 provides that parental consent is required for children below a certain age (16 by default; Member States may lower it to 13) who use information society services, where consent is the lawful ground.
This might suggest the Garante thinks consent is the legal basis for the use of personal data by #LLMs 13/
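For illustration only, a tiny hypothetical sketch of an Art 8-style gate (the 13-year threshold mirrors OpenAI's own terms as quoted by the Garante; nothing here reflects OpenAI's actual systems):

```python
# Hypothetical sketch: where consent is the lawful ground, users below the
# applicable age threshold need verified parental consent (Art 8 GDPR).
AGE_THRESHOLD = 13  # per OpenAI's terms as quoted in the order; Art 8 default is 16

def may_rely_on_consent(age: int, has_parental_consent: bool) -> bool:
    """True if consent can serve as the legal basis for this user's data."""
    if age >= AGE_THRESHOLD:
        return True
    return has_parental_consent

# A 12-year-old without verified parental consent cannot be onboarded on a
# consent basis; the service would need verification or have to refuse access.
assert may_rely_on_consent(15, has_parental_consent=False)
assert not may_rely_on_consent(12, has_parental_consent=False)
```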
Very interestingly, when quoting all the articles that the #Garante suspects have been breached, Art 25 GDPR - Data Protection by Design and by Default - is enumerated. There is no explanation as to why, but anyone who knows that provision can deduce why. 14/🧵
What this technically means is that compliance with all these requirements should have been embedded in the service from the outset, from when it was being built, as opposed to being an afterthought.
What now? What does the order for a ban actually say?
15/🧵
Well, #GarantePrivacy is using specific powers that have been granted to *all DPAs* under the #GDPR, in Art 58(2)(f) "to impose a temporary or definitive limitation including a ban on processing".
This is a temporary ban. OpenAI has 20 days to respond to these concerns. 16/🧵
An interesting consideration here: As noted in the Order, #OpenAI doesn't have a main establishment in the EU, but it has a representative for #GDPR purposes. This means that it can't enjoy the One Stop Shop mechanism, leaving it exposed to ALL EU Data Protection Authorities 17/🧵
Last but not least - this is a wake-up call that the #GDPR, #Article8 of the Charter, and data protection law in general & particularly in the EU ARE APPLICABLE TO AI SYSTEMS today, right now, and have important guardrails in place, if they are understood & applied. 18/🧵
Almost 5 years after the GDPR came into force, this is probably the most significant enforcement decision to date - and it follows complaints made on that very day, May 25, 2018 (!). The Irish DPC fined Meta 390 million euros, but this is not about the fine. 1/
I'm now reading through the detailed press release - I have not yet found the text of the decision itself published. Let's go:
The fine is split - 210 million euro for breaches related to Facebook, and 180 million euro for breaches related to Instagram. 2/
But as I mentioned from the get-go, this is not about the fine. This is about the changes that Meta will need to make to the services it provides. "Meta Ireland has also been directed to bring its data processing operations into compliance within a period of 3 months." 3/
At long last, the European Commission published today the draft Adequacy Decision for the Transatlantic Data Privacy Framework of the U.S., including the new EO and DOJ Regulations & the Privacy Shield Principles. Final Decision expected in 5-6 months 1/ commission.europa.eu/document/e5a39…
My first meta-comment: it looks like the "data privacy" denomination finally entered official terrain in the EU-US data protection world. The framework for which adequacy is granted is officially called the "EU-US Data Privacy Framework" or the "DPF" 2/
The Privacy Shield Principles are now called the "EU-U.S. Data Privacy Framework Principles" and are enshrined in Annex I of the adequacy decision - I'll have a look at them to see if any notable changes were made, but I suspect not. 3/
Is data localization coming to the EU? Maybe. The EDPB & EDPS, in their latest joint Opinion on the EU Health Data Space Proposal, make the argument that they can't fully exercise their powers if personal data is not localized in the EU 1/x edps.europa.eu/system/files/2…
They say that the “control of compliance with the requirements of protection and security by an independent supervisory authority *cannot be fully ensured in the absence of a requirement to retain the data in question within the EU*” (para 102 of the Opinion). 2/
They further link this to a failure to meet the standard in Article 8(3) of the Charter. They rely on two CJEU findings, in the Digital Rights Ireland and Tele2 cases, both part of the data retention saga (so at the intersection of commercial data and law enforcement). 3/
Now that we have the text of the #DMA published, let me point out a couple of outstanding provisions that have data protection implications & that show why this Regulation concerns all businesses & platform users, not only gatekeepers. Let's go 🧵 1/? consilium.europa.eu/media/56086/st…
First of all, check out the list of Core Platform Services that may pull a business into the gatekeeper class (Art 2). Notably including web browsers, virtual assistants, & *online advertising services*, e.g. Exchanges, as long as they are provided by a business offering a CPS 2/
But this is not a thread about the threshold to become a gatekeeper (check Art 3). It just points out data protection implications of the #DMA. Of note, "consent" & "profiling" in the #DMA are defined as in the #GDPR. Bonus: "non-personal data" & "data" are also defined 3/
Per Art. 1, the draft regulation covers:
- placing on the market
- putting into service
- use of AI systems in the Union
Does this leave out training of AI? Possibly. But when they're trained w personal data, no worries. The GDPR applies.
2/
Other rules in scope of the regulation:
- prohibitions of certain AI systems (!)
- requirements for high-risk AI systems
- transparency rules for AI intended to interact w people
- rules on market monitoring and surveillance. 3/
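A rough, purely illustrative sketch of that tiered structure (the categories and obligation strings are mine, not the draft's exact wording):

```python
# Hypothetical sketch of the tiers listed above: prohibited practices, high-risk
# systems, and transparency duties for AI intended to interact with people.
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()          # certain AI practices banned outright
    HIGH_RISK = auto()           # subject to the regulation's requirements
    TRANSPARENCY_ONLY = auto()   # e.g. systems intended to interact with people
    MINIMAL = auto()             # no specific obligations under the draft

def obligations_for(tier: RiskTier) -> list[str]:
    """Illustrative mapping from tier to the headline obligations named above."""
    return {
        RiskTier.PROHIBITED: ["may not be placed on the market or put into service"],
        RiskTier.HIGH_RISK: ["requirements for high-risk AI systems", "market monitoring & surveillance"],
        RiskTier.TRANSPARENCY_ONLY: ["tell people they are interacting with an AI system"],
        RiskTier.MINIMAL: [],
    }[tier]
```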
Time to pay close attention to #China & #India's comprehensive #DataProtection bills. Why? Because they are probably coming by the end of 2021, they give 'data subject' rights to approx. 2.7 billion people & they legislate DP where the US is absent: 1/ linkedin.com/posts/iapp---i…
In this panel that opened the #GPS2021 online sessions for @PrivacyPros, I explore with Barbara Li and Malavika Raghavan @teninthemorning some of the context & background leading to these two legislative developments in China and India, as well as the burning topics of ... 2/
...data localization, international data transfers, private rights of action and enforcement. There was so much more to talk about - we promise to be back with a follow-up and a deeper dive into individual data subject rights and other practical topics. Why the time pressure? 3/