And it dropped! Here it is, the official proposal of the @EU_Commission for an AI Regulation:
Per art. 1 the draft reg covers:
- placing on the market
- putting into service
- use of AI systems in the Union
Does this leave out training of AI? Possibly. But when they're trained w personal data, no worries. The GDPR applies.
Other rules in scope of the regulation:
- prohibitions of certain AI systems (!)
- requirements for high-risk AI systems
- transparency rules for AI intended to interact w people
- rules on market monitoring and surveillance. 3/
The draft reg is broadly extraterritorial:
- it applies to providers placing or putting into service AI systems in EU, irrespective of where they are established in the world
- providers & users of AI systems located in a 3rd country, where the output produced is used in EU 4/
Important: high risk AI systems that are safety components of products and systems (e.g. in transportation) are outside the scope; they are regulated by other acts, and only Art. 84 of the draft proposal applies to them - to do w evaluation and review of what counts as high risk 5/
AI systems 4 military purposes are outside the scope, as expected.
And surprise❗️It specifically excludes public authorities in a 3rd country and international orgs that would normally fall under the extraterritorial rules, if they use the AI systems as part of international agreements 6/
Intermediary liability rules from the current eCommerce Directive to be replaced by the #DSA will prevail if there are conflicts with liability rules in the draft AI reg. 7
The definition of an AI system does not seem to have changed from the leaked version we've seen. It focuses on software, refers to techniques in Annex I, which covers a broad spectrum from supervised & unsupervised ML to statistical approaches & search & optimization methods ...8/
... and has as core parts:
- human-defined objectives
- and generating outputs "such as" (so no closed list) content, predictions, recommendations, decisions that influence the environment they interact with (like results of an election? asking for a friend). 9/
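To see just how broad that definition is, here is a deliberately trivial sketch (my illustration, not anything from the draft): plain least-squares regression - a "statistical approach" under Annex I, pursuing a human-defined objective and generating predictions - arguably already qualifies as an "AI system".

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b - an Annex I 'statistical approach'."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Human-defined objective: predict an exam score from hours studied.
a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])

# The system "generates outputs such as predictions" for its environment.
prediction = a * 5 + b
```

Whether such simple software was really meant to be covered is exactly the kind of question the definition leaves open.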
So the draft #AIReg is also an encyclopedia of the AI universe: it has 44 definitions! It's like we need an AI to help us process all this content 😅 Some examples: "publicly accessible space", "biometric data" (& it's different from the GDPR's), "input data", "training data" 10/
There are definitions for "substantial modification", "performance of an AI system", "withdrawal of an AI system" - this is too much to go through. Let's focus on the meaty part: it makes a difference between "remote biometric identification system" and "real-time RBIS". Hmm 11/
So the difference between RBIS and real-time RBIS is something called "without a significant delay", which is relevant for the time between capturing of biometrics, comparison w central database and identification of a person. While a "'post' RBIS" is "not a real-time RBIS" 12/
And now the big reveal: the list of "AI practices" that are prohibited.
1) AI that deploys "subliminal techniques" to manipulate behavior in a manner that "causes or is likely to cause" physical or psychological harm to self or others. 13/
2) AI that exploits vulnerabilities of a group due to their age or physical or mental disability in order to manipulate ("materially distort") the behavior of a person in a manner that causes or is likely to cause physical or psychological harm to self or others 14/
3) AI used by public authorities or on their behalf for social scoring, where i) the social scoring leads to detrimental or unfavorable treatment in social contexts different from the contexts where the data was collected or ii) same, where the treatment is unjustified. ‼️ 15/
Wait, what? Doesn't that seem too narrow? So social scoring based on "evaluation or classification of the trustworthiness of individuals" based on predicted personal characteristics and their social behavior will be allowed in all other contexts than above?! 16/
4) The use of "real time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement", with a bunch of exceptions, detailed over several paragraphs.
That is all. This is the whole list of banned AI systems. 17/
The draft reg continues with rules for high risk AI systems, which are defined either as AI listed in Annex III or AI intended to be used as a safety component of a product required to undergo 3rd party conformity assessments and covered by regs listed in Annex II 18/
The Annex listing High Risk AI systems has some interesting parts, like:
- AI used for assessing students and test takers for admission to educational institutions (IB scandal?)
- AI used in employment, workers management and access to self-employment (gig economy?) 19/
I'm sorry, I keep thinking of the short list of banned AI systems. Could it be at this point that Art. 22 GDPR has a broader prohibition in place for software that meets the AI system definition and supports automated decision-making having a significant or legal effect? /20
Going back to examples of high risk AI from Annex III:
- credit scoring
- eligibility for social benefits
- AI to be used for law enforcement
- AI for administration of justice and democratic processes
- AI for migration, asylum & border control
Pretty exhaustive. 21/
So what are the rules high risk AI systems need to follow?
- establish a "risk management system" to run through the entire lifecycle of the AI system (Art. 9)
- training, validation & testing data must be subject to "appropriate data governance & management practices" (Art. 10) 22/
Interestingly - 10(3) requires training, validation and testing data to be "relevant, representative, free of errors and complete". I am no computer scientist, but I can imagine this looks like a dream data set. 23/
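Just to make the requirement concrete - here is a rough sketch (mine, not from the draft) of what automated "free of errors and complete" checks on a training set might look like. Real Art. 10 compliance is a governance process, not a script; this only shows why a literally error-free, complete dataset is such a high bar.

```python
def dataset_checks(rows, required_fields):
    """Return simple completeness/consistency findings for a list of records."""
    findings = {"missing_values": 0, "duplicates": 0}
    seen = set()
    for row in rows:
        # "Complete": every required field must be present and non-null.
        if any(row.get(f) is None for f in required_fields):
            findings["missing_values"] += 1
        # "Free of errors" (one tiny aspect of it): flag exact duplicate records.
        key = tuple(sorted(row.items()))
        if key in seen:
            findings["duplicates"] += 1
        seen.add(key)
    return findings

rows = [
    {"age": 34, "income": 50000},
    {"age": None, "income": 42000},   # incomplete record
    {"age": 34, "income": 50000},     # duplicate record
]
findings = dataset_checks(rows, ["age", "income"])
print(findings)  # {'missing_values': 1, 'duplicates': 1}
```

Even this toy check finds problems in a three-row dataset - real-world training data at scale will essentially always fail a literal reading of 10(3).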
Here is something of interest for GDPRheads like me: these datasets "shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used" aaaaand: 24/
Processing of sensitive personal data is allowed "to the extent that it is strictly necessary for the purpose of ensuring bias monitoring, detection and correction in relation to the high-risk AI", subject to following safeguards: 25/
i) "appropriate safeguards" for the fundamental rights and freedoms of natural persons, including
ii) technical limitations on the re-use of the data
iii) use of state-of-the-art security and privacy preserving measures, such as.... 26/
pseudonymisation or encryption "where anonymisation may significantly affect the purpose pursued" (which is countering bias, I think) - if you want to look deeper into this, Dr. Heng Xu wrote about statistical disparities that may be caused by anonymizing sensitive data sets. 27/
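For readers who want to picture the pseudonymisation option: here is a minimal sketch (my illustration, with a hypothetical key name) using a keyed hash, so direct identifiers become stable pseudonyms while the sensitive attribute needed for bias monitoring is retained. A real deployment would add proper key management, access controls and the rest of the "appropriate safeguards".

```python
import hashlib
import hmac

# Hypothetical secret key - in practice stored separately from the data.
SECRET_KEY = b"example-key-kept-separately"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "ethnicity": "X", "loan_decision": "denied"}
pseudonymised = {
    "subject_id": pseudonymise(record["name"]),  # pseudonym replaces the name
    "ethnicity": record["ethnicity"],            # sensitive data kept for bias detection
    "loan_decision": record["loan_decision"],
}
```

The point of the trade-off in the draft: unlike anonymisation, this keeps the sensitive attribute usable for detecting statistical disparities, at the price of the data remaining personal data under the GDPR.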
Going back to the list of obligations for high risk AI:
- technical documentation Art. 11
- record-keeping Art. 12
- transparency & provision of information to users Art. 13
- human oversight Art. 14
- Accuracy, robustness & cybersecurity Art. 15 28/
If you are wondering who has these obligations, good question. The provisions use a broad passive, e.g. "high risk AI systems shall be designed and developed" taking into account x or y. So whoever is designing and developing them. 29/
The transparency and human oversight articles are very complex (read: long) - I will file them for later enjoyment. I think some of the most interesting provisions are here and possibly a justification of why the transparency rules in the GDPR were indeed not enough. 30/
The entire following chapter contains a lot of obligations specifically targeted to "providers and users of high risk AI systems", from quality management to drawing up technical documentation and conformity assessments. One point of note for my US based friends: /31
Providers established outside the EU shall appoint by written mandate an authorized representative which is in the EU, and has specific obligations, including keeping a copy of declarations of conformity and such. Does not seem to have liability, though, just like in the GDPR /32
As one can easily imagine, I am completely lost in the following two chapters, which are all about notification of conformity assessments, certification, registration, CE marking of conformity. This is all still the Title on High Risk AI. We are at Art. 51. /33
Moving on from High Risk AI to "Certain AI systems", under a new Title of the draft reg: Transparency obligations for certain AI systems. Only one article in this Title:
- AI re: emotion recognition, deep fakes and those interacting w people have clear transparency obligations 34/
Title V has "Measures in support of Innovation", starting with AI Regulatory sandboxes. (UK may have exited, but some legacy remains - I'm thinking here of the ICO's sandbox project). Interestingly, the @EU_EDPS is specifically nominated as a possible organizer of such sandboxes /35
The other possible organizers are "one or more Member States competent authorities". These could be any authorities, not necessarily the DPAs. Member States will have to nominate them. But when personal data is involved, DPAs will need to be "associated" to the operation /36
Art. 54 contains rules on the "further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox". Art 55 encourages States to prioritize start-ups and SMEs for these sandboxes. /37
Finally, the Governance of this behemoth. Yet another board is created: European Artificial Intelligence Board. EAIB. We will need an AI to keep track of all the Boards too, I think. EAIB "shall provide advice and assistance to the Commission". So it does not enforce. /38
This new Board is set up similarly to the EDPB, to be composed of national supervisory authorities, represented by their head or an equivalent high level official, and the @EU_EDPS. The big difference is that the EAIB will be chaired by the Commission & its Secretariat will be provided by COM too /39
At national level it seems there will be more "national competent authorities" to be established or designated, as well as one "supervisory authority" chosen from the several competent authorities. It will act as market surveillance authority & notifying authority /40
(Where will the EU come up with so many AI specialists, ethicists, administrative law specialists, data protection lawyers and experts, I wonder? How do you staff for these authorities?) There is an obligation to "ensure national authorities are provided w adequate resources" /41
The draft reg proposes the creation of an EU database for stand-alone high risk AI systems, to be administered by the European Commission. Title VIII deals with post-market monitoring, sharing information on malfunctioning and market surveillance. /42
As for sanctions and penalties, Member States are left to lay down rules within their administrative procedure systems. However, the level of penalties is set in the draft reg: 30 million EUR or 6% of global annual turnover for non-compliance w the prohibitions in Art. 5 & /43
the data quality and data governance requirements for training, validation and testing data in Art. 10 for high risk AI systems. To be clear, that is "UP TO" 30 million or 6% of the global annual turnover in the past year. /44
Smaller penalties for non-compliance w the rest of the obligations in the draft regulation: up to 20 mil euro or 4%.
Novelty: Administrative fines (up to 500k) are proposed for EU institutions, agencies and bodies that do not follow these rules! To be enforced by the @EU_EDPS /45
The entry into force is envisioned with no grace period (the GDPR had 2 years), so "on the 20th day following that of its publication". But a loooong process towards adoption starts today. Thank you all for accompanying me on this read! 46/END
And since you are all here, my brilliant colleagues from @futureofprivacy are organizing this training on understanding the data flows and some technical aspects of data governance of AI, on May 20:…

Thread by Dr. Gabriela Zanfir-Fortuna (@gabrielazanfir)
