Today the European Commission officially presented its proposal for harmonised rules on artificial intelligence, with the ambition of creating a framework for #artificialintelligence that respects European values. Is it fit for purpose? At first sight: no. [Thread] 1/23
First of all, very few applications are prohibited. Among them is a ban on remote biometric identification. In theory. In practice, the exceptions granted to law enforcement agencies are too broad to call this provision a ban. #reclaimyourface 2/23
There is no ban on gender identification or recognition of sexual orientation either. Subliminal techniques beyond a person’s consciousness are subject to a ban, but only if they cause physical or psychological harm to that person. 3/23
That means that #deepfakes endangering our democracy are perfectly fine as long as they don’t cause you to jump off a building. The same goes for #AI systems that exploit the vulnerabilities of a specific group of persons due to their age or physical or mental disability. 4/23
They’re only banned if they cause physical or psychological harm. This is a high bar. Material harm is not considered. Also, #socialscoring is prohibited, good! But frankly, this is the bare minimum. 5/23
Next category: high-risk systems. Here we have a longer list of applications in the areas of biometric identification, education, employment, essential private services and public services and 6/23
... benefits, among them welfare, credit and emergency services, law enforcement, migration, asylum and border management, and the administration of justice. Missing: medical services. 7/23
In the migration area, systems used by public authorities to detect the emotional state of a natural person are classified as high-risk, but allowed. This sounds dystopian to me. Is the @eu_commission sure this is compatible with European fundamental rights and values? 8/23
On the bright side: access to public services and benefits is classified as high-risk. Global and European scandals over AI in welfare have provided ample evidence of how disproportionate the risks are for affected population groups. I would have preferred an outright ban, but it is better than nothing. 9/23
Politically interesting is the question of why AI systems are used to detect welfare fraud (low volumes, high collateral damage for affected population groups) rather than tax fraud. But that’s not for the @eu_commission to decide. 10/23
Fun fact: systems used by law enforcement to detect deep fakes are considered high-risk, while deep fakes themselves are only subject to transparency obligations. Does that make sense? 11/23
Lost opportunity: regulating societal harms like indiscriminate surveillance, AI endangering democracy and environmental harms. The January leak was much more progressive. What happened in the meantime? #Lobbying, by any chance? 12/23
Data sets: sound obligations, but for bias only an “examination in view of possible biases” is required. That’s quite different from saying that data sets have to be checked for bias and, where necessary, corrected. Or did I miss something? Still can’t believe that’s missing. 13/23
Interesting: training, validation and testing data sets shall take into account the characteristics particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used. A competitive advantage for EU companies? It will be interesting to see who’s going to lobby against this provision. 14/23
On the bright side: the logging and record-keeping obligations seem to be a clear no to the black-box claim made by many providers. 15/23
Shout-out to technical experts: Is that enough? Or should we require more? @nettwerkerin 16/23
Oversight: oversight is basically privatised. Providers may in some cases use an internal control procedure; in others, conformity will be assessed by a notified body. So AI regulation is treated as a technical standard. Quite scary, considering these systems affect millions of people and will change society in the long term. 17/23
Only parties with a legitimate interest can appeal against the decisions of those notified bodies. What about NGOs representing women, people of colour or consumers? 18/23
And not a word is said about the asymmetry of knowledge between AI providers and the persons affected by AI. What about a reversal of the burden of proof? What about remedies for affected persons? This is not human-centric, this is company-centric. 19/23
Scary scenario: a provider offering sexual orientation identification obtains an EU declaration of conformity in Poland and legally sells that product all over Europe. How does the proposal prevent that, @vestager? Or is it compatible with European values? 20/23
Transparency: AI systems intended to interact with natural persons must be designed and developed in such a way that natural persons are informed that they are interacting with an AI system. This is the bare minimum. But: 21/23
Emotion recognition systems and deep fakes are explicitly permitted and only subject to blunt transparency obligations. Is that really compatible with European values? And does anybody believe European citizens want to be spied on and deceived? Is this “human-centric”?? 22/23
Conclusion: The gap between the philosophical ambition outlined by @vestager and the reality of this legislative proposal is huge. Let’s work in @Europarl_EN to improve it. 23/23

