The European Commission presents an interesting proposal to regulate high-risk AI systems. Hey, we’re on our way to becoming a global standard setter and to aligning AI with democratic European values. Or aren’t we? Here are some of my preliminary observations:
#AI #womenintech
AI systems that manipulate human behavior “to the detriment” of the persons using the systems shall be banned. That is a great idea. But the devil is in the detail. What exactly does “detriment” mean, and how can those persons know, and prove, that they have been manipulated?
AI systems used for “indiscriminate surveillance” shall also be banned – if applied “in a generalized manner to all natural persons without differentiation”. Does that qualify as a ban of targeted (surveillance) advertising? Asking for a #DSA shadow rapporteur.
High-risk AI systems shall not be prohibited, but subject to strict rules. Very good! Among the harms that make an AI system fall under those rules are not only injury or death, but also
“systemic adverse impacts for society at large, including by endangering the functioning of democratic processes and institutions” (Capitol Hill storm saying hi?) and “the environment” (🌎YEAH) - still lacking: gender inequality and racial discrimination as societal harms.
Fundamental rights as enshrined in the Charter are covered - but anti-discrimination law requires you to prove personal discrimination. Does that apply here? What about translation tools that decide all doctors are male and all nurses female? Is that discrimination?
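To make that concrete, here is a minimal sketch of the problem, assuming the Hugging Face transformers library and the public Helsinki-NLP Hungarian-English model (the Hungarian pronoun “ő” is gender-neutral, so the model is forced to pick a gender in English). The outputs in the comments are the pattern commonly reported for such models, not verified results for this exact model:

```python
# Minimal sketch of gender bias in machine translation.
# Assumes the Hugging Face `transformers` library and the public
# Helsinki-NLP/opus-mt-hu-en model; the outputs noted in comments
# are the commonly reported pattern, not guaranteed results.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-hu-en")

# Hungarian "ő" carries no gender; English forces a choice.
sentences = [
    "Ő orvos.",   # "s/he is a doctor"
    "Ő ápoló.",   # "s/he is a nurse"
]

for s in sentences:
    result = translator(s)[0]["translation_text"]
    print(f"{s!r} -> {result!r}")
# Models trained on biased corpora often produce
# "He is a doctor." / "She is a nurse." on inputs like these.
```

Note that no single user is “personally” discriminated against by one line of output like that, which is exactly why a group-level rule matters here.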
On the bright side: adverse impact against entire groups of persons “based on race, sex, sexual orientation, nationality, ethnic origin, profession, political opinions, religious or philosophical beliefs” is in. Quite exhaustive. But what about disability?
Also considered is “the degree of dependency of people on the outcome produced by an AI system”, if an opt-out from the outcome is not “factually or legally” possible. Can you, in practice, consistently opt out of Google without being shut out of public discourse?
Provisions on data sets look good: intentional and unintentional bias are covered, data must be representative, free of errors and complete, and bias due to feedback loops must be addressed. Documentation: logs must be automatically generated, kept and made accessible to competent authorities.
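As a rough sketch of what “automatically generated, kept and made accessible” logs could mean in practice, here is a minimal example using only Python’s standard library; the record fields and format are my own assumptions, not the proposal’s:

```python
# Minimal sketch of automatically generated audit logs for a
# high-risk AI system, using only Python's standard library.
# Field names and the log format are illustrative assumptions.
import json
import logging
import time
import uuid

audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
handler = logging.FileHandler("audit.log")  # kept for competent authorities
handler.setFormatter(logging.Formatter("%(message)s"))
audit_log.addHandler(handler)

def logged_predict(model_version, predict_fn, features):
    """Run a prediction and automatically write an audit record."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "input": features,
    }
    record["output"] = predict_fn(features)
    audit_log.info(json.dumps(record))  # one JSON record per line
    return record["output"]

# Usage with a stand-in model:
score = logged_predict("credit-v1.2", lambda f: sum(f) / len(f), [0.2, 0.9])
```

The point of wrapping every prediction is that logging cannot be skipped or forgotten; it happens on each call, exactly the “automatically generated” property the text demands.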
Transparency: Users must be able to understand and CONTROL how the high-risk AI system produces its output. Sounds good. Is that doable? Fellow Germans reading this: let's control #Schufa!
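What might a first step towards that look like? Here is a hedged sketch with scikit-learn: a toy credit-style score whose per-feature contributions are exposed to the user. The features, data and model are invented for illustration; the real Schufa score is proprietary:

```python
# Minimal sketch: exposing which inputs drive a credit-style score,
# a precondition for users to "understand and CONTROL" the output.
# Features, data and model are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "late_payments", "address_changes"]
X = np.array([[3.0, 0, 1], [1.2, 4, 3], [2.5, 1, 0], [0.8, 5, 4]])
y = np.array([1, 0, 1, 0])  # 1 = creditworthy in this toy data

model = LogisticRegression().fit(X, y)

# For a linear model, coefficient * input is a per-feature
# contribution the affected person could inspect and contest.
applicant = np.array([1.5, 3, 2])
contributions = model.coef_[0] * applicant
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.2f}")
print("decision:", model.predict(applicant.reshape(1, -1))[0])
```

A linear model makes this easy; for black-box models you would need attribution methods on top, and whether that ever amounts to real “control” is exactly the open question.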
Human oversight: This is not a human in the loop, but a real super(wo)man, who “fully understands”, “has the expertise needed to operate the AI system”, “does not automatically rely or over-rely on output” and “can decide not to use the high-risk AI system”. YES.
Oversight is attributed to notified bodies who are subject to national competent authorities. Requirements are clearly outlined. This might inspire me for the #DSA 😇
On the downside: NO BAN on biometric identification in publicly accessible places – only a prior authorization system. This clearly needs work. #reclaimYourFace
AI regulatory sandboxes. Hmmm. Quite sceptical. Google calls its #FLoC surveillance technology “privacy sandbox”. Would that qualify? But the measures to reduce the burden for SMEs make sense. BTW: Google's rep didn't answer my question in committee about whether it's legal in Europe.
On governance: A European Board is a sensible idea, but unfortunately it’s toothless. The expert group is good, but what we need is a full-fledged agency capable of attracting top-notch experts to advise national authorities, especially on the global players.
The Union safeguard procedure (in case a measure of a national authority is contested by another member state or the Commission) might be a safeguard against funny member states.
But what if the national authority fails to act? Would that article still work? It doesn't look like it to me. Any lawyers here, any opinions? What I really mean: Ireland is a beautiful country; that must be the reason big tech loves it.
That's my 2 cents so far. More to come. Have a look for yourself! Happy to hear your feedback:
