New 📰: There's more to the EU AI regulation than meets the eye: big loopholes, private rulemaking, powerful deregulatory effects. Analysis needs connection to broad—sometimes pretty arcane—EU law

@fborgesius & I have done it so you don't have to: long 🧵
osf.io/preprints/soca… ('Demystifying the Draft EU Artificial Intelligence Act')
The Act (new trendy EU name for a Regulation) is structured by risk: from prohibitions to 'high risk' systems to 'transparency risks'. So far so good. Let's look at the prohibitions first.
The Act prohibits some types of manipulative systems. The EC itself admits these have to be pretty extreme — a magic AI Black Mirror sound that makes workers work far beyond the Working Time Directive, and an artificially intelligent Chucky doll. Would it affect anything real?
The def of manipulation hits a big bump: a harm requirement. It ignores growing legal understanding that manipulation can be slow-burn patterns (eg coercive control). Manipulation definitions usually consider whether manipulators' ends are furthered (@MarijnSax), but not here.
As if that didn't already exclude almost all online manipulation (where populations are enclosed and their behaviour managed to extract value), the Act goes on to exclude manipulation that arises from a user base interacting w AI, e.g. systems partly based on ratings & reputation.
Finally, manipulation prohibitions draw wording from the Unfair Commercial Practices Directive, but are in practice mostly weaker than its existing ban — so what exactly will this provision even do? It also heavily emphasises intent to manipulate. Which vendors will admit that?
The Act prohibits social scoring in/for the public sector (despite the growing role of private infrastructures). It has an exemption for scoring based on 'same-context' data, but is Experian financial data the 'same-context' as the datafied welfare state? (@LinaDencik)
The Act also prohibits ‘real-time’, ‘remote’ biometric identification systems except for specific (but broad) law enforcement purposes if accompanied by an independent authorisation regime. This has been criticised a lot already: too narrow & legitimises the infrastructure.
We add that EU companies can still sell illegal tools abroad (as many do), as unlike manipulation/social scoring, sale is allowed. Furthermore, while 'individual use' has to be authorised, in practice these authorisations may be broad 'thematic warrants'.
Interestingly the current provisions will heavily annoy many MS, notably the French, as the authorising body must be independent (e.g. not the parquet) and its decisions must bind the executive (i.e. not like the surveillance oversight body CNCTR), creating more 'lumpy' law like data retention (@hugoroyd).
We now get to the main course of the AI Act: high-risk systems. This regime is based on the 'New Approach' (now New Legislative Framework or NLF), a tried-and-tested approach in product safety since ~1980s. It is far, far from new. Most parts are copy-pasted from a 2008 Decision.
The idea of the NLF is the opposite of pharmaceutical regulation. There, you give the EMA/FDA docs to analyse. For the NLF, you do all the analysis yourself, and sometimes are required to have a third party certification firm ('notified body') check your final docs.
High-risk systems are either AI systems that fall in one of the broad areas the Act lists AND in a sub-area the Commission has indicated in an Annex to the Act, OR are a key part of a product already regulated by listed EU product legislation (so AI just becomes another requirement).
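Purely as an orientation aid, and nothing like the Act's actual text, here is a minimal sketch of that two-route classification logic; the area, sub-area and legislation names are made-up placeholders:

```python
# Minimal sketch of the two-route high-risk classification logic.
# The lists below are made-up placeholders, not the Act's real Annexes.
ANNEX_AREAS = {
    "employment": {"cv_screening", "promotion_decisions"},
    "education": {"exam_scoring"},
}
LISTED_PRODUCT_LEGISLATION = {"Medical Devices Regulation", "Machinery Directive"}

def is_high_risk(area, sub_area, parent_regulation=None):
    # Route 1: 'standalone' high risk: the system sits in a listed area
    # AND in a sub-area the Commission has specified in the Annex.
    standalone = sub_area in ANNEX_AREAS.get(area, set())
    # Route 2: the AI is a key part of a product already covered by listed
    # EU product legislation, so the AI rules become another requirement.
    embedded = parent_regulation in LISTED_PRODUCT_LEGISLATION
    return standalone or embedded

print(is_high_risk("employment", "cv_screening"))                     # True via route 1
print(is_high_risk("gaming", "matchmaking"))                          # False
print(is_high_risk("gaming", "matchmaking", "Machinery Directive"))   # True via route 2
```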
Providers of high-risk systems have to meet a range of 'essential requirements' that are pretty sensible and general. Some have been misrepresented by readers (e.g. datasets have to be 'free from errors' but only 'sufficiently' & 'in view of the intended purpose of the system').
To help with fairness assessment, there is a way to lift the GDPR art 9 prohibition on the use of sensitive data, but only for high-risk providers, only for that purpose, and with safeguards. Non-high-risk providers cannot benefit from this.
There are three levels of 'transparency' - to the public, to the users, and through documentation that only regulators/notified bodies see. We include a table to show who can see what, and to what extent.
Humans-in-the-loop always attract attention. In the Act, users are free of liability if they follow the providers' general instructions. The leaked draft said oversight should ensure users are organisationally able to disagree w the system. The proposal does a volte-face.
Now, FORGET EVERYTHING I TOLD YOU about essential requirements. Why? Because the EC wants no provider to ever have to read any of them. To understand why, we have to dig into the New Legislative Framework & the EC's clever scheme since the 1980s to avoid pan-European paralysis.
Before the Act becomes enforceable, the EC plans to ask two private organisations, CEN (European Committee for Standardisation) and CENELEC (European Committee for Electrotechnical Standardisation) to turn the essential requirements into a paid-for European 'harmonised standard'
If a provider applies this standard, they benefit from a presumption of conformity. No need to even open the essential requirements. Standards are not open access; they cost 100s of EUR to buy (from your national standards body).
This is controversial. Standards bodies are heavily lobbied and can drift significantly from the 'essential requirements'. Civil society struggles to get involved in these arcane processes. The European Parliament has no veto over standards, even though they form an alternative route to compliance.
CEN/CENELEC have no fundamental rights experience. Ask @C___CS @nielstenoever about standards and human rights. Interestingly, the CJEU is slowly moving towards making standards justiciable, but if it does, the EC is illegally delegating rule-making power (Meroni)! What a mess!
But surely this will be helped by third party scrutiny from notified bodies! Nope. In the vanilla version of the Act, *only* biometric systems require third party checks of the documentation. No other standalone high-risk areas invented in the AI Act do.
AND once harmonised standards made by CEN/CENELEC cover biometric systems (even general AI standards), providers no longer even need 3rd party assessment! Not a single org other than the provider pre-checks that a product meets the Act's requirements/harmonised standards.
Yes, that's right. The AI Act sets up a specific regime for third party assessment bodies for standalone high risk AI systems but is subtly designed so that they may *never, ever be required*. Not even for facial recognition.
Now, the general transparency obligations for *all* AI systems.
1. a rule for providers to make bots disclose their artificiality. Not a CA-style law checking if a user is a bot (w risks of deanonymisation @rcalo). However, 'designing in' disclosure is hard for e.g. GPT-3.
2. Professional *users* of emotion recognition/biometric categorisation systems must inform people that they are being scanned. Totally unclear what this adds to existing DP law unless the EC thinks you can scan without processing personal data.
Plus, legitimises phrenology!
3. Professional users of 'deep fake' systems must label them as fake. Exemptions for FoE/arts/sciences/crime prevention. Again, unclear what this adds to UCPD law. Perhaps it helps stop dodgy CSI-style super-resolution forensics systems, but it applies to the fooled users, not the sellers.
Protecting individuals makes sense, though persona protection is not new (and is disclosure really enough?). But this provision applies to 'existing... objects, places or other entities'. Existing objects! What exactly is the mischief?
Could be safety reasons, but they fall apart on analysis. It appears to require disclosure on e.g. artificial stock images by a company. Fine, but unclear what interest that is really protecting. Plus, it applies to users, so there's the problem of how you investigate putative fakes.
Now we move to an important and underemphasised part of the law: pre-emption and maximum harmonisation. When an EU instrument maximally harmonises an area, Member States cannot act in that area and must disapply conflicting law. This is a BIG DEAL for the future of AI policy.
This is complex. Bad for tweets. The core problem is that the AI Act's scope is all 'AI systems' even though it mainly puts requirements on 'high risk' AI systems. The paper has a lot more detail, but essentially this means that Member States LOSE the ability to regulate normal AI.
Remember the outcry re how broad the def of AI systems is? Statistical and logical approaches?!
*The main role of the breadth of that definition is to prohibit Member States from applying law regarding the marketing or use of anything in that broad area*.
Not to regulate them.
Under the AI Act, Member States are forbidden, for example, from requiring that AI systems may only be sold, or made, in their country if they are carbon-friendly (cc @mer__edith @roeldobbe @mrchrisadams)
Under the AI Act, Member States are also unlikely to be able to freely regulate the *use* of in-scope AI. This is a REALLY poorly drafted aspect of the legislation, and geeks should look at the paper itself for the detailed analysis. Internal market law nerds, come!
France, for example, may arguably have to disapply its laws requiring more detailed public sector transparency for automated systems (in the French Digital Republic Act). The AI Act may override them.
You've made it this far?! Let's talk enforcement. There are big fines, sure—6% of global turnover for breaching prohibitions/data quality requirements. But these are issued by Market Surveillance Authorities (MSAs), who are often non-independent government depts from NLF-land.
Non-independent depts cannot effectively oversee the police, the welfare state, the public sector. Individuals have no newly created right of action under the AI Act. There are no complaint mechanisms like in the GDPR. MSAs might consider your information, but can ignore it.
Market Surveillance Authorities have *never before* regulated *users*. Never. Yet suddenly, they are expected to. The EC estimates Member States will need between 1 and 25 new staff to enforce the AI Act, including all the synthetic content and bot requirements! (@ChrisTMarsden)
There IS an interesting innovation: a public database, modelled on EUDAMED in the Medical Devices Regulation, which includes, among other things, the instructions for use of most standalone high-risk systems. This could be great for scrutiny. But without a complaint mechanism...
The database might be most shocking for firms that develop and apply AI in house for high risk purposes. They might have to disclose it! The Act is clumsy around user-providers. They'll scream trade secrets. And are probably lobbying as we speak...
Concluding. The EU AI Act is sewn together, like Frankenstein's monster, from 1980s product regs. It's nice that it separates by risk. But its prohibitions & transparency provisions make little sense. Enforcement is lacklustre. High-risk systems are self-assessed, and the rules privatised.
Work is needed.
Despite the manic screenshotting there is still loads more in the paper, and we tried to write it tightly and for all types of audiences. The above is of course Twitter-level detail; please refer to the paper for the exact language. Have a look and send us any feedback. osf.io/preprints/soca…
Thread on standards and the AI Act from @C___CS who has a PhD on this kind of thing so you should listen to her

