New 📰: There's more to the EU AI regulation than meets the eye: big loopholes, private rulemaking, powerful deregulatory effects. Analysing it means connecting it to broader, sometimes pretty arcane, EU law
The Act (new trendy EU name for a Regulation) is structured by risk: from prohibitions to 'high risk' systems to 'transparency risks'. So far so good. Let's look at the prohibitions first.
The Act prohibits some types of manipulative systems. The EC itself admits these have to be pretty extreme — a magic AI Black Mirror sound that makes workers work far beyond the Working Time Directive, and an artificially intelligent Chucky doll. Would it affect anything real?
The def of manipulation hits a big bump: a harm requirement. It ignores growing legal understanding that manipulation can be slow-burn patterns (eg coercive control). Manipulation definitions usually consider whether manipulators' ends are furthered (@MarijnSax), but not here.
As if that didn't already exclude almost all online manipulation, where populations are enclosed and their behaviour managed to extract value, the Act goes on to exclude manipulation that arises from a user base interacting w AI, e.g. systems partly based on ratings & reputation.
Finally, manipulation prohibitions draw wording from the Unfair Commercial Practices Directive, but are in practice mostly weaker than its existing ban — so what exactly will this provision even do? It also heavily emphasises intent to manipulate. Which vendors will admit that?
The Act prohibits social scoring in/for the public sector (despite the growing role of private infrastructures). It has an exemption for scoring based on 'same-context' data, but is Experian financial data the 'same-context' as the datafied welfare state? (@LinaDencik)
The Act also prohibits ‘real-time’, ‘remote’ biometric identification systems except for specific (but broad) law enforcement purposes if accompanied by an independent authorisation regime. This has been criticised a lot already: too narrow & legitimises the infrastructure.
We add that EU companies can still sell illegal tools abroad (as many do), as unlike manipulation/social scoring, sale is allowed. Furthermore, while 'individual use' has to be authorised, in practice these may be broad 'thematic warrants'.
Interestingly, the current provisions will heavily annoy many MS, notably the French, as the authorising body must be independent (e.g. not the parquet) and must bind the executive (e.g. not like the surveillance oversight body CNCTR), creating more 'lumpy' law like data retention (@hugoroyd).
We now get to the main course of the AI Act: high-risk systems. This regime is based on the 'New Approach' (now New Legislative Framework or NLF), a tried-and-tested approach in product safety since ~1980s. It is far, far from new. Most parts are copy-pasted from a 2008 Decision.
The idea of the NLF is the opposite of pharmaceutical regulation. There, you give the EMA/FDA docs to analyse. For the NLF, you do all the analysis yourself, and sometimes are required to have a third party certification firm ('notified body') check your final docs.
High-risk systems are either AI systems that are in one of the areas in the photo AND in a sub-area of that area the Commission has indicated in an Annex to the Act, OR are a key part of a product already regulated by listed EU product legislation (and so AI just becomes another requirement).
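For the code-minded, a toy sketch of that classification logic as we read it; the annex contents and names below are illustrative stand-ins, not the Act's actual lists or wording:

```python
# Toy sketch (our reading, not the Act's text) of when a system counts as 'high risk'.
# ANNEX_AREAS and PRODUCT_LEGISLATION are illustrative placeholders only.

ANNEX_AREAS = {
    "biometric identification": {"remote identification of natural persons"},
    "education": {"admission", "assessment of students"},
    # ...other areas, each with Commission-designated sub-areas
}

PRODUCT_LEGISLATION = {"medical devices", "machinery", "toys"}  # already-regulated product sectors

def is_high_risk(area: str, sub_area: str, product_sector: str | None = None) -> bool:
    # Route 1: standalone system in a listed area AND a sub-area the Commission has designated
    standalone = sub_area in ANNEX_AREAS.get(area, set())
    # Route 2: a key part of a product already covered by listed EU product legislation
    embedded = product_sector in PRODUCT_LEGISLATION
    return standalone or embedded
```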
Providers of high-risk systems have to meet a range of 'essential requirements' that are pretty sensible and general. Some have been misrepresented by readers (e.g. datasets have to be 'free from errors' but only 'sufficiently' & 'in view of the intended purpose of the system').
To help with fairness assessment, there is a way to lift the GDPR Art 9 prohibition on the use of sensitive data, but only for high-risk providers, only for that purpose, and with safeguards. Non-high-risk providers cannot benefit from this.
There are three levels of 'transparency': to the public, to users, and through documentation that only regulators/notified bodies see. We include a table in the paper to show who can see what, and to what extent.
Humans-in-the-loop always attract attention. In the Act, users escape liability if they do what the provider's general instructions tell them. The leaked draft said oversight should ensure users can, organisationally, disagree with the system. The proposal does a volte-face.
Now, FORGET EVERYTHING I TOLD YOU about essential requirements. Why? Because the EC wants no provider to ever have to read any of them. To understand why, we have to dig into the New Legislative Framework & the EC's clever scheme since the 1980s to avoid pan-European paralysis.
Before the Act becomes enforceable, the EC plans to ask two private organisations, CEN (European Committee for Standardisation) and CENELEC (European Committee for Electrotechnical Standardisation), to turn the essential requirements into a paid-for European 'harmonised standard'.
If a provider applies this standard, they benefit from a presumption of conformity. No need to even open the essential requirements. Standards are not open access; they cost 100s of EUR to buy (from your national standards body).
This is controversial. Standards bodies are heavily lobbied and can drift significantly from the 'essential requirements'. Civil society struggles to get involved in these arcane processes. The European Parliament has no veto over standards, despite them offering an alternative route to compliance.
CEN/CENELEC have no fundamental rights experience. Ask @C___CS @nielstenoever about standards and human rights. Interestingly, the CJEU is slowly moving towards making standards justiciable, but if it does, the EC is illegally delegating rule-making power (Meroni)! What a mess!
But surely this will be helped by third party scrutiny from notified bodies! Nope. In the vanilla version of the Act, *only* biometric systems require third party checks of the documentation. No other standalone high-risk areas invented in the AI Act do.
AND when harmonised standards are made by CEN/CENELEC covering biometric systems (even general AI ones), providers no longer even need 3rd party assessment then! Not a single org other than the provider pre-checks that a product meets the Act's requirements/harmonised standards.
Yes, that's right. The AI Act sets up a specific regime for third party assessment bodies for standalone high risk AI systems but is subtly designed so that they may *never, ever be required*. Not even for facial recognition.
Now, the general transparency obligations for *all* AI systems. 1. A rule for providers to make bots disclose their artificiality. Not a CA-style law checking if a user is a bot (w risks of deanonymisation @rcalo). However, 'designing in' disclosure is hard for e.g. GPT-3.
2. Professional *users* of emotion recognition/biometric categorisation systems must inform people that they are being scanned. Totally unclear what this adds to existing DP law unless the EC thinks you can scan without processing personal data.
Plus, legitimises phrenology!
3. Professional users of 'deep fake' systems must label them as fake. Exemptions for FoE/arts & sciences/crime prevention. Again, unclear what this adds to UCPD law. Perhaps it helps stop dodgy CSI superresolution forensics systems, but it applies to the fooled users, not the sellers.
Protecting individuals makes sense, though persona protection is not new (although is disclosure really enough?). But this law applies to 'existing... objects, places or other entities'. Existing objects! What exactly is the mischief?
Could be safety reasons, but they fall apart on analysis. It appears to require disclosure on e.g. artificial stock images used by a company. Fine, but unclear what interest that is really protecting. Plus, it applies to users, so there's the problem of how you investigate putative fakes.
Now we move to an important and underemphasised part of the law: pre-emption and maximum harmonisation. When an EU instrument maximally harmonises an area, Member States cannot act in that area and must disapply conflicting law. This is a BIG DEAL for the future of AI policy.
This is complex. Bad for tweets. The core problem is that the AI Act's scope is all 'AI systems' even though it mainly puts requirements on 'high risk' AI systems. The paper has a lot more detail but essentially this means that Member States LOSE the ability to regulate normal AI.
Remember the outcry re how broad the def of AI systems is? Statistical and logical approaches?!
*The main role of the breadth of that definition is to prohibit Member States from applying law regarding the marketing or use of anything in that broad area*.
Not to regulate them.
Under the AI Act, Member States are forbidden, for example, from requiring that AI systems may only be sold in their country, or made in their country, if they are carbon-friendly (cc @mer__edith @roeldobbe @mrchrisadams)
Under the AI Act, Member States are also unlikely to be able freely to regulate the *use* of in-scope AI. This is a REALLY poorly drafted aspect of the legislation, and geeks should look at the paper itself for the detailed analysis. Internal market law nerds, come!
France arguably may have to disapply its laws requiring the public sector to give more detailed transparency of automated systems, for example (in the French Digital Republic Act). The AI Act may override them.
You've made it this far?! Let's talk enforcement. There are big fines, sure—6% of global turnover for breaching prohibitions/data quality requirements. But these are issued by Market Surveillance Authorities (MSAs), who are often non-independent government depts from NLF-land.
Non-independent depts cannot effectively oversee the police, the welfare state, the public sector. Individuals have no newly created right of action under the AI Act. There are no complaint mechanisms like in the GDPR. MSAs might consider your information, but can ignore it.
Market Surveillance Authorities have *never before* regulated *users*. Never. Yet suddenly, they are expected to. The EC estimates Member States will need between 1 and 25 new people to enforce the AI Act, including all the synthetic content and bot requirements! (@ChrisTMarsden)
There IS an interesting innovation: a public dataset, based on EUDAMED in the Medical Devices Regulation, which includes among other things instructions for use of most standalone high-risk systems. This could be great for scrutiny. But without a complaint mechanism...
The database might be most shocking for firms that develop and apply AI in-house for high-risk purposes. They might have to disclose that use! The Act is clumsy around user-providers. They'll scream trade secrets. And are probably lobbying as we speak...
Concluding. The EU AI Act is sewn together like Frankenstein's monster from 1980s product regs. It's nice that it separates by risk. But its prohibitions & transparency provisions make little sense. Enforcement is lacklustre. High-risk systems are self-assessed, and the rules are privatised.
Work is needed.
Despite the manic screenshotting there is still loads more in the paper, and we tried to write it tightly and for all types of audiences. The above is of course twitter-detail, please refer to paper for exact language. Have a look and send us any feedback. osf.io/preprints/soca…
Thread on standards and the AI Act from @C___CS who has a PhD on this kind of thing so you should listen to her
How do and should model marketplaces hosting user-uploaded AI systems like @HuggingFace @GitHub & @HelloCivitai moderate models & answer takedown requests? In a new paper, @rgorwa & I provide case studies of tricky AI platform drama & chart a way forward. osf.io/preprints/soca…
There are a growing number of model marketplaces (Table). They host models that can create clear legal liability (e.g. models that can output terrorist manuals or CSAM). They also host AI that may be used harmfully, and some are already trying to moderate this.
Models can memorise content and reproduce it. They can also piece together new illegal content that has never been seen before. To this end, they can be (and under some regimes would be) equated with that illegal content. But how would marketplaces assess such a takedown request?
Int’l students are indeed used to subsidise teaching. High quality undergraduate degrees cost more than £9250 to run (always have in real terms), but have been subsidised by both govs (now rarely) & academic pay cuts. If int’l students are capped, what fills the gap @halfon4harlowMP?
Tuition fees are a political topic because they’re visible to students, but the real question is ‘how is a degree funded’? The burden continues to shift from taxation into individual student debt, precarious reliance on int’l students, and lecturer pay.
Universities like Oxford distort the narrative too. College life is often heavily subsidised by college endowments and assets, i.e. by the past. The fact that so much of the political class went to a university with a non-replicable funding model compounds the issues hugely.
Users of the Instagram app should today send a subject access request email to Meta requesting a copy of all this telemetry ‘tap’ data. It is not provided in the ‘Download Your Information’ tool. Users of other apps in the thread that do this (eg TikTok) can do the same.
Form: m.facebook.com/help/contact/5…
Say you are using Art 15 GDPR to access a copy of data from in-app browsers, including all telemetry and click data for all time. Say it is not in ‘Download your Information’. Link to Krause’s post for clarity. Mention your Instagram handle.
The Data Protection and Digital Information Bill contains a lot of changes. Some were previewed in the June consultation response. Others weren't. Some observations: 🧵
Overshadowing everything is an ability for the Secretary of State to amend anything they feel like about the text of the UK GDPR through regulations, circumventing Parliamentary debate. This should not happen in a parliamentary democracy, is an abuse of powers, and must not pass.
Article 22, around automated decision-making, is gone, replaced by three articles which in effect say that normal significant, automated decisions are never forbidden but get some already-present safeguards; decisions based on ethnicity, sexuality, etc require a legal basis.
No legislation envisaged, just v general "cross-sectoral principles on a non-statutory footing". UK gov continues its trend of shuffling responsibility for developing a regulatory approach onto the regulators themselves, while EU shuffles it onto private standards bodies.
Meanwhile, regulators are warned not to actually do anything, and care about unspecified, directionless innovation most of all, as will be clearer this afternoon as the UK's proposed data protection reforms are perhaps published in a Bill.
By my calculations, the "unexpected" first class degrees model @officestudents uses to calculate grade inflation uncritically expects a white, non-mature/disabled female law student to have a 40.5% chance of a First; the same Black student a 15.4% chance. theguardian.com/education/2022…
The data is hidden in Table 6 of the annex to the report here officeforstudents.org.uk/publications/a… (you need to add up the relevant model estimates, then take the inverse log odds; rough sketch below)
(I also used the 2020-21 academic year but you can choose your own)
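If you want to reproduce that arithmetic, a minimal sketch of the 'add up the estimates, take the inverse log odds' step; the coefficient values below are placeholders for illustration, not the actual OfS figures from Table 6:

```python
import math

def inverse_log_odds(log_odds: float) -> float:
    """Turn a summed log-odds (logit) value into a probability."""
    return 1 / (1 + math.exp(-log_odds))

# Placeholder estimates -- substitute the real values from Table 6 of the
# OfS annex for the student characteristics you are interested in.
estimates = {
    "intercept": -1.0,
    "subject: law": 0.3,
    "sex: female": 0.2,
    "ethnicity: Black": -0.5,
}

p_first = inverse_log_odds(sum(estimates.values()))
print(f"Modelled chance of a First: {p_first:.1%}")
```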