The Council presidency compromise text on the draft EU AI Act has some improvements, some big steps back, ignores some huge residual problems and gives a *giant* handout to Google, Amazon, IBM, Microsoft and similar. Thread follows. 🧵
The Good:
G1. The manipulation provisions are slightly strengthened by weakening the intent requirement and adding a consideration of reasonable likelihood. The recital also has several changes which suggest the drafters have actually read our AIA paper on sociotechnical systems and accumulated harms…
G2. Social scoring now also prohibited by private actors, which is a pretty big deal.
G3. Was unsure whether to add this to ‘good’, but I don’t really mind the changes to the scope of the Act. I think ‘modelling’ is still broad and includes complex spreadsheets and the like (cc @mireillemoret), w/o including systems simply automating obvious actions. Jury still out.
G4. Some changes to high risk in Annex III to include certain critical infrastructures and insurance premium setting, and to clarify that contractors and their ilk are included in law enforcement obligations.
The Bad:
B1. While the EU regulating national security *users* would be controversial, the text exempts systems developed solely for national security. NSO-equivalent firms just… fall out of the Act and its requirements, as long as they don’t sell more broadly. Even when selling abroad!
B3. The proposal does little to stop the huge pre-emption of any national rules on the use of AI, besides the reduced scope of the AI definition, which slightly shrinks the pre-empted area because not absolutely everything can now be claimed to be ‘use of software’.
B4. A huge removal from the high-risk list: systems modelling and searching through giant crime databases are out. Likely because, unlike many Annex III technologies, these are commonly used in MSs… In theory the EC could propose their return one day, but I wouldn’t hold my breath.
B5. The presidency thinks it is solving a great value chain problem by addressing general purpose systems, like APIs sold by Google, Microsoft, OpenAI etc. But it fails hugely here, and these companies will shriek with joy.
New 📰: There's more to the EU AI regulation than meets the eye: big loopholes, private rulemaking, powerful deregulatory effects. Analysis needs connecting to broader, sometimes pretty arcane, EU law.
The Act (new trendy EU name for a Regulation) is structured by risk: from prohibitions to 'high risk' systems to 'transparency risks'. So far so good. Let's look at the prohibitions first.
The Act prohibits some types of manipulative systems. The EC itself admits these have to be pretty extreme — a magic AI Black Mirror sound that makes workers work far beyond the Working Time Directive, and an artificially intelligent Chucky doll. Would it affect anything real?
Concerned with platforms' power to map & reconfigure the world w/ ambient sensing? I'm *hiring* a 2-year Research Fellow (postdoc) @UCLLaws. Think regulating Apple AirTags (UWB), Amazon Sidewalk (LoRa), and, yes, Bluetooth contact tracing. (please RT!) 1/ atsv7.wcn.co.uk/search_engine/…
Just as platforms wanted to be the only ones who could sell access to populations based on how they use devices, they want to determine and extract value from how physical space is used and configured. There is huge public value from this knowledge, and huge public risk. 3/
Hey Microsoft Research people who think that constant facial emotion analysis might not be a great thing (among others), what do you think of this proposed Teams feature published at CHI to spotlight videos of audience members with high affective ‘scores’? microsoft.com/en-us/research…
It requires constantly pouring all face data on Teams through Azure APIs. It specifically identifies head gestures and confusion to pull audience members out to the front, just in case you weren’t already policing your face enough during meetings.
Also note that Microsoft announced on Tuesday that it is opening up its Teams APIs to try to become a much wider platform to eat all remote work, so even if Teams didn’t decide to implement this directly, employers could through third party integration! protocol.com/newsletters/so…
Big UK GDPR case: Court of Appeal rules in favour of @OpenRightsGroup & @the3million: the Immigration Exemption to SARs is incompatible with Art 23 GDPR. This is a new 2018 exemption the Home Office uses to withhold data rights info in 59% of cases. bailii.org/ew/cases/EWCA/…
Warby LJ is sympathetic to CJEU jurisprudence that 'the legal basis which permits the interference with those rights must itself define the scope of the limitation', noting that the Immigration Exemption is highly discretionary, and the DPA 2018 does not contain limits on its scope.
However, Warby LJ judges the case more narrowly on a reading of Article 23(2), which permits Member States to restrict a GDPR right for public interest only if a 'legislative measure' contains limiting provisions.
Big Brother Watch judgment now out. Looking at the dissents, it does not look good for anti-surveillance campaigners: 'with the present judgment the Strasbourg Court has just opened the gates for an electronic “Big Brother” in Europe' hudoc.echr.coe.int/eng?i=001-2100…
and we go live to Strasbourg
Going to post some interesting pieces (not a judgment summary!) here. First: Contracting States can transfer Convention-compliant bulk intercept material to non-Contracting states that have only minimal protections (e.g. on keeping it secure/confidential). AKA the USA.