Significant news for the AI Act from the Commission: it has proposed its new Standardisation Strategy, which involves amending the 2012 Standardisation Regulation. Remember: the private bodies that make standards (CEN/CENELEC/ETSI) are the key entities that will determine the AI Act's final rules. 🧵
Firstly, the Commission acknowledges that standards increasingly touch not just technical issues but European fundamental rights (although it doesn't highlight the AI Act here). This has long been the elephant in the room: the EC stands accused of privately delegating rule-making.
They point to the CJEU's James Elliott case law in that respect (see 🖼), where the Court brought the interpretation of harmonised standards (created by private bodies!) within the scope of preliminary references. They could also have mentioned Fra.bo and Commission v Germany.
The EC notes that governance in these European Standardisation Bodies is outrageous. They point out that in ETSI, which deals with telecoms and more, industry capture is built in: civil society votes are “barely countable” and Member States hold “circa 2%” of the vote; the rest is industry.
The Commission has not officially confirmed who will be mandated to draft the standards for the AI Act (in slides I've heard them lean more towards CEN/CENELEC), but ETSI certainly wants in on the technology.
Next, there's a big geopolitical change. The Commission proposes an amendment, in a short revision to the Regulation, that would exclude non-EU/EEA standards bodies from voting on any standards relating to the AI Act. Sorry, @BSI_UK: no more formal influence post-Brexit.
This also pushes out the limited formal influence of European civil society orgs, even the three mandated and paid by the Commission to be involved in standardisation, from the consumer, social/trade union, and environmental fields: @anectweet, @ETUI_org, and ECOS.
To have a say on the AI Act, the Commission thus now relies on *Member State standardisation bodies* being sufficiently representative of societal interests, which in turn must be sufficiently attuned to highly technical, European-level policy processes. This seems a stretch.
The EC does hang a Sword of Damocles over the European standards bodies: get your wonky, broken governance in order or we'll regulate you more directly. But I doubt this will have a significant effect, and it isn't the first time they've made this threat.
In New Legislative Framework regulations like the AI Act (where standards can be used to demonstrate compliance), the EC can usually substitute standards with "common specifications" adopted as implementing acts, but it rarely does.
While EC executive action wouldn't itself be democratic, and a serious co-regulatory process would be needed, the common specification process seems better suited to fundamental-rights-charged areas of the AI Act than delegating to industry-captured standards bodies.
The EC proposes to develop a framework for when it uses common specifications or not. This seems an opportunity for the Parliament to push for an inclusive process for building them where standards touch on fundamental rights and freedoms, rather than delegating to ESOs.
Admittedly, the Chamber at the end says it wasn't really trying to anonymise.
So, the EDAA runs a site called "Your Online Choices", an incredibly little-used, awkward & archaic self-regulatory initiative of the ad industry, trying to claim that people have online choices in the absence of any. The website is linked to by ads, and itself places cookies.
B3. The proposal does little to stop the huge pre-emption of any national rules on the use of AI. The narrower AI definition reduces the pre-empted scope slightly, since not absolutely everything can now be claimed to be 'use of software'.
B4. A huge deletion from the high-risk list: systems that model and search giant crime databases. Likely because, unlike many Annex III technologies, these are commonly used in MSs… In theory the EC could propose their return one day, but I wouldn't hold my breath.
B5. The presidency thinks it is solving a great value-chain problem by addressing general-purpose systems, like the APIs sold by Google, Microsoft, OpenAI etc. But it fails hugely here, and these companies will shriek with joy.
The Council presidency compromise text on the draft EU AI Act has some improvements, some big steps back, ignores some huge residual problems and gives a *giant* handout to Google, Amazon, IBM, Microsoft and similar. Thread follows. 🧵
The Good:
G1. The manipulation provisions are slightly strengthened by weakening the intent requirement and considering reasonable likelihood. The recital also has several changes that actually look like they've read our AIA paper on sociotechnical systems and accumulated harms…
New 📰: There's more to the EU AI regulation than meets the eye: big loopholes, private rule-making, powerful deregulatory effects. Analysis needs to connect to broad (and sometimes pretty arcane) EU law.
The Act (the trendy new EU name for a Regulation) is structured by risk: from prohibitions, to 'high risk' systems, to 'transparency risks'. So far so good. Let's look at the prohibitions first.
The Act prohibits some types of manipulative systems. The EC itself admits these have to be pretty extreme — a magic AI Black Mirror sound that makes workers work far beyond the Working Time Directive, and an artificially intelligent Chucky doll. Would it affect anything real?
Concerned with platforms' power to map & reconfigure the world w/ ambient sensing? I'm *hiring* a 2-year Research Fellow (postdoc) @UCLLaws. Think regulating Apple AirTags (UWB), Amazon Sidewalk (LoRa), and, yes, Bluetooth contact tracing. (please RT!) 1/ atsv7.wcn.co.uk/search_engine/…
Just as platforms wanted to be the only ones who could sell access to populations based on how they use devices, they want to determine and extract value from how physical space is used and configured. There is huge public value from this knowledge, and huge public risk. 3/
Hey Microsoft Research people who think that constant facial emotion analysis might not be a great thing (among others), what do you think of this proposed Teams feature published at CHI to spotlight videos of audience members with high affective ‘scores’? microsoft.com/en-us/research…
Requires constantly pouring all face data on Teams through Azure APIs. Especially identifies head gestures and confusion to pull audience members out to the front, just in case you weren’t policing your face enough during meetings already.
Also note that Microsoft announced on Tuesday that it is opening up its Teams APIs to try to become a much wider platform to eat all remote work, so even if Teams didn’t decide to implement this directly, employers could through third party integration! protocol.com/newsletters/so…