Lots of selected thoughts on the leaked draft EU AI regulation follow. Not a summary, but hopefully useful. 🧵
The Art 4 blacklist of AI practices (except general social scoring) has exemptions, including state use for public security, including by contractors. Tech designed to ‘manipulate’ people ‘to their detriment’, to ‘target their vulnerabilities’, or to profile comms metadata in an indiscriminate way remains very possible for states.
This is clearly designed in part not to eg further upset France after the La Quadrature du Net case, where black-box algorithmic systems inside telcos were limited. The same language the CJEU used appears in Art 4(c). Clear exemptions for orgs acting ‘on behalf’ of the state avoid CJEU scope creep.
Some could say that by allowing a pathway for manipulation technologies to be used by states, the EU is making a ‘psyops carve out’.
Given this regulation also applies to putting AI systems on the market, it’s unclear to me how Art 4(2) would work for vendors who are in the EU and sell these systems to the public sector but don’t yet have a customer. Could be drafted more clearly.
Article 8 considers training data. It only applies when the resultant system is high risk. It does not cover risks from experimentation on populations, or similar, through the infrastructures used to train systems. AI systems that don’t pose harms in use can still pose upstream harms.
Article 8(8) introduces a GDPR legal basis for processing special category data where strictly necessary for debiasing. @RDBinns and I wrote about this challenge back in 2017. Some national measures already had similar provisions, eg UK DP Act sch 1 para 8 journals.sagepub.com/doi/10.1177/20…
Article 8(9) also extends the dataset-style provisions mutatis mutandis to eg expert systems and federated learning/multiparty computation, which is sensible.
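To make the debiasing point concrete: even the simplest group fairness check needs the protected attribute itself, which is exactly the special category data Art 8(8) now gives a basis to process. A minimal sketch in Python (hypothetical data and function names, mine not the draft’s):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-outcome rates between two groups.

    You cannot compute this without the protected attribute (`group`)
    -- hence the need for a legal basis to process special category
    data when testing a system for bias.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical model outputs and group labels, for illustration only.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.5 -- a large gap
```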
Logging requirements in Article 9 are interesting, and important. The Police DP Directive has similar ones, and they matter. @jennifercobbe @jatinternet @cnorval have usefully written on decision provenance in automated systems here export.arxiv.org/pdf/1804.05741
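If you want a feel for what decision-provenance-style logging could look like in practice, here’s a minimal sketch (my illustration, not anything Article 9 actually specifies): append-only records tying each automated decision to the model version and inputs that produced it.

```python
import datetime
import hashlib
import json

def log_decision(log_file, model_version, inputs, output):
    """Append one provenance record per automated decision.

    Hashing the inputs rather than storing them raw is one way to keep
    an audit trail without retaining the personal data itself.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical use: one JSON line per decision in an append-only log.
log_decision("decisions.jsonl", "risk-model-v1.3",
             {"age": 34, "postcode": "EC1"}, {"score": 0.82})
```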
User transparency requirements for high-risk AI systems in Art 10 resemble labelling in other sectors. Some requirements on general logics and assumptions, but nothing too onerous. You’d expect most of this to be provided by vendors in most sectors already, to enable clients to write DPIAs.
Art 11(c) is interesting, placing organisational requirements to ensure human oversight is meaningful. It responds clearly to points @lilianedwards and I made in 2018 commenting on the A29WP ADM guidelines. [...]
In that paper (sciencedirect.com/science/articl…) we pointed out that ensuring ‘authority and competence’ was an organisational challenge.
(I elaborated on this with @InaBrass in an OUP chapter on Administration by Algorithm, pointing out the accountability challenges of such organisational requirements for authority and competence) michae.lv/static/papers/…
General obligations for robustness and security appear in Article 12. They do not cover issues of model inversion and data leakage from models (see @RDBinns, @lilianedwards and me, linked; I will stop the gratuitous self-plugging soon, sorry) royalsocietypublishing.org/doi/10.1098/rs…
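To illustrate the sort of leakage Article 12 doesn’t reach: a bare-bones membership inference attack just thresholds a model’s confidence, because models tend to be more confident on their training data. A toy sketch under that assumption (hypothetical confidences and threshold):

```python
import numpy as np

def membership_inference(model_confidences, threshold=0.9):
    """Guess which records were in the training set.

    Models are often overconfident on examples they were trained on,
    so high confidence alone can leak membership -- a privacy harm
    that 'robustness and security' obligations don't obviously cover.
    """
    return model_confidences >= threshold

# Hypothetical top-class confidences: the first four are training members.
conf = np.array([0.99, 0.97, 0.95, 0.93, 0.70, 0.65, 0.88, 0.60])
print(membership_inference(conf))
# -> [ True  True  True  True False False False False]
```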
The logging provision has a downside. Art 13 obliges providers to keep logs. This assumes systems are run as a service, with all the surveillance downsides @sedyst and @jorisvanhoboken have laid out in Privacy After the Agile Turn osf.io/preprints/soca…
Importer obligations in article 15 seem particularly difficult to enforce given the upstream nature of these challenges.
Monitoring obligations for users are good, but quite vague, and don’t seem to impose very rigorous obligations.
I won’t go into detail on the conformity assessment apparatus, which is seen in other areas of EU law. Suffice it to highlight a few things. Firstly, the Article 40 registration database is useful for journalists and civil society tracking vendors and high-risk systems across Europe.
Some have already studied using other registration databases for transparency in this field (eg @levendowski papers.ssrn.com/sol3/papers.cf…), but that relied on trademark law, a clearly flawed proxy compared to a registration database of actual high-risk AI systems.
Also, there are several parts where conformity is presumed under certain conditions. See eg 35(2), which seems to assume all of Europe is the same place for the phenomena captured in data. Wishful thinking! Ever closer data distribution.
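A toy illustration of that dataset shift problem (synthetic data, purely illustrative): fit a model to one member state’s data and watch it degrade where the same phenomenon sits elsewhere in feature space.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic 'member state A': feature centred at 0, boundary near 0.
X_a = rng.normal(0, 1, (1000, 1))
y_a = (X_a[:, 0] + rng.normal(0, 0.5, 1000) > 0).astype(int)

# 'Member state B': the same phenomenon, but the boundary sits near 2.
X_b = rng.normal(2, 1, (1000, 1))
y_b = (X_b[:, 0] - 2 + rng.normal(0, 0.5, 1000) > 0).astype(int)

clf = LogisticRegression().fit(X_a, y_a)
print("accuracy at home:", clf.score(X_a, y_a))  # high
print("accuracy abroad:", clf.score(X_b, y_b))   # noticeably lower
```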
Article 41 applies to all AI systems: notification requirements if you’re talking to a human-sounding machine (@MargotKaminski this was in the CCPA too? I think?).

Important given Google’s proposed voice-assistant-as-robotic-process-automation thing, calling up restaurants etc.
Article 41(2) creates a notification requirement for emotion recognition systems (@damicli @luke_stark @digi_ad). This is important, as some might (arguably) not trigger the GDPR if designed using transient data (academic.oup.com/idpl/article-a…)
Disclosure obligations for deep fake users (cc @lilianedwards @daniellecitron) — but with what penalty? Might stop businesses, but the regime likely flounders against individuals.
The EC moved from a facial recognition ban to an authorisation system. This is all very much in draft and could still disappear, I bet. The ‘serious crime’ rather than ‘crime’ requirement will be a sticking point with member states. Not much point analysing this until it’s in the proposed version.
Article 45 claims to reduce burdens on SMEs by giving them some access to euro initiatives like a regulatory sandbox (Art 44). There’ll be a big push to make this a scale-based regime of applicability, as these concessions don’t amount to much.
An interesting glimpse of something in another piece of unannounced regulation: ‘Digital Hubs and Testing Experimentation Facilities’. Could be interesting. Keep eyes out.
Not another board! And this time with special provision presumably to grandfather in the EU HLEG on AI as advisors under Article 49. Nice deal if you can get it. My criticism of that group here: osf.io/preprints/lawa…
The post-market monitoring system could be interesting. But it’s heavily up to providers to determine how much they will do (ie next to none). Could also be used to say ‘we have to deliver this as an API, we can’t give it to you’. More pointless servitisation.
Article 59 presents a weird regime: if a member state still thinks a system presents a risk despite its being in compliance, it can take action. This could be useful (eg for freedom of expression), but there are checks built into it with the Commission.
And of course the list of high-risk AI in full (which can be added to by powers in the regulation). Hiring, credit, welfare, policing and tech for the judiciary are all notable. Very little that is delivered by the tech giants as part of their core businesses; that’s clearly in DSA/DMA world.

