Michael Veale (@mikarv) · Jan 21
The French presidency of the Council sent around a compromise text last week on arts 8-15 of the EU AI Act (some of the requirements for high-risk systems). My analysis of them below: 🧵 1/
Remember that the AI Act hinges on proprietary, privately determined standards from CEN/CENELEC. The Commission has always held that these are optional, but the proposal goes further in making it impossible to comply without buying them (~100 EUR) and referring to them.
Scholars have long said that harmonised standards are not simply a substitute for the essential requirements laid down in legislation, but a de facto requirement. Note that Art 9(3) of the AIA also makes referring to them compulsory across the board. Law behind paywalls, made privately.
Risk assessment changes further remove obligations to consider risks that emerge from 'off-label' use. This is important because, in practice, AI systems may be sold for one purpose but commonly used for another, with users taking on legal risk but benefiting from weak regulators.
Here's a good one. The French presidency have invented the "reverse-technofix" — there is now no legal obligation in a risk management system to consider risks that can't be techno-magicked or information-provided away. This is certainly innovation, and it's horrifying.
Obligations on datasets now restrict "bias" to discrimination in Union law and health & safety — a small subset of the issues that can result from misrepresentation in data. Many forms of bias that scholars have highlighted as harmful to groups do not cleanly fall within these categories.
Dataset requirements also adapt what was already in the recitals of the EC draft to make clear that data need only be free of errors "to the best extent possible" — this looks like a big change, but concerns from (largely non-legal) commentators with the original text were overhyped and decontextualised.
There was always a 'catch-all' provision which said that if your AI system doesn't use training data, you should apply the spirit of this section. It now just says to ensure the soundness of validation data. But this section is also about *design choices* — do those obligations simply fall away, without a reason?
Some will say "but AI systems need data!!". But what about using pre-built models-as-APIs, piecing together larger general-purpose models that the Council wants out of scope? There's no data at that stage of the process, and no ability for providers to go up the supply chain.
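To make that concrete, here's a minimal sketch (the endpoint, payload, and response shape are all hypothetical) of a provider whose system is just a thin wrapper over someone else's hosted model. There is no training, validation, or testing dataset at this stage of the supply chain for the data-governance obligations to attach to.

```python
import json
import urllib.request

# A downstream "provider" whose entire system is a thin wrapper around a
# hypothetical general-purpose model API. The upstream model, its training
# data, and its design choices are all opaque at this point in the chain.
API_URL = "https://api.example-model-vendor.test/v1/score"  # hypothetical endpoint

def classify_cv(cv_text: str) -> float:
    """Send text to the hosted model and return its (opaque) score."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"input": cv_text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["score"]

# e.g. classify_cv("...") — no dataset exists here to audit for bias or errors
```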
The Presidency adds an interesting obligation for the provider to consider data minimisation not just when they make the models (already law) but also with regard to future model users (not necessarily an obligation, as they may not be a GDPR controller at that point).
I-Is there a European definition of a "start-up"?? This seems like a way for firms simply to avoid making rigorous technical documentation — which, note, is important, because that's what's uploaded to the public database for scrutiny and accountability.
Some changes to the logging requirements, but mostly just clarification and refinement; I don't see huge differences in substance here.
Transparency provisions have been weakened in ways that will concern some: there is no longer an obligation to make 'interpretable output' from systems, just an obligation to make 'usable' and 'understandable' systems (not output).
A weakening of the provision designed to provide users with information on performance metrics for subgroups of the population.
However, there is an increase in transparency on "computational and hardware resources needed", which might allow better studies of the environmental impact of AI through the public database.
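As a sketch of what such studies might look like: with disclosed accelerator counts and runtimes, a back-of-envelope emissions estimate becomes possible. All figures below (power draw, datacentre overhead, grid carbon intensity, the example training run) are illustrative assumptions, not values from the text.

```python
# Rough emissions estimate from the kind of compute disclosure the draft
# would require. Every number here is a hypothetical placeholder.

def estimate_emissions_kg(device_count: int, hours: float,
                          device_power_kw: float = 0.3,      # ~300 W per accelerator (assumed)
                          pue: float = 1.5,                  # datacentre overhead (assumed)
                          grid_kg_per_kwh: float = 0.25) -> float:  # grid intensity (assumed)
    """kWh = devices x hours x kW x PUE; kg CO2e = kWh x grid intensity."""
    energy_kwh = device_count * hours * device_power_kw * pue
    return energy_kwh * grid_kg_per_kwh

# e.g. a disclosed run of 512 accelerators for two weeks (336 hours)
print(f"{estimate_emissions_kg(512, 336):.0f} kg CO2e")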
A proportionality test is introduced in the human oversight provisions — providers now need not provide human oversight functions if doing so would be disproportionate. Nice outcome if you can get it (and they will try).
The Presidency however double down on the "four eyes" principle around biometric recognition: clarifying that systems for biometric recognition must be designed so that their results are manually and *separately* verified by two natural persons.
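A toy sketch of what that design requirement could mean in software (names and structure entirely hypothetical): a match only becomes actionable once two distinct natural persons have verified it independently.

```python
from dataclasses import dataclass, field

# Illustrative "four eyes" gate: a biometric match takes effect only
# after two *different* natural persons have each verified it.
@dataclass
class Match:
    subject_id: str
    verified_by: set = field(default_factory=set)

    def verify(self, officer_id: str) -> None:
        self.verified_by.add(officer_id)

    def actionable(self) -> bool:
        # separate verification by two distinct persons required
        return len(self.verified_by) >= 2

m = Match("subject-42")
m.verify("officer-A")
m.verify("officer-A")   # the same person verifying twice does not count
print(m.actionable())   # False
m.verify("officer-B")
print(m.actionable())   # True
```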
Remember, though, that this is a design requirement — if the Presidency chooses to weaken the extent to which law enforcement must rely on the system's instructions for use, it doesn't mean anything.
Some careful and welcome clarification that feedback loops have to be considered not only when the outputs are 'used' as new inputs, but also where they 'influence' them.
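A toy illustration of the distinction, with entirely made-up numbers: the system's scores decide where data gets collected, so its outputs shape its future inputs without ever being literally reused as inputs.

```python
import random

random.seed(0)

# Two areas with the SAME true incident rate; the model starts with a
# slight prior towards area 0. Incidents are only *recorded* where the
# model sends attention, so its outputs influence future inputs even
# though no output is ever fed back in directly.
true_rate = [0.3, 0.3]
counts = [1.0, 1.0]   # observed incidents per area (pseudo-counts)
counts[0] += 0.5      # small initial skew

for step in range(1000):
    # allocate attention in proportion to observed counts
    target = 0 if random.random() < counts[0] / sum(counts) else 1
    if random.random() < true_rate[target]:
        counts[target] += 1

print(counts)  # area 0 tends to stay ahead despite identical true rates
```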
That's all for now. I believe the text was essentially given to POLITICO, but I keep trying to subscribe to their PRO service and they won't even give me a quote. You can download it here: cloud.michae.lv/s/3rf8qyfDiENF…
And if you haven't read the original, the paper I wrote with @fborgesius on demystifying it all might help: osf.io/preprints/soca…
