The French presidency of the Council sent around a compromise text last week on Arts 8–15 of the EU AI Act (some of the requirements for high-risk systems). My analysis below: 🧵 1/
Remember that the AI Act hinges on proprietary, privately determined standards from CEN/CENELEC. The Commission has always held that these are optional, but the proposal goes further in making it practically impossible to comply without buying them (~100 EUR) and referring to them.
Scholars have long said that harmonised standards are not simply a substitute for the essential requirements laid down in legislation, but a de facto requirement. Note that Art 9(3) of the AIA also makes reference to them compulsory across the board. Law behind paywalls, made privately.
The risk assessment changes further remove obligations to consider risks that emerge from 'off-label' use. This matters because in practice, AI systems may be sold for one purpose but commonly used for another, with users taking on legal risk but benefiting from weak regulators.
Here's a good one. The French presidency have invented the "reverse-technofix" — the risk management system now carries no legal obligation to consider risks that can't be techno-magicked or information-provided away. This is certainly innovation, and it's horrifying.
Obligations on datasets now restrict "bias" to discrimination under Union law and health & safety — a small subset of the issues that can result from misrepresentation in data. Many forms of bias that scholars have highlighted as harmful to groups do not cleanly fall within these categories.
Dataset requirements also adapt what was already in the recitals of the EC draft to make clear that data need only be free of errors "to the best extent possible" — this looks like a big change, but concerns about the original text (largely from non-legal commentators) were overhyped and decontextualised.
There was always a 'catch-all' provision which said that if your AI system doesn't use training data, you should apply the spirit of this section. It now just says to ensure the soundness of validation data. But this section is also about *design choices* — do those obligations fall away without a reason?
Some will say "but AI systems need data!!". But what about using pre-built models-as-APIs, piecing together larger general-purpose models that the Council wants out of scope? There's no data at that stage of the process, and no ability for providers to go up the supply chain.
The Presidency adds an interesting obligation for the provider to consider data minimisation not just when they make the models (already law) but with regard to future model users (not necessarily an obligation on them today, as they may not be a GDPR controller at that point).
Is there even a European definition of a "start-up"?? This seems like a way for firms simply to avoid producing rigorous technical documentation — which matters, because that's what's uploaded to the public database for scrutiny and accountability.
Some changes to the logging requirements, but mostly just clarification and refinement — I don't see huge differences in substance here.
Transparency provisions have been weakened in ways that will concern some: there is no longer an obligation to make systems produce 'interpretable output', just an obligation to make 'usable' and 'understandable' systems (not output).
A weakening of the provision designed to give users information on performance metrics for subgroups of the population.
However, there is an increase in transparency on "computational and hardware resources needed", which might allow better studies of the environmental impact of AI through the public database.
A proportionality test is introduced into the human oversight provisions — providers now need not provide human oversight functions if doing so would be disproportionate. Nice outcome if you can get it (and they will try).
The Presidency does, however, double down on the "four eyes" principle around biometric recognition: clarifying that biometric recognition systems must be designed so that their results are manually and *separately* verified by two natural persons.
But remember, that is a design requirement — if the Presidency chooses to weaken the extent to which law enforcement must follow the system's instructions for use, this doesn't mean anything.
Some careful and welcome clarification that feedback loops have to be considered not only where outputs are 'used' as new inputs, but also where they 'influence' them.
That's all for now. I believe the text was essentially given to POLITICO, but I keep trying to subscribe to their PRO service and they won't even give me a quote. You can download it here: cloud.michae.lv/s/3rf8qyfDiENF…
And if you haven't read the original, the paper I wrote with @fborgesius on demystifying it all might help: osf.io/preprints/soca…