The French presidency of the Council sent around a compromise text last week on Arts 8-15 of the EU AI Act (some of the requirements for high-risk systems). My analysis of them below: 🧵 1/
Remember that the AI Act hinges on proprietary, privately determined standards from CEN/CENELEC. The Commission has always held that these are optional, but the proposal goes further in making it practically impossible to comply without buying them (~100 EUR) and referring to them.
Scholars have long said that harmonised standards are not simply a substitute for the essential requirements laid down in legislation, but a de facto requirement. Note Art 9(3) of the AIA also makes referring to them effectively compulsory across the board. Law behind paywalls, made privately.
Risk assessment changes further remove obligations to consider risks that emerge from 'off-label' use. This is important because in practice, AI systems may be sold for one purpose but commonly used for another, with users taking on the legal risk but benefitting from weak regulators.
Here's a good one. The French presidency have invented the "reverse-technofix" — there is now no legal obligation in a risk management system to consider risks that can't be techno-magicked or information-provided away. This is certainly innovation, and it's horrifying.
Obligations on datasets now restrict "bias" to discrimination in Union law/health & safety — a small subset of issues that can result from misrepresentation in data. Many forms of bias that scholars have highlighted as harmful to groups do not cleanly fall within these categories.
Dataset reqs. also adapt what was already in the recitals of the EC draft to make clear that data need only be free of errors "to the best extent possible" — looks like a big change, but concerns from (largely non-legal) commentators with the original text were overhyped and decontextualised.
There was always a 'catch-all' provision which said that if your AI system doesn't use training data, you apply the spirit of this section. Now it just says to ensure the soundness of validation data. But this section is also about *design choices* — do those obligations fall away without a reason?
Some will say "but AI systems need data!!". But what about using pre-built models-as-APIs, piecing together larger general purpose models that the Council wants out of scope? There's no data in that stage of the process, and no ability for providers to go up the supply chain.
The Presidency adds an interesting obligation for the provider to consider data minimisation not just when they make the models (already law) but also with regard to future model users (not necessarily an obligation at that point, as the provider may not be the GDPR controller).
I-Is there a European definition of a "start-up"?? This seems like a way for firms simply to avoid producing rigorous technical documentation — which, note, is important, because that's what gets uploaded to the public database for scrutiny and accountability.
Some changes to the logging requirements, but mostly just clarification and refinement; I don't see huge differences in substance here.
Transparency provisions have been weakened in ways that will concern some: no longer an obligation to make 'interpretable output' from systems, just an obligation to make 'usable' and 'understandable' systems (not output).
Weakening of the provision designed to provide users with information on performance metrics for subgroups of the population.
However, there is an increase in transparency on "computational and hardware resources needed", which might allow better studies of the environmental impact of AI through the public database.
A proportionality test is introduced in the human oversight provisions — providers can now omit human oversight functions if providing them would be disproportionate. Nice outcome if you can get it (and they will try).
The Presidency however doubles down on the "four eyes" principle around biometric recognition, clarifying that such systems must be designed so that identifications are manually and *separately* verified by two natural persons.
However, remember that is a design requirement — if the Presidency chooses to weaken the extent to which law enforcement users have to follow the system's instructions for use, it doesn't mean anything.
Some careful and welcome clarification that feedback loops have to be considered not just when the outputs are 'used' as new inputs, but also where they 'influence' them.
That's all for now. I believe the text was essentially given to POLITICO, but I keep trying to subscribe to their PRO service and they won't even give me a quote. You can download it here. cloud.michae.lv/s/3rf8qyfDiENF…
How do and should model marketplaces hosting user-uploaded AI systems like @HuggingFace @GitHub & @HelloCivitai moderate models & answer takedown requests? In a new paper, @rgorwa & I provide case studies of tricky AI platform drama & chart a way forward. osf.io/preprints/soca…
There are a growing number of model marketplaces (Table). They can host models that create clear legal liability (e.g. models that can output terrorist manuals or CSAM). They also host AI that may be used harmfully, and some are already trying to moderate this.
Models can memorise content and reproduce it. They can also piece together new illegal content that has never been seen before. To this end, they can be (and in some regimes would be) equated with that illegal content. But how would marketplaces assess such a takedown request?
Int’l students are indeed used to subsidise teaching. High-quality undergraduate degrees cost more than £9,250 to run (they always have in real terms), but have been subsidised by both govs (now rarely) & academic pay cuts. If int’l students are capped, what fills the gap @halfon4harlowMP?
Tuition fees are a political topic because they’re visible to students, but the real question is ‘how is a degree funded’? The burden continues to shift from taxation into individual student debt, precarious reliance on int’l students, and lecturer pay.
Universities like Oxford distort the narrative too. College life is largely subsidised by college endowments and assets; by the past. The fact so much of the political class went to a university with a non-replicable funding model compounds issues hugely.
Users of the Instagram app should today send a subject access request email to Meta requesting a copy of all this telemetry ‘tap’ data. It is not provided in the ‘Download Your Information’ tool. Users of other apps in the thread that do this (eg TikTok) can do the same.
Form: m.facebook.com/help/contact/5…
Say you are using Art 15 GDPR to access a copy of data from in-app browsers, including all telemetry and click data for all time. Say it is not in ‘Download your Information’. Link to Krause’s post for clarity. Mention your Instagram handle.
The Data Protection and Digital Information Bill contains a lot of changes. Some were previewed in the June consultation response. Others weren't. Some observations: 🧵
Overshadowing everything is an ability for the Secretary of State to amend anything they feel like about the text of the UK GDPR through regulations, circumventing Parliamentary debate. This should not happen in a parliamentary democracy, is an abuse of powers, and must not pass.
Article 22, around automated decision-making, is gone, replaced by three articles which in effect say that ordinary significant automated decisions are never forbidden but get some already-present safeguards; decisions based on ethnicity, sexuality, etc. require a legal basis.
No legislation envisaged, just v general "cross-sectoral principles on a non-statutory footing". UK gov continues its trend of shuffling responsibility for developing a regulatory approach onto the regulators themselves, while EU shuffles it onto private standards bodies.
Meanwhile, regulators are warned not to actually do anything, and to care most of all about unspecified, directionless innovation, as will become clearer this afternoon when the UK's proposed data protection reforms are perhaps published in a Bill.
By my calculations, the "unexpected" first-class degrees model @officestudents uses to calculate grade inflation uncritically expects a white, non-mature/disabled female law student to have a 40.5% chance of a First, and the same student, if Black, a 15.4% chance. theguardian.com/education/2022…
The data is hidden in Table 6 of the annex to the report here officeforstudents.org.uk/publications/a… (you need to add up the model estimates, then take the inverse log-odds; a sketch of that calculation is below)
(I also used the 2020-21 academic year but you can choose your own)
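For clarity on the arithmetic only (the actual estimates are in Table 6 of the OfS annex), here is a minimal Python sketch of the "add up the model estimates, take the inverse log-odds" step. The numbers in example_terms are hypothetical placeholders, not the OfS coefficients:

```python
import math

def first_class_probability(log_odds_estimates):
    """Sum the model's log-odds estimates, then apply the inverse logit."""
    total_log_odds = sum(log_odds_estimates)
    return 1 / (1 + math.exp(-total_log_odds))

# Hypothetical placeholders only, NOT the actual OfS Table 6 estimates:
# in the real calculation you would sum the intercept plus the relevant
# coefficients for subject, sex, age, disability and ethnicity.
example_terms = [-1.5, 0.4, 0.2]
print(f"Predicted chance of a First: {first_class_probability(example_terms):.1%}")
```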