Lots of selected thoughts on the draft leaked EU AI regulation follow. Not a summary but hopefully useful. 🧵
Blacklisted Art 4 AI practices (except general scoring) come with exemptions for state use for public security, including by contractors. Tech designed to ‘manipulate’ people ‘to their detriment’, to ‘target their vulnerabilities’, or to profile comms metadata in an indiscriminate way is thus very possible for states.
This is clearly designed in part to avoid eg further upsetting France after the La Quadrature du Net case, where black-box algorithmic systems inside telcos were limited. Art 4(c) uses the same language as the CJEU did. Clear exemptions for orgs acting ‘on behalf’ of the state to avoid CJEU scope creep.
Some could say that by allowing a pathway for manipulation technologies to be used by states, the EU is making a ‘psyops carve out’.
Given this regulation applies to putting AI systems on the market too, it’s unclear to me how Art 4(2) would work for vendors who are in the EU, sell these systems to the public sector, but don’t yet have a customer. Could be drafted more clearly.
Article 8 considers training data. It only applies when the resultant system is itself high risk. It does not cover risks from experimentation on populations, or similar harms arising through the infrastructures used to train these systems. AI systems that don’t pose harms in use can still pose upstream harm.
Article 8(8) introduces a GDPR legal basis for processing special category data strictly necessary for debiasing. @RDBinns and I wrote about this challenge back in 2017. Some national measures had similar provisions already eg UK DP Act sch 1 para 8 journals.sagepub.com/doi/10.1177/20…
Article 8(9) also extends dataset-style provisions mutatis mutandis to eg expert systems and federated learning/multiparty computation, which is sensible.
User transparency for high risk AI systems, resembling labels in other sectors, is in Art 10. Some requirements on general logics and assumptions, but nothing too onerous. You’d expect most of this to be provided by vendors in most sectors already, to enable clients to write DPIAs.
Art 11(c) is interesting, placing organisational requirements to ensure human oversight is meaningful. It responds clearly to points @lilianedwards and I made in 2018 commenting on the A29WP ADM guidelines. [...]
In that paper (sciencedirect.com/science/articl…) we pointed out that ensuring ‘authority and competence’ was an organisational challenge.
(I elaborated on this with @InaBrass in an OUP chapter on Administration by Algorithm, pointing out the accountability challenges of such organisational requirements for authority and competence) michae.lv/static/papers/…
General obligations for robustness and security in Article 12. Does not cover issues of model inversion and data leakage from models (see @RDBinns @lilianedwards and myself, linked; I will stop the gratuitous self-plugging soon, sorry) royalsocietypublishing.org/doi/10.1098/rs…
The logging provision has a downside. Art 13 obliges providers to keep logs. This assumes systems are run as a service, with all the surveillance downsides @sedyst and @jorisvanhoboken have laid out in Privacy After the Agile Turn osf.io/preprints/soca…
Importer obligations in article 15 seem particularly difficult to enforce given the upstream nature of these challenges.
Monitoring obligations for users are good, but quite vague and don’t seem to impose very rigorous duties.
I won’t go into detail on the conformity assessment apparatus, which is seen in other EU law areas. Suffice to highlight a few things. Firstly, the Article 40 registration database is useful for journalists and civil society tracking vendors and high risk systems across Europe.
Some have already studied using other registration databases for transparency in this field (eg @levendowski papers.ssrn.com/sol3/papers.cf…) but that was with trademark law, so a flawed proxy compared to a registration database of actual high risk AI systems.
Also, there are several parts where conformity is assumed under certain conditions. See eg 35(2) which seems to assume all of Europe is the same place for phenomena captured in data. Wishful thinking! Ever closer data distribution.
Article 41 applies to all AI systems:
- notification requirements for when you’re talking to a human-sounding machine (@MargotKaminski this was in the CCPA too, I think?).
Important given Google’s proposed voice-assistant-as-robotic-process-automation thing, calling up restaurants etc.
Article 41(2) creates a notification requirement for emotion recognition systems (@damicli @luke_stark @digi_ad). This is important as some might (arguably) not trigger GDPR if designed using transient data (academic.oup.com/idpl/article-a…)
Disclosure obligations for deep fake users (cc @lilianedwards @daniellecitron) — but with what penalty? Might stop businesses but regime likely flounders against individuals.
EC moved from facial recognition ban to authorisation system. This is all very much in draft and could still disappear I bet. ‘Serious crime’ not ‘crime’ requirement will be a sticking point with member states. Not much point analysing this until it’s in the proposed version.
Article 45 claims to reduce burdens on SMEs by giving them some access to European initiatives like a regulatory sandbox (Art 44). There’ll be a big push to make this a scale-based regime of applicability, as these aren’t many concessions.
Interesting glimpse of something in another piece of unannounced regulation “Digital Hubs and Testing Experimentation Facilities”. Could be interesting. Keep eyes out.
Not another board! And this time with special provision to presumably grandfather in the EU HLEG on AI as advisors under Article 49. Nice deal if you can get it. My criticism of that group here: osf.io/preprints/lawa…
The post market monitoring system could be interesting. But heavily up to providers to determine how much they will do (ie next to none). Could also be used to say ‘we have to deliver this as an API, we can’t give it to you’. More pointless servitisation.
Article 59 presents a weird regime saying that if a member state still thinks a system presents a risk despite being in compliance, it can take action. This could be used (eg on freedom of expression grounds), but there are checks built into it via the Commission.
And of course the list of high risk AI in full (can be added to by powers in the reg). Hiring, credit, welfare, policing and tech for the judiciary are all notable. Very little that is delivered by the tech giants as part of their core businesses, that’s clearly in DSA/DMA world
How do and should model marketplaces hosting user-uploaded AI systems like @HuggingFace @GitHub & @HelloCivitai moderate models & answer takedown requests? In a new paper, @rgorwa & I provide case studies of tricky AI platform drama & chart a way forward. osf.io/preprints/soca…
@huggingface @github @HelloCivitai @rgorwa There are a growing number of model marketplaces (Table). They can be hosting models that can create clear legal liability (e.g. models that can output terrorist manuals or CSAM). They are also hosting AI that may be used harmfully, and some are already trying to moderate this.
@huggingface @github @HelloCivitai @rgorwa Models can memorise content and reproduce it. They can also piece together new illegal content that has never been seen before. To this end, they can be (and under some regimes would be) equated with that illegal content. But how would marketplaces assess such a takedown request?
Int’l students are indeed used to subsidise teaching. High quality undergraduate degrees cost more than £9250 to run (always have in real terms), but have been subsidised both by govs (now rarely) & by academic pay cuts. If int’l students are capped, what fills the gap @halfon4harlowMP?
Tuition fees are a political topic because they’re visible to students, but the real question is ‘how is a degree funded’? The burden continues to shift from taxation into individual student debt, precarious reliance on int’l students, and lecturer pay.
Universities like Oxford distort the narrative too. College life is largely subsidised by college endowments and assets, by the past. The fact so much of the political class went to a university with a non-replicable funding model compounds issues hugely.
Users of the Instagram app should today send a subject access request email to Meta requesting a copy of all this telemetry ‘tap’ data. It is not provided in the ‘Download Your Information’ tool. Users of other apps in the thread that do this (eg TikTok) can do the same.
Form: m.facebook.com/help/contact/5…
Say you are using Art 15 GDPR to access a copy of data from in-app browsers, including all telemetry and click data for all time. Say it is not in ‘Download your Information’. Link to Krause’s post for clarity. Mention your Instagram handle.
The Data Protection and Digital Information Bill contains a lot of changes. Some were previewed in the June consultation response. Others weren't. Some observations: 🧵
Overshadowing everything is an ability for the Secretary of State to amend anything they feel like about the text of the UK GDPR through regulations, circumventing Parliamentary debate. This should not happen in a parliamentary democracy, is an abuse of powers, and must not pass.
Article 22, around automated decision-making, is gone, replaced by three articles which in effect say that ordinary significant automated decisions are never forbidden but get some already-present safeguards; decisions based on ethnicity, sexuality etc require a legal basis.
No legislation envisaged, just very general "cross-sectoral principles on a non-statutory footing". The UK gov continues its trend of shuffling responsibility for developing a regulatory approach onto the regulators themselves, while the EU shuffles it onto private standards bodies.
Meanwhile, regulators are warned not to actually do anything, and care about unspecified, directionless innovation most of all, as will be clearer this afternoon as the UK's proposed data protection reforms are perhaps published in a Bill.
By my calculations, @officestudents' "unexpected" first class degrees model, which they uncritically calculate grade inflation with, expects a white, non-mature, non-disabled female law student to have a 40.5% chance of a First; the same Black student to have a 15.4% chance. theguardian.com/education/2022…
The data is hidden in Table 6 of the annex to the report here officeforstudents.org.uk/publications/a… (requires you to add up the relevant model estimates, then take the inverse log odds; a sketch of that calculation is below)
(I also used the 2020-21 academic year but you can choose your own)
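For anyone wanting to replicate the figures, here is a minimal sketch of that calculation in Python. The coefficient values below are placeholders, not the actual Table 6 estimates; substitute the published model estimates for the characteristics you want to compare.

```python
import math

def prob_from_log_odds(estimates):
    """Sum the model's log-odds estimates and convert to a probability (inverse logit)."""
    x = sum(estimates)
    return 1 / (1 + math.exp(-x))

# Illustrative placeholder values only: intercept plus terms for subject, sex,
# ethnicity, disability, age group etc., each read off the model estimate table.
white_student = [-0.2, 0.1, 0.3, 0.0, 0.0]
black_student = [-0.2, 0.1, -1.0, 0.0, 0.0]

print(f"{prob_from_log_odds(white_student):.1%}")  # modelled chance of a First
print(f"{prob_from_log_odds(black_student):.1%}")
```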