Lots of selected thoughts on the leaked draft EU AI regulation follow. Not a summary, but hopefully useful. 🧵
The Article 4 blacklist of AI practices comes with exemptions (general scoring aside), including for state use for public security, even by contractors. Tech designed to ‘manipulate’ people ‘to their detriment’, to ‘target their vulnerabilities’, or to profile communications metadata in an indiscriminate way remains very possible for states.
This is clearly designed in part not to further upset eg France after the La Quadrature du Net case, where black-box algorithmic systems inside telcos were limited. Art 4(c) uses the same language as the CJEU. Clear exemptions for organisations acting ‘on behalf’ of the state, to avoid CJEU scope creep.
Some might say that by allowing a pathway for manipulation technologies to be used by states, the EU is making a ‘psyops carve-out’.
Given this regulation also applies to putting AI systems on the market, it’s unclear to me how Art 4(2) would work for vendors who are in the EU and sell these systems to the public sector but don’t yet have a customer. It could be drafted more clearly.
Article 8 considers training data. It only applies when the resultant system is high risk, and does not cover risks from experimentation on populations (or similar) via the infrastructures used to train them. AI systems that don’t pose harms in use can still pose upstream harm.
Article 8(8) introduces a GDPR legal basis for processing special category data where strictly necessary for debiasing. @RDBinns and I wrote about this challenge back in 2017. Some national measures already had similar provisions, eg UK DP Act Sch 1 para 8. journals.sagepub.com/doi/10.1177/20…
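(To make the Art 8(8) point concrete: a bias audit needs the special category attribute itself. A minimal sketch below, with hypothetical column names and toy data of my own, nothing from the regulation or our paper: you simply cannot compute per-group selection rates without the protected characteristic.)

```python
# Minimal sketch: why debiasing/auditing needs the protected attribute itself.
# Column names and data are hypothetical illustrations, not from the regulation.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Per-group positive-outcome rate; the basic input to a disparate-impact check."""
    return df.groupby(group_col)[outcome_col].mean()

# Toy scored applications: 'ethnicity' is special category data under Art 9 GDPR,
# yet without it the disparity below is invisible to the provider.
df = pd.DataFrame({
    "ethnicity": ["A", "A", "B", "B", "B", "A"],
    "hired":     [1,   1,   0,   0,   1,   1],
})
rates = selection_rates(df, "ethnicity", "hired")
print(rates)                          # group A: 1.00, group B: 0.33
print(rates.min() / rates.max())      # four-fifths-rule style ratio, here ~0.33
```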
Article 8(9) also extends dataset-style provisions mutatis mutandis to eg expert systems and federated learning/multiparty computation, which is sensible.
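(For readers unfamiliar with the federated setting Art 8(9) gestures at: a rough sketch of federated averaging, where parties train locally and only share model weights, never raw records. Entirely illustrative; the function names and synthetic data are my own, not anything in the draft.)

```python
# Minimal sketch of federated averaging: parties never pool their raw data.
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray, lr=0.1) -> np.ndarray:
    """One gradient step of linear regression on a single party's own data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(global_w: np.ndarray, parties) -> np.ndarray:
    """Each party trains locally; only the weights are averaged centrally."""
    local_ws = [local_update(global_w.copy(), X, y) for X, y in parties]
    return np.mean(local_ws, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
parties = []
for _ in range(3):                       # three data holders who never share records
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    parties.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = fed_avg(w, parties)
print(w)                                 # approaches [2.0, -1.0] without pooling data
```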
Logging requirements in Article 9 are interesting and important. The Police DP Directive has similar requirements, and they matter. @jennifercobbe @jatinternet @cnorval have usefully written on decision provenance in automated systems here: export.arxiv.org/pdf/1804.05741
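(A minimal sketch of the per-decision logging that Article 9 and the decision-provenance literature point towards. The record fields are my own assumptions about what such a log might contain, not anything specified in the draft.)

```python
# Minimal sketch of per-decision logging ("decision provenance").
# Field names are illustrative assumptions, not taken from the draft regulation.
import hashlib, json, time, uuid

def log_decision(log_file, model_version: str, inputs: dict, output, operator_id: str) -> str:
    """Append one record per automated decision so it can later be audited."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,          # which model produced the decision
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "operator_id": operator_id,              # who or what invoked the system
    }
    log_file.write(json.dumps(record) + "\n")
    return record["decision_id"]

with open("decision_log.jsonl", "a") as f:
    log_decision(f, "credit-model-v1.3",
                 {"income": 30000, "postcode": "EC1"},
                 {"score": 0.62, "decision": "refer"},
                 operator_id="caseworker-17")
```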
User transparency for high risk AI systems resembles labelling in other sectors (Art 10). Some requirements on general logics and assumptions, but nothing too onerous. You’d expect most of this to be provided by vendors in most sectors already, to enable clients to write DPIAs.
Art 11(c) is interesting, placing organisational requirements to ensure human oversight is meaningful. It responds clearly to points @lilianedwards and I made in 2018 commenting on the A29WP ADM guidelines. [...]
In that paper (sciencedirect.com/science/articl…) we pointed out that ensuring ‘authority and competence’ was an organisational challenge.
(I elaborated on this with @InaBrass in an OUP chapter on Administration by Algorithm, pointing out the accountability challenges of such organisational requirements for authority and competence.) michae.lv/static/papers/…
General obligations for robustness and security are in Article 12. These do not cover issues of model inversion and data leakage from models (see @RDBinns, @lilianedwards and myself, linked; I will stop the gratuitous self-plugging soon, sorry): royalsocietypublishing.org/doi/10.1098/rs…
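(To illustrate the leakage risk Article 12 skips: the simplest membership-inference heuristic just compares a model’s confidence on its training records against unseen records. A hedged sketch with scikit-learn and synthetic data; not anyone’s production attack, just the intuition.)

```python
# Minimal sketch of a confidence-gap membership-inference check, a proxy for
# the data-leakage risk Article 12 does not address. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def confidence_on_true_label(model, X, y) -> np.ndarray:
    """Predicted probability assigned to each record's actual class."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

gap = (confidence_on_true_label(model, X_train, y_train).mean()
       - confidence_on_true_label(model, X_test, y_test).mean())
# A large gap means training records are distinguishable from non-members,
# i.e. the model leaks information about the data it was trained on.
print(f"mean confidence gap (train vs held-out): {gap:.3f}")
```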
The logging provision has a downside. Art 13 obliges providers to keep logs. This assumes systems are run as a service, with all the surveillance downsides @sedyst and @jorisvanhoboken have laid out in Privacy After the Agile Turn: osf.io/preprints/soca…
Importer obligations in Article 15 seem particularly difficult to enforce given the upstream nature of these challenges.
Monitoring obligations for users are good but quite vague, and don’t seem to impose very rigorous requirements.
I won’t go into detail on the conformity assessment apparatus, which is seen in other areas of EU law. Suffice to highlight a few things. Firstly, the Article 40 registration database will be useful for journalists and civil society tracking vendors and high risk systems across Europe.
Some have already studied using other registration databases for transparency in this field (eg @levendowski papers.ssrn.com/sol3/papers.cf…), but that was with trademark law, so clearly limited compared to a registration database of actual high risk AI systems.
Also, there are several parts where conformity is assumed under certain conditions. See eg Art 35(2), which seems to assume all of Europe is the same place for phenomena captured in data. Wishful thinking! Ever closer data distribution.
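(The Art 35(2) worry, made concrete: before treating ‘Europe’ as one training distribution you’d want a two-sample check per member state. A sketch with synthetic data and a Kolmogorov–Smirnov test; the country labels and distribution parameters are invented for illustration, not claims about any real dataset.)

```python
# Minimal sketch: checking whether a feature is identically distributed across
# two member states before assuming a single 'European' training distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
income_state_a = rng.lognormal(mean=10.2, sigma=0.5, size=5000)   # e.g. state A
income_state_b = rng.lognormal(mean=9.6,  sigma=0.7, size=5000)   # e.g. state B

stat, p_value = ks_2samp(income_state_a, income_state_b)
print(f"KS statistic={stat:.3f}, p={p_value:.2e}")
# A large KS statistic and tiny p-value: the 'same phenomenon' has a different
# distribution in each state, so conformity shown on one may not transfer.
```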
Article 41 applies to all AI systems: notification requirements for if you’re talking to a human-sounding machine (@MargotKaminski this was in CCPA too, I think?).

Important given Google’s proposed voice assistant as a robotic-process-automation thing, calling up restaurants etc.
Article 41(2) creates a notification requirement for emotion recognition systems (@damicli @luke_stark @digi_ad). This is important, as some might (arguably) not trigger the GDPR if designed using transient data (academic.oup.com/idpl/article-a…).
Disclosure obligations for deep fake users (cc @lilianedwards @daniellecitron), but with what penalty? Might stop businesses, but the regime likely flounders against individuals.
The EC has moved from a facial recognition ban to an authorisation system. This is all very much in draft and could still disappear, I bet. The ‘serious crime’ rather than ‘crime’ requirement will be a sticking point with member states. Not much point analysing this until it’s in the proposed version.
Article 45 claims to reduce burdens on SMEs by giving them some access to European initiatives like a regulatory sandbox (Art 44). There’ll be a big push to make this a scale-based regime of applicability, as these aren’t many concessions.
An interesting glimpse of something in another piece of unannounced regulation: “Digital Hubs and Testing Experimentation Facilities”. Could be interesting. Keep an eye out.
Not another board! And this time with special provision to, presumably, grandfather in the EU HLEG on AI as advisors under Article 49. Nice deal if you can get it. My criticism of that group is here: osf.io/preprints/lawa…
The post-market monitoring system could be interesting, but it is heavily up to providers to determine how much they will do (ie next to none). It could also be used to say ‘we have to deliver this as an API, we can’t give it to you’. More pointless servitisation.
Article 59 presents a weird regime saying that if a member state still thinks a system presents a risk despite being in compliance, it can take action. This could be used (eg for freedom of expression concerns), but there are checks on it built in with the Commission.
And of course the list of high risk AI in full (which can be added to by powers in the regulation). Hiring, credit, welfare, policing and tech for the judiciary are all notable. Very little that is delivered by the tech giants as part of their core businesses; that’s clearly DSA/DMA world.

