Admittedly, at the end the Chamber says it wasn't really trying to anonymise.
So, the EDAA runs a site called "Your Online Choices", an incredibly little-used, awkward & archaic self-regulatory initiative of the ad industry that tries to claim people have online choices in the absence of them. This website is linked to by ads, and itself places cookies.
Here are the headline findings. Violation of Arts 12-13 GDPR for placing a cookie to test the water for cookie acceptability before the consent banner was even shown.
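To make that first finding concrete, here's a minimal sketch (hypothetical, not the EDAA's actual script) of the "test cookie" pattern at issue: any probe of whether a browser accepts cookies works by writing a cookie, so it necessarily fires before any consent is collected.

```typescript
// Hypothetical sketch of a cookie-acceptability probe -- the pattern
// the Chamber objected to. The probe itself writes a cookie before the
// visitor has seen any consent banner.
declare function showConsentBanner(): void; // hypothetical banner renderer

function browserAcceptsCookies(): boolean {
  // Write a short-lived test cookie...
  document.cookie = "cookie_test=1; max-age=60; path=/; SameSite=Lax";
  // ...and check whether it stuck. Either way, a cookie has already
  // been placed without consent -- hence the Arts 12-13 problem.
  const accepted = document.cookie.includes("cookie_test=1");
  // Expire the test cookie immediately.
  document.cookie = "cookie_test=; max-age=0; path=/";
  return accepted;
}

// Only after the probe has run does the consent banner even appear;
// the ordering, not the function names, is the point.
if (browserAcceptsCookies()) {
  showConsentBanner();
}
```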
Violation of Art 13 GDPR for merely linking to a privacy policy rather than displaying concrete and specific information about the cookies used.
No clear violation, just a strong recommendation (?) followed by an order (??) that the countries data is being transferred to need to be included. Likely because the register is "based on a European regulator's model", so the Belgian DPA doesn't want to start a fight...?
The Your Online Choices tracking page did not need a DPO.
No breach of the cookie wall ban, as you could browse the website with strictly necessary cookies only, but the Belgian DPA makes it clear that it will take a dim view of cookie walls for non-strictly-necessary purposes.
Ultimately the latter was fairly predictable, because the website wasn't contingent on using non-necessary cookies (unless of course you consider its structural role in enabling them across the whole adtech industry, but data protection doesn't do structural roles well...).
Mostly, this decision fiddles with the information requirements a bit. A real question would be whether a cookie banner could ever contain too many actors, or too much information, for valid consent to be possible. There are indications in this decision that the Belgian DPA may think it could.
Oh, also worth noting that this is a cross-border case, passed over from a data subject in Germany; the EDAA is headquartered in Brussels.
• • •
Significant news for the AI Act from the Commission as it proposes its new Standardisation Strategy, involving amending the 2012 Regulation. Remember: private bodies making standards (CEN/CENELEC/ETSI) are the key entities in the AI Act that determine the final rules. 🧵
Firstly, the Commission acknowledges that standards increasingly touch not just on technical issues but on European fundamental rights (although it doesn't highlight the AI Act here). This has long been an elephant in the room: the EC stands accused of privately delegating rule-making.
They point to the CJEU's James Elliott case law in that respect (see 🖼), where the Court brought the interpretation of harmonised standards (created by private bodies!) within the scope of preliminary references. It could also have talked about Fra.Bo and Commission v Germany.
The Council presidency compromise text on the draft EU AI Act has some improvements, some big steps back, ignores some huge residual problems and gives a *giant* handout to Google, Amazon, IBM, Microsoft and similar. Thread follows. 🧵
The Good:
G1. The manipulation provisions are slightly strengthened by a weakening of the intent requirement and a consideration of reasonable likelihood. The recital also has several changes which actually suggest they have read our AIA paper, on sociotechnical systems and accumulated harms…
The Bad:
B3. The proposal does little to stop the huge pre-emption of any national rules on the use of AI, besides a reduction in the scope of the AI definition, which narrows the pre-empted field slightly because not absolutely everything can now be claimed to be 'use of software'.
B4. A huge removal from the high-risk list: systems for modelling and searching through giant crime databases are gone. Likely because, unlike many Annex III technologies, these are commonly used in Member States… In theory the EC could propose their return one day, but I wouldn't hold my breath.
B5. The presidency thinks it is solving a great value-chain problem by addressing general-purpose systems, like the APIs sold by Google, Microsoft, OpenAI etc. But it fails hugely here, and these companies will shriek with joy.
New 📰: There's more to the EU AI regulation than meets the eye: big loopholes, private rulemaking, powerful deregulatory effects. Analysis needs to connect it to broad (and sometimes pretty arcane) EU law.
The Act (new trendy EU name for a Regulation) is structured by risk: from prohibitions to 'high risk' systems to 'transparency risks'. So far so good. Let's look at the prohibitions first.
The Act prohibits some types of manipulative systems. The EC itself admits these have to be pretty extreme — a magic AI Black Mirror sound that makes workers work far beyond the Working Time Directive, and an artificially intelligent Chucky doll. Would it affect anything real?
Concerned with platforms' power to map & reconfigure the world w/ ambient sensing? I'm *hiring* a 2-year Research Fellow (postdoc) @UCLLaws. Think regulating Apple AirTags (UWB); Amazon Sidewalk (LoRa), and—yes—Bluetooth contact tracing. (please RT!) 1/ atsv7.wcn.co.uk/search_engine/…
Just as platforms wanted to be the only ones who could sell access to populations based on how they use devices, they want to determine and extract value from how physical space is used and configured. There is huge public value from this knowledge, and huge public risk. 3/
Hey Microsoft Research people who think that constant facial emotion analysis might not be a great thing (among others), what do you think of this proposed Teams feature published at CHI to spotlight videos of audience members with high affective ‘scores’? microsoft.com/en-us/research…
Requires constantly pouring all face data on Teams through Azure APIs. Especially identifies head gestures and confusion to pull audience members out to the front, just in case you weren’t policing your face enough during meetings already.
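For a sense of the plumbing, here's a rough sketch, assuming the emotion attributes the Azure Face detect endpoint exposed at the time (since retired); the scoring formula and every helper name here are illustrative, not Microsoft's or the paper's:

```typescript
// Illustrative sketch only: score each participant's latest video frame
// with a face-analysis API and spotlight the highest "affect" scores.
// Assumes the old Azure Face REST detect call with emotion attributes
// (since retired); the endpoint, key and scoring are hypothetical.

interface FaceResult {
  faceAttributes: { emotion: Record<string, number> };
}

async function affectScore(
  frame: Blob,
  endpoint: string,
  key: string
): Promise<number> {
  // One REST call per frame, per participant, continuously.
  const res = await fetch(
    `${endpoint}/face/v1.0/detect?returnFaceAttributes=emotion`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/octet-stream",
        "Ocp-Apim-Subscription-Key": key,
      },
      body: frame,
    }
  );
  const faces = (await res.json()) as FaceResult[];
  if (faces.length === 0) return 0;
  // Collapse the emotion distribution into one number; the paper's
  // actual scoring will differ -- this just shows the shape of it.
  const e = faces[0].faceAttributes.emotion;
  return (e.surprise ?? 0) + (e.happiness ?? 0) - (e.neutral ?? 0);
}

// Pull the top-scoring attendees "to the front".
async function pickSpotlights(
  frames: Map<string, Blob>, // participant id -> latest frame
  endpoint: string,
  key: string,
  topN = 2
): Promise<string[]> {
  const scored = await Promise.all(
    [...frames].map(
      async ([user, f]) => [user, await affectScore(f, endpoint, key)] as const
    )
  );
  return scored
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([u]) => u);
}
```

Whatever the exact scoring, the architecture is the point: every attendee's face stream has to round-trip through a cloud analysis API, all meeting long, for the feature to work at all.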
Also note that Microsoft announced on Tuesday that it is opening up its Teams APIs to try to become a much wider platform to eat all remote work, so even if Teams didn’t decide to implement this directly, employers could through third party integration! protocol.com/newsletters/so…