Discover and read the best of Twitter Threads about #aiact

Most recent (13)

Tomorrow at midday the moment arrives 🔥🔥🔥: @Europarl_DE will vote on the #AIAct, the AI law.
The conservatives - including the @CDU - are lining up against civil rights and want to overturn the ban on biometric mass surveillance (facial recognition etc.)!
The AI Regulation is set to become the world's first law to comprehensively regulate artificial intelligence. #AIAct
Read 8 tweets
Some notes on #AIAct and fundamental rights impact assessment (FRIA)

europarl.europa.eu/news/it/press-…
1) AIAct: “the fundamental rights impact assessment referred to in paragraph 1 shall be conducted in conjunction with the data protection impact assessment. The data protection impact assessment shall be published as an addendum.”
2) Article 29a lists some criteria for the FRIA (but see also Article 7).
Read 7 tweets
#RiseOfAI @Riseof_AI opens with a keynote generated by ChatGPT and piped into text-to-video. @bootstrappingme says 99% of people can't spot that this is not a person. Are there stats about this? I had about 0.2 seconds of doubt, but I'm sure there's better text-to-speech out there now.
Fabian did a great (incl. funny) job of complying with joanna-bryson.blogspot.com/2018/11/puttin… and fighting the #futureOfWork misinfo, but repeats the "#AIAct would have blocked #chatGPT" US west-coast talking point. Has anyone written a takedown of that yet? #AIEthics @meerihaataja * #riseofai
** The GDPR, though, should already have blocked #chatGPT from hoovering up our data for @OpenAI / @Microsoft for free. Is that already being investigated for prosecution? Anyone? #DSA #AIRegulation #digitalGovernance #RiseOfAI
Read 24 tweets
🧵 AI should be regulated so that everyone is protected, whatever their skin colour/migration story. But the #AIAct the European Parliament is set to vote on is silent about something extremely dangerous: it's called "non-remote biometric identification".
euronews.com/2023/04/24/as-…
Non-remote biometric identification systems (NRBI) include hand-held devices that scan faces, fingerprints or palms, & voice or iris identification tech used by police to identify you.
These systems are harmful in several ways. For one thing, personal data collected through these devices could be leaked and passed on elsewhere for other purposes. This is how a military database of fingerprints and iris scans was found on sale on eBay 👇 nytimes.com/2022/12/27/tec…
Read 13 tweets
This is often a critique of data protection as a mechanism for #AI regulation. But especially when combined with human rights and specific anti-discrimination law, how much of a gap does that leave? 🤔 (Of course, *effective* enforcement is always and everywhere a big q)
Non-personal data, obvs, although given the #GDPR’s expansive definition that is a shrinking area, and harms relating to profiling individuals will always be in scope. Group discrimination would come under HR and equality law (if written effectively). Safety (e.g. cars) is separate.
*Within* #GDPR there are certainly specific issues which @lilianedwards @mikarv @RDBinns @jennifercobbe and others are writing about. How far can they be addressed without reopening the legislative text?
Read 5 tweets
🚨The French National Assembly (@AssembleeNat) has permitted the use of AI-powered mass surveillance at the 2024 Paris Olympics. This completely undermines the EU’s ongoing efforts to regulate AI and protect fundamental rights through the #AIAct
amnesty.org/en/latest/news…
While France promotes itself as a champion of human rights globally, its decision to legalize AI-powered mass surveillance during the Olympics will lead to an all-out assault on the rights to privacy, protest, and freedom of assembly and expression.
This decision, which legalizes the use of AI-powered surveillance for the first time in France and the EU, risks permanently transforming France into a dystopian surveillance state, and allowing large-scale violations of human rights elsewhere in the bloc.
Read 6 tweets
Just back from a 🇺🇸 roadshow to spread the word about the emerging #AI regulation in 🇪🇺&🇬🇧, and the learning points from speaking to technology & privacy professionals are significant. Mini-🧵
1️⃣Overall, there is an understanding that this is happening and happening fast, but for a sizeable majority it is truly eye-opening to hear about the breadth & depth of the forthcoming #AI regulatory regime, and fully appreciating its precise impact is going to take time.
2️⃣The parallels with the emergence & spread of the #GDPR (despite the very different obligations) are what really make people realise how ambitious #AI regulation is and how it will affect businesses across the Atlantic.
Read 8 tweets
You know regulation by enforcement in the US? Like what Gary does? In Europe they have regulation by outrage, and it's bizarre:

The EU will effectively ban #ChatGPT and other generative AI models through its “AI Act”
As these models fall into the “high risk” category, they’ll face infeasible technical requirements (“bias-free training data”, “error-free training data”, “complete training data”, etc.).
ChatGPT generates texts, nothing more. It’s not a threat to health and safety! That’s why it doesn’t make any sense to classify generative AI models as “high risk”.
Read 5 tweets
1/4🛑The @Europarl_EN must BAN #EmotionRecognition & #AI polygraphs in the #AIAct.

'Emotion recognition' technologies are built on a chilling history of racism.

Along with biometric categorisation, these segregationist systems are the hidden face of #BiometricMassSurveillance. Mass surveillance of people...
2/4 MEPs need to prohibit systems from detecting or inferring emotion on the basis of data about our appearance or behaviour.

🙅🏾‍♀️These systems are based on racist pseudo-science & should not be allowed.

Read more from @ellajakubowska1 & @VidushiMarda: edri.org/our-work/emoti…
3/4 #FacialRecognition is used to assess if people are:

🛂‘deceptive’ about their immigration claim
🧑‍💼good employees
💸suitable consumers
📚good students
🤔likely to be violent & more.

This tech necessitates constant surveillance to make intrusive & arbitrary judgments about us
Read 4 tweets
Great news from Brussels for the #ReclaimYourFace movement!

Overall, 177 MEPs from 6 out of the 7 political groups support amendments for a stronger ban on remote biometric identification in the AI Act proposal 💪 🎉

reclaimyourface.eu/parliament-cal…
📍24 MEPs tabled a full and unequivocal ban on all types of remote biometric identification (RBI) in publicly-accessible spaces in the #AIAct IMCO-LIBE report.

📍18 MEPs supported a ban on real-time RBI in publicly-accessible spaces, by all actors, & without exceptions.
📍Dozens of MEPs proposed 2 new & important bans: prohibiting the police from using private biometric databases, and prohibiting the creation of biometric databases through mass/untargeted methods such as online scraping or the mass scraping of CCTV footage.

reclaimyourface.eu/social-media-a…
#ReclaimYourFace
Read 7 tweets
[1] European Parliament negotiators call for a prohibition on predictive policing!🔥

As reported in @euractiv, leads on the #AIAct @brandobenifei + @IoanDragosT call for a full BAN on #PredictivePolicing, a key demand of civil society:

🧵
edri.org/our-work/civil…
[2] In an interview with @BertuzLuca, #AIAct negotiators highlight that the draft report contains a ban on predictive policing.

[euractiv.com/section/digita…]
[3] Speaking to @clothildegouj at @politicoEurope, @IoanDragosT says that predictive policing violates the presumption of innocence.
Read 9 tweets
1/4 📢 Today, @EDRi, @fairtrials & 40+ civil society organisations urge the EU to ❌ BAN AI predictive & profiling systems in law enforcement & criminal justice in the #AIAct.

Check out our statement & spread the message 🔃 edri.org/our-work/civil…
[Image: children playing on a playground within a transparent bubble]
2/4 AI systems are used to profile people & areas, predict crime & assess the likely ‘risk’ of criminality.

This leads to undemocratic practices like surveillance, stop & search, fines, questioning, arrest, detention, prosecution & civil punishments like denial of welfare.
3/4 🚨AI can exacerbate structural imbalances of power & often harm the most marginalised in society.

The data that law enforcement & criminal justice systems use to build & run AI result in the over-policing, disproportionate surveillance & imprisonment of racialised groups in Europe.
Read 5 tweets
1/5 🚨 Today is European day against #Islamophobia!

Many Muslims are subject to discrimination, oppression & abuse because of anti-terrorist & surveillance policies in Europe that disproportionately target them as a threat to security.
2/5 ❌ The EU's 2020 Counter-Terrorism Agenda uses a flawed narrative that more surveillance is needed to guarantee security. This will only increase the over-policing of Muslims through data-sharing & biometric mass surveillance.
3/5 ➡️Read @ENAREurope's report about the impact of counter-terrorism law and policy on racialised groups in Europe: dro.dur.ac.uk/33090/1/33090.…

📣 Instead, the EU should emphasise and detail measures to support & promote the protection of #HumanRights, #Equality & #RuleOfLaw.
Read 5 tweets
