Calls for a ban on any AI that may result in mass surveillance
Highlights the power asymmetry between those who employ AI technologies and those who are subject to them
"Highlights the potentially grave adverse consequences when individuals overly trust in the seemingly objective and scientific nature of AI tools and fail to consider the possibility of their results being incorrect, incomplete, irrelevant or discriminatory"
Underlines that decisions giving legal or similar effect always need to be taken by a human; calls for a ban on the use of AI for proposing judicial decisions
"Only such tools and systems should be allowed to be purchased by law enforcement or judiciary authorities in the Union whose algorithms and logic is auditable and accessible (...) and that they must not be closed or labelled as proprietary by the vendors"
Calls for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI systems for law enforcement or the judiciary
Opposes the use of AI to make behavioural predictions on individuals or groups on the basis of historical data and past behaviour, group membership, location, or any other such characteristics, thereby attempting to identify people likely to commit a crime
"Calls, furthermore, for the permanent prohibition of the use of automated analysis and/or recognition in publicly accessible spaces of other human features, such as gait, fingerprints, DNA, voice, and other biometric and behavioural signals"
A moratorium on the deployment of facial recognition systems for law enforcement purposes that have the function of identification, and a ban on the use of private facial recognition databases in law enforcement
From a 🇵🇹 point of view, it will be interesting to look at how our MEPs vote, since we don't know their positions on AI (and on biometric recognition in public spaces, in particular) very well.
The report was approved in the LIBE Committee with @PauloRangel_pt voting against
There are amendments (co-signed by Rangel) diluting the prohibition/moratorium (see above) on the use of biometric recognition in public spaces
Happening now. @YlvaJohansson’s first intervention is a huge disappointment - confusing several different uses of AI, using clear fallacies (e.g. drawing equivalences between very different situations), and standing by the loopholes in the AI Act proposal
The Parliament’s debates are always difficult to follow, and it’s too easy to get away with erroneous statements and a “have your cake and eat it” attitude - “I am for both fundamental rights and safety”, as if that solved the dilemma.
Some MEPs have decided to mimic Silicon Valley rhetoric about AI. No, sorry, but that’s not true: predictive policing is not a big advance in the fight against crime, it’s a setback for fundamental rights without evidence of effectiveness in preventing crime. Just an example
Great intervention by @svenja_hahn (interestingly, an FDP MEP). With clear positions both from FDP and Greens, will this mean Germany siding with a moratorium/ban of biometric recognition in public spaces?
Even better from @karmel80 now: not every use of AI is problematic, but predictive and risk assessment algorithms and automated decisions in law enforcement surely are (paraphrasing)
“We should avoid paranoia” - tell that to people (minorities, mostly) whose fundamental rights have been infringed. To people who have been erroneously stopped, searched, detained for interrogation. To communities wrongly flagged as areas of potential crime due to biased datasets
Did @YlvaJohansson really just say “the accuracy of AI technolgies is 10 times the accuracy of non-AI technologies”?
Oh please, I thought we were having a serious debate
You can tell I’m upset, there’s a typo and everything
Results are in! Amendments weakening the language on remote biometric identification, mass surveillance and facial recognition were voted down. The report will be voted as tabled by @PetarVitanovMEP
Let's play who's who!
First: who voted against a ban on AI technologies that may lead to mass surveillance?
Second: who voted to not oppose the use of AI to make behavioural predictions?
Third: who voted for removing a moratorium on the deployment of facial recognition systems for law enforcement purposes that have the function of identification until they fulfil certain fundamental rights criteria?
Lastly: who voted for diluting a ban on any processing of biometric data, including facial images, for law enforcement purposes that leads to mass surveillance in publicly accessible spaces?
• • •
What we can learn from @60Minutes with #facebookwhistleblower is that Facebook keeps trying to frame the question as freedom of expression vs public safety, but it's very clearly profit vs public safety. No society should allow "profit" to be the answer to that choice.
There's Zuck telling Congress "We're doing the best that can be done while respecting our countries' values";
There's the statement "balance protecting the right of people to express themselves openly vs keep our platform a safe and positive place".
That's bullsh.
There's no such thing as a right to be amplified while misleading, lying to, hating, or harming others. Much can be done - Facebook's own documents say as much - before we get to the difficult speech vs safety question. What stops Facebook? The money they'll lose. Simple.
Brilliant read so far. @katecrawford unveiling the physical and sociopolitical nature of AI (and tech more broadly). We know it’s not just something “up” in the cloud, but it’s good to see some concrete examples of the dark side of this moon