It feels like part of the reason that mainstream tech discourse has latched on so much to the specific problem of bias in AI, especially facial recognition, is that people are uncomfortable questioning the validity of institutions like policing.
It's much easier and safer to say "This software might be biased and therefore police shouldn't use it until it works right" than it is to say "this software will help police perform the functions of policing faster and more efficiently, and that in and of itself is a bad thing"
The same could be said for corporations looking to use AI and things like face recognition for marketing or customer experience, etc. Yes, bias in these systems can exacerbate discrimination, but using software to extract ever more profit from humans is problematic from the start
Most mainstream articles about facial recognition basically say something like "Privacy advocates have raised concerns that the software exhibits racial and gender bias." This is true, and those flaws are deeply concerning, especially given the way this software is being used RIGHT NOW.
But the reality is that facial recognition surveillance will still be used to enforce white supremacy if and when the algorithms improve and the bias issues are "addressed." Layering tech on top of inherently unjust systems simply automates and amplifies their injustice.
None of this is to diminish the absolutely crucial work being done by researchers and advocates exposing and documenting the ways that current facial recognition and other AI systems exhibit systemic racial and gender bias. That information is essential to having a real discussion
But my point is that pundits and tech writers consistently using "bias" as shorthand for what is actually a much broader set of systemic problems misleads people into thinking that this is an issue that can be easily fixed by addressing the bias, retraining the software etc.
It's not hard to see parallels with the ways reformists approach systemic racism in policing: they call for more training, more investigations, more bureaucracy and safeguards, rather than recognizing that some things just need to be abolished. Facial recognition is one of em.
Lol at @SenatorCantwell who apparently doesn't even know what bill they are voting on today. She called it the "Kids Online Privacy Act," which is extra ironic since this bill will take away kids' privacy rather than enhancing it.
@SenatorCantwell .@SenTedCruz pushing for #KOSA to include pre-emption, because he loves corporations even more than he hates gay people, and wants to help kill off state privacy bills like the #CCPA. Ironically this would also break stupid state bills like the Utah bill.
While politicians are racing ahead with proposals based on the premise that simply encountering content on social media is causing ... harms, the APA notes that the actual research is far less conclusive and far more nuanced than lawmakers’ rhetoric washingtonpost.com/politics/2023/…
In this new report, the APA also specifically notes that there is a significant lack of research on how young people from marginalized communities (like Black and brown kids and LGBTQ kids) experience social media and its associated benefits and harms.
That gap is dangerous, and we applaud the APA’s call for further research in this area. The reality is that many proposals for regulating social media will make some kids safer while making other kids less safe.
URGENT: We've just heard that @SenBlumenthal and @MarshaBlackburn plan to reintroduce the controversial Kids Online Safety Act (#KOSA) tomorrow.
They will say that they've engaged with LGBTQ groups (true) and addressed all concerns with the bill (NOT TRUE!!!)
Here's what's up:
When #KOSA was first introduced, more than 100 human rights and LGBTQ organizations signed on to a letter that we organized explaining how this bill would be a disaster for LGBTQ rights, free expression, and kids' safety. cnbc.com/2022/11/28/kid…
Realizing they had a problem, @SenBlumenthal's staff basically went behind the backs of the folks who organized that letter (mostly trans people with significant expertise in content moderation, tech policy, algorithmic harm, etc) and met with several LGBTQ groups without us.
Has anyone done a deep dive on the privacy and security implications of Netflix fingerprinting your home WiFi Network and essentially creating a record of when you are home or not … just to crack down on password sharing?
Oops, I was genuinely asking, not trying to do numbers, but here we are. A reporter reached out to me about my thoughts on this and I'm still formulating, but here's what I've got:
I mean in some ways there's nothing super unique about what Netflix is doing. Most websites you visit will know your IP address, rough location, what browser or OS you're using, etc.
weird to me how many organizations that say they want to "rid the Internet of disinformation" never seem to say anything about the fact that police routinely lie as a matter of practice, and that crime rates and statistics in the US are essentially a giant disinfo campaign 🤔
it's just funny to me that an entire industry has formed around the idea that people lying on the Internet is the greatest threat that our society faces, but it willfully refuses to call out the biggest source of lies because it's looking to those same systems to stop the lies
been reading @prisonculture and thinking through what an abolitionist lens on disinformation looks like. it's so far from the current way most mainstream progressive organizations are thinking through this issue: looking to more censorship & surveillance and policing as solutions
Madison Square Garden used facial recognition to identify and stop a mom from attending a Christmas show with her kid because she's an attorney at a firm that is engaged in litigation with them.
This is exactly why it is NOT ENOUGH to just ban government and law enforcement use of facial recognition and biometric surveillance. There are so many ways private corporations and even individuals can abuse this tech. It should be banned for all commercial and public use.
Facial recognition surveillance should be banned in all "places of public accommodation" as defined by the ADA. Portland, OR already passed a citywide ordinance that does this. We need to recreate that at the Federal level and then make this a global norm fightfortheftr.medium.com/why-we-absolut…