Deb Raji
AI accountability, audits & eval. Keen on participation & practical outcomes. Fellow @mozilla, CS PhDing @UCBerkeley. forever @AJLUnited, @hashtag_include ✝️
Dec 2, 2022
There's a workshop paper from 2019(!) I wrote w/ @roeldobbe where we show through case studies that many of the "concrete" problems in AI safety are not at all concrete. In fact, much of that work ignores the nature of actual AI disasters on record & overlooks important research areas.

As part of that work, I read a lot of "AI safety" papers. What I realized was that many papers weren't even pretending to understand any of the actual safety risks on the ground, but were ultimately attempts at justifying pre-anointed concerns popular in the EA/longtermism crowd.
Dec 17, 2021
@thegautamkamath @emilymbender @ergodicwalk @amandalynneP @cephaloponderer @alexhanna Methodology-wise, a lot of @timnitGebru's work will be relevant (& is always quite practical):
arxiv.org/abs/1803.09010,
dl.acm.org/doi/abs/10.114…, etc.

I especially learnt a lot from "Lessons from the archives". Also a lot of @laroyo's work is quite practical: dl.acm.org/doi/abs/10.114…
Dec 8, 2021
This is happening now! Amazing panel so far 🔥

@sleepinyourhat being very honest right now about NLP benchmark limitations - "We've got this discourse on what language models are good at that's not very grounded - you can point to benchmark performance to make claims, but also point to embarrassing failures & say that nothing works"
Mar 27, 2021
These are the four most popular misconceptions people have about race & gender bias in algorithms.

I'm wary of wading into this conversation again, but it's important to acknowledge the research that refutes each point, even when the findings feel counter-intuitive.

Let me clarify.👇🏾

1. Bias can start anywhere in the system - pre-processing, post-processing, with task design, with modeling choices, etc., in addition to issues with the data. The system arrives as the result of a lot of decisions, and any of those decisions can result in a biased outcome.
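To make that point concrete, here is a minimal, hypothetical sketch (all names, numbers, and the simulate helper are invented for illustration): even when two groups have identical base rates in the data, a single post-processing decision - one shared score threshold - produces very different false-positive rates once an upstream modeling choice shifts one group's scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def simulate(score_shift):
    """Hypothetical group: 50% positive outcomes, noisy risk scores."""
    labels = rng.random(n) < 0.5
    # Scores track the label, plus a group-level shift from some
    # upstream decision (features chosen, training data, etc.).
    scores = rng.normal(0.4, 0.1, n) + 0.2 * labels + score_shift
    return scores, labels

scores_a, labels_a = simulate(score_shift=0.05)  # group A: shifted scores
scores_b, labels_b = simulate(score_shift=0.00)  # group B: unshifted

THRESHOLD = 0.55  # the post-processing decision under scrutiny

def false_positive_rate(scores, labels):
    """Share of true negatives flagged positive at THRESHOLD."""
    negatives = ~labels
    return ((scores >= THRESHOLD) & negatives).sum() / negatives.sum()

print(f"FPR group A: {false_positive_rate(scores_a, labels_a):.2f}")
print(f"FPR group B: {false_positive_rate(scores_b, labels_b):.2f}")
# Both groups have the same 50% base rate, yet group A's
# false-positive rate comes out far higher: the disparity was
# introduced by decisions, not by anything wrong with the labels.
```

The same exercise can be repeated at any stage - feature selection, label definition, threshold choice - which is exactly why "just fix the data" misses the point.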
Mar 17, 2021
There's something important that no one seems to be saying about this article so I'm putting my thoughts here to get it off my chest.

tl;dr I ended up reading this and thinking not so much about Facebook but much more about policy & incentives.

(THREAD)
technologyreview.com/2021/03/11/102…

I feel like I need to remind people right now that Facebook does indeed have very serious fairness/bias issues. They have been sued by HUD & the ACLU for biased ad delivery for housing & jobs. Focusing on these issues wasn't just a PR move; it was also a legal defense strategy.
Nov 9, 2020
It seriously annoys me how much tech companies and those representing them are valued in tech policy conversations.

It speaks to the prevalence of the false "prodigal tech bro" narrative that only those who participated in creating the harm, once reformed, will be able to stop it. Also undermines the value of those with non-tech co experience. I want to see those that have been fighting to protect people for year to be the top candidates considered.

Aug 13, 2020
This is actually unbelievable. In the UK, students couldn't take A-level exams due to the pandemic, so scores were automatically determined by an algorithm.

As a result, most of the As this year - way more than usual - were given to students at private/independent schools. 😩

Looks like @ICOnews has some guidance for how students can access information about their scores and contest results. h/t @mikarv for bringing this into my timeline.

Jul 26, 2020
Sometimes technology hurts people precisely because it *doesn't* work & sometimes it hurts people because it *does* work.

Facial recognition is both. When it *doesn't* work, people get misidentified, locked out, etc. But even when it *does*, it's invasive & still unsafe.

I think there's something particularly disturbing about the fact that there are deployed technologies of any sort that are not built for or tested on Black people (or any other minority population). That puts these populations at risk & is a problem worth addressing specifically.