Ali Alkhatib ➡️ @ali@masto.al2.in
former Director, Center for Applied Data Ethics, University of San Francisco; former (x2) PhD student, Stanford CS; former (x3) anthropologist; HCI/AI/ethics
Apr 26 5 tweets 1 min read
philosophy chair getting arrested, an econ prof getting her head smashed against concrete, a street medic being held down and tased, snipers setting up vantage points to shoot at ohio state students -all because of nationwide protests against a genocide- you SHOULD be radicalized

i haven't even mentioned the mass arrests, the administration's incendiary lies, the totally belligerent behavior of university presidents and other political officials, or THE HUNDREDS OF BODIES FOUND IN MASS GRAVES WHERE DOZENS OF PEOPLE WERE CONFIRMED TO HAVE BEEN BURIED ALIVE
Oct 18, 2022 5 tweets 1 min read
i think this is a good time to reflect on something!

the work i did in grad school focused specifically on gig work for a bit, and i have fairly strong feelings about the shitty conditions people have to endure to make entirely too little money via these labor platforms. /4

i try not to be super hard-line about shaming people for using services like prime, doordash, instacart, etc... though.

these platforms are shit: don't pay workers enough, & in many cases play really scummy mind games and tricks on workers to get them to work more & earn less /4
Oct 18, 2022 5 tweets 2 min read
> The difference between a smartwatch and an ankle monitor is, in many ways, a matter of context: Who wears one for purported betterment, and who wears one because they are having state power enacted against them?

❗️ this piece by @hypervisible is a must

theatlantic.com/technology/arc…

i'd actually add a bit here; they both "exist for one's betterment" - the imposition of *someone else's* morality and definition of "better" is a crucial part of policing your movements with eg an ankle monitor. how this tech leverages influence over you is continuously shifting
Oct 18, 2022 4 tweets 1 min read
interesting question... QTing for input:

my unrigorous guess is: a lot of people in this space at least study (often experience) violence along certain dimensions (eg race, gender, class, caste). what we learn from those fields informs & precipitates commitments to those issues

as for why there aren't many AI ethics people with less progressive politics

i mean i have 2 answers:

1. if you avoid bringing history etc into your analysis you run out of analytical power really quickly

2. those people exist; i don't think we regard them as serious peers lol
Jan 16, 2021 5 tweets 2 min read
it's okay if you have a VAGUE sense of why people are so excited that Dr @Alondra Nelson is going to be the deputy director for Science and Society in the Biden administration. but there are very specific reasons a lot of us are so excited that i want you to enjoy too:

Dr Nelson wrote *Body and Soul* - a history of grassroots organizing, an under-appreciated story of the Black Panther Party opening health centers and fighting for medical rights as a central plank in the fight for civil and human rights.
Jan 15, 2021 5 tweets 1 min read
i've been anxious about the utopia paper for many reasons. the things i didn't have space to write about were one. another is that there are MANY people doing work in this area, but i either had to make a decision not to ref them, or lost track of the thing that allowed me to ref them.

there was a conversation about "what is AI" and the critique that my use of the term (in the 1st draft) was loose to the point of meaninglessness. to make it more precise, i ended up working the paper into a corner where i was talking about systems that use models generated by ML
Jan 15, 2021 4 tweets 2 min read
hi friends! sorry for flaking a bit

if you'd like to read my #CHI2021 paper "To Live in Their Utopia: Why Algorithmic Systems Create Absurd Outcomes", you can download a preprint and take a read here al2.in/papers/chi/uto…

some more stuff in the next tweet

for lots of reasons i couldn't get into a few areas in this CHI paper, but there were a few threads i wished i had been able to unpack more, and entire directions of research that i totally couldn't fit into this, so i wrote about all that here: ali-alkhatib.com/blog/utopia-an…
Jan 15, 2021 4 tweets 1 min read
i hardly hear talk in my timeline about the direction twitter chose not to go in, given the premise of special status for some people. what would be different if people who had special status were held to *stricter* standards rather than looser ones?

if the rules behind someone being verified include that the user be famous in their own right off-platform, then presumably they should be able to get their fringe/violent/dangerous content out on some other platform. these people should be the *least* destabilized by suspension
Jan 15, 2021 6 tweets 1 min read
for me the thing about the face/politics paper isn't the paper or the author. he's been doing unrigorous, shoddy, offensive work for years

what's wild to me is the apparent credulousness of the reviewers. like 5 people have to generally agree on the paper not being garbage

‽!? i mean i know exactly how it happens; draw reviewers from an intellectually shallow pool and this is what you get.

but it always surprises me that from a group of ~5 people not 1 is like "are you people fucking serious? there's not even a mechanism. it's just gee whiz bullshit"
Oct 31, 2020 5 tweets 2 min read
it's incredible that Dara comes right out and says that the business model hinges on figuring out who needs flexibility so much that they'll give up healthcare and worker protections. the hope is to target a small enough group that they can never become politically consequential.

It sounds like Dara's read Ursula Le Guin's short story "The Ones Who Walk Away from Omelas", wherein the people of Omelas lock up and torture a child in the supposed interest of everyone else's prosperity and happiness. to release the child would mean doom, the townspeople say.