I know about diversity-washing, I know about the empty lip service. But I still can't get past the contrast between @JeffDean's tweets (h/t @EricaJoy) and his treatment of @timnitGebru: never having a conversation with her, not telling her manager, denying her DEI experience, ...
Here's a recent thread I did on various forms of "washing": AI-ethics washing, diversity washing, Black Power washing, ... all very relevant to Google AI
Jeff's org hired only 14% women last year, advocates for diversity at Google experience retaliation, and yet he really thinks he knows more about DEI than Timnit:
This idea that you can't highlight problems without offering a solution is pervasive, harmful, and false.
Efforts to accurately identify, analyze, & understand risks & harms are valuable. And most difficult problems are not going to be solved in a single paper.
I strongly believe that in order to solve a problem, you have to diagnose it, and that we’re still in the diagnosis phase of this... Trying to make clear what the downsides are, and diagnosing them accurately so that they can be solvable is hard work -- @JuliaAngwin
With industrialization, we had 30 yrs of child labor & terrible working conditions. It took a lot of journalist muckraking & advocacy to diagnose the problem & have some understanding of what it was, and then the activism to get laws changed
I have long admired @timnitGebru for her brilliance, moral courage, clear voice in speaking up for what is right, & influential scholarship. It is truly terrible that Google would do this.
In this thread, I want to share some of Timnit's work I love
I've quoted "Datasheets for Datasets" (2018) in many of my talks & assign it as reading in my class. It highlights decisions that go into creating & maintaining datasets, and how standardization & regulation came to other industries
Timnit worked with @jovialjoy on the original Gender Shades research, which has had a profound impact on facial recognition, led to concrete regulations, and changed our industry
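The core methodological move in Gender Shades is disaggregated evaluation: reporting error rates per intersectional subgroup (gender crossed with skin type) rather than one aggregate number. A minimal sketch of that idea, with made-up data for illustration:

```python
import pandas as pd

# Hypothetical audit results: one row per image, recording whether a
# classifier's prediction was correct, plus the subgroup attributes
# Gender Shades disaggregated on (gender and binarized skin type).
df = pd.DataFrame({
    "gender":    ["female", "female", "male", "male", "female", "male"],
    "skin_type": ["darker", "lighter", "darker", "lighter", "darker", "darker"],
    "correct":   [False, True, True, True, False, True],
})

# A single aggregate accuracy can hide subgroup disparities...
print("overall accuracy:", df["correct"].mean())

# ...while grouping by intersectional subgroup exposes them.
print(df.groupby(["gender", "skin_type"])["correct"].mean())
```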
Reciprocity is a key part of life. Surveillance undermines reciprocity. Every time we opt for surveillance or extractive technology, we undermine reciprocity and relationship. -- @doxtdatorb #AgainstSurveillance
Between 1971 and 1974, a Detroit Police Department surveillance unit called STRESS (Stop The Robberies, Enjoy Safe Streets) fatally shot 24 people, 22 of them African-American -- @hypervisible #AgainstSurveillance
A freelance journalist in Vietnam w/ 150,000 followers & a verified Facebook account realized all his posts about a high-profile death penalty case had vanished with no notification
There has been some great work on framing AI ethics issues as ultimately about power.
I want to elaborate on *why* power imbalances are a problem. 1/
*Why* power imbalances are a problem:
- those most impacted often have the least power, yet are the first to identify risks
- those most impacted best understand what interventions are needed
- the powerful often have no motivation to change
- power tends to be insulating 2/
The Participatory Approaches to ML workshop at #ICML2020 was fantastic. The organizers highlighted how even many efforts for fairness or ethics further *centralize power*
My impression is that some folks use machine learning to try to "solve" problems of artificial scarcity. E.g.: we won't give everyone the healthcare they need, so let's use ML to decide who to deny.
Question: What have you read about this? What examples have you seen?
It's not explicitly stated in this article, but it seems to be a subtext that giving everyone the healthcare they need wasn't considered an option: