This idea that you can't highlight problems without offering a solution is pervasive, harmful, and false.
Efforts to accurately identify, analyze, & understand risks & harms are valuable. And most difficult problems are not going to be solved in a single paper.
I strongly believe that in order to solve a problem, you have to diagnose it, and that we’re still in the diagnosis phase of this... Trying to make clear what the downsides are, and diagnosing them accurately so that they can be solvable is hard work -- @JuliaAngwin
With industrialization, we had 30 yrs of child labor & terrible working conditions. It took a lot of journalist muckraking & advocacy to diagnose the problem & have some understanding of what it was, and then the activism to get laws changed
We're in a 2nd machine age now.
The two quotes above are from this interview with @JuliaAngwin, who has done crucial investigative reporting on algorithmic bias & other harms of tech companies, including the 2016 ProPublica investigation into the COMPAS recidivism algorithm.
Back to @JeffDean: the idea that a conference-peer-reviewed paper on ethical risks should be retracted for not being positive enough is so extreme as to seem almost like parody: "we don't talk about problems, only solutions." How will you know & understand those problems?
I know about diversity-washing, I know about the empty lip service. But I still can't get past the contrast between @JeffDean's tweets (h/t @EricaJoy) and his treatment of @timnitGebru: never having a conversation with her, not telling her manager, denying her DEI experience,...
I have long admired @timnitGebru for her brilliance, moral courage, clear voice in speaking up for what is right, & influential scholarship. It is truly terrible that Google would do this.
In this thread, I want to share some of Timnit's work I love
I've quoted "Datasheets for Datasets" (2018) in many of my talks & assign it as reading in my class. It highlights decisions that go into creating & maintaining datasets, and how standardization & regulation came to other industries
Timnit worked with @jovialjoy on the original GenderShades research, which has had a profound impact on facial recognition, led to concrete regulations, and changed our industry
Reciprocity is a key part of life. Surveillance undermines reciprocity. Every time we opt for surveillance or extractive technology, we undermine reciprocity and relationship. -- @doxtdatorb #AgainstSurveillance
Between 1971 and 1974, a Detroit Police Department surveillance unit called STRESS (Stop The Robberies, Enjoy Safe Streets) fatally shot 24 people, 22 of them African-American. -- @hypervisible #AgainstSurveillance
A freelance journalist in Vietnam w/ 150,000 followers & a verified Facebook account realized all his posts about a high-profile death penalty case had vanished with no notification
There has been some great work on framing AI ethics issues as ultimately about power.
I want to elaborate on *why* power imbalances are a problem. 1/
*Why* power imbalances are a problem:
- those most impacted often have least power, yet are the ones to identify risks earliest
- those most impacted best understand what interventions are needed
- often no motivation for the powerful to change
- power tends to be insulating 2/
The Participatory Approaches to ML workshop at #ICML2020 was fantastic. The organizers highlighted how even many efforts toward fairness or ethics further *centralize power*
My impression is that some folks use machine learning to try to "solve" problems of artificial scarcity. E.g., we won't give everyone the healthcare they need, so let's use ML to decide who to deny.
Question: What have you read about this? What examples have you seen?
It's not explicitly stated in this article, but it seems to be a subtext that giving everyone the healthcare they need wasn't considered an option: