There has been some great work on framing AI ethics issues as ultimately about power.
I want to elaborate on *why* power imbalances are a problem. 1/
*Why* power imbalances are a problem:
- those most impacted often have the least power, yet are typically the first to identify risks
- those most impacted best understand what interventions are needed
- the powerful often have no incentive to change
- power tends to be insulating 2/
The Participatory Approaches to ML workshop at #ICML2020 was fantastic. The organizers highlighted how even many efforts toward fairness or ethics further *centralize power*.
I think leaders at major tech companies often see themselves as benevolent & believe that critics do not understand the issues as well as they do. E.g., Facebook criticizes Kevin Roose's data as inaccurate, yet won't share other data.
Only the issues *most visible* to system designers get addressed. Under this model, many issues are not prioritized until they are incredibly widespread & causing serious harm, rather than being addressed earlier. 5/
A classic example of tech platforms being insulated from the harms they cause: Black women raised the alarm about deceptive sock puppet accounts and coordinated harassment in 2014, yet Twitter failed to respond. 6/
What motivates the powerful to act: I still think about Facebook hiring 1,200 content moderators in Germany in under a year to avoid a hefty fine, vs. hiring just "dozens" of Burmese content moderators after 5 years of warnings about genocide in Myanmar. 7/
Question: What reasons or resources have you found helpful in convincing those who don't see power imbalances within tech & AI as a problem, and who believe the solution is primarily for those with power to wield it more benevolently? 11/
My impression is that some folks use machine learning to try to "solve" problems of artificial scarcity. E.g.: we won't give everyone the healthcare they need, so let's use ML to decide whom to deny.
Question: What have you read about this? What examples have you seen?
It's not explicitly stated in this article, but it seems to be a subtext that giving everyone the healthcare they need wasn't considered an option:
I'm going to start a thread on various forms of "washing" (showy efforts to claim to care/address an issue, without doing the work or having a true impact), such as AI ethics-washing, #BlackPowerWashing, diversity-washing, greenwashing, etc
Feel free to add more articles!
"Companies seem to think that tweeting BLM will wash away the fact that they derive massive wealth from exploitation of Black labor, promotion of white anxiety about Blackness, & amplification of white supremacy."
--@hypervisible #BlackPowerWashing
Thread of some posts about diversity & inclusion I've written over the years. I still stand behind these.
(I'm resharing bc a few folks are suggesting Jeremy's CoC experience is partially our fault for promoting diversity, that we should change our values, etc. Nope!)
1/
Math & CS have been my focus since high school/the late 90s, yet the sexism & toxicity of the tech industry drove me to quit. I’m not alone. 40% of women working in tech leave. (2015)
New free online course: Practical Data Ethics, from fast.ai & @DataInstituteSF, covering disinformation, bias, ethical foundations, privacy & surveillance, the Silicon Valley ecosystem, and algorithmic colonialism
As @cfiesler showed with her spreadsheet of >250 tech ethics syllabi & her accompanying meta-analysis, tech ethics is a sprawling subject. No single course can cover everything. And there are so many great courses out there!
I spent a lot of time trying to cut my assigned reading list down to a reasonable length, as there are so many fantastic articles & papers on these topics. The following list is not at all exhaustive.
Another form of measurement bias occurs when there is systematic error, such as how pulse oximeters (a crucial tool in treating COVID) and Fitbit heart rate monitors (used in 300 clinical trials) are less accurate on people of color. 3/
Structural racism can be combated only with political will, not more data. Ending racism has to begin and end with political will. Data, while helpful in guiding policy focus, are not a shortcut to creating that will.
Data are not merely recorded or collected, they are produced. Data extraction infrastructures comprise multiple points of subjectivity: design, collection, analysis, interpretation and dissemination. All of these open the door to exploitation. 2/
In South Korea, digital COVID tracking has exacerbated hostility towards LGBTQ people.
When UK researchers set out to collect better data on Roma migrants to assess social needs, missteps in how the data were presented gave rise to political outcry over an "influx" of migrants. 3/