I'm going to start a thread on various forms of "washing" (showy efforts to appear to care about or address an issue, without doing the work or having a true impact), such as AI ethics-washing, #BlackPowerWashing, diversity-washing, greenwashing, etc.
Feel free to add more articles!
"Companies seem to think that tweeting BLM will wash away the fact that they derive massive wealth from exploitation of Black labor, promotion of white anxiety about Blackness, & amplification of white supremacy."
--@hypervisible #BlackPowerWashing
AI ethics washing is the fabrication or exaggeration of a company’s interest in equitable AI systems. A textbook example for tech giants is promoting “AI for good” initiatives while also selling surveillance capitalism tech -- @kharijohnson
Thread of some posts about diversity & inclusion I've written over the years. I still stand behind these.
(I'm resharing bc a few folks are suggesting Jeremy's CoC experience is partially our fault for promoting diversity, that we should change our values, etc. Nope!)
1/
Math & CS have been my focus since high school/the late 90s, yet the sexism & toxicity of the tech industry drove me to quit. I’m not alone. 40% of women working in tech leave. (2015)
new free online course: Practical Data Ethics, from fast.ai & @DataInstituteSF covering disinformation, bias, ethical foundations, privacy & surveillance, the Silicon Valley ecosystem, and algorithmic colonialism
As @cfiesler showed with her spreadsheet of >250 tech ethics syllabi & her accompanying meta-analysis, tech ethics is a sprawling subject. No single course can cover everything. And there are so many great courses out there!
I spent a lot of time trying to cut my assigned reading list down to a reasonable length, as there are so many fantastic articles & papers on these topics. The following list is not at all exhaustive.
Another form of measurement bias is systematic error: for example, pulse oximeters (a crucial tool in treating COVID) and Fitbit heart rate monitors (used in 300 clinical trials) are less accurate on people of color. 3/
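A minimal sketch of why systematic error differs from ordinary noise (all numbers here are made up for illustration, not drawn from the pulse-oximeter studies): random error averages out over many readings, while systematic error shifts an entire group's readings in one direction, e.g. overstating blood-oxygen levels and masking hypoxemia.

```python
import numpy as np

rng = np.random.default_rng(0)

true_spo2 = 92.0                         # hypothetical true oxygen saturation (%)
noise = rng.normal(0, 1.5, size=10_000)  # random error: zero-mean, washes out on average
bias = 2.0                               # hypothetical systematic error for one group

readings_fair = true_spo2 + noise
readings_biased = true_spo2 + bias + noise

print(readings_fair.mean())    # ~92.0: random error averages out over many readings
print(readings_biased.mean())  # ~94.0: systematic error persists, overstating oxygen levels
```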
Structural racism can be combated only if there is political will, not more data. Ending racism has to begin and end with political will. Data, while helpful in guiding policy focus, are not a shortcut to creating this will.
Data are not merely recorded or collected; they are produced. Data extraction infrastructures comprise multiple points of subjectivity: design, collection, analysis, interpretation and dissemination. All of these open the door to exploitation. 2/
In South Korea, digital COVID tracking has exacerbated hostility towards LGBTQ people.
When UK researchers set out to collect better data on Roma migrants to assess social needs, missteps in data presentation gave rise to political outcry over an "influx" of migrants. 3/
Things to know if you work on medical ML:
- Medical data can be incomplete, incorrect, missing, & biased
- The medical system is disempowering & often traumatic for patients
- It's crucial to involve patients & to recognize the risk that ML can further disempower them 1/
On bias in medicine (& thus medical data): research shows that the pain of women is taken less seriously than the pain of men. The pain of people of color is taken less seriously than the pain of white people.
Result: longer time delays, lower quality of care, & worse outcomes 2/
A meta-analysis of 20 years of published research found that Black patients were 22% less likely than white patients to receive *any* pain medication and 29% less likely to be treated with opioids. 3/
Important work is happening in *Participatory ML* and in recognizing that AI ethics is about the *distribution of power*. I want to create a thread linking to some of this work 1/