There has been some great work on framing AI ethics issues as ultimately about power.

I want to elaborate on *why* power imbalances are a problem. 1/
*Why* power imbalances are a problem:
- those most impacted often have the least power, yet are the first to identify risks
- those most impacted best understand which interventions are needed
- the powerful often have no motivation to change
- power tends to be insulating 2/
The Participatory Approaches to ML workshop at #ICML2020 was fantastic. The organizers highlighted how even many efforts toward fairness or ethics further *centralize power*.

3/
I think leaders at major tech companies often see themselves as benevolent & believe that critics don't understand the issues as well as they do. E.g., Facebook dismisses Kevin Roose's data as inaccurate, yet won't share data of its own:

4/
Only the issues *most visible* to system designers get addressed. Under this model, many issues are not prioritized until they are incredibly widespread & causing serious harm, rather than being addressed earlier. 5/

A classic example of tech platforms being insulated from the harms they cause: Black women raised the alarm about deceptive sock puppets & coordinated harassment in 2014, yet Twitter failed to respond 6/

slate.com/technology/201…
What motivates the powerful to act: I still think about Facebook hiring 1,200 content moderators in Germany in < 1 year to avoid a hefty fine, versus hiring just “dozens” of Burmese content moderators after 5 years of warnings about genocide in Myanmar 7/

For more background, here is a thread of work framing AI ethics issues as being about power 8/

Machine learning often has the effect of centralizing power 9/

Also, related: Tech is not neutral 10/

Question: What reasoning or resources have you found helpful in convincing people who don't see power imbalances within tech & AI as a problem, & who believe the solution is primarily for those with power to wield it more benevolently? 11/

More from @math_rachel

18 Nov
My impression is that some folks use machine learning to try to "solve" problems of artificial scarcity. E.g.: we won't give everyone the healthcare they need, so let's use ML to decide who to deny.

Question: What have you read about this? What examples have you seen?
It's not explicitly stated in this article, but the subtext seems to be that giving everyone the healthcare they need wasn't considered an option:

To be clear, if the starting point is artificial scarcity of resources, this is a problem machine learning CAN'T solve
17 Nov
I'm going to start a thread on various forms of "washing" (showy efforts to claim to care about or address an issue, without doing the work or having a real impact), such as AI ethics-washing, #BlackPowerWashing, diversity-washing, greenwashing, etc

Feel free to add more articles!
"Companies seem to think that tweeting BLM will wash away the fact that they derive massive wealth from exploitation of Black labor, promotion of white anxiety about Blackness, & amplification of white supremacy."
--@hypervisible #BlackPowerWashing

Great paper on participation-washing in the machine learning community:

30 Oct
Thread of some posts about diversity & inclusion I've written over the years. I still stand behind these.

(I'm resharing because a few folks are suggesting Jeremy's CoC experience is partially our fault for promoting diversity, that we should change our values, etc. Nope!)

1/
Math & CS have been my focus since high school/the late 90s, yet the sexism & toxicity of the tech industry drove me to quit. I’m not alone. 40% of women working in tech leave. (2015)

medium.com/tech-diversity… 2/
Superficial, showy efforts at diversity-washing are more harmful than doing nothing at all. Research studies confirm this (2015)

medium.com/tech-diversity… 3/
19 Aug
new free online course: Practical Data Ethics, from fast.ai & @DataInstituteSF, covering disinformation, bias, ethical foundations, privacy & surveillance, the Silicon Valley ecosystem, and algorithmic colonialism

cc: @craignewmark

ethics.fast.ai
As @cfiesler showed with her spreadsheet of >250 tech ethics syllabi & her accompanying meta-analysis, tech ethics is a sprawling subject. No single course can cover everything. And there are so many great courses out there!

medium.com/cuinfoscience/…

cmci.colorado.edu/~cafi5706/SIGC…
I spent a lot of time trying to cut my assigned reading list down to a reasonable length, as there are so many fantastic articles & papers on these topics. The following list is not at all exhaustive.
12 Aug
Videos from @StanfordAIMI Symposium are up! I spoke on why we need to expand the conversation on bias & fairness.

I will share some slides & related links in this THREAD, but please watch my 17-minute talk in full (the other talks are excellent too!) 1/

While using a diverse & representative dataset is important, there are many problems this WON'T solve, such as measurement bias

Great thread & research from @oziadias on what happens when you use healthcare *cost* as a proxy for healthcare *need* 2/

Another form of measurement bias is when there is systematic error, such as how pulse oximeters (a crucial tool in treating covid) and fitbit heart rate monitors (used in 300 clinical trials) are less accurate on people of color 3/

31 Jul
Structural racism can be combated only if there is political will, not more data. Ending racism has to begin and end with political will. Data, while helpful in guiding policy focus, are not a shortcut to creating this will.

aljazeera.com/indepth/opinio… @InterwebzNani 1/
Data are not merely recorded or collected, they are produced. Data extraction infrastructures comprise multiple points of subjectivity: design, collection, analysis, interpretation and dissemination. All of these open the door to exploitation. 2/
In South Korea, digital COVID tracking has exacerbated hostility towards LGBTQ people.

When UK researchers set out to collect better data on Roma migrants to assess social needs, missteps in data presentation gave rise to political outcry over an "influx" of migrants. 3/
