I have long admired @timnitGebru for her brilliance, moral courage, clear voice in speaking up for what is right, & influential scholarship. It is truly terrible that Google would do this.

In this thread, I want to share some of Timnit's work I love

I've quoted "Datasheets for Datasets" (2018) in many of my talks & assign it as reading in my class. It highlights the decisions that go into creating & maintaining datasets, and draws lessons from how standardization & regulation came to other industries

arxiv.org/abs/1803.09010
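
To make the idea concrete, here is a minimal sketch of what a machine-readable datasheet might look like, loosely following the question categories in the paper (motivation, composition, collection, preprocessing, uses, maintenance). The field names and example values are my own illustration, not taken from the paper:

```python
# A minimal, illustrative datasheet structure, loosely inspired by the
# question categories in "Datasheets for Datasets" (Gebru et al., 2018).
# All field names and example values here are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Datasheet:
    motivation: str               # why was the dataset created?
    composition: str              # what do the instances represent?
    collection_process: str       # how was the data acquired? with consent?
    preprocessing: str            # cleaning / labeling applied
    recommended_uses: List[str] = field(default_factory=list)
    uses_to_avoid: List[str] = field(default_factory=list)
    maintenance: str = "unmaintained"  # who keeps it up to date?


sheet = Datasheet(
    motivation="Benchmark face analysis accuracy across demographic groups",
    composition="Portrait photos with self-reported demographic labels",
    collection_process="Collected with informed consent from volunteers",
    preprocessing="Faces cropped and resized; no labels inferred",
    recommended_uses=["auditing classifier error rates per group"],
    uses_to_avoid=["surveillance", "identity inference"],
)
print(sheet.motivation)
```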
Timnit worked with @jovialjoy on the original Gender Shades research, which has had a profound impact on facial recognition, led to concrete regulations, and changed our industry

It is rare for academic work to have this big of a practical impact.
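
The core method behind Gender Shades is disaggregated evaluation: reporting error rates per demographic group rather than one overall number. Here is a toy sketch; the labels, predictions, and group names are all made up for the example:

```python
# Toy illustration of disaggregated evaluation, the approach at the heart
# of Gender Shades: report error rates per group, not just in aggregate.
# All data below is hypothetical.
from collections import defaultdict

labels = ["F", "F", "M", "F", "M", "M", "F", "M"]
preds  = ["F", "M", "M", "M", "M", "M", "F", "F"]
groups = ["darker", "darker", "lighter", "darker",
          "lighter", "darker", "lighter", "lighter"]

errors, totals = defaultdict(int), defaultdict(int)
for y, yhat, g in zip(labels, preds, groups):
    totals[g] += 1
    errors[g] += (y != yhat)

print(f"overall error: {sum(errors.values()) / len(labels):.0%}")
for g in totals:
    print(f"{g}: {errors[g] / totals[g]:.0%}")  # disparities hide in the aggregate
```

Even in this tiny example, the aggregate number hides that one group's error rate is double the other's, which is exactly the kind of disparity the paper documented in commercial systems.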

"Lessons from Archives" is a great paper by @unsojo & @timnitGebru on what machine learning can learn from the library sciences about data collection, in light of as consent, power, inclusivity, transparency, and ethics & privacy:

I love this passage from @timnitGebru in NYT on what a narrow framing of bias as just error rates across groups misses: whether a task should exist at all, who deploys it, and on which population...

Timnit is also one of the founders of @black_in_ai, which has members around the world & has improved the entire field of AI. I attended #BlackinAI workshops in 2018 & 2019 and they were my favorite parts of NeurIPS

In particular, @black_in_ai does a great job of spanning abstract technical advances, practical applications, & concrete work on societal impact, in a way that few other groups do.
Timnit (along with many others) put countless hours into trying to help Africans get visas to NeurIPS, working 5 months in advance. This is such tangible, practical work to increase inclusion & try to address a terrible injustice (visa denials) in AI:

I love that @timnitGebru spoke about clique culture at CVPR 2018, about how hard it can be for outsiders to break into machine learning, how cliques harm diversity, & what we can do to be more welcoming:

Timnit describes attending NeurIPS 2016 and seeing only 6 Black attendees out of 8,500: "I was literally panicking. This field was growing exponentially, hitting the mainstream; it’s affecting every part of society. It is an emergency, and we have to do something about it now."
The above quote is from "'We’re in a diversity crisis': cofounder of Black in AI on what’s poisoning algorithms in our lives" in MIT Tech Review, Feb 2018

technologyreview.com/2018/02/14/145…
Here's the link to Timnit's talk on countering clique culture:

Also, @timnitGebru is one of the founders of the Fairness, Accountability, & Transparency Conference (@FAccTConference), a major conference on ethics in machine learning:
Timnit was also part of the team behind Model Cards for Model Reporting, which clarifies an ML model's intended use, its limitations, details of its performance evaluation (including checking for bias), & more

academic paper: arxiv.org/abs/1810.03993

website: modelcards.withgoogle.com/about
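
As a rough illustration, a model card can be as simple as a structured document with the paper's section headings. The sketch below renders a hypothetical card as text; the section names loosely follow the paper, and all of the content is invented for illustration:

```python
# A rough sketch of the reporting structure proposed in "Model Cards for
# Model Reporting" (Mitchell et al., 2019). Section names follow the paper
# loosely; every value below is hypothetical.
card = {
    "Model details": "Hypothetical face-analysis classifier, v0.1",
    "Intended use": "Research on auditing methods only",
    "Out-of-scope uses": "Surveillance; decisions about individuals",
    "Metrics": "Error rate, reported per demographic subgroup",
    "Evaluation data": "Balanced benchmark with demographic labels",
    "Ethical considerations": "Known accuracy disparities across groups",
    "Caveats": "Not validated outside the benchmark distribution",
}

for section, body in card.items():
    print(f"## {section}\n{body}\n")
```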
She put in a huge amount of work advocating & organizing for ICLR (a major machine learning conference) to be held in Ethiopia in 2020 (the in-person event was later cancelled due to covid), trying to counter the western-centric bias of ML conferences

venturebeat.com/2018/11/19/maj…
Dr. Gebru's Tutorial on Fairness, Accountability, Transparency, & Ethics in Computer Vision from CVPR 2020

Slides & videos available here: sites.google.com/view/fatecv-tu…

Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing

arxiv.org/abs/2001.00973


More from @math_rachel

1 Dec
Reciprocity is a key part of life. Surveillance undermines reciprocity. Every time we opt for surveillance or extractive technology, we undermine reciprocity and relationship. -- @doxtdatorb #AgainstSurveillance
Between 1971 and 1974, a Detroit Police Department surveillance unit called STRESS (Stop The Robberies, Enjoy Safe Streets) fatally shot 24 people, 22 of them African-American. --@hypervisible #AgainstSurveillance
Teach-in Against Surveillance is happening now, with livestream:
eventbrite.com/x/teach-in-aga…
1 Dec
A freelance journalist in Vietnam w/ 150,000 followers & a verified Facebook account realized all his posts about a high-profile death penalty case had vanished with no notification

amnesty.org/en/latest/news…
“Imagine if you spent years & years growing your Facebook account, but then in one easy act, Facebook just erases all your work"

Facebook has blocked every post that pro-democracy activist Nguyen Van Trang has tried to make about the Communist Party, with no option to appeal
More background in this LA Times article:
latimes.com/world-nation/s…
19 Nov
There has been some great work on framing AI ethics issues as ultimately about power.

I want to elaborate on *why* power imbalances are a problem. 1/
*Why* power imbalances are a problem:
- those most impacted often have least power, yet are the ones to identify risks earliest
- those most impacted best understand what interventions are needed
- often no motivation for the powerful to change
- power tends to be insulating 2/
The Participatory Approaches to ML workshop at #ICML2020 was fantastic. The organizers highlighted how even many efforts for fairness or ethics further *centralize power*

3/
18 Nov
My impression is that some folks use machine learning to try to "solve" problems of artificial scarcity. Eg: we won't give everyone the healthcare they need, so let's use ML to decide who to deny.

Question: What have you read about this? What examples have you seen?
It's not explicitly stated in this article, but there seems to be a subtext that giving everyone the healthcare they need wasn't considered an option:

To be clear, if the starting point is artificial scarcity of resources, this is a problem machine learning CAN'T solve
17 Nov
I'm going to start a thread on various forms of "washing" (showy efforts to claim to care about or address an issue, without doing the work or having a true impact), such as AI ethics-washing, #BlackPowerWashing, diversity-washing, greenwashing, etc

Feel free to add more articles!
"Companies seem to think that tweeting BLM will wash away the fact that they derive massive wealth from exploitation of Black labor, promotion of white anxiety about Blackness, & amplification of white supremacy."
--@hypervisible #BlackPowerWashing

Great paper on participation-washing in the machine learning community:

30 Oct
Thread of some posts about diversity & inclusion I've written over the years. I still stand behind these.

(I'm resharing bc a few folks are suggesting Jeremy's CoC experience ➡️ partially our fault for promoting diversity, we should change our values, etc. Nope!)

1/
Math & CS have been my focus since high school/the late 90s, yet the sexism & toxicity of the tech industry drove me to quit. I’m not alone. 40% of women working in tech leave. (2015)

medium.com/tech-diversity… 2/
Superficial, showy efforts at diversity-washing are more harmful than doing nothing at all. Research studies confirm this (2015)

medium.com/tech-diversity… 3/
