I have long admired @timnitGebru for her brilliance, moral courage, clear voice in speaking up for what is right, & influential scholarship. It is truly terrible that Google would do this.
In this thread, I want to share some of Timnit's work I love
I've quoted "Datasheets for Datasets" (2018) in many of my talks & assign it as reading in my class. It highlights decisions that go into creating & maintaining datasets, and how standardization & regulation came to other industries
Timnit worked with @jovialjoy on the original GenderShades research, which has had a profound impact on facial recognition, led to concrete regulations, and changed our industry
"Lessons from Archives" is a great paper by @unsojo & @timnitGebru on what machine learning can learn from the library sciences about data collection, in light of as consent, power, inclusivity, transparency, and ethics & privacy:
I love this passage from @timnitGebru in NYT on what a narrow framing of bias as just error rates across groups misses: whether a task should exist at all, who deploys it, and on which population…
Timnit is also one of the founders of @black_in_ai, which has members around the world & has improved the entire field of AI. I attended #BlackinAI workshops in 2018 & 2019 and they were my favorite parts of NeurIPS
In particular, @Black_in_ai does a great job of covering the span of abstract technical advancements, practical applications, & concretely addressing societal impact, in a way that few other groups do.
Timnit (along with many others) put countless hours into trying to help Africans get visas to NeurIPS, working 5 months in advance. This is such tangible, practical work to increase inclusion & try to address a terrible injustice (visa denials) in AI:
I love that @timnitGebru spoke about clique culture at CVPR 2018, about how hard it can be for outsiders to break into machine learning, how cliques harm diversity, & what we can do to be more welcoming:
Timnit describes her time at NeurIPS 2016, of seeing only 6 Black attendees out of 8,500: "I was literally panicking. This field was growing exponentially, hitting the mainstream; it’s affecting every part of society. It is an emergency, and we have to do something about it now."
Above quote is from "'We’re in a diversity crisis': cofounder of Black in AI on what’s poisoning algorithms in our lives" from MIT Tech Review, Feb 2018
Also, @timnitGebru is one of the founders of the Fairness, Accountability, & Transparency Conference (@FAccTConference), a major conference on ethics in machine learning:
Timnit was also part of the team behind Model Cards for Model Reporting, to clarify intended use of an ML model, limitations, details of performance evaluation (including checking for bias), & more
She put in a huge amount of work advocating & organizing for ICLR (major machine learning conference) to be held in Ethiopia in 2020 (later cancelled due to covid), trying to counter the western-centric bias of ML confs
Reciprocity is a key part of life. Surveillance undermines reciprocity. Every time we opt for surveillance or extractive technology, we undermine reciprocity and relationship. -- @doxtdatorb #AgainstSurveillance
Between 1971 and 1974, a Detroit Police Department surveillance unit called STRESS (Stop The Robberies, Enjoy Safe Streets) fatally shot 24 people, 22 of them African-American. -- @hypervisible #AgainstSurveillance
A freelance journalist in Vietnam w/ 150,000 followers & a verified Facebook account realized all his posts about a high-profile death penalty case had vanished with no notification
There has been some great work on framing AI ethics issues as ultimately about power.
I want to elaborate on *why* power imbalances are a problem. 1/
*Why* power imbalances are a problem:
- those most impacted often have least power, yet are the ones to identify risks earliest
- those most impacted best understand what interventions are needed
- often no motivation for the powerful to change
- power tends to be insulating 2/
The Participatory Approaches to ML workshop at #ICML2020 was fantastic. The organizers highlighted how even many efforts for fairness or ethics further *centralize power*
My impression is that some folks use machine learning to try to "solve" problems of artificial scarcity. E.g.: we won't give everyone the healthcare they need, so let's use ML to decide who to deny.
Question: What have you read about this? What examples have you seen?
It's not explicitly stated in this article, but there seems to be a subtext that giving everyone the healthcare they need wasn't considered an option:
I'm going to start a thread on various forms of "washing" (showy efforts to claim to care/address an issue, without doing the work or having a true impact), such as AI ethics-washing, #BlackPowerWashing, diversity-washing, greenwashing, etc
Feel free to add more articles!
"Companies seem to think that tweeting BLM will wash away the fact that they derive massive wealth from exploitation of Black labor, promotion of white anxiety about Blackness, & amplification of white supremacy."
-- @hypervisible #BlackPowerWashing
Thread of some posts about diversity & inclusion I've written over the years. I still stand behind these.
(I'm resharing bc a few folks are suggesting Jeremy's CoC experience ➡️ partially our fault for promoting diversity, we should change our values, etc. Nope!)
1/
Math & CS have been my focus since high school/the late 90s, yet the sexism & toxicity of the tech industry drove me to quit. I’m not alone. 40% of women working in tech leave. (2015)