The orgs in his thread are excellent, but it is hypocritical and downright harmful for Jeff Dean to share them like this. 1/
Jeff even includes Black in AI, a fantastic org co-founded by @timnitGebru, whom he fired and then tried to portray using the angry Black woman trope. 2/

venturebeat.com/2020/12/10/tim…
One of the 3 conflicting stories that Google has provided about why Dr. Gebru was fired is that it was for being honest about how working on diversity initiatives at Google made her life HARDER. 3/

Companies like the good PR that diversity initiatives bring them, yet at the same time they will tank your career for working on those initiatives or for taking the goals too seriously. 4/
More broadly, Dean’s list is made up of orgs that he can keep at a distance. Promoting them on Twitter doesn’t require Google to make any of the hard, structural changes that are so deeply necessary. 5/

Again, companies love the idea of diverse student interns; they hate the reality of experts & leaders from those same groups. 6/


More from @math_rachel

2 Jul
There is currently a bill to overhaul Australia's National Disability Insurance Scheme (NDIS). While it was sold as being about "fairness"/"efficiency"/etc., the intention is to cut support for people with disabilities by $700 million. 1/

theguardian.com/australia-news…
More up-to-date info on the current status of the bill, and on how to oppose it, from @criprights 2/
People are using the hashtag #RoboNDIS, in reference to RoboDebt, the scheme in which the Australian govt automatically & unlawfully created debts for hundreds of thousands of welfare recipients (for money they didn't actually owe), destroying many lives. 3/

1 Jul
With methods research, the dataset is secondary. This focus is misaligned with the broader goals of studying risk assessments (e.g. COMPAS). A paper can be high quality in a pure AI/ML methods sense, but irrelevant for criminal justice impact, or worse. 1/
from arxiv.org/abs/2106.05498
"Placing data in subservience to optimization goals decontextualizes it – the objective is beating a measure of performance instead of gleaning new insights from the data." 2/
"Risk assessment in criminal justice is not a modular pipeline in which each component can be replaced with a fairer version the way you would replace a sorting algorithm with a more efficient implementation. It is a tangled mess drenched in an ongoing history of inequity." 3/
5 May
Computerization does not result in the same organization "by different means"; it changes what the org does.

Automation is often justified in the name of efficiency, yet it can paradoxically lead to inefficiency: policy & admin complexity increase, and surveillance accelerates. @pwh67
A key dynamic arising from digital technology in government is differentiating the population into ever smaller segments, which risks reinforcing social divisions & inequality and disrupting procedural fairness. tandfonline.com/doi/full/10.10…
In the case of RoboDebt (an algorithm that mistakenly overcalculated debts, with no human oversight or appeals), the algorithm was used to covertly redefine basic operations & procedures.

It was not just automation, but a change of government policy & principles. @pwh67
29 Apr
Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing, from AIES 2020, by @rajiinio @timnitGebru @mmitchell_ai @jovialjoy Joonseok Lee @cephaloponderer

🧵 with some quotes, but I recommend reading the full paper: arxiv.org/abs/2001.00964 1/
External algorithmic audits only incentivize companies to address performance disparities on the tasks they were publicly audited for.

Microsoft & Amazon addressed the gender classification disparity after the audit, but still had a huge performance gap by skin color for age classification. 2/
Audits have to be deliberate so as not to normalize tasks that are inherently harmful to certain communities.

Gender classification has harmful effects in both incorrect AND correct classification: it promotes stereotypes and excludes trans & non-binary individuals. 3/
28 Apr
Question: what are your favorite articles/papers/essays about the idea of external audits for algorithmic systems?
In "The Case for Digital Public Infrastructure", @EthanZ proposes building auditable & transparent search & discovery tools... for the emergence of a strategy that allows review & resists gaming

knightcolumbia.org/content/the-ca…
Algorithmic audits will not produce accountability on their own; however, if governments create meaningful regulatory oversight, algorithmic audits could become much more impactful.

@AlexCEngler's analysis of auditing employment algorithms for discrimination: brookings.edu/research/audit…
24 Mar
Rushing CS students through simplified, condensed overviews of ethical understanding & positioning them as the primary arbiters of change promotes engineers' inclination to see themselves as solitary saviors, to the detriment of the quality of the solution. 1/
Incidents of algorithmic misuse, unethical deployments, or harmful bias cannot be addressed by developing moral integrity at an individual level. The current issues are the result of collective failure. 2/

@rajiinio @morganklauss @amironesei
It's less about a single engineer's efforts to enforce their understanding of diverse representation in the model, and more about a form of participatory design where other stakeholders are actively & humbly welcomed to join in creating more just & equitable systems. 3/
