Jeff even includes Black in AI, a fantastic org co-founded by @timnitGebru, whom he fired and then tried to portray using the angry Black woman trope. 2/
One of the 3 conflicting stories that Google has provided about why Dr. Gebru was fired is that it was for being honest about how working on diversity initiatives at Google made her life HARDER. 3/
Companies like the good PR that diversity initiatives bring them, yet at the same time they will tank your career for working on those initiatives or for taking their goals too seriously. 4/
More broadly, the orgs on Dean’s list are all ones he can keep at a distance. Promoting them on Twitter doesn’t require Google to make any of the hard, structural changes that are so deeply necessary. 5/
There is currently a bill to overhaul Australia's National Disability Insurance Scheme (NDIS). While it was sold as being about "fairness"/"efficiency"/etc, the intent is to cut support for people with disabilities by $700 million. 1/
People are using the hashtag #RoboNDIS, a reference to RoboDebt, the scheme in which the Australian govt automatically & unlawfully created debts for hundreds of thousands of welfare recipients (for money they didn't actually owe), destroying many lives 3/
With methods research, the dataset is secondary. This focus is misaligned with the broader goals of studying risk assessments (eg COMPAS). A paper can be high quality in a pure AI/ML methods sense, yet irrelevant for criminal justice impact, or worse. 1/
from arxiv.org/abs/2106.05498
"Placing data in subservience to optimization goals decontextualizes it – the objective is beating a measure of performance instead of gleaning new insights from the data." 2/
"Risk assessment in criminal justice is not a modular pipeline in which each component can be replaced with a fairer version the way you would replace a sorting algorithm with a more efficient implementation. It is a tangled mess drenched in an ongoing history of inequity." 3/
Computerization does not result in the same organization "by different means" but changes what the org does
Automation is often justified in the name of efficiency, yet it can paradoxically lead to inefficiency: policy & admin complexity increase, surveillance accelerates @pwh67
A key dynamic arising from digital technology in government is differentiating the population into ever smaller segments, which risks reinforcing social divisions & inequality and disrupting procedural fairness. tandfonline.com/doi/full/10.10…
In the case of RoboDebt (an algorithm that mistakenly overcalculated debts, with no human oversight or appeals), the algorithm was used to covertly redefine basic operations & procedures.
It was not just automation, but a change of government policy & principles. @pwh67
External algorithmic audits only incentivize companies to address performance disparities on the tasks they were publicly audited for
Microsoft & Amazon addressed the gender classification disparity after being audited, but still had a huge performance gap by skin color for age classification
Audits have to be deliberate so as not to normalize tasks that are inherently harmful to certain communities.
Gender classification has harmful effects in both incorrect AND correct classification. Promotes stereotypes and excludes trans & non-binary individuals. 3/
Question: what are your favorite articles/papers/essays about the idea of external audits for algorithmic systems?
In "The Case for Digital Public Infrastructure", @EthanZ proposes building auditable & transparent search & discovery tools... for the emergence of a strategy that allows review & resists gaming
Algorithmic audits will not produce accountability on their own; however, if governments create meaningful regulatory oversight, algorithmic audits could become much more impactful
Rushing CS students through simplified, condensed overviews of ethical understanding & positioning them as the primary arbiters of change promotes engineers' inclination to see themselves as solitary saviors, to the detriment of the quality of the solution. 1/
Incidents of algorithmic misuse, unethical deployments, or harmful bias cannot be addressed by developing moral integrity at an individual level. The current issues are the result of collective failure. 2/
It's less about a single engineer’s efforts to embed their understanding of diverse representation into the model, and more about a form of participatory design where other stakeholders are actively & humbly welcomed to join in creating more just & equitable systems 3/