Computerization does not result in the same organization "by different means"; it changes what the organization does
Automation is often justified in the name of efficiency, yet it can paradoxically lead to inefficiency: policy & administrative complexity increase, and surveillance accelerates @pwh67
A key dynamic arising from digital technology in government is differentiating the population into ever smaller segments, which risks reinforcing social divisions & inequality and disrupting procedural fairness. tandfonline.com/doi/full/10.10…
In the case of RoboDebt (an algorithm that mistakenly overcalculated welfare debts, with no human oversight or appeals process), the algorithm was used to covertly redefine basic operations & procedures.
It was not just automation, but a change of government policy & principles. @pwh67
The above quotes are from "Of algorithms, Apps and advice: digital social policy and service delivery" by @pwh67
External algorithmic audits only incentivize companies to address performance disparities on the tasks they were publicly audited for.
Microsoft & Amazon addressed their gender classification disparities after being audited, but still had a huge performance gap by skin color for age classification.
Audits have to be deliberate so as not to normalize tasks that are inherently harmful to certain communities.
Gender classification has harmful effects in both incorrect AND correct classification: it promotes stereotypes and excludes trans & non-binary individuals. 3/
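These audits rest on disaggregated evaluation: reporting a metric per demographic subgroup instead of one aggregate number that can hide large gaps. A minimal Python sketch of the idea (the subgroups, labels, and data below are purely hypothetical, not actual audit results):

```python
# Minimal sketch of a disaggregated audit: compute accuracy per
# demographic subgroup rather than a single aggregate number.
# All data below is illustrative/hypothetical, not real audit output.
from collections import defaultdict

def disaggregated_accuracy(records):
    """records: iterable of (subgroup, predicted_label, true_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, predicted, actual in records:
        total[subgroup] += 1
        if predicted == actual:
            correct[subgroup] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical toy data: aggregate accuracy is 75%, but that single
# number hides a 50-point gap between subgroups, which is exactly
# what a disaggregated audit surfaces.
records = [
    ("darker-skinned women", "old", "young"),
    ("darker-skinned women", "young", "young"),
    ("lighter-skinned men", "old", "old"),
    ("lighter-skinned men", "young", "young"),
]
for group, acc in disaggregated_accuracy(records).items():
    print(f"{group}: {acc:.0%}")
```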
Question: what are your favorite articles/papers/essays about the idea of external audits for algorithmic systems?
In "The Case for Digital Public Infrastructure", @EthanZ proposes building auditable & transparent search & discovery tools... for the emergence of a strategy that allows review & resists gaming
Algorithmic audits will not produce accountability on their own; however, if governments create meaningful regulatory oversight, algorithmic audits could become much more impactful
Rushing CS students through simplified, condensed overviews of ethical understanding & positioning them as the primary arbiters of change promotes engineers' inclination to see themselves as solitary saviors, to the detriment of the quality of the solution. 1/
Incidents of algorithmic misuse, unethical deployments, or harmful bias cannot be addressed by developing moral integrity at an individual level. The current issues are the result of collective failure. 2/
It is less about a single engineer's effort to force their understanding of diverse representation into the model, and more about a form of participatory design in which other stakeholders are actively & humbly welcomed to join in creating more just & equitable systems. 3/
Big news: @jeremyphoward & I have moved to his home country of Australia (he is not a USA citizen & has been wanting to return for years). I’m excited about the move, although it is bittersweet. 1/
The last year in the USA has been horrifying. I’m lucky in so many ways: we were able to isolate pretty strictly as a family of 3 (a privilege that many did not have) & our daughter thrived at home with us, although it was still hard to go for a year with... 2/
no in-person childcare; not seeing any of my friends; hearing regularly that mass preventable death is okay as long as it is mostly people with chronic illness (like me), the elderly, & BIPOC who are dying; and worrying about not being able to access an ER or ICU. 3/
Calculating the souls of Black folk: predictive analytics in the child welfare system
A powerful & informative talk by @UpFromTheCracks at the @DataInstituteSF Center for Applied Data Ethics seminar series; video now online
If there were a material benefit from the family regulation system (the child welfare system), middle-class white people would be seeking it out for their kids.
The child welfare system is not biased, it is racist.
Racist in the Ruth Wilson Gilmore sense of the word: racism is the state-sanctioned and/or extralegal production & exploitation of group-differentiated vulnerability to premature death.
Is your machine learning solution creating more problems than it solves? @JasmineMcNealy on how, in focusing narrowly on one problem, we may be missing many others. @DataInstituteSF CADE Seminar Series
21 States Are Now Vetting Unemployment Claims With a ‘Risky’ Facial Recognition System:
"legitimate claimants have also been rejected by the company’s machine learning & facial recognition systems — leading to massive delays in life-sustaining funds"
Central considerations when designing algorithms:
- *Who* is the prototype?
- Continuous evaluation & auditing
- We need to normalize STOPPING the use of a tool when harm is occurring. We don't need to keep using a tool just because we've already started. @JasmineMcNealy