Here are some interesting papers/posts/threads on limitations of AI ethics principles. Please share if you have more.
"Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning", which analyzed 7 high-profile AI values/ethics statements
@jesswhittles Perhaps not a limitation -- @cwiggins shares the importance of distinguishing between:
- Defining ethics (eg the Belmont Principles)
- Operationalizing (eg IRBs)
- Flow of power that makes this possible (e.g. universities need federal funding, so they comply) fast.ai/2019/03/04/eth…
@jesswhittles @cwiggins Chris covers this in more depth in his Data: Past, Present, & Future course. In particular, see Lecture 12 on the outcry over the Tuskegee Syphilis Study leading to new standards for human subjects research: github.com/data-ppf/data-…
If you are failing at “regular” ethics, you won’t be able to embody AI ethics either. The two are not separate. fast.ai/2019/04/22/eth…
The mere presence of diversity policies can lead white people to be less likely to believe racial discrimination exists & men to be less likely to believe gender discrimination exists, despite other evidence.
Report from @article19org @VidushiMarda on limitations of normative ethics and FAT (Fairness, Accountability, & Transparency) approaches, and how a human rights based approach could strengthen them:
In computational systems, we are often interested in unobservable theoretical constructs (eg "creditworthiness", "teacher quality", "risk to society"). Many harms are the result of a mismatch between the constructs & their operationalization -- @az_jacobs @hannawallach
A measurement model is "valid" if the theoretical understanding matches the operationalization. There are many ways validity can fail.
Some types of validity:
- content: does the measurement capture everything we want?
- convergent: does it match other measurements of the same construct?
- predictive: is it related to other, external properties?
- hypothesis: is it theoretically useful?
- consequential: what are its downstream societal impacts?
- reliability: noise, precision, stability
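To make a couple of these concrete, here is a minimal sketch (with invented, purely illustrative numbers) of checking convergent validity and test-retest reliability for two hypothetical operationalizations of a construct like "teacher quality". The variable names and scores are assumptions for illustration, not from the paper:

```python
# Illustrative sketch: two kinds of validity/reliability checks for an
# operationalization of an unobservable construct ("teacher quality").
# All scores below are invented for illustration.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two different operationalizations of the same construct,
# e.g. test-score-based vs. observation-based teacher ratings.
scores_test_based = [3.1, 2.4, 4.0, 3.6, 2.9, 3.3]
scores_observation = [3.0, 2.6, 3.8, 3.5, 3.1, 3.2]

# Convergent validity: do the two measurements agree with each other?
convergent = pearson_r(scores_test_based, scores_observation)

# Reliability: does one measurement agree with itself when repeated
# (test-retest)?
scores_test_retest = [3.2, 2.3, 3.9, 3.7, 2.8, 3.4]
reliability = pearson_r(scores_test_based, scores_test_retest)

print(f"convergent validity (r) = {convergent:.2f}")
print(f"test-retest reliability (r) = {reliability:.2f}")
```

High correlations here would only speak to these two properties; a measurement can be convergent and reliable and still fail content or consequential validity.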
I made a playlist of 11 short videos (most are 7-13 mins long) on Ethics in Machine Learning
These are from my 2-hour ethics lecture in Practical Deep Learning for Coders v4. I thought short videos would be easier to watch, share, or skip around in
What are Ethics & Why do they Matter? Machine Learning Edition
- 3 Case Studies to know about
- Is this really our responsibility?
- What is ethics? @scuethics
- What do we teach when we teach tech ethics? @cfiesler
Software systems have bugs, algorithms can have errors, data is often incorrect.
People impacted by automated systems need timely, meaningful ways to appeal decisions & find recourse, and we need to plan for this in advance
For example, if you start a “women & allies” email list and then fire a Black woman for being honest on it, it probably would have been better not to have the email list in the first place 3/
Some folks have asked about data vs. algorithms. Treating these as separate silos doesn't really make sense, and it contributes to a common perception that the data is someone else's problem, an unglamorous & lesser task:
Machine learning practice often fails to critique the origin, motivation, platform, or potential impact of the data we use, and this is a problem that we in ML need to address.
Q: Is AI development trapped in a paradigm that pursues efficiency above all else? @ResistanceAI
@Abebab cites ongoing work finding that efficiency, accuracy, & performance are the values mentioned most often in ML papers
@red_abebe: Efficient for whom? With the example of the criminal justice system: is it efficient to have 2 million people in the US in prison?
Noopur Raval: The efficiency paradigm can show up in unexpected forms, including many projects claiming to be for social good. Technology can become part of a mystical, deceptive promise to make things better.
This idea that you can't highlight problems without offering a solution is pervasive, harmful, and false.
Efforts to accurately identify, analyze, & understand risks & harms are valuable. And most difficult problems are not going to be solved in a single paper.
I strongly believe that in order to solve a problem, you have to diagnose it, and that we’re still in the diagnosis phase of this... Trying to make clear what the downsides are, and diagnosing them accurately so that they can be solvable is hard work -- @JuliaAngwin
With industrialization, we had 30 years of child labor & terrible working conditions. It took a lot of journalistic muckraking & advocacy to diagnose the problem & develop some understanding of what it was, and then activism to get laws changed