Here are some interesting papers/posts/threads on limitations of AI ethics principles. Please share if you have more.
"Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning", which analyzed 7 high-profile AI values/ethics statements

Many principles proposed in AI ethics are too broad to be useful and could end up postponing meaningful debate.

"The role and limits of principles in AI ethics: towards a focus on tensions" by @jesswhittles

@jesswhittles Perhaps not a limitation: @cwiggins stresses the importance of distinguishing between:
- Defining ethics (e.g. the Belmont Principles)
- Operationalizing (e.g. IRBs)
- The flow of power that makes this possible (e.g. universities need federal funding, so they comply)
fast.ai/2019/03/04/eth…
@jesswhittles @cwiggins Chris covers this in more depth in his Data: Past, Present, & Future course. In particular, see Lecture 12 on the outcry over the Tuskegee Syphilis Study leading to new standards for human subjects research:
github.com/data-ppf/data-…
If you are failing at “regular” ethics, you won’t be able to embody AI ethics either. The two are not separate.
fast.ai/2019/04/22/eth…
The mere presence of diversity policies can make white people less likely to believe racial discrimination exists & men less likely to believe gender discrimination exists, despite evidence to the contrary.

We may see a similar effect with AI ethics statements
medium.com/tech-diversity…
Report from @article19org @VidushiMarda on the limitations of normative ethics and FAT (Fairness, Accountability, & Transparency) approaches, and how a human rights-based approach could strengthen them:

More from @math_rachel

30 Dec 20
In computational systems, we are often interested in unobservable theoretical constructs (e.g. "creditworthiness", "teacher quality", "risk to society"). Many harms are the result of a mismatch between the constructs & their operationalization -- @az_jacobs @hannawallach
A measurement model is "valid" if the theoretical understanding matches the operationalization. There are many ways validity can fail (see the sketch after this list).
Some types of validity:
- content: does the measurement capture everything we want?
- convergent: matches other measurements
- predictive: related to other external properties
- hypothesis: theoretically useful
- consequential: downstream societal impacts
- reliability: noise, precision, stability
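
A minimal, hypothetical Python sketch of the construct/operationalization gap (an illustration, not from the thread or the paper; the variable names and numbers are assumptions): "teacher quality" stands in for the unobservable construct, a noisy test-score gain for its operationalization, and the proxy ends up tracking a confounder more strongly than the construct it claims to measure.

```python
# Hypothetical illustration of a validity failure in a measurement model.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

teacher_quality = rng.normal(0, 1, n)    # latent construct (unobservable in practice)
school_resources = rng.normal(0, 1, n)   # confounder we did not intend to measure

# Operationalization: test-score gains, which mix the construct with the
# confounder plus measurement noise (the noise is the reliability problem).
score_gain = 0.5 * teacher_quality + 0.7 * school_resources + rng.normal(0, 1, n)

print("corr(proxy, construct): ", np.corrcoef(score_gain, teacher_quality)[0, 1])
print("corr(proxy, confounder):", np.corrcoef(score_gain, school_resources)[0, 1])
# The proxy correlates more with resources than with the construct it claims
# to measure -- a content/convergent validity failure in the sense listed above.
```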
26 Dec 20
I made a playlist of 11 short videos (most are 7-13 mins long) on Ethics in Machine Learning

This is from my 2-hour ethics lecture in Practical Deep Learning for Coders v4. I thought these short videos would be easier to watch, share, or skip around in

What are Ethics & Why do they Matter? Machine Learning Edition
- 3 Case Studies to know about
- Is this really our responsibility?
- What is ethics? @scuethics
- What do we teach when we teach tech ethics? @cfiesler

Software systems have bugs, algorithms can have errors, data is often incorrect.

People impacted by automated systems need timely, meaningful ways to appeal decisions & find recourse, and we need to plan for this in advance
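
One hypothetical way to read "plan for this in advance": log every automated decision with enough context to contest it later. The sketch below is an illustration only; the class, fields, and values are all assumptions, not an existing system.

```python
# Hypothetical decision log that supports appeal & recourse by design.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str
    model_version: str
    inputs: dict             # data the decision was based on, so errors can be contested
    outcome: str
    explanation: str         # human-readable reason given to the affected person
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_status: str = "none"   # none / requested / under_review / overturned / upheld

    def request_appeal(self, reason: str) -> None:
        # Route the decision to human review rather than back to the model.
        self.appeal_status = "requested"
        self.explanation += f"\nAppeal requested: {reason}"

record = DecisionRecord(
    subject_id="applicant-123",
    model_version="credit-model-v2",
    inputs={"reported_income": 41000, "credit_history_years": 3},
    outcome="denied",
    explanation="Score below approval threshold.",
)
record.request_appeal("Reported income is out of date.")
print(record.appeal_status)   # -> requested
```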

23 Dec 20
Interested in improving diversity in AI, or in tech in general? I have done a bunch of research on this and have some advice 1/
First, what doesn’t work: shallow, showy diversity efforts (even if they are well-intentioned) aren’t just ineffective, they actively cause harm.

Spend time thinking through your strategy & making sure you can back it up 2/

medium.com/tech-diversity…
For example, if you start a “women & allies” email list and then fire a Black woman for being honest on it, it probably would have been better not to have the email list in the first place 3/

11 Dec 20
I just remembered this gem from 2017: [screenshot of a tweet from Pedro Domingos @pmddomingos] True
Some folks have asked about data vs. algorithms. Treating these as separate silos doesn't really make sense, and it contributes to a common perception that the data is someone else's problem, an unglamorous & lesser task:

Machine learning often fails to critique the origin, motivation, platform, or potential impact of the data we use, and this is a problem that we in ML need to address.

11 Dec 20
Q: Is AI development trapped in a paradigm that pursues efficiency above all else? @ResistanceAI

@Abebab cites ongoing work that finds efficiency, accuracy, & performance are the key values mentioned in most ML papers

@red_abebe: Efficient for whom? Taking the example of the criminal justice system: is it efficient to have 2 million people in prison in the USA?
Noopur Raval: The efficiency paradigm can show up in unexpected forms, including in many projects claiming to be for social good. Technology can appear as part of a mystical, deceptive promise to make things better.
7 Dec 20
This idea that you can't highlight problems without offering a solution is pervasive, harmful, and false.

Efforts to accurately identify, analyze, & understand risks & harms are valuable. And most difficult problems are not going to be solved in a single paper.
I strongly believe that in order to solve a problem, you have to diagnose it, and that we’re still in the diagnosis phase of this... Trying to make clear what the downsides are, and diagnosing them accurately so that they can be solvable is hard work -- @JuliaAngwin
With industrialization, we had 30 years of child labor & terrible working conditions. It took a lot of journalistic muckraking & advocacy to diagnose the problem & gain some understanding of what it was, and then activism to get laws changed.

We're in a second machine age now.