While the recent conversations surrounding AI bias were necessary & useful, several notable works by prolific researchers on the topic were never acknowledged or even cited. So I am starting a thread to discuss important work on fairness, explainability, and ethics (1/N) #aibias
This thread will focus on the works of researchers who never got their due in the discussions of the past week. I will include works by my favorite researchers across all races/genders. Please feel free to comment and add anyone I might have missed. (2/N)
Cynthia Dwork and @mrtz deserve to be at the top of this list. They started working on fairness in 2012, when the world was not yet sure why fair algorithms were even needed. Their paper "Fairness through awareness" is one of my favorite papers. (3/N)
I would encourage anyone new to fairml to check out Moritz's book fairmlbook.org and his tutorial on fairness mrtz.org/nips17/#/ Tweets won't do justice to his amazing body of work on fairness. Please check out his webpage: mrtz.org (4/N)
Cynthia Dwork is also a pioneer in both fairness and differential privacy, and has done fundamental work on both topics. (5/N)
Jon Kleinberg is next on this list. Jon and several of his students and postdocs, including @hima_lakkaraju, @manish_raghavan, @maithra_raghu, and @HodaHeidari, have written the bulk of my favorite papers on fairness, explainability, and AI-assisted decision making. (6/N)
Jon Kleinberg (+ @hima_lakkaraju) wrote one of my favorite papers which provides guidelines for thinking about several of the issues that arise when designing/evaluating AI tools for important decisions. If you haven't already seen this, check out: nber.org/papers/w23180 (7/N)
Jon Kleinberg (+ @manish_raghavan) have written another amazing paper on understanding the trade-offs between different notions of fairness and why they are fundamentally incompatible. arxiv.org/abs/1609.05807 (8/N)
Jon Kleinberg (+ @maithra_raghu)'s paper on the algorithmic automation problem also sheds light on various critical aspects of automating decision making arxiv.org/abs/1903.12220 (9/N)
It is impossible to do justice to Prof. Kleinberg's work on fairness and related topics just via tweets. Please check out his webpage for the full slate of his papers: cs.cornell.edu/home/kleinber/ (10/N)
@hima_lakkaraju has done very important work on various aspects of ML-assisted decision making -- explainability, fairness, & detecting biases. Her course on explainability is what got me interested in the FATML space in the first place: interpretable-ml-class.github.io (11/N)
@hima_lakkaraju's work on exposing the vulnerabilities of explanation methods, and on how they can mislead end users into trusting biased algorithms, is some of the best I have seen on this topic recently. arxiv.org/pdf/1911.02508… and arxiv.org/pdf/1911.06473… (12/N)
@HodaHeidari has also been doing some amazing work on the topic of algorithmic fairness. Her papers arxiv.org/pdf/1902.04783… and cs.cornell.edu/~hh732/heidari… are must-reads. (13/N)
@kamalikac is one of my favorite researchers in trustworthy ML. I learned the basics of differential privacy and trustworthy ML from her tutorials and courses: vimeo.com/248492174 and cseweb.ucsd.edu/classes/sp20/c… (14/N)
This list cannot be complete without @suchisaria who has done a ton of research on safe & reliable machine learning. slideslive.com/38915708/safe-…; Two of her papers on preventing failures and trusting predictions arxiv.org/abs/1812.04597 & arxiv.org/abs/1901.00403 are must reads. (15/N)
@FinaleDoshi is an amazing researcher who works on interpretability, RL & healthcare. Her position paper on interpretability (+ @_beenkim) is a must read arxiv.org/abs/1702.08608. Another paper on accountability of AI is also a revelation arxiv.org/abs/1711.01134 (16/N)
@5harad has done some of the initial work on exposing discrimination in various applications including police stops and bail decisions. His papers 5harad.com/papers/fair-ml…, 5harad.com/papers/100M-st…, advances.sciencemag.org/content/6/7/ea… are eye opening. (17/N)
@ecekamar's work on complementary human/machine decision making is a must read in FATML. My favorite work includes detecting and fixing blind spots of ML models which arise due to dataset biases. See arxiv.org/abs/1610.09064 (+ @hima_lakkaraju) & arxiv.org/abs/1805.08966 (18/N)
@2plus2make5 has also done incredible work on detecting discrimination in algorithmic decision making. If you haven't already, please check out: arxiv.org/abs/1701.08230 and arxiv.org/abs/1702.08536 (19/N)
@hannawallach and @jennwvaughan have also been doing amazing work at the intersection of fairness, interpretability, and HCI. Check out some of their amazing work at jennwv.com/papers/interp-… & jennwv.com/papers/accurac… (20/N)
@kgummadi also has an amazing body of work on fairness and algorithmic decision making. He is one of the most underrated researchers on this topic. While he has several papers on the topic, see papers.ssrn.com/sol3/papers.cf… and arxiv.org/abs/1507.05259 (21/N)
@sameer_ is another important name in the interpretability literature. His paper on LIME arxiv.org/abs/1602.04938 is extremely well known. He also has a lot of interesting work on biases and interpretations (arxiv.org/pdf/2005.00724…) in NLP. (22/N)
@Aaroth and @mkearnsupenn are another set of researchers who have done some very important and foundational work on fairness and discrimination. I recently started reading their book on ethical machine learning and it has been a revelation. amazon.com/Ethical-Algori… (23/N)
@aaroth and @mkearnsupenn have pretty much worked on every subtopic pertaining to fairness. Some of my favorite works of these folks include: arxiv.org/abs/1810.08810 papers.nips.cc/paper/6355-fai… aaronsadventures.blogspot.com/2019/05/indivi… (24/N)
I am sure I am missing a bunch of other amazing folks working in FATML. I also want to reemphasize that @le_roux_nicolas's list covers a lot of my favorite researchers on the topic. Please feel free to comment below about your favorite researchers. (N/N)