
Unrolled thread from @random_walker

7 tweets
AI analog of asking women to smile: editing faces to make them look more/less smiling also made them look more feminine/masculine because of the correlations in the training data.
arxiv.org/abs/1609.04468
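
A common way such edits are made (as in the paper above) is a latent-space attribute vector: average the latent codes of smiling and non-smiling faces, take the difference as a "smile" direction, and move a face's code along it. A minimal sketch, assuming hypothetical encode/decode functions for a trained generative model:

```python
# Minimal sketch of latent-space attribute editing, the mechanism behind the
# smile-vector result above. `encode`, `decode`, and the labeled face arrays
# are assumptions standing in for a real trained generative model.
import numpy as np

def attribute_vector(latents_with: np.ndarray, latents_without: np.ndarray) -> np.ndarray:
    """Difference of means between latents of images that have the attribute
    (e.g. smiling) and latents of images that do not."""
    return latents_with.mean(axis=0) - latents_without.mean(axis=0)

def edit(latent: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Move a single latent code along the attribute direction.
    Positive strength adds the attribute, negative strength removes it."""
    return latent + strength * direction

# Hypothetical usage: if smiling correlates with gender in the training data,
# the "smile" direction also shifts apparent gender.
# smile_dir  = attribute_vector(encode(smiling_faces), encode(neutral_faces))
# more_smile = decode(edit(encode(face), smile_dir, +1.0))
# less_smile = decode(edit(encode(face), smile_dir, -1.0))
```
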
AI labels images of men cooking as women. homes.cs.washington.edu/~my89/publicat…
Commercial gender classifiers did 8%–21% better on male faces than female faces, 12%–19% better on lighter faces than darker faces, and worst on darker female faces.
proceedings.mlr.press/v81/buolamwini…
Translating gender-neutral sentences from English to Turkish and back to English reintroduces stereotyped genders, using Google Translate (and every other translation engine): Turkish's third-person pronoun "o" carries no gender, so on the way back the engine guesses, e.g. "he is a doctor" but "she is a nurse".
opus.bath.ac.uk/55288/4/Calisk…
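
The round trip is easy to reproduce. A minimal sketch, using the third-party deep_translator package as an assumed stand-in for whichever translation engine you want to test:

```python
# Minimal sketch of the English -> Turkish -> English round trip described
# above. The deep_translator package is an assumed stand-in; any translation
# API works the same way.
from deep_translator import GoogleTranslator

sentences = ["She is a doctor.", "He is a nurse."]

to_tr = GoogleTranslator(source="en", target="tr")
to_en = GoogleTranslator(source="tr", target="en")

for s in sentences:
    turkish = to_tr.translate(s)     # Turkish "o" carries no gender
    back = to_en.translate(turkish)  # the engine must re-guess a pronoun
    print(f"{s!r} -> {turkish!r} -> {back!r}")
```
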
The error rate of YouTube's automatic voice transcription varies with the speaker's dialect region and gender.
rachaeltatman.com/sites/default/…
AI-generated word analogies reflect societal stereotypes.
papers.nips.cc/paper/6228-man…
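
Those analogies come from simple vector arithmetic on word embeddings (the "man : computer programmer :: woman : ?" construction in the paper above). A minimal sketch with gensim; the pretrained model name and the exact vocabulary tokens are assumptions:

```python
# Minimal sketch of embedding-based analogies: solve "man : X :: woman : ?"
# by vector arithmetic (X + woman - man) and take the nearest words.
# The model name and vocabulary tokens are assumptions.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # pretrained news embeddings

result = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=5,
)
for word, score in result:
    print(f"{word}\t{score:.3f}")
```
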
(There are tons of other examples of bias in computer vision and natural-language processing systems, but in this thread I've highlighted ones that are backed by research papers, as a starting point for a deeper exploration of the work happening in this space.)