Artificial Intelligence can be a bitch.

Here are 6 high-profile projects that failed miserably and made the respective companies look really foolish:

🧵👇
1⃣ Back in 2015, a software engineer reported that Google Photos was classifying his black friends as gorillas.

The algorithm powering the service was unable to properly classify some people of color 🤦!

Here is the story: theverge.com/2018/1/12/1688…

👇
2⃣ Back in 2016, Amazon had to scrap its AI recruiting tool after discovering that the system had taught itself that male 👨 candidates were preferable, penalizing every resume that pointed to a female 👩 candidate.

Here is the story: reuters.com/article/us-ama…

👇
3⃣ The COMPAS system, used in the US to assess a criminal's likelihood to re-offend, incorrectly classified black defendants as being at a far higher risk of recidivism than white defendants.

It was a scandal.

Here is the story: propublica.org/article/how-we…

👇
4⃣ In 2016, Microsoft created a chatbot named Tay, which really quickly turned into a Holocaust-denying racist 👿.

Microsoft had to take the chatbot down from Twitter and never brought it back.

Here is the story: bbc.com/news/technolog…
5⃣ In 2018, MIT researchers found that three commercial gender-recognition systems (from IBM, Microsoft, and Megvii) had an accuracy of 99% for white men but only 35% for dark-skinned women.

Just think about that difference!

Here is the story: newscientist.com/article/216102…
6⃣ A few months ago, in May 2020, Microsoft's MSN came under fire after it mistakenly paired an article about Little Mix singer Jade Thirlwall with a photo of her bandmate Leigh-Anne Pinnock, both of whom are mixed-race.

Here is the story: hivelife.com/microsoft-ai-m…

👇
There's something at play in every one of these cases: "algorithmic bias."

This is a phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process.

Basically: garbage in, garbage out.
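You can see "garbage in, garbage out" in a tiny sketch. Everything below is fabricated for illustration (loosely inspired by the Amazon story above, not Amazon's actual system): a naive model trained on historically biased hiring records simply reproduces the bias it was fed.

```python
# Toy illustration of algorithmic bias: fabricated historical hiring
# records in which the past process favored men. A model that learns
# from this data inherits the prejudice baked into it.

# Each record is (gender, hired?). 80% of men were hired; 80% of
# women were rejected. These numbers are invented for the example.
history = [("M", True)] * 80 + [("M", False)] * 20 \
        + [("F", True)] * 20 + [("F", False)] * 80

def hire_rate(records, gender):
    """Fraction of past candidates of a given gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def predict(gender):
    """Naive 'model': recommend hiring when the historical hire rate
    for this gender exceeds 50%. Skills never enter the picture, so
    the model has learned gender, not merit."""
    return hire_rate(history, gender) > 0.5

print(predict("M"))  # True:  the bias in the data...
print(predict("F"))  # False: ...becomes bias in the predictions.
```

Real systems are far more complex, but the failure mode is the same: if the training data encodes a prejudice, the model will faithfully learn and amplify it.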

👇
Studying ethics and the impact of bias when implementing Artificial Intelligence is paramount to achieving the results we want as a society.

Take a look at this TED Talk that @_jessicaalonso_ shared with me. It's pretty revealing:

👇
I personally need to do better when thinking about how the solutions I build impact society and how to avoid biases that could undermine their usefulness.

I encourage you to do the same.
Keep Current with Santiago