Janelle Shane @JanelleCShane
"in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do."
techcrunch.com/2018/12/31/thi…
Researchers were tipped off when the algorithm not only did suspiciously well at converting maps to satellite images, but was able to reproduce features like trees & cars that weren't in the maps at all.

[Image: The original map, left; the street map generated from the original, center; and the aerial map generated only from the street map. Note the presence of dots on both aerial maps not represented on the street map.]
In fact, it appeared not to be looking at the maps at all when reconstructing satellite images. It could hide the original satellite data in maps of completely different scenes, and still get the original image back.

[Image: The map at right was encoded into the maps at left with no significant visual changes.]
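How can a whole satellite photo hide inside a street map with no visible changes? A classic version of the trick is to stash one image's data in the low-order bits of another's pixels, where it's nearly invisible to the eye. Here's a minimal sketch of that idea in Python. This illustrates steganography in general, not the model's actual learned encoding, which was spread through subtle, high-frequency noise rather than fixed bit positions:

```python
import numpy as np

def hide(cover: np.ndarray, secret: np.ndarray, bits: int = 2) -> np.ndarray:
    """Stash the top `bits` bits of `secret` in the low `bits` bits of `cover`."""
    high_mask = (0xFF << bits) & 0xFF      # e.g. 0b11111100 for bits=2
    return (cover & high_mask) | (secret >> (8 - bits))

def reveal(stego: np.ndarray, bits: int = 2) -> np.ndarray:
    """Recover a rough copy of the secret from the stego image."""
    low_mask = (1 << bits) - 1             # e.g. 0b00000011 for bits=2
    return (stego & low_mask) << (8 - bits)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)   # "street map"
secret = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # "satellite photo"

stego = hide(cover, secret)
recovered = reveal(stego)

# The stego image differs from the cover by at most 3 per channel --
# invisible to the eye -- yet a recognizable secret comes back out.
print(int(np.abs(stego.astype(int) - cover.astype(int)).max()))       # <= 3
print(int(np.abs(recovered.astype(int) - secret.astype(int)).max()))  # <= 63
```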
Technically, the algorithm did what they asked. Literal-minded, but also following the path of least resistance. The intention was for the agent to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn't learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other.
This is one reason why machine learning algorithms are prone to bias: technically, their job was "copy the humans". It's not their fault the humans in their training data were being all biased.
Machine learning algorithms will often AMPLIFY the bias in their training data. From their perspective, reproducing racial and/or gender bias is a handy shortcut toward their goal of "copy the humans".
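A toy illustration of that amplification, with made-up numbers: if 70% of training examples pair group A with a positive outcome, the accuracy-maximizing prediction for group A is *always* positive, turning a 70/30 skew in the data into a 100/0 skew in the output:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Made-up biased training data: group A gets outcome 1 in 70% of
# examples, group B in only 30%.
groups = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
labels = np.where(groups == 0,
                  rng.random(n) < 0.7,          # group A: ~70% positive
                  rng.random(n) < 0.3,          # group B: ~30% positive
                  ).astype(int)

# The accuracy-maximizing "model" just predicts each group's majority label.
majority = {g: int(labels[groups == g].mean() >= 0.5) for g in (0, 1)}
preds = np.array([majority[g] for g in groups])

print("positive rate in data,  group A:", labels[groups == 0].mean())  # ~0.70
print("positive rate predicted, group A:", preds[groups == 0].mean())  # 1.00
```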
Given that most training data will contain bias, the tendency of algorithms to copy and amplify bias is a huge issue. For more reading:
medium.com/@AINowInstitut…