Update: Following my tweet on Thursday, Google fixed its autocomplete, which had been promoting the idea that "civil war is inevitable"
Here's what Google autocomplete for "civil war is " looked like before Thursday:
THREAD
Today, there is only one suggestion saying that civil war is coming: 2/
But Google didn't fix it in other languages. In French, for instance, the same suggestions appear: 3/
In Spanish, there is also a suggestion stating that civil war is inevitable: 4/
Why does Google suggest this? Does the algorithm know something we don't? Having worked at Google, I don't think so. I think fearmongering is effective for clicks, so Google *promotes* fearmongering.
5/
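To illustrate the point, here is a minimal sketch, assuming suggestions were ranked purely by click-through rate. This is hypothetical code, not Google's actual system; the names and numbers are made up. It shows how a clicks-only objective would put alarming completions first:

```python
# Hypothetical sketch, NOT Google's actual code: ranking autocomplete
# suggestions purely by click-through rate (CTR). If alarming completions
# attract more clicks, a CTR-only objective ranks them at the top.
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    impressions: int
    clicks: int

    @property
    def ctr(self) -> float:
        # Fraction of users who clicked when shown this suggestion.
        return self.clicks / self.impressions if self.impressions else 0.0

def rank_suggestions(suggestions: list[Suggestion], k: int = 5) -> list[str]:
    # Clicks are the only signal; nothing penalizes fearmongering.
    return [s.text for s in sorted(suggestions, key=lambda s: s.ctr, reverse=True)[:k]]

candidates = [
    Suggestion("civil war is inevitable", impressions=1000, clicks=120),
    Suggestion("civil war is a war between", impressions=1000, clicks=40),
]
print(rank_suggestions(candidates))
# -> ['civil war is inevitable', 'civil war is a war between']
```

Under that assumption, the scary suggestion wins not because it's true, but because fear gets clicked.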
Now, imagine what many people think when they see "civil war is inevitable" coming from Google. Many must assume there has to be some truth to it. They might think they need to take action.
6/
Conclusion: when dangerous algorithmic bugs are exposed, Google takes action.
But finding them one by one is not enough. So I'll continue working with researchers to create tools that help anyone find these bugs.
7/
New Tang Dynasty TV was funded by the Falun Gong religious movement, which is banned in China; thousands of its followers have died in Chinese custody.
The channel has existed for years, but its views exploded last month. Why?
Is YouTube's algo still taking people down the rabbit hole?
Yes.
Right now, the algorithm is promoting to millions of teens a conspiracy video that literally says:
"We ask you to suspend your disbelief and take a journey down the rabbit hole" (1:37)
1/5
It's on a channel called "After Skool", so it's clearly targeting kids and teens.
It was so massively recommended that it reached 1 million views in a few days
2/5
One of the top comments is revealing:
"Hey YouTube? What's this doing in my recommendations? If I didn't have a basic understanding of science and the ability to recognize logical fallacies this could've sent me down the path to conspiracy paranoia"
Earlier this year, a YouTuber showed how YouTube's recommendation algorithm was pushing thousands of users towards sexually suggestive videos of children, which were being used by a network of pedophiles.
YouTube bans sexual videos. What happened?
2/
At YouTube, we designed the AI for engagement. Hence, if pedophiles spend more time on YouTube than other users, the AI's job becomes to *increase* their numbers.
3/
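Here is a minimal sketch of that mechanism, assuming the recommender scores content by average watch time. This is hypothetical code, not YouTube's real system; the categories and numbers are invented. It shows how a small group of heavy watchers can dominate the ranking for everyone:

```python
# Hypothetical sketch, NOT YouTube's actual system: a recommender whose
# only objective is expected watch time. A small group that binges one
# category pulls that category's score up for all users.
from collections import defaultdict

def expected_watch_time(watch_logs: list[tuple[str, float]]) -> dict[str, float]:
    """watch_logs: (video_category, minutes_watched) pairs across all users."""
    totals, counts = defaultdict(float), defaultdict(int)
    for category, minutes in watch_logs:
        totals[category] += minutes
        counts[category] += 1
    return {c: totals[c] / counts[c] for c in totals}

def recommend(watch_logs: list[tuple[str, float]], k: int = 3) -> list[str]:
    # Engagement-only objective: rank by average minutes watched,
    # with no term asking who is watching or why.
    scores = expected_watch_time(watch_logs)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# 50 obsessive viewers outweigh 1000 casual ones:
logs = [("borderline", 180.0)] * 50 + [("news", 12.0)] * 1000
print(recommend(logs))  # 'borderline' ranks first
```

The objective isn't malicious; it just rewards whatever keeps its heaviest users watching.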
Having worked at YouTube, I know they don't intentionally bias algorithms. But they can pick a metric and not investigate enough to see whether it creates bias.
Here's my theory about it. 2/
To show something in Trending, they probably check that the video doesn't offend people. This seems reasonable. 3/
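To make my theory concrete, here is a hypothetical sketch, not YouTube's real pipeline: a Trending filter that only screens for offensive content. The term list and function are placeholders I invented. A conspiracy video that offends no one passes straight through:

```python
# Hypothetical sketch of the theory above, NOT YouTube's real check:
# the only gate on Trending is "does this offend anyone?"
OFFENSIVE_TERMS = {"slur1", "slur2"}  # placeholder list, not a real one

def eligible_for_trending(title: str, transcript: str) -> bool:
    text = (title + " " + transcript).lower()
    # Checks offensiveness only; there is no check for accuracy.
    return not any(term in text for term in OFFENSIVE_TERMS)

print(eligible_for_trending(
    "Flat Earth proof",
    "we ask you to suspend your disbelief and take a journey down the rabbit hole",
))  # -> True: polite misinformation sails through
```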