Google just released a *comprehensive* paper on how they are taking on disinformation (including at YouTube).

Let's take a look:
blog.google/around-the-glo…
@Google is:
- Focusing on disinformation: "deliberate efforts to deceive"
- Using 3 core strategies:
1) Making quality count
2) Counteracting malicious actors
3) Giving users more context
- Collaborating(ish) — @firstdraftnews, @_trustproject, @factchecknet
They break it down into 3 areas:
a) Google Search/News
b) YouTube
c) Advertising

There is not much particularly new in these 32 pages for those paying close attention—but it is a helpful summary and reference for those looking to build upon existing work and identify holes.
There is also a section about deepfakes / synthetic media where @Google is...vague.
It is:
- "Investing in research" (on detection)
- "Working with leading experts"
- "Engaging with civil society, academia, newsrooms, and governments"
- "Releasing datasets of synthesized content"
One thing that might be of particular interest in @Google's disinformation report is the section on YouTube.
They "use everyday people as evaluators to provide input on what constitutes disinformation or borderline content [to] inform our ranking systems"

cc @gchaslot, @zeynep
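(A rough sketch of what "evaluator input informing ranking" could look like mechanically — the function names, weights, and aggregation below are my assumptions, not anything the paper specifies:)

```python
# Hypothetical sketch, not Google's actual pipeline: aggregate evaluator
# ratings per video and use them as a demotion signal in ranking.
from statistics import mean

def borderline_score(ratings: list[float]) -> float:
    """Average independent evaluator ratings for one video
    (0 = fine, 1 = clearly borderline/disinformation)."""
    return mean(ratings)

def adjusted_rank_score(base_score: float, ratings: list[float],
                        demotion_weight: float = 0.8) -> float:
    """Demote a video's ranking score in proportion to how consistently
    evaluators flagged it as borderline."""
    return base_score * (1.0 - demotion_weight * borderline_score(ratings))

# Three evaluators mostly agree this video is borderline -> heavy demotion.
print(adjusted_rank_score(base_score=0.9, ratings=[1.0, 1.0, 0.5]))  # ~0.3
```

In practice those ratings would more likely train a classifier that generalizes to unrated videos, but the shape is the same: human judgments become a demotion signal.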
YouTube talks here about the evolution of their recommendation metrics, from clicks, to watch time, to user surveys.

They seem to only use a "higher bar" for videos in sensitive domains that are recommended or on the homepage. YouTube search doesn't appear to get this treatment.
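Sketching that evolution as a toy scoring function (the weights, signal names, and the 0.7 threshold are invented for illustration only):

```python
# Toy version of the metric evolution described above. All numbers and
# names here are assumptions, not from the paper.
def recommendation_score(click_pred: float, watch_time_pred: float,
                         survey_satisfaction_pred: float,
                         authoritativeness: float,
                         sensitive_domain: bool) -> float:
    # Era 1 optimized clicks alone; era 2 shifted weight to watch time;
    # the current mix (per the paper) folds in predicted satisfaction
    # from user surveys rather than raw engagement.
    score = (0.1 * click_pred
             + 0.4 * watch_time_pred
             + 0.5 * survey_satisfaction_pred)
    # The "higher bar": in sensitive domains, gate eligibility on an
    # authoritativeness signal. Per the thread, this gate covers
    # recommendations and the homepage -- search apparently doesn't get it.
    if sensitive_domain and authoritativeness < 0.7:
        return 0.0  # ineligible for recommendation
    return score

# A highly engaging but non-authoritative video in a sensitive domain:
print(recommendation_score(0.9, 0.9, 0.4, authoritativeness=0.2,
                           sensitive_domain=True))  # 0.0
```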
YouTube specifically focuses on elevating authoritative content in "news, politics, medicine, and science."

They also imply that one of YouTube's core company goals (aka OKRs: Objectives and Key Results) has moved from "Growth" to "Responsible Growth".
YouTube even provides a case study of these strategies applied to "News & Politics" videos.

More on this from me shortly: YouTube /has/ gotten better, but we are still seeing a non-trivial proportion of "conspiracy" news on the homepage and in subsequent recommendations.
YouTube also seems somewhat disingenuous.

They say that "our business depends on the trust users place in our services to provide reliable, high-quality information."

Uh. That isn't what most users go to YouTube for, it's not why people create videos, and advertisers don't care.
Sure, advertisers sometimes care if their ads run next to ISIS or InfoWars or creepy kid videos. But beyond that...well, just ask @SeatGeek and @Honey, who are happily supporting a conspiracy video that got 30 MILLION views in the last two weeks.
motherboard.vice.com/en_us/article/…
That said, good on YouTube if "the primary goal of our recommendation systems today is to create a trusted and positive experience for our users".

That's what we've been wanting to hear! 👏👏👏🎉
(though I thank research, press & regulatory threats for making this happen...)
Google also has some odd text:
The paper says Google News/Search ranking is never personalized on ideology because "our systems do not collect such signals, nor do they have an understanding of political ideologies."

Anyone who understands machine learning knows this is bunk.
You don't get to decide what an ML model learns from the data you feed it; even with no explicit ideology signal, a model can pick up ideology as a latent feature of behavior and content.
What they meant to say(?) is that Google never *intentionally* personalizes based on ideology.
The only way to actually tell is to analyze this, which requires collecting the signals they say they don't collect!
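Here's the audit being gestured at, as a hypothetical sketch: probe whether a ranking model's learned embeddings encode ideology. The data below is synthetic; a real version needs exactly the ideology labels Google says it doesn't collect.

```python
# Synthetic demo of an ideology probe on learned user embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))       # stand-in for learned embeddings
ideology = (embeddings[:, 3] > 0).astype(int)  # stand-in labels (synthetic:
                                               # derived from the embedding,
                                               # so this probe will succeed)

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, ideology, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Accuracy far above chance means ideology is linearly recoverable from the
# embeddings -- the system "understands" it de facto, whether or not an
# explicit ideology signal was ever deliberately collected.
print(probe.score(X_te, y_te))
```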
They should fix that line, but I can see why they didn't—changing it might make twitchy partisan regulators think that Google is trying to be a kingmaker (and makes it less likely those partisans will force harmful "transparency").

Sorry, that was a tangent.
Back to YouTube:
If you can only read one thing about what @YouTube is doing about disinformation, this page is probably it.

It sets out the core principles:
1) Don't ban most things
2) High bar for recommendations
3) Monetize carefully

These are good.

But...it perhaps oversells how *effective* it all is.
Which gets at what is perhaps most missing from this whitepaper (not that I expected it).

There is nothing here on how *effective* @Google/@YouTube are at addressing disinformation.
How are they measuring success???
What principles do they have about measurement?
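For instance, a success metric could be as simple as trending the prevalence of borderline content among recommendations actually served. (Hypothetical sketch; nothing like this appears in the paper.)

```python
# Hypothetical success metric: the share of served recommendations that an
# evaluator pipeline labeled borderline, tracked over time.
def borderline_prevalence(recommendations: list[dict]) -> float:
    if not recommendations:
        return 0.0
    flagged = sum(1 for r in recommendations if r["is_borderline"])
    return flagged / len(recommendations)

served = [{"video_id": "a", "is_borderline": False},
          {"video_id": "b", "is_borderline": True},
          {"video_id": "c", "is_borderline": False}]
print(f"{borderline_prevalence(served):.1%}")  # 33.3%
```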
To be clear, this is not meant to be an even-handed report on how well Google is doing.
This is meant to say to regulators/press: "We are doing things. LOOK! See all the things?! There are a LOT of things. Aren't we great!!!"

And...they're right. This is much better than 2016 👏
Final takeaways on the Google/YouTube Disinformation report.

If you build products:
> Read it! Lots of good ideas here.

If you push for responsibility (press/research/gov):
> Read it! Or waste everyone's time.
> Goog still has lots to do. Will save that for another thread...