Thread by @yonatanzunger, unrolled from Twitter (60 tweets).
I worked on policy issues at G+ and YT for years. It was *painfully* obvious that Twitter never took them seriously.
Twitter was so enamored of the idea that they had helped catalyze the Arab Spring that "free speech" became an unexamined article of faith.
Unexamined as in: whenever a serious question came up of "wait, does this actually help free speech?", the most naïve answer always won.
It's hard to think of a single case where Twitter's answer wasn't "allow everything, make it users' responsibility to block" —
Even when it was very clear that this imposed unscalable burdens on individual users, silenced *their* speech, or created public risks.
And Trump using Twitter was clearly far too exciting to leadership as well: "OMG we're right in the middle of the political process!"
The "public interest" exemption was largely shaped post-facto, in the same way that the lack of a hate speech rule was.
The fact is that this kind of speech drives traffic and press, and this counters investor concerns about lack of revenue.
I have had to sit and *make* these tradeoffs, so please don't try to bullshit me by explaining how it's more complicated than we think.
It is insanely complicated, one of the hardest things I've ever worked on, and I *still* know when I'm being bullshitted.
Twitter chose to optimize for traffic at the expense of user experience. That's why GamerGate, that's why Trump, that's why Nazis.
And Twitter's concept of itself as a "public forum" nonetheless shies away from the issues that every real public forum in the world sees.
If you have a vested interest in attracting speakers who draw the most traffic, you are not a neutral platform, and have to deal with that.
There is nothing at all wrong with that—few platforms *should* be neutral—but you can't act like it's not there.
If you're going to reap the benefits of having created one of the key sites where Nazis organize, you need to deal with the costs, too. //
I should spend some time being clear about what good solutions look like, too. This *can* be done properly.
I fully agree with the goal of maximizing speech. We benefit, as a society and as individuals, from a free marketplace of ideas.
But KEY POINT: People's speech can be used to suppress other people's speech. (Harassment, threats, etc)
Specifically, if someone can impose costs on another person for speaking, then speech becomes limited to those most able to pay those costs.
Kathy Sierra's famous article, "Trouble at the Kool-Aid Point," illustrates failure modes well: http://seriouspony.com/trouble-at-the-koolaid-point/
And Ian Gent's explanation of the Petrie Multiplier shows why simple blocking doesn't solve it: http://blog.ian.gent/2013/10/the-petrie-multiplier-why-attack-on.html
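A minimal sketch of the Petrie arithmetic (my toy restatement with made-up numbers, not Gent's own code): assume the same small fraction of each group attacks random members of the other group, and look at attacks *received* per person.

```python
def petrie_ratio(n_majority=40, n_minority=10,
                 attack_fraction=0.1, attacks_each=5):
    """Toy Petrie Multiplier model: the same fraction of each group
    attacks, and each attack lands on a random member of the *other*
    group. Returns expected attacks received per person in each group."""
    attacks_on_minority = n_majority * attack_fraction * attacks_each
    attacks_on_majority = n_minority * attack_fraction * attacks_each
    return (attacks_on_minority / n_minority,
            attacks_on_majority / n_majority)

per_min, per_maj = petrie_ratio()
print(per_min, per_maj, per_min / per_maj)
# 2.0 0.125 16.0 -- the ratio is (40/10)**2: an r-to-1 population
# imbalance produces an r-squared imbalance in attacks received,
# even though both groups are equally likely to attack.
```

Blocking each attacker after the fact doesn't change that ratio, which is exactly the hole in "everyone can block."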
The key result: Because speech can be used to suppress other speech, the speech maximum is *not* the zero-regulation point.
The zero-regulation point, "everyone can speak and can also block," means that someone targeted for mass harassment pays a much higher cost.
And differential costs for speech, especially when correlated with existing social differentials, mean you get nonuniform speech output.
See also my response to @DavidBrin about the differential costs of "real name" policies: https://plus.google.com/+YonatanZunger/posts/WegYVNkZQqq
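To make the "zero regulation is not the speech maximum" point concrete, here's a hedged toy model (entirely my illustration; every constant is invented): targeted users post less as harassment costs rise, while over-broad moderation removes some legitimate posts too. Total speech then peaks at a nonzero moderation level.

```python
def total_speech(m, n_targeted=200, n_other=800, base_posts=10.0,
                 harass_cost=5.0, false_positive=1.0):
    """Toy model of total posts as a function of moderation level m
    in [0, 1]. All constants are illustrative, not measured:
    - harassment suppresses targeted users' posting by (1-m)*harass_cost
    - over-broad moderation removes legitimate posts, ~ m**2 per user"""
    targeted = n_targeted * max(base_posts - (1 - m) * harass_cost, 0.0)
    others = n_other * base_posts
    collateral = (n_targeted + n_other) * false_positive * m ** 2
    return targeted + others - collateral

for m in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"moderation={m:.2f}  total_posts={total_speech(m):7.0f}")
# Peaks at m=0.5 in this toy model: some moderation yields strictly
# more total speech than none, because it lowers the price the
# targeted users pay to speak at all.
```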
What the research shows (cf @maeveyd's detailed results here: http://www.pewinternet.org/2014/10/22/online-harassment/) is that harassment reduces engagement.
And importantly, harassment breaks into "severe" and "mild" flavors, which are not uniformly distributed.
For example, while men and women experience ~ the same amount of total harassment, women experience ~2x the amount of severe harassment.
This is a key thing you need to understand when walking into a policy discussion.
This also ties policy to business goals: maximizing speech is related to maximizing engagement.
Doing this wrong will differentially reduce engagement by women, minorities, etc., which fundamentally narrows your advertising base.
(I should say that everything I'm saying here comes from working with some of the greatest minds in this subject, esp. @LeaKissner)
(As well as @Aiiane, @Theophite, @daniellecitron, @ma_franks, @randileeharper, and many others. No progress is made here solo.)
So what does healthy policy look like? You look for things which systematically cause people to feel uncomfortable engaging.
Things that make people not post in the first place, because they know what will happen if they do.
You shut down big bad things quickly and visibly, before they can pull the entire conversation to be around them.
You reduce interaction opportunities for things which are known to be toxic. You try to avoid "toxic meetings," period.
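As a hedged sketch of what "quickly and visibly" can mean operationally (my illustration, not any platform's real system): triage reports by severity times reach, so the items big enough to warp the whole conversation get reviewed first.

```python
import heapq

# Hypothetical severity weights -- illustrative, not a real taxonomy.
SEVERITY = {"spam": 1.0, "mild_harassment": 3.0,
            "severe_harassment": 8.0, "violent_threat": 10.0}

def enqueue(queue, item_id, category, reach):
    """Priority = severity * dampened reach. heapq is a min-heap,
    so push the negated score to pop the worst item first."""
    score = SEVERITY[category] * (1 + reach) ** 0.5
    heapq.heappush(queue, (-score, item_id))

queue = []
enqueue(queue, "post/123", "mild_harassment", reach=50)
enqueue(queue, "post/456", "violent_threat", reach=20_000)
enqueue(queue, "post/789", "spam", reach=5)

while queue:
    neg_score, item_id = heapq.heappop(queue)
    print(item_id, round(-neg_score, 1))
# post/456 surfaces first: severe *and* viral gets the fast, visible
# takedown before it pulls the conversation toward itself.
```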
And the key to all of this is to *define an editorial voice for the platform,* separate from that of its users.
That voice is your de facto set of rules for "No, this is not OK here; you don't like it, go somewhere else."
Here's the big operational secret: 90% of what makes online policy hard is trying to do it while claiming neutrality.
Some of that is a consequence of things like CDA §230, where too strong an editorial voice makes you legally liable for anything anyone says.
And a lot of it is a consequence of trying to pursue a defense of "we allow everything," which is key against government censorship.
If you *do* ever take things down, governments will pressure you to take down anything *they* don't like. And they're all bastards.
The US, Europe, Russia, China, India, Turkey... every government claims great reasons to suppress inconvenient speech.
And laws like CDA §230 *were* shields that let companies preserve some degree of openness in the face of that, so long as they stayed "neutral."
But those shields are being dismantled by increasingly authoritarian governments (US and EU, I'm looking at *you*) already.
GDPR and RTBF takedowns (EU), Operation Choke Point-style financial restrictions (US), pretty much everything in China...
The claim of neutrality is no longer an effective legal shield. In the one place where it sort of still works (CDA §230), there was always an abuse hole.
It's *possible* to construct policies which are CDA §230-compliant and still have an editorial voice, but people have shied away from it.
Rightly so, because it was a huge legal risk. But the value of avoiding an editorial voice is going down.
So if you want to have functional policy in the modern age, come up with an editorial voice, and admit that it constitutes a social norm.
The value prop of your platform to users *is* that social norm; embrace it, identify it, advertise it.
You can then build rules in the exact same way, with all the checks to make sure each takedown doesn't become a lawsuit, and *increase* usage.
So end of long rant: If you want to maximize user engagement, don't be afraid to tell bad actors to piss off.
They create short-term engagement, but drive long-term usage down. //
(I could easily make ten tweets just listing amazing people who have worked on this.)
And very much also @avflox, one of the best minds I know on these problems.
For those wondering what Choke Point is: it was a US government effort to eliminate "troublesome" but legal industries: https://mikandi.com/blog/featured/guide-to-operation-choke-point/
Companies seen by the DOJ as being too friendly to the wrong types of content were targets for massive investigations.
It's part of a system of off-the-books pressure, and it's why, e.g., ad networks and payment processors won't work with various industries.