[Hey Yishan, you used to run Reddit, ]
How do you solve the content moderation problems on Twitter?
(Repeated 5x in two days)

Okay, here are my thoughts:
(1/nnn)
The first thing most people get wrong is not realizing that moderation is a SIGNAL-TO-NOISE management problem, not a content problem.
Our current climate of political polarization makes it easy to think itʻs about the content of the speech, or hate speech, or misinformation, or censorship, or etc etc.
Then you end up down this rabbit hole of trying to produce some sort of “council of wise elders” who can adjudicate exactly what content to allow and what to ban, like a Solomonic compromise.
No, whatʻs really going to happen is that everyone on the council of wise elders will get tons of death threats, eventually quit...
... the people you recruit to replace them will ask the first group why they quit, and decline your job offer, and youʻll end up with a council of third-rate minds and politically-motivated hacks, and the situation will be worse than when you started.
No, you canʻt solve it by making them anonymous, because then you will be accused of having an unaccountable Star Chamber of secret elites (especially if, I dunno, you just took the company private too). No, no, they have to be public and “accountable!”
The fallacy is that it is very easy to think itʻs about WHAT is said, but Iʻll show you why itʻs not…
That said, while Iʻm most well-known for my prior work in social media, today Iʻm working on climate: removing CO2 from the atmosphere is critical to overcoming the climate crisis, and the restoration of forests is one of the BEST ways to do that.

In fact, here are 4 myths relating to trees and forests as a climate solution:

Myth 1: Trees are too slow
Myth 2: Trees are not a permanent solution
Myth 3: Tree-planting does more harm than good
Myth 4: There is not enough room to plant enough trees to solve the problem
When I go around saying that global reforestation is the BEST solution to climate change, I'm not talking about cost or risk or whatever, I'm saying no other solution has as many positive co-benefits as this one, if we do it at scale.
Find ways to help support what weʻre doing.
And now, back to your regular programming of spicy social media takes…

When we last left you, I was just saying that the fallacy is that it is very easy to think itʻs about WHAT is said, but Iʻll show you why itʻs not…
First, here is a useful framing to consider in this discussion: imagine that you are doing content moderation for a social network and you CANNOT UNDERSTAND THE LANGUAGE.
Pretend itʻs an alien language, and all youʻre able to detect is meta-data about the content, e.g. frequency and user posting patterns. How would you go about making the social network “good” and ensure positive user benefit?
Well, let me present a “ladder” of things often subject to content moderation:
1: spam
2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
If you launch a social network, the FIRST set of things you end up needing to moderate is #1: spam. Vigorous debate, even outright flamewars are typically beneficial for a small social network: it generates activity, engages users.
It doesnʻt usually result in offline harm, which is what typically prompts calls to moderate content.
(And: platform owners donʻt want to have to moderate content. Itʻs extra work and they are focused on other things)
Moderating spam is very interesting: it is almost universally regarded as okay to ban (i.e. CENSORSHIP) but spam is in no way illegal.

Spam actually passes the test of “allow any legal speech” with flying colors. Hell, the US Postal Service delivers spam to your mailbox.
When 1A discussions talk about free speech on private platforms mirroring free speech laws, the exceptions cited are typically “fire in a crowded theater” or maybe “threatening imminent bodily harm.”
Spam is nothing close to either of those, yet everyone agrees: yes, itʻs okay to moderate (censor) spam.
Why is this? Because it has no value? Because itʻs sometimes false? Certainly itʻs not causing offline harm.

No, no, and no.
No one argues that speech must have value to be allowed (cf. shitposting). And itʻs not clear that content should be banned for being untrue (especially since adjudicating truth is likely intractable). So what gives? Why are we banning spam?
Hereʻs the answer everyone knows: there IS no principled reason for banning spam. We ban spam for purely outcome-based reasons:

It affects the quality of experience for users we care about, and users having a good time on the platform makes it successful.
(And success on a social platform usually means a lucrative ads program, which is ironically one of the things motivating spam in the first place.)

(But thatʻs a digression)
Not only that, but you can usually moderate (identify and ban) spam without understanding the language.

Spam is typically easy to identify due to its repetitious posting frequency and the simplistic nature of the content (low symbol pattern complexity).
Machine learning algorithms are able to accurately identify spam, and itʻs not because they can tell itʻs about Viagra or mortgage refinancing; itʻs because spam has distinctive posting behavior and content patterns.
Moreover, AI is able to identify spam about things it hasnʻt seen before.

This is unlike moderation of other content (e.g. political), where moderators arenʻt usually able to tell that a “new topic” is going to end up being troublesome and eventually prompt moderation.
But spam about an all-new low-quality scammy product can be picked up by an AI recognizing patterns even though the AI doesnʻt comprehend whatʻs being said.

It just knows that a message being broadcast with [THIS SET OF BEHAVIOR PATTERNS] is something users donʻt want.
Spam filters (whether based on keywords, frequency of posts, or content-agnostic-pattern-matching) are just a tool that a social media platform owner uses to improve the signal-to-noise ratio of content on their platform.
Thatʻs what youʻre doing when you ban spam.
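To make this concrete, here is a minimal sketch of a content-agnostic spam scorer in the spirit of the "language you donʻt understand" test. This is my illustration, not any platformʻs actual filter: the signals (verbatim repetition, posting rate, compressibility as a proxy for "low symbol pattern complexity") and all weights and cutoffs are made up for the example.

```python
import zlib
from collections import Counter

def complexity(text: str) -> float:
    """Crude proxy for symbol-pattern complexity: zlib compression ratio.
    Repetitive, template-like spam compresses much better than human chatter."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / len(raw) if raw else 1.0

def spam_score(recent_posts: list[str], posts_per_hour: float) -> float:
    """Score one user's recent activity WITHOUT reading the language.
    Higher = more spam-like. Weights and cutoffs are illustrative only."""
    if not recent_posts:
        return 0.0
    # Signal 1: the same message repeated verbatim.
    dupes = max(Counter(recent_posts).values()) / len(recent_posts)
    # Signal 2: posting far faster than a human conversation.
    rate = min(posts_per_hour / 60.0, 1.0)  # saturates at 60 posts/hour
    # Signal 3: low-complexity, template-like content.
    avg_c = sum(map(complexity, recent_posts)) / len(recent_posts)
    simple = 1.0 - min(avg_c, 1.0)
    return 0.5 * dupes + 0.3 * rate + 0.2 * simple

# The scorer flags BEHAVIOR; it never parses meaning.
posts = ["BUY CHEAP PILLS NOW http://scam.example"] * 9 + ["also mortgages!!"]
print(round(spam_score(posts, posts_per_hour=120), 2))  # high score
```

Notice that nothing in the scorer knows what Viagra or a mortgage is; it would flag the same behavior in an alien language.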

I have said before that itʻs not topics that are censored, it is behavior.
So now we move on to the next classes of content on the ladder:

2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
Letʻs say you are in an online discussion about a non-controversial topic. Usually it goes fine, but sometimes one of the following pathologies erupts:
a) ONE particular user gets tunnel-vision and begins to post the same thing over and over, or brings up his opinion every time someone mentions a peripherally-related topic. He just wonʻt shut up, and his tone ranges from annoying to abrasive to grating.
b) An innocuous topic sparks a flamewar, e.g. someone mentions one of John Mulaneyʻs jokes and it leads to a flamewar about whether itʻs OK to like him now, how DARE he go and… how can you possibly condone… etc
When I SAY those things, they donʻt sound too bad. But I want you to imagine the most extreme, pathological cases of similar situations youʻve been in on a social platform:
a guy who floods every related topic thread with his opinion (objectively not an unreasonable one) over and over, and
a crazy flamewar that erupts over a minor comment that wonʻt end and everyone is hating everyone else and new enemy-ships are formed and some of your best users have quit the platform in DISGUST
You remember that time those things happened on your favorite discussion platform? Yeah. Did your blood pressure go up just a tiny bit thinking about that?

Okay. Just like spam, none of those topics ever comes close to being illegal content.
But, in any outcome-based world, stuff like that makes users unhappy with your platform and less likely to use it, and as the platform owner, if you could magically have your druthers, youʻd prefer it if those things didnʻt happen.
Most users are NOT Eliezer Yudkowsky or Scott Alexander, responding to an inflammatory post by thinking, “Hmm, perhaps I should challenge my priors?” Most people are pretty easy to get really worked up.
Events like that will happen, and they canʻt be predicted, so the only thing to do when it happens is to either do nothing (and have your platform take a hit or die), or somehow moderate that content.
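As an illustration of observing the resultant behavior without reading a word, hereʻs a hedged sketch of a thread-health check that flags the two pathologies above (one user flooding, flamewar-grade velocity). The signals and thresholds are my invention for the example, not any real platformʻs rules, and detecting a problem says nothing about which moderation method to apply.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    user: str
    timestamp: float  # seconds since epoch

def thread_pathology(posts: list[Post]) -> dict:
    """Language-blind health check for one discussion thread.
    Flags (a) a single user flooding it and (b) flamewar-like velocity.
    Posts are assumed chronological; thresholds are illustrative only."""
    if len(posts) < 2:
        return {"flooder": None, "flamewar": False}
    counts = Counter(p.user for p in posts)
    top_user, top_n = counts.most_common(1)[0]
    dominance = top_n / len(posts)          # share held by loudest user
    minutes = max((posts[-1].timestamp - posts[0].timestamp) / 60.0, 1.0)
    velocity = len(posts) / minutes         # posts per minute
    return {
        "flooder": top_user if dominance > 0.4 and top_n > 10 else None,
        "flamewar": velocity > 5.0 and len(counts) > 5,
    }
```

Note that `Post` doesnʻt even carry the message text; the same check works whether the thread is about John Mulaney or written in an alien language.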
RIGHT NOW RIGHT HERE I want to correct a misconception rising in your mind:

Just because I am saying you will need to moderate that content does NOT mean I am saying that all methods or any particular method employed by someone is the best or correct or even a good one.
I am NOT, right here, advocating or recommending bans, time-limited bans, or hell-banning, or keyword-blocking, or etc etc whatever specific method. I am JUST saying that as a platform owner you will end up having to moderate that content.
And, there will be NO relation between the topic of the content and whether you moderate it, because itʻs the specific posting behavior thatʻs a problem. What do I mean by that?
It means people will say, “You banned people in the discussion about liking John Mulaney Leaving His Wife but you didnʻt ban people in the discussion about Kanye West Being Anti-Semitic ARE YOU RACIST HEY I NOTICE ALL YOUR EXECS ARE WHITE!”
No, itʻs because for whatever reason people didnʻt get into a flamewar about Kanye West or there wasnʻt a Kanye-subtopic-obsessed guy who kept saying the same thing over and over and over again.
In fact, once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didnʻt understand the language they were being spoken in?

Here, there is a parallel to the usage of “Lorem Ipsum” in the world of design.
en.wikipedia.org/wiki/Lorem_ips…
Briefly, when showing clients examples of a proposed webpage design, professional designers usually replace the text with nonsense text, i.e. “Lorem ipsum dolor etc…” because they donʻt want the client to be subconsciously influenced by the content.
Like if the content says, “The Steelers are the greatest football team in history” then some clients are going to be subconsciously motivated to like the design more, and some will like it less.
(Everyone from Pittsburgh who is reading this has now been convinced of the veracity and utter reasonableness of my thinking on this topic)
Everyone else… letʻs take another temporary detour into the world of carbon credits.

Carbon credits are great for offsetting your carbon footprint, but do they really help the underlying climate problem?
The problem with credits is that when you buy a credit, the CO2 represented by that credit has already been removed: someone planted a tree, the tree grew and removed CO2 from the air, and thatʻs what issues the carbon credit. (simplified explanation)
At that point, the CO2 molecules have been removed from the atmosphere!! Thatʻs the key thing we need to have happen!

When you buy the credit, you take it and emit your CO2, and now youʻve undone that CO2 removal!
What you really want to do is buy FUTURE carbon credits: you want to pay now for future carbon credits, because then youʻre paying for people to plant additional trees NOW (or suck down CO2 using machines, but weʻll use trees as the shorthand here).
Now the CO2 has been removed. Then, once you receive your credits, the price should have gone up (all carbon credit price projections show prices increasing sharply over the next few years/decades), and you sell only enough to cover your original investment.
Then you take the excess and you retire them, so that no one can buy them and use them to emit. NOW youʻve sequestered a bunch of CO2, allowed the re-emission of only some of it, and you have a net removal of CO2.
(You gave up extra returns to do this, but I presume youʻre in a position where you, like me, are looking to trade money for a livable planet and/or realize that worsening climate is already impacting us economically in bad ways)
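Hereʻs the arithmetic of that forward-credit strategy as a toy example. The mechanism is exactly as described above; the prices are purely hypothetical:

```python
def forward_credit_outcome(tonnes_bought: float, price_now: float,
                           price_later: float) -> tuple[float, float]:
    """Buy future credits now; later, sell just enough at the higher
    price to recoup the original outlay, and retire the rest.
    Returns (tonnes re-sold, tonnes permanently retired)."""
    cost = tonnes_bought * price_now
    tonnes_sold = cost / price_later              # buyers of these WILL emit
    tonnes_retired = tonnes_bought - tonnes_sold  # taken off the market
    return tonnes_sold, tonnes_retired

# Hypothetical numbers: prepay for 100 t at $20/t; price is $80/t on delivery.
sold, retired = forward_credit_outcome(100, 20, 80)
print(sold, retired)  # 25.0 re-emitted, 75.0 retired: net removal of 75 t
```

The steeper the price rise between purchase and delivery, the more of your credits you can afford to retire.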
To enable people to do this, we need to enable huge numbers of NEW reforestation teams, and if thatʻs something you want to support, youʻll be interested in Terraformationʻs new Seed-to-Carbon Accelerator:

terraformation.com/blog/carbon-fo…
Now back to where we were… when we left off, I was talking about how people are subconsciously influenced by the specific content thatʻs being moderated (and not the behavior of the user) when they judge the moderation decision.
When people look at moderation decisions by a platform, they are not just subconsciously influenced by the nature of the content that was moderated, they are heavily - overwhelmingly - influenced by the nature of the content!
Would you think the moderation decision you have a problem with would be fair if the parties involved were politically reversed?
Youʻd certainly be a lot more able to clearly examine the merit of the moderation decision if you couldnʻt understand the language of the content at all, right?
People in China look at America and donʻt really think the parties are so different from each other; they just think itʻs a disorganized and chaotic system that resulted in a mob storming the Capitol after an election.
Some actors are even trying to cause BAD things to happen: Russia is just happy that the US had a mob storming the Capitol after an election instead of an orderly transfer of power. They donʻt care who is “the good guy,” they just love our social platforms.
Youʻll notice that I just slippery-sloped my way from #2 to #3:
2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
Because #2 topics become #3 topics organically - they get culture-linked to something in #3 or whatever - and then youʻre confronting #3 topics or proxies for #3 topics.
You know, non-controversial #2 topics like… vaccines and wearing masks.

If you told me 10 years ago that people would be having flamewars and deep identity culture divides as a result of online opinions on WEARING MASKS I would have told you that you were crazy.
That kind of thing cannot be predicted, so thereʻs no way to come up with rules beforehand based on any a-priori thinking.

Or some topics NEED to be discussed in a dispassionate way divorced from politics.
Like the AI, human content moderators cannot predict when a new topic is going to start presenting problems that are sufficiently threatening to the operation of the platform.

The only thing they can do is observe if the resultant user behavior is sufficiently problematic.
But that is not something outside observers see: platforms donʻt advertise problematic user behavior, because if you knew there was a guy spam-posting an opinion (even one you like) over and over and over, you wouldnʻt use the platform.
All they see is the sensationalized (mainstream news) headlines saying TWITTER/FACEBOOK bans PROMINENT USER for posts about CONTROVERSIAL TOPIC.
This is because old-media journalists always think itʻs about content. Newspapers donʻt really run into the equivalent of “relentless shitposting users” or “flamewars between (who? dueling editorialists?).” Itʻs not part of their institutional understanding of “content.”
Content for all media prior to social media is “anything that gets people engaged, ideally really worked up.” Why would you EVER want to ban something like that? It could only be for nefarious reasons.
Any time an old-media news outlet publishes something that causes controversy, they LOVE IT. Controversy erupting from old-media news outlets is what modern social media might call “subclinical.”
In college, I wrote a sort of crazy satirical weekly column for the school newspaper. The satire was sometimes lost on people, and so my columns resulted in more letters to the editor than any other columnist ever. The paper loved me.
(Or itʻs possible they loved me because I was the only writer who turned in his writing on time every week)
Anyhow, old media controversy is far, far below the intensity levels of problematic behavior that would e.g. threaten the ongoing functioning or continued consumer consumption of that old-media news outlet.
MAYBE sometimes an advertiser will get mad, but a backroom sales conversation will usually get them back once the whole thing blows over.
So we observe the following events:

1: innocuous discussion
2: something blows up and user(s) begin posting with some disruptive level of frequency and volume
2a: maybe a user does something offline as a direct result of that intensity
...
3: platform owner moderates the discussion to reduce the intensity
4: media reporting describes the moderation as targeting the content topic discussed
5: platform says, “no, itʻs because they <did X specific bad behavior> or <broke established rules>”
...
6: no one believes them
7: media covers the juiciest angle, i.e. "Is PLATFORM biased against TOPIC?"

Because, you see, controversial issues always look like freedom of speech issues.
But no one cries freedom of speech when itʻs spam, or even non-controversial topics. Yeah, you close down the thread about John Mulaney but everyone understands itʻs because it was tearing apart the knitting group.
“Becky, you were banned because you wouldnʻt let up on Karen and even started sending her mean messages to her work email when she blocked you here.”
Controversial topics are just overrepresented in instances where people get heated, and when people get heated, they engage in behavior they wouldnʻt otherwise engage in.
But that distinction is not visible to people who arenʻt running the platform.
One of the things that hamstrings platforms is that unlike judicial proceedings in the real world, platforms do not or cannot reveal all the facts and evidence to the public for review.
In a real-world trial, the proceedings are generally public. Evidence of the alleged wrongdoing is presented and made part of the public record.
Although someone might be too lazy to look it up, an interested critic will be able to look at the evidence in a case before deciding if they want to (or can credibly, without being debunked) whip up an angry mob against the system itself.
At Reddit, weʻd have to issue moderation decisions (e.g. bans) on users and then couldnʻt really release all the evidence of their wrongdoing, like abusive messages or threats, or spamming with multiple accounts, etc.
The justification is that private messages are private, or sometimes compromising to unrelated parties, but whatever the reasons, that leaves fertile ground for unscrupulous users to claim that they were victimized...
... and politically interested parties to amplify their message that the platform is biased against them.
I had long wondered about a model like “put up or shut up” where any users challenging a moderation decision would have to consent to having ALL the evidence of their behavior made public by the platform, including private logs and DMs.
But there are huge privacy issues and having a framework for full-public-disclosure would be a lot of work. Nevertheless, it would go a long way to making moderation decisions and PROCESSES more transparent and well-understood by the general public.
Social platforms actually have much BETTER and more high-quality evidence of user misbehavior than “the real world.” In the real world, facts can be obscured or hidden. On a digital platform, everything you do is logged. The truth is there.
And, not only that, the evidence can even be presented in an anonymized way for impartial evaluation.
Strip out identifiers and political specifics, and like my “in a language you donʻt understand” example: moderators (and armchair quarterbacks) can look at the behavior and decide if itʻs worthy of curtailment.
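A minimal sketch of what that anonymized, behavior-only evidence could look like. This is my illustration of the idea, not anything Reddit or Twitter actually built; the event field names are hypothetical.

```python
from itertools import count

def anonymize_evidence(events: list[dict]) -> list[dict]:
    """Strip identities and message bodies from moderation evidence,
    keeping only the behavioral metadata a reviewer would judge.
    The event schema here is hypothetical."""
    pseudonyms: dict[str, str] = {}
    fresh = count(1)

    def alias(user):
        if user is None:
            return None
        if user not in pseudonyms:
            pseudonyms[user] = f"user_{next(fresh)}"
        return pseudonyms[user]

    return [
        {
            "actor": alias(e["user"]),
            "target": alias(e.get("target")),
            "action": e["action"],       # e.g. "post", "dm", "new_account"
            "channel": e["channel"],     # e.g. "thread", "dm", "work_email"
            "timestamp": e["timestamp"],
            "length": len(e.get("text", "")),  # size only, never the words
        }
        for e in events
    ]
```

A reviewer sees that user_1 sent user_2 forty DMs in an hour after being blocked; they never see names, politics, or message content.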
Again, this is a lot of work. You canʻt just dump data, because itʻs a heightened situation of emotional tension: the first time you try, something extra will get accidentally disclosed, and youʻll have ANOTHER situation on your hands. Now you have two problems.
So I donʻt know if thatʻs workable. But what I do know is, people need to think about content moderation differently, because:
1: It is a signal-to-noise management issue
2: Freedom of speech was NEVER the issue (cf. spam)
3: Could you still moderate if you canʻt read the language?
Warning: donʻt over-rotate on #3 and try to do all your content moderation through AI. Facebook tried that, and ended up with a bizarre inhuman dystopia. (I have a bunch more to say about this if people care)
Having said all that, I wish to offer my comments on the (alleged) “war room team” that Elon has apparently put to work at Twitter:
I donʻt know the other people super well (tho Sriram is cool; he was briefly an investor in a small venture of mine), but Iʻm heartened to know that @DavidSacks is involved.
Sacks is a remarkably good operator, possibly one of the best ones in the modern tech era. He was tapped to lead a turnaround at Zenefits when that company got into really hot water.
“Content moderation” is the most visible issue with Twitter (the one talking heads love to obsess over) but itʻs always been widely known that Twitter suffers from numerous operational problems that many CEOs have tried in vain to fix.
If Twitter were operationally excellent, itʻd have a much better chance of tackling its Inherently Very Hard Moderation Problems and maybe emerge with novel solutions that benefit everyone. If anyone can do that, itʻs Sacks.
Twitter employees are about to either be laid off or will look back on this as the time they did the best work of their lives.
Finally, while Iʻve got your attention, Iʻd like to tell you my personal secret to a positive Twitter experience - a little-known Twitter add-on called Block Party: @blockpartyapp_
One thing that Twitter did well (that Iʻm surprised FB hasnʻt copied) is exposing their API for content filtering.

This allows 3rd-party app developers to create specialized solutions that Twitter canʻt/wonʻt do.
Block Partyʻs founder Tracy Chou understands the weird and subtle nuances of content filtering on the internet: you donʻt use a cudgel, you need a scalpel (or three).
Block Party doesnʻt simply wholesale block things, it filters them in an intelligent way based on criteria you set, and uses data across the system to tune itself.
It doesnʻt just throw away things it filters for you, it puts them in a box so you can go through it later when you want. Because no automated filter is perfect! (Remember the “bizarre inhuman AI dystopia” from above?)
If youʻre someone who gets a LOT of hate (or just trash) and you donʻt really WANT to go through it but need to (just in case thereʻs something valuable), you can also authorize a trusted friend to do it for you.
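The underlying pattern is simple enough to sketch. To be clear, the criteria below are hypothetical examples of “criteria you set,” NOT Block Partyʻs actual rules; the point is the filter-into-a-box-instead-of-deleting design:

```python
from dataclasses import dataclass, field

@dataclass
class Mention:
    author_followers: int
    author_age_days: int
    verified: bool
    text: str

@dataclass
class FilterBox:
    """Route mentions either to your feed or to a box for later review.
    Nothing is deleted, because no automated filter is perfect."""
    min_followers: int = 10          # hypothetical criteria you might set
    min_account_age_days: int = 30
    inbox: list = field(default_factory=list)   # shown to you now
    boxed: list = field(default_factory=list)   # reviewable later, by you
                                                # or a trusted friend
    def route(self, m: Mention) -> None:
        trusted = (m.verified
                   or (m.author_followers >= self.min_followers
                       and m.author_age_days >= self.min_account_age_days))
        (self.inbox if trusted else self.boxed).append(m)

box = FilterBox()
box.route(Mention(5000, 900, False, "great thread!"))  # passes through
box.route(Mention(2, 1, False, "you are an idiot"))    # boxed, not lost
print(len(box.inbox), len(box.boxed))  # 1 1
```

The real product obviously does far more (cross-user signals, self-tuning), but the box-not-bin structure is the key idea.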
Overall, it has smoothly and transparently improved the signal-to-noise ratio of my Twitter experience, especially during a period of cultural upheaval when youʻd expect MORE crazy crap…
But no, for me, my Twitter experience is great and clean and informative and clever. Iʻve used Twitter more and more ever since installing it.

Disclosure: as a result of these experiences, Iʻm now an investor in Block Party.
If you enjoyed this and want more spicy takes on social media (and advice on how to fix the climate, or investment tips), follow me!
