“Hey Yishan, you used to run Reddit. How do you solve the content moderation problems on Twitter?”
(Repeated 5x in two days)
Okay, here are my thoughts:
(1/nnn)
The first thing most people get wrong is not realizing that moderation is a SIGNAL-TO-NOISE management problem, not a content problem.
Our current climate of political polarization makes it easy to think itʻs about the content of the speech, or hate speech, or misinformation, or censorship, or etc etc.
Then you end up down this rabbit hole of trying to produce some sort of “council of wise elders” who can adjudicate exactly what content to allow and what to ban, like a Solomonic compromise.
No, whatʻs really going to happen is that everyone on the council of wise elders will get tons of death threats, eventually quit...
... the people you recruit to replace them will ask the first group why they quit, and decline your job offer, and youʻll end up with a council of third-rate minds and politically-motivated hacks, and the situation will be worse than when you started.
No, you canʻt solve it by making them anonymous, because then you will be accused of having an unaccountable Star Chamber of secret elites (especially if, I dunno, you just took the company private too). No, no, they have to be public and “accountable!”
The fallacy is that it is very easy to think itʻs about WHAT is said, but Iʻll show you why itʻs not…
That said, while Iʻm most well-known for my prior work in social media, today Iʻm working on climate: removing CO2 from the atmosphere is critical to overcoming the climate crisis, and the restoration of forests is one of the BEST ways to do that.
When I go around saying that global reforestation is the BEST solution to climate change, I'm not talking about cost or risk or whatever, I'm saying no other solution has as many positive co-benefits as this one, if we do it at scale.
And now, back to your regular programming of spicy social media takes…
When we last left you, I was just saying that the fallacy is that itʻs very easy to think itʻs about WHAT is said, but Iʻll show you why itʻs not…
First, here is a useful framing to consider in this discussion: imagine that you are doing content moderation for a social network and you CANNOT UNDERSTAND THE LANGUAGE.
Pretend itʻs an alien language, and all youʻre able to detect is meta-data about the content, e.g. frequency and user posting patterns. How would you go about making the social network “good” and ensure positive user benefit?
Well, let me present a “ladder” of things often subject to content moderation:
1: spam
2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
If you launch a social network, the FIRST set of things you end up needing to moderate is #1: spam. Vigorous debate, even outright flamewars, are typically beneficial for a small social network: they generate activity and engage users.
They donʻt usually result in offline harm, which is what typically prompts calls to moderate content.
(And: platform owners donʻt want to have to moderate content. Itʻs extra work and they are focused on other things)
Moderating spam is very interesting: it is almost universally regarded as okay to ban (i.e. CENSORSHIP) but spam is in no way illegal.
Spam actually passes the test of “allow any legal speech” with flying colors. Hell, the US Postal Service delivers spam to your mailbox.
When 1A discussions talk about free speech on private platforms mirroring free speech laws, the exceptions cited are typically “fire in a crowded theater” or maybe “threatening imminent bodily harm.”
Spam is nothing close to either of those, yet everyone agrees: yes, itʻs okay to moderate (censor) spam.
Why is this? Because it has no value? Because itʻs sometimes false? Certainly itʻs not causing offline harm.
No, no, and no.
No one argues that speech must have value to be allowed (c.f. shitposting). And itʻs not clear that content should be banned for being untrue (esp since adjudicating truth is likely intractable). So what gives? Why are we banning spam?
Hereʻs the answer everyone knows: there IS no principled reason for banning spam. We ban spam for purely outcome-based reasons:
It affects the quality of experience for users we care about, and users having a good time on the platform makes it successful.
(And successful on a social platform usually means a lucrative ads program, which is ironically one of the things motivating spam in the first place.)
Not only that, but you can usually moderate (identify and ban) spam without understanding the language.
Spam is typically easy to identify due to its repetitive posting patterns and the simplistic nature of its content (low symbol pattern complexity).
Machine learning algorithms are able to accurately identify spam, and itʻs not because they can tell itʻs about Viagra or mortgage refinancing; itʻs because spam has unique posting behavior and patterns in the content.
Moreover, AI is able to identify spam about things it hasnʻt seen before.
This is unlike moderation of other content (e.g. political), where moderators arenʻt usually able to tell that a “new topic” is going to end up being troublesome and eventually prompt moderation.
But spam about an all-new low-quality scammy product can be picked up by an AI recognizing patterns even though the AI doesnʻt comprehend whatʻs being said.
It just knows that a message being broadcast with [THIS SET OF BEHAVIOR PATTERNS] is something users donʻt want.
Spam filters (whether based on keywords, frequency of posts, or content-agnostic-pattern-matching) are just a tool that a social media platform owner uses to improve the signal-to-noise ratio of content on their platform.
Thatʻs what youʻre doing when you ban spam.
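To make the “moderate without understanding the language” idea concrete, here is a minimal sketch of a content-agnostic spam scorer. The features, weights, and thresholds are all illustrative assumptions on my part, not any real platformʻs system:

```python
import zlib
from collections import Counter

def spam_score(posts, window_secs):
    """Score one account on behavior alone, with no language understanding.

    posts: list of (timestamp, text) tuples from a single account.
    All weights and thresholds here are illustrative assumptions.
    """
    if not posts:
        return 0.0
    texts = [text for _, text in posts]
    # Feature 1: posting frequency (posts per minute over the window)
    rate = len(posts) / max(window_secs / 60.0, 1.0)
    # Feature 2: fraction of posts that exactly duplicate an earlier post
    dupe_ratio = sum(n - 1 for n in Counter(texts).values()) / len(texts)
    # Feature 3: "low symbol pattern complexity" -- repetitive text
    # compresses well, so a low compression ratio means low complexity
    blob = "\n".join(texts).encode()
    complexity = len(zlib.compress(blob)) / max(len(blob), 1)
    # Weighted combination; a real system would learn these weights
    return 0.4 * min(rate / 10.0, 1.0) + 0.4 * dupe_ratio + 0.2 * (1.0 - complexity)

# Usage sketch: flag any account whose behavior crosses a threshold,
# e.g. spam_score(account_posts, window_secs=3600) > 0.6
```

Notice that nothing in there looks at what the words mean, only at how the account behaves.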
I have said before that itʻs not topics that are censored, it is behavior:
Letʻs say you are in an online discussion about a non-controversial topic. Usually it goes fine, but sometimes one of the following pathologies erupts:
a) ONE particular user gets tunnel-vision and begins to post the same thing over and over, or brings up his opinion every time someone mentions a peripherally-related topic. He just wonʻt shut up, and his tone ranges from annoying to abrasive to grating.
b) An innocuous topic sparks a flamewar, e.g. someone mentions one of John Mulaneyʻs jokes and it leads to a flamewar about whether itʻs OK to like him now, how DARE he go and… how can you possibly condone… etc
When I SAY those things, they donʻt sound too bad. But I want you to imagine the most extreme, pathological cases of similar situations youʻve been in on a social platform:
a guy who floods every related topic thread with his opinion (objectively not an unreasonable one) over and over, and
a crazy flamewar that erupts over a minor comment that wonʻt end and everyone is hating everyone else and new enemy-ships are formed and some of your best users have quit the platform in DISGUST
You remember that time those things happened on your favorite discussion platform? Yeah. Did your blood pressure go up just a tiny bit thinking about it?
Okay. Just like spam, none of those topics ever comes close to being illegal content.
But, in any outcome-based world, stuff like that makes users unhappy with your platform and less likely to use it, and as the platform owner, if you could magically have your druthers, youʻd prefer it if those things didnʻt happen.
Most users are NOT Eliezer Yudkowsky or Scott Alexander, responding to an inflammatory post by thinking, "Hmm, perhaps I should challenge my priors?" Most people are pretty easy to get really worked up.
Events like that will happen, and they canʻt be predicted, so the only thing to do when it happens is to either do nothing (and have your platform take a hit or die), or somehow moderate that content.
RIGHT NOW RIGHT HERE I want to correct a misconception rising in your mind:
Just because I am saying you will need to moderate that content does NOT mean I am saying that all methods or any particular method employed by someone is the best or correct or even a good one.
I am NOT, right here, advocating or recommending bans, time-limited bans, or hell-banning, or keyword-blocking, or etc etc whatever specific method. I am JUST saying that as a platform owner you will end up having to moderate that content.
And, there will be NO relation between the topic of the content and whether you moderate it, because itʻs the specific posting behavior thatʻs a problem. What do I mean by that?
It means people will say, “You banned people in the discussion about liking John Mulaney Leaving His Wife but you didnʻt ban people in the discussion about Kanye West Being Anti-Semitic ARE YOU RACIST HEY I NOTICE ALL YOUR EXECS ARE WHITE!”
No, itʻs because for whatever reason people didnʻt get into a flamewar about Kanye West or there wasnʻt a Kanye-subtopic-obsessed guy who kept saying the same thing over and over and over again.
In fact, once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didnʻt understand the language they were being spoken in?
Briefly, when showing clients examples of a proposed webpage design, professional designers usually replace the text with nonsense text, i.e. “Lorem ipsum dolor etc…” because they donʻt want the client to be subconsciously influenced by the content.
Like if the content says, “The Steelers are the greatest football team in history” then some clients are going to be subconsciously motivated to like the design more, and some will like it less.
(Everyone from Pittsburgh who is reading this has now been convinced of the veracity and utter reasonableness of my thinking on this topic)
Everyone else… letʻs take another temporary detour into the world of carbon credits.
Carbon credits are great for offsetting your carbon footprint, but do they really help solve the underlying climate problem?
The problem with credits is that when you buy a credit, the CO2 represented by that credit has already been removed: someone planted a tree, the tree grew and removed CO2 from the air, and thatʻs what issues the carbon credit. (simplified explanation)
At that point, the CO2 molecules have been removed from the atmosphere!! Thatʻs the key thing we need to have happen!
When you buy the credit, you take it and emit your CO2, and now youʻve undone that CO2 removal!
What you really want to do is buy FUTURE carbon credits: you want to pay now for future carbon credits, because then youʻre paying for people to plant additional trees NOW (or suck down CO2 using machines, but weʻll use trees as the shorthand here).
Now the CO2 has been removed. Then once you receive your credits, the price should have gone up (all carbon credit price projections have prices increasing sharply over the next few years/decades), so you sell only enough to cover your original investment.
Then you take the excess and you retire them, so that no one can buy them and use them to emit. NOW youʻve sequestered a bunch of CO2, allowed the re-emission of only some of it, and you have a net removal of CO2.
(You gave up extra returns to do this, but I presume youʻre in a position where you, like me, are looking to trade money for a livable planet and/or realize that worsening climate is already impacting us economically in bad ways)
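To put toy numbers on that strategy (every price and quantity here is a hypothetical assumption, purely for illustration):

```python
# A toy worked example of the future-credits strategy.
# All prices and quantities are hypothetical assumptions.

investment = 1000.0        # dollars paid today for future credits
price_today = 10.0         # assumed $/tonne CO2 at purchase
price_at_delivery = 25.0   # assumed $/tonne when credits are issued

tonnes_bought = investment / price_today       # 100 t removed from the air
tonnes_sold = investment / price_at_delivery   # 40 t sold (and re-emittable)
tonnes_retired = tonnes_bought - tonnes_sold   # 60 t retired for good

print(f"Removed {tonnes_bought:.0f} t, allowed re-emission of "
      f"{tonnes_sold:.0f} t, net {tonnes_retired:.0f} t sequestered")
```

You recover your original dollars, but 60 of the 100 tonnes stay permanently out of circulation.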
To enable people to do this, we need to enable huge numbers of NEW reforestation teams, and if thatʻs something you want to support, youʻll be interested in Terraformationʻs new Seed-to-Carbon Accelerator:
Now back to where we were… when we left off, I was talking about how people are subconsciously influenced by the specific content thatʻs being moderated (and not the behavior of the user) when they judge the moderation decision.
When people look at moderation decisions by a platform, they are not just subconsciously influenced by the nature of the content that was moderated, they are heavily - overwhelmingly - influenced by the nature of the content!
Would you think the moderation decision you have a problem with would be fair if the parties involved were politically reversed?
Youʻd certainly be a lot more able to clearly examine the merit of the moderation decision if you couldnʻt understand the language of the content at all, right?
People in China look at America and donʻt really think the parties are so different from each other; they just think itʻs a disorganized and chaotic system that resulted in a mob storming the Capitol after an election.
And some actors are actively TRYING to cause bad things to happen: Russia is just happy that the US had a mob storming the Capitol after an election instead of an orderly transfer of power. They donʻt care who is “the good guy,” they just love our social platforms.
Youʻll notice that I just slippery-sloped my way from #2 to #3:
2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
Because #2 topics become #3 topics organically - they get culture-linked to something in #3 or whatever - and then youʻre confronting #3 topics or proxies for #3 topics.
You know, non-controversial #2 topics like… vaccines and wearing masks.
If you told me 10 years ago that people would be having flamewars and deep identity culture divides as a result of online opinions on WEARING MASKS I would have told you that you were crazy.
That kind of thing cannot be predicted, so thereʻs no way to come up with rules beforehand based on any a priori thinking.
And some topics NEED to be discussed in a dispassionate way, divorced from politics.
Like the AI, human content moderators cannot predict when a new topic is going to start presenting problems that are sufficiently threatening to the operation of the platform.
The only thing they can do is observe if the resultant user behavior is sufficiently problematic.
But that is not something outside observers see, because platforms donʻt advertise problematic user behavior: if you knew there was a guy spam-posting an opinion (even one you like) over and over and over, you wouldnʻt use the platform.
All they see is the sensationalized (mainstream news) headlines saying TWITTER/FACEBOOK bans PROMINENT USER for posts about CONTROVERSIAL TOPIC.
This is because old-media journalists always think itʻs about content. Newspapers donʻt really run into the equivalent of “relentless shitposting users” or “flamewars between (who? dueling editorialists?).” Itʻs not part of their institutional understanding of “content.”
Content for all media prior to social media is “anything that gets people engaged, ideally really worked up.” Why would you EVER want to ban something like that? It could only be for nefarious reasons.
Any time an old-media news outlet publishes something that causes controversy, they LOVE IT. Controversy erupting from old-media news outlets is what modern social media might call “subclinical.”
In college, I wrote a sort of crazy satirical weekly column for the school newspaper. The satire was sometimes lost on people, and so my columns resulted in more letters to the editor than any other columnist ever. The paper loved me.
(Or itʻs possible they loved me because I was the only writer who turned in his writing on time every week)
Anyhow, old-media controversy is far, far below the intensity levels of problematic behavior that would, say, threaten the ongoing functioning of that old-media news outlet or consumersʻ continued consumption of it.
MAYBE sometimes an advertiser will get mad, but a backroom sales conversation will usually get them back once the whole thing blows over.
So we observe the following events:
1: innocuous discussion
2: something blows up and user(s) begin posting with some disruptive level of frequency and volume
2a: maybe a user does something offline as a direct result of that intensity
...
3: platform owner moderates the discussion to reduce the intensity
4: media reporting describes the moderation as targeting the content topic discussed
5: platform says, “no, itʻs because they <did X specific bad behavior> or <broke established rules>”
...
6: no one believes them
7: media covers the juiciest angle, i.e. "Is PLATFORM biased against TOPIC?"
Because, you see, controversial issues always look like freedom of speech issues.
But no one cries freedom of speech when itʻs spam, or even non-controversial topics. Yeah, you close down the thread about John Mulaney but everyone understands itʻs because it was tearing apart the knitting group.
“Becky, you were banned because you wouldnʻt let up on Karen and even started sending her mean messages to her work email when she blocked you here.”
Controversial topics are just overrepresented in instances where people get heated, and when people get heated, they engage in behavior they wouldnʻt otherwise engage in.
But that distinction is not visible to people who arenʻt running the platform.
One of the things that hamstrings platforms is that unlike judicial proceedings in the real world, platforms do not or cannot reveal all the facts and evidence to the public for review.
In a real-world trial, the proceedings are generally public. Evidence of the alleged wrongdoing is presented and made part of the public record.
Although someone might be too lazy to look it up, an interested critic will be able to look at the evidence in a case before deciding if they want to (or can credibly, without being debunked) whip up an angry mob against the system itself.
At Reddit, weʻd have to issue moderation decisions (e.g. bans) on users and then couldnʻt really release all the evidence of their wrongdoing, like abusive messages or threats, or spamming with multiple accounts, etc.
The justification is that private messages are private, or sometimes compromising to unrelated parties, but whatever the reasons, that leaves fertile ground for unscrupulous users to claim that they were victimized...
... and politically interested parties to amplify their message that the platform is biased against them.
I had long wondered about a model like “put up or shut up” where any users challenging a moderation decision would have to consent to having ALL the evidence of their behavior made public by the platform, including private logs and DMs.
But there are huge privacy issues and having a framework for full-public-disclosure would be a lot of work. Nevertheless, it would go a long way to making moderation decisions and PROCESSES more transparent and well-understood by the general public.
Social platforms actually have much BETTER and more high-quality evidence of user misbehavior than “the real world.” In the real world, facts can be obscured or hidden. On a digital platform, everything you do is logged. The truth is there.
And, not only that, the evidence can even be presented in an anonymized way for impartial evaluation.
Strip out identifiers and political specifics, and like my “in a language you donʻt understand” example: moderators (and armchair quarterbacks) can look at the behavior and decide if itʻs worthy of curtailment.
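As a minimal sketch of what that anonymization step might look like (the message format and redaction rules are my assumptions, not how any real platform does it):

```python
import re

def anonymize(messages):
    """Replace handles with stable pseudonyms so reviewers can judge the
    behavior (who did what, how often) without knowing who was involved.

    Assumes each message is a dict with "user", "time", and "text" keys.
    """
    pseudonyms = {}
    out = []
    for msg in messages:
        user = msg["user"]
        if user not in pseudonyms:
            pseudonyms[user] = f"User-{len(pseudonyms) + 1}"
        text = msg["text"]
        # Mask @-mentions of users already pseudonymized
        for real, fake in pseudonyms.items():
            text = text.replace(f"@{real}", f"@{fake}")
        # Strip email addresses so private specifics don't leak
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)
        out.append({"user": pseudonyms[user], "time": msg["time"], "text": text})
    return out
```

Even this toy version shows the tradeoff: every redaction rule you forget is a potential leak, which is exactly why Iʻm hedging on whether itʻs workable.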
Again, this is a lot of work. You canʻt just dump data, because itʻs a heightened situation of emotional tension: the first time you try, something extra will get accidentally disclosed, and youʻll have ANOTHER situation on your hands. Now you have two problems.
So I donʻt know if thatʻs workable. But what I do know is, people need to think about content moderation differently, because:
1: It is a signal-to-noise management issue
2: Freedom of speech was NEVER the issue (c.f. spam)
3: Could you still moderate if you canʻt read the language?
Warning: donʻt over-rotate on #3 and try to do all your content moderation through AI. Facebook tried that, and ended up with a bizarre inhuman dystopia. (I have a bunch more to say about this if people care)
Having said all that, I wish to offer my comments on the (alleged) “war room team” that Elon has apparently put to work at Twitter:
I donʻt know the other people super well (tho Sriram is cool; he was briefly an investor in a small venture of mine), but Iʻm heartened to know that @DavidSacks is involved.
Sacks is a remarkably good operator, possibly one of the best ones in the modern tech era. He was tapped to lead a turnaround at Zenefits when that company got into really hot water:
“Content moderation” is the most visible issue with Twitter (the one talking heads love to obsess over) but itʻs always been widely known that Twitter suffers from numerous operational problems that many CEOs have tried in vain to fix.
If Twitter were operationally excellent, itʻd have a much better chance of tackling its Inherently Very Hard Moderation Problems and maybe emerge with novel solutions that benefit everyone. If anyone can do that, itʻs Sacks.
Twitter employees are about to either be laid off or will look back on this as the time they did the best work of their lives.
Finally, while Iʻve got your attention, Iʻd like to tell you my personal secret to a positive Twitter experience - a little-known Twitter add-on called Block Party: @blockpartyapp_
One thing that Twitter did well (that Iʻm surprised FB hasnʻt copied) is exposing their API for content filtering.
This allows 3rd-party app developers to create specialized solutions that Twitter canʻt/wonʻt do.
Block Partyʻs founder Tracy Chou understands the weird and subtle nuances of content filtering on the internet: you donʻt use a cudgel, you need a scalpel (or three).
Block Party doesnʻt simply wholesale block things, it filters them in an intelligent way based on criteria you set, and uses data across the system to tune itself.
It doesnʻt just throw away things it filters for you, it puts them in a box so you can go through it later when you want. Because no automated filter is perfect! (Remember the “bizarre inhuman AI dystopia” from above?)
If youʻre someone who gets a LOT of hate (or just trash) and you donʻt really WANT to go through it but need to (just in case thereʻs something valuable), you can also authorize a trusted friend to do it for you.
Overall, it has smoothly and transparently improved the signal-to-noise ratio of my Twitter experience, especially during a period of cultural upheaval when youʻd expect MORE crazy crap…
But no, for me, my Twitter experience is great and clean and informative and clever. Iʻve used Twitter more and more ever since installing it.
Disclosure: as a result of these experiences, Iʻm now an investor in Block Party.
If you enjoyed this and want more spicy takes on social media (and advice on how to fix the climate, or investment tips), follow me!
I feel like there is an ancient desire common to all peoples, stretching back through history. Our ancestors knew the fundamental value of water megaprojects: land otherwise dry, but made fertile by applying patient labor, putting into place crucial water-moving infrastructure.
Our planet actually has lots of water. The modern notion of "abundance mentality" is key here: are you willing to do the work to bring water from where it exists to where you need it?
If you are, you can be wealthy. Our willingness to do so, at whatever level our civilizationʻs then-current technology allows, is the most basic expression of abundance mentality: we can have water, we can have food, we can be wealthy - if we are willing to do the work and build the systems to bring the water.
There is, literally, enough water for everyone to thrive.
Moral hazard or not, I now feel the climate situation is bad enough that we should begin scalability work for stratospheric aerosol injection (SAI) immediately.
This is not the same conclusion I would have had even two years ago, but the increase in ocean temperature and extreme climate events indicates a trend that will rapidly get worse unless we are able to take global-scale action within the next 1-5 years, and SAI is the only feasible one.
For those whose initial reaction is opposed, there are a few key things you should be aware of:
- One common fear is that this will be bad for crop yields. I thought this too, but the existing data from volcanic eruptions (which have a similar effect) indicates a neutral to positive (!) productivity effect on crops.
- This is not "polluting the air with sulphur." The amount of SO2 needed to significantly induce cooling is on the order of 1% of the SO2 pollution we currently emit, and we would be injecting it into the upper atmosphere. Existing SO2 pollution occurs much lower down, so moving it much higher would likely be better, in terms of health/pollution effects.
- The cessation of sulphur emissions from ships since the 2020 ban on those fuels has given us strong evidence that the prior SO2 emitted by those ships had an (unintended) anti-warming effect on the Atlantic shipping lanes, which is now warming rapidly. While it was also unhealthy pollution, it gives us strong real-world data that this would work at large scale, and we can do it without the harmful pollution side effects by injecting it in the higher atmosphere.
At this point I believe the facts support this conclusion, and it should be relatively uncontroversial if one is practical about looking for solutions.
I am the "tree guy" and in 2020 I would not have supported this, as I felt the world could move quickly to a large-scale reforestation and land restoration effort to make significant progress by 2030. But pandemic, wars, and recession have prevented this (along with good ol' inertia), and warming has accelerated.
Would successful implementation of SAI reduce incentive to move away from fossil fuels? It is a very real risk, yes. In fact, I personally think it is likely.
But the hard brutal reality is that the heating trends right now are very dire, and immediate action to reduce the heating is necessary.
We must begin scaling SAI immediately precisely so that things like reforestation and other carbon capture solutions have time for implementation, which in turn buys time for decarbonization of our economies.
If you want to support this, @MakeSunsets seems to have the highest-ROI and most scalable method of doing this. You can donate to them or use their DIY guide, as SAI can be done in a decentralized way.
Most of the copy on their website talks about it as “cooling power equivalent to trees” which is really scientifically awful if you are STEM-literate, but I talked to them and they do it because (as measured in donation effectiveness), it drives the most action.
They have done the science properly under the hood, so it is just part of the unfortunate reality of climate where you need to speak differently to audiences with different levels of sophistication. One of the things I like about them is that they have very good telemetry and measurement so they can report accurately on what they’re doing.
The advantage of using high-altitude balloons is that they are cheap and scalable to produce, rather than needing to design-build expensive new aircraft to deploy it, which was how SAI was originally conceived.
If you just want to support tree-planting (forest restoration), you can still send money to terraformation.org. We will direct it to maximally catalytic tree-planting (native biodiverse forest restoration) efforts.
If you wish to invest larger amounts, you can contact us and we can arrange for you to fund any number of projects that we have coming through our forest creation accelerator.
Conspiracy theorists who keep saying there’s “no way” the hurricane could have intensified so much without some human cause are so close to getting it.
I mean, literally there was a decades-long conspiracy and people have been trying to tell you about it
"As early as 1959, oil industry executives understood the connection between burning fossil fuels and climate change. Soon thereafter, industry scientists confirmed beyond a reasonable doubt that the burning of fossil fuels contributed to anthropogenic climate change. In response, oil companies scrambled to promulgate climate change denial and disinformation in order to avoid government regulation. It was not until the late 1990s and early 2000s that oil companies began publicly acknowledging the scientific consensus on climate change and responded by promoting market-based solutions to mitigating emissions.
Popular concern for anthropogenic climate change did not emerge until the late 1980s, but formerly secret industry documents that are now available through the Climate Files database reveal that oil industry scientists were raising concern about oil’s impacts on the climate as early as the 1950s and 1960s."
There's this hypothetical climate scenario where a summer heat wave hits a city, temp is high enough that:
- internal combustion engines don't work
- HVAC is overloaded and also breaks
- because people can't leave or be cooled, thousands or millions die in the span of a week
Given the non-linear rise in high temp records, I actually think it is within the realm of possibility that this occurs as soon as NEXT SUMMER.
I hate to bring it up because it's going to sound like fear-mongering, but I promise it's not.
I've always thought of "deadly-too-hot" scenarios as being vaguely further in the future, but in looking at trends and what we've seen this summer, it may be closer than we imagine.
Here is a graph showing surface air temps. The increases in high temps seem to be non-linear (i.e. accelerating).
It's worth noting that the reduction in sulphur emissions from ships is partially responsible for this jump (IYKYK).
Here is the administrative proceeding from the SEC. I recall much of this personally in the news at the time. If we steelman PG/YC side's comments, maybe the news narrative was controlled by Sacks, but official docs from the SEC are another thing: