Our new research estimates that *one in twenty* comments on Reddit violates its norms: anti-social behavior that most subreddits try to moderate. But almost none of these comments are moderated.
First, what does this mean? It means that if you are scrolling through a post on Reddit, a single scroll will likely surface at least one comment exhibiting behavior, such as a personal attack or bigotry, that most communities would choose not to see. (2/13)
So let’s get into the details. What exactly did we measure? We measured the proportion of unmoderated comments in the 97 most popular subreddits that violate one of Reddit’s platform-level norms, the kind most subreddits try to moderate (e.g., personal attacks, bigotry). (3/13)
We measured this by designing a human-AI pipeline that identifies these norm violations at scale, and a bootstrap estimation procedure to quantify measurement uncertainty. Our measurement covers two periods: (1) 2016, and (2) 2020–21. (4/13)
We found that 6.25% (95% Confidence Interval [5.36%, 7.13%]) of all comments in 2016, and 4.28% (95% CI [2.50%, 6.26%]) in 2020–21, are norm violations. Moderators removed only one in twenty violating comments in 2016, and one in ten violating comments in 2020. (5/13)
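The thread only describes the estimator at a high level, so here is a minimal sketch (not the authors' code) of how a percentile-bootstrap confidence interval like the ones above could be computed from binary violation labels on a random sample of comments. All names and the toy data below are hypothetical.

```python
# Minimal sketch: percentile bootstrap CI for a violation-rate estimate,
# assuming we have binary labels (1 = norm violation) for a random sample
# of comments. This is an illustration, not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(labels, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for the proportion of violating comments."""
    labels = np.asarray(labels)
    n = len(labels)
    # Resample the labeled comments with replacement and recompute the rate.
    boot_rates = np.array([
        rng.choice(labels, size=n, replace=True).mean()
        for _ in range(n_boot)
    ])
    lower, upper = np.percentile(boot_rates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return labels.mean(), (lower, upper)

# Toy example: a simulated sample where ~6% of comments are labeled violations.
sample = (rng.random(2_000) < 0.06).astype(int)
rate, (lo, hi) = bootstrap_ci(sample)
print(f"estimated rate {rate:.2%}, 95% CI [{lo:.2%}, {hi:.2%}]")
```

In the actual study, the labels would come from the human-AI labeling pipeline rather than a simulated draw; the bootstrap step itself is what quantifies the measurement uncertainty reported in the intervals above.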
Personal attacks were the most prevalent category of norm violation; pornography and bigotry were the most likely to be moderated, while politically inflammatory comments and misogyny/vulgarity were the least likely to be moderated. (6/13)
Here’s how I see these results.
1) Let’s put this in perspective. Facebook’s 2020 transparency report raised concerns when it categorized 0.1% of its content as hate speech, since even that fraction translates to millions of affected users.
I’d say that ~5% of comments being norm violations is *a lot.* (7/13)
Given our focus on platform-level norm violations, our estimate is, if anything, a lower bound. Note that these moderation rates are comparable to the 3–5% of hate speech reportedly moderated, according to Facebook’s internal documents made public in recent whistleblower disclosures. (8/13)
That these findings converge, despite the very different contexts and likely divergent measurement methods, illustrates a broader point about the challenges of today’s large-scale content moderation. (9/13)
2) So where do we go from here? If 5% of Reddit content violates its own norms, we can’t just point the finger at moderators, and we can’t just agitate for marginally better tools. Anti-social behavior is orders of magnitude too widespread for that. (10/13)
The right conversation here probably involves massive shifts in how we envision these systems. Can we look beyond social media’s predominant role as a public square toward one where reach can coexist with a stronger sense of shared norms? (11/13)
Can we turn reactive design practices, where interventions are built in response to a dumpster fire, into ones that proactively shape behavior early in these spaces? And can we better support moderators psychologically? (12/13)
One more thank you to my collaborator on this work, @josephseering, and my advisor, @msbernst. And thank you to all the volunteer content moderators who are engaged in an extremely important, yet challenging task on behalf of their communities. (13/13)