I have literally no idea what Facebook's new policy is on QAnon or what it will apply to in future, and so I would like you to please read this post but replace "Twitter" with "Facebook"
Hard not to think that the House condemnation played a role here, given timing. I hope so: that seems a more accountable and democratic way for this to work. I wish that had been made explicit.
Applaud the outcome if you like, but this is not tech accountability. This may have temporarily solved fb's problem, but it has not solved ours, either with fb or (I bet) QAnon. It's kicking the can down the road.
Oh, sub in "Facebook" for Twitter again plz ☝️
Btw, Twitter did end up releasing what is the most comprehensive policy so far on this front. I like it, but bc I'm a pain, I'm waiting to see if they enforce it consistently & transparently (or at all)
And, this is not a CoMo point but a human one: this is a blunt way of dealing with a mental health crisis wrapped in conspiracy theories. We will need bigger imaginations than just bans.
• • •
Watching Trump continually test platforms' voter suppression policies, instinctively trying to find ambiguities and loopholes, I'm always reminded of this @kevinroose piece, which to me will be a classic of this era: The President vs. The Mods
"if the mods are afraid to hold them accountable when they break the rules, they will keep pushing the limits again and again — until ultimately, the board is theirs to run."
As an Australian, as in all things, I'm in favor of a purposive interpretation of platform voter suppression and election misinformation policies, rather than a purely textualist one.
This is an important and careful review of the Oversight Board, with robust recommendations that set clear markers now for how Facebook responds to and develops the institution. THREAD on some of the things we should watch coming out of this: /1
One of the most important, echoing a constant theme in my own work, is that the "subject matter jurisdiction" (as I call it here ssrn.com/abstract=33653…) needs to expand over time beyond mere take-down/leave-up decisions, if the Board is to be a meaningful check /2
A huge and welcome theme of the report is the need to focus on vulnerable groups, noting the difficulty, intersectionality and contextuality of determining this. Imp. to note that this (& other recs) is a rec for the Board, which Facebook cannot and should not control /3
Most coverage of fb's Community Standards report has focused on the Fake Accounts no., but inclusion of appeals data is a Big Deal that deserves more attention. Smarter ppl will no doubt dig into it, but something that jumped out: Hate Speech is by far the most appealed category
People really care when they are taken down for hate speech! Although their appeals aren't as successful as in other categories, the success rate isn't negligible. Terrorist content is rarely appealed, but appeals are reasonably successful when they are made.
This information is important context for claims about how well AI is doing at proactively identifying certain types of content. But it's also important for understanding the good that Public Reasoning can do in explaining why content is taken down: papers.ssrn.com/sol3/papers.cf…