The case concerned a video that was critical of Modi & the BJP. Facebook removed it under its Dangerous Orgs policy. It was a mistake, & after the Board selected the case (but before it decided), fb admitted it got it wrong and reversed the decision. /2
This is not the first time the Board has done this. It's telling Facebook: "you can't moot a case by simply reversing your decision once we pick it. You wouldn't have found the mistake otherwise." /3
This was a case of human error. It was flagged for terrorism, a human reviewer looked at it and removed it. Facebook said the length of the video (17 mins) and the complexity of the content could have contributed to the mistake: reviewers don't always have time to watch videos in full. /4
The user's account was automatically locked, and they did not get a review. Facebook said this was because of reduced appeal capacity during COVID. The Board says "yeah, we get that, unusual times etc. but you really need to get back to full capacity asap" /5
The Board was very conscious of the political context. It expressed concern about mistakes that especially impact minority language speakers or religious minorities, and noted that the political context in India right now "underscores the importance of getting decisions right" /6
The Board asked about possible communications from Indian authorities; Facebook refused to answer, on the basis that the info was not "reasonably required" or for legal reasons. It's possible fb is legally restricted from saying more, but still (!) /7
The Board acknowledges that "mistakes are inevitable when moderating content at scale," but notes that without knowing error rates it's impossible to tell from one case whether this is a systemic problem or a one-off. /8
This is real progress from the Board!! Its previous decisions were WAY too case-specific, & didn't reckon with the way CoMo is different from offline speech cases. On to the recommendations:
1. Fb should translate its Community Standards into Punjabi because, you know, 30 million ppl speak it in India and more around the world.
Uh... INSANE this had to come from the Board, but here we are. Geez, fb. /10
2. Fb should restore human review to pre-pandemic levels asap, while protecting the health of its staff.
Yes. And (this is me now) fb in its response really should disclose how far off those levels it is, and its timeline and plan for getting back to full capacity. /11
(me still) This has been going on for a while now, and looks set to continue in certain areas for even longer. There's only so long a company with the resources of fb should be able to keep pleading "pandemic" when the pandemic also makes adequate CoMo even more important. /12
3. Finally, the Board says fb should increase the public info it reports on "error rates".
😊😊😊 this recommendation is like Christmas come early for me, and is just obviously the way the conversation about CoMo needs to go
The Board says fb shd do this by "making this info viewable by country and language" & "underscores that more detailed transparency will help the public spot areas where errors are more common, incl potential specific impacts on minority groups, & alert fb to correct them." /14
These recommendations are targeted, strong, and important. They are not binding. How fb responds in 30 days is the most critical part of this process (& also the part ppl tend to pay least attention to). /15
There's a lot of (in many respects, justified!) skepticism of the Board. It can't fix everything. But these recommendations show why I'm still hopeful it can make some meaningful impact. If fb plays ball. /16
As always, we're tracking this on the @lawfareblog FOBblog, and will be watching next steps.
Facebook can turn down the distribution on "borderline content" that approaches the line of breaking the rules. And it does, in emergency situations. But it's never explained why it doesn't do that all the time.
Indeed, in 2018, one Mark Zuckerberg argued that they should.
"Content-moderation decisions are momentous but they are as momentous as they are bc of fb’s engineering decisions & other choices that determine which speech proliferates... & in what context [users] see it"
I think it's unlikely the @OversightBoard will take up the recommendation that it refuse to answer the question abt Trump's account until fb commissions & publishes a study abt the lead-up to Jan. 6.
(Altho I think the Board should, and likely will, recommend such a study in its decision.)
But what if they do? A fun hypothetical for this wannabe law professor to imagine.
Things I'm curious about: 1. If the decision goes against public comment, will that discourage future participation? 2. Level of intl engagement 3. If future overseas cases can also garner such considered engagement (I sure hope so!)
Really starting to regret not putting my comment on letterhead...
The US has a rich tradition of seeing the 1A as existing to facilitate democracy and self-government. Australia drew on that thinking in implying a freedom of political communication into its Constitution, which, famously, contains no express right to free speech.
During the same period (as Emily documents, drawing on @glakier's work), the US itself moved away from that tradition, adopting an increasingly libertarian view of the 1A instead.
absolutely nailed the spelling of "too" this time ☺️
@OversightBoard Checks and balances shouldn't exist only for decisions taken against the winds of public opinion. Facebook should allow oversight of its most high-profile and controversial content moderation decision yet.