13 Sep, 11 tweets, 3 min read
While I had no involvement whatsoever in @JeffHorwitz's very thorough reporting in the WSJ on FB's x-check system, I was quoted in the article based on a leaked internal post, so I feel compelled to give a fuller perspective.
First, to state the obvious, automated moderation systems inevitably make lots of mistakes because human language is nuanced & complex. In theory, a confirmatory round of review is prudent because it is an awful experience to have your post taken down without cause.
But how you execute that second round of review is critically important! Figuring out who is eligible, how you staff, etc. makes all the difference between responsible enforcement and de-facto exemptions from the platform's policies.
Even if the policies themselves don't create two systems of justice on FB, having two different *processes* for justice is equally problematic. As they say, justice delayed is often justice denied. And process *is* law.
So how do you reconcile this essentially impossible accuracy vs. latency vs. resourcing tradeoff? There's no ideal solution, but here are some things that could help: 1/ list transparency, 2/ adequate staffing, and 3/ better user messaging & appeals.
On 1/ Write a publicly defensible and transparent standard for who is eligible for x-check-style reviews. Then make both the standard and the lists visible to all both to ensure those standards are being met and to evolve the standards over time.
On 2/ Adequately staff the human moderation queue to guarantee a maximum latency (SLA) for reviews. If a reasonable SLA can't be achieved, make the eligibility criteria more stringent. While pending review, reduce the distribution of potentially violating posts to minimize harm.
On 3/ Failing 1 & 2, fall back on automation and pair it with more informative messaging to users as well as an expedited and robust appeals process. People should know their posts were taken down due to an algorithm and get an equitable chance for a second look.
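The three steps above can be sketched as a routing policy: check a transparent eligibility list, enqueue eligible posts for human review under a hard SLA deadline while reducing their distribution, and fall back to the automated action (with an appeal path) for everyone else or when the SLA lapses. This is a minimal illustrative sketch, not FB's actual system; the function names, the 4-hour SLA, and the dict-based post representation are all assumptions.

```python
import heapq

# Hypothetical SLA: the maximum time a post may wait for human review.
REVIEW_SLA_SECONDS = 4 * 60 * 60

def route_flagged_post(post, eligible_accounts, queue, now):
    """Route a post flagged by automated moderation.

    Accounts on the (publicly defensible, transparent) eligibility
    list get a second, human round of review — but with the post's
    distribution reduced while the review is pending, to limit harm.
    Everyone else gets the automated action immediately, paired with
    an explanation and an appeal path.
    """
    if post["author"] in eligible_accounts:
        post["distribution_reduced"] = True       # harm mitigation while pending
        deadline = now + REVIEW_SLA_SECONDS       # enforce a hard latency bound
        heapq.heappush(queue, (deadline, post["id"]))
        return "pending_human_review"
    return "auto_enforced_with_appeal"

def expire_overdue(queue, now):
    """Posts whose SLA deadline has passed fall back to the automated
    verdict — the staffing failed, so automation is the backstop."""
    expired = []
    while queue and queue[0][0] <= now:
        _, post_id = heapq.heappop(queue)
        expired.append(post_id)
    return expired
```

Note the design consequence of step 2/: if `expire_overdue` fires often, the right fix per the thread is to tighten `eligible_accounts`, not to let reviews silently pile up as de-facto exemptions.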
Steps like these would not be free of controversy. Critics will assume worst intent and assail the company for imagined favoritism and bias. This is where uninformed criticism can actually backfire and prevent platforms from doing the brave thing for society.
Fundamentally, the trust & safety solution space is far broader when platforms prioritize reduction of user harm over short-term reputational concerns. When integrity teams get frustrated with company leaders, it is usually due to a misalignment on which priority matters more.
Finally, I do wish the WSJ article had better highlighted the progress FB has made already, including on the issues flagged above. There is more to do and I hope that this moment of broader reflection is an opportunity for the teams doing this hard work to get even more done.


More from @samidh

16 Sep
Today's WSJ reporting was especially difficult for me to read because it touches on a topic that probably "kept me awake" more than anything else when I was at FB. And that is, how can social networks operate responsibly in the global south? wsj.com/articles/faceb…

🧵...
It can't be easily disputed that social networks' rapid expansion into the global south was at times reckless and arguably neocolonialist. And the inadequate attention both within platforms and within the media on these issues is rightly shocking. What can help? Some thoughts...
When a social network operates in any market, it needs to ensure it can adhere to some minimal set of trust & safety standards. It needs to be capable of processing user reports and automatically monitoring for the worst content in all the supported dialects.
15 Sep
Was hoping for a quiet day but @JeffHorwitz strikes again. Do I have thoughts on the issues raised? You bet! I share in the spirit of trying to enhance understanding of these complex dilemmas. In short, we need to imbue feeds with a sense of morality. wsj.com/articles/faceb…
When you treat all engagement equally (irrespective of content), increasing feed engagement will invariably amplify misinfo, sensationalism, hate, and other societal harms. I wish this weren't the case, but it is so predictable that it is perhaps a natural law of social networks.
So it is no surprise that the MSI (meaningful social interaction) ranking changes of 2018/2019 had this impact, and as the reporting shows, many people at FB are conscious of and concerned about these side effects.
14 Sep
To those whose reaction to this story involves saying "I can't believe Instagram wrote that down", would you rather they not write it down? wsj.com/articles/faceb…
I see it as a testament to @mosseri's leadership that Instagram is willing to invest in understanding its impact on people, both the good and the awful, and spin up dedicated efforts to mitigate even the most intractable and heartbreaking harms.
The alternative would be an app that is blind to its role in society. That would be reckless and dangerous to us all. Instead, we need to engage with this research thoughtfully and bring to the conversation a spirit of constructive problem solving.
