Samidh
3 Oct, 18 tweets, 6 min read
Since annotating leaks seems to be in vogue these days, here are my notes on this memo. 🧵... nytimes.com/2021/10/02/tec…
Maybe not a primary *cause*, but is it an accelerant? And if it were, does FB think it has a responsibility to lessen its contribution? There are many long-standing ills of humanity that tech can make worse. The builders of these technologies should want to do their part to help.
Yes, but what is meaningful engagement for an individual might also be extremely harmful for society overall. Polarizing content is very enticing to an individual, but can break society apart if it becomes predominant in the info environment.
Alternative interpretation: "We knew it would be harmful at the time but decided to fix it later, just as we do with all launches."
Yup, "that's life". What is new, though, is platforms preferentially amplifying this kind of speech over the more level-headed ramblings of non-extremists. Acknowledging that this change made you more likely to come across extreme posts just shows what FB chose to prioritize.
This conflates two things: hate speech + polarization, which aren't necessarily the same. What makes polarizing content such a challenge is it generally doesn't violate the more narrow rules around hate speech. Efforts on the latter don't excuse inaction on the former.
This is kind of an own-goal. Not sure why WhatsApp needs to be dragged into this topic. Polarization there is a separate discussion which requires looking at the dynamics of groups and forwarding. If it weren't an issue, would WhatsApp have instituted forwarding limits?
Yup, this was all very excellent work, if I do say so myself ;-)
These were indeed extremely important measures! Now, giving the public transparency into their triggering criteria would provide accountability that these measures are activated and deactivated based on risk of actual harm rather than just PR risk.
This is a very misleading analogy. It implies that the current way that feeds are ranked is somehow "correct". The truth is that *all* ranking changes will impact the flow of benign information. Assuming the status quo is to be protected is what leads to poor decision-making.
This is where the rubber hits the road. What is the acceptable tradeoff between benign and harmful posts? To prevent X harmful posts from going viral, would you be willing to prevent Y benign posts from going viral? No easy answers. Worth the debate.
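The X-vs-Y tradeoff in the tweet above can be made concrete with a toy sketch. Everything here is hypothetical: the risk scores, the thresholds, and the counts are made-up illustrations of how raising a demotion threshold trades benign collateral damage against harmful reach, not anything FB actually uses.

```python
# Toy sketch of the virality-throttling tradeoff: at each demotion
# threshold, how many harmful vs. benign posts get throttled?
# All scores below are hypothetical, invented for illustration.

def posts_suppressed(threshold, harmful_scores, benign_scores):
    """Count posts whose risk score meets or exceeds the demotion threshold."""
    harmful = sum(1 for s in harmful_scores if s >= threshold)
    benign = sum(1 for s in benign_scores if s >= threshold)
    return harmful, benign

# Hypothetical risk scores for 10 harmful and 1000 benign posts.
harmful_scores = [0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55, 0.5, 0.45]
benign_scores = [i / 1000 for i in range(1000)]  # uniform 0.000-0.999

for t in (0.5, 0.7, 0.9):
    h, b = posts_suppressed(t, harmful_scores, benign_scores)
    print(f"threshold {t}: {h} harmful and {b} benign posts throttled")
```

Under these made-up numbers, a stricter threshold spares hundreds of benign posts but lets more harmful ones through, which is exactly the debate the tweet says is worth having.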
How about some intellectual consistency? You can't earlier say you worry about collateral damage, and then say you are okay with banning all political group recommendations. Makes the criteria feel opaque, hypocritical, or even non-existent.
The work to remove organized hate networks flies under the radar but is extremely impressive and rightfully deserving of recognition/praise. (Sadly this tweet will probably never get much distribution because it applauds FB. Please prove me wrong.)
Conflation once again. I'd wager the vast majority of election delegitimization content was not from organized hate groups, but rather from more traditional political organizers. This makes it much harder to deal with and worthy of deeper research... for those who are brave.
"Squarely" is a cleverly flexible word. But the overall rhetoric here sadly shows FB doesn't have a clear sense of its own responsibility. What the world asks of the company is not to accept sole blame, but rather to respond and help where it is able. That is response-ability.
Still unable to say his name. The specter of 2024 looms...
... but at least 2020 is over!
100% agree. Now it is time for the decision-makers to honor this incredible work, not just with their words but with their actions.


More from @samidh

1 Oct
Here is a quick primer for external folks who are seeking to make sense of FB's internal research leaks. There is a lot of context that's critical to grasp.🧵...
First recognize that researchers at FB are literally the best in the world at understanding complex issues at the interface of society and technology. They are of the highest character and typically have the most rigorous training. And they are in this to help, not for the $.
Given that integrity teams are organizationally siloed away from the rest of their product orgs, integrity researchers will focus on harms and not give the full picture of a platform's effects because that is literally their job-- and a critical role to play!
16 Sep
Today's WSJ reporting was especially difficult for me to read because it touches on a topic that probably "kept me awake" more than anything else when I was at FB. And that is, how can social networks operate responsibly in the global south? wsj.com/articles/faceb…

🧵...
It can't be easily disputed that social networks' rapid expansion into the global south was at times reckless and arguably neocolonialist. And the inadequate attention both within platforms and within the media on these issues is rightly shocking. What can help? Some thoughts...
When a social network operates in any market, it needs to ensure it can adhere to some minimal set of trust & safety standards. It needs to be capable of processing user reports and automatically monitoring for the worst content in all the supported dialects.
15 Sep
Was hoping for a quiet day but @JeffHorwitz strikes again. Do I have thoughts on the issues raised? You bet! I share in the spirit of trying to enhance understanding of these complex dilemmas. In short, we need to imbue feeds with a sense of morality. wsj.com/articles/faceb…
When you treat all engagement equally (irrespective of content), increasing feed engagement will invariably amplify misinfo, sensationalism, hate, and other societal harms. I wish this weren't the case, but it is so predictable that it is perhaps a natural law of social networks.
So it is no surprise that the MSI (meaningful social interaction) ranking changes of 2018/2019 had this impact, and as the reporting shows, many people at FB are conscious of and concerned about these side effects.
14 Sep
To those whose reaction to this story involves saying "I can't believe Instagram wrote that down", would you rather they not write it down? wsj.com/articles/faceb…
I see it as a testament to @mosseri's leadership that Instagram is willing to invest in understanding its impact on people-- both the good and the awful-- and spin up dedicated efforts to mitigate even the most intractable and heartbreaking harms.
The alternative would be an app that is blind to its role in society. That would be reckless and dangerous to us all. Instead, we need to engage with this research thoughtfully and bring to the conversation a spirit of constructive problem solving.
13 Sep
While I had no involvement whatsoever in @JeffHorwitz's very thorough reporting in the WSJ on FB's x-check system, I was quoted in the article based on a leaked internal post, so I am compelled to give a fuller perspective.
First, to state the obvious, automated moderation systems inevitably make lots of mistakes because human language is nuanced & complex. In theory, a confirmatory round of review is prudent because it is an awful experience to have your post taken down without cause.
But how you execute that second round of review is critically important! Figuring out who is eligible, how you staff, etc. makes all the difference between responsible enforcement and de-facto exemptions from the platform's policies.
