Samidh
Oct 3, 2021 · 18 tweets
Since annotating leaks seems to be in vogue these days, here are my notes on this memo. 🧵... nytimes.com/2021/10/02/tec…
Maybe not a primary *cause*, but is it an accelerant? And if it were, does FB think it has a responsibility to lessen its contribution? There are many long-standing ills of humanity that tech can make worse. The builders of these technologies should want to do their part to help.
Yes, but what is meaningful engagement for an individual might also be extremely harmful for society overall. Polarizing content is very enticing to an individual, but can break society apart if it becomes predominant in the info environment.
Alternative interpretation: "We knew it would be harmful at the time but decided to fix it later, just as we do with all launches."
Yup, "that's life". What is new though are platforms preferentially amplifying this kind of speech over more level-headed ramblings of non-extremists. Acknowledging that this change meant you are more likely to come across extreme posts just shows what FB chose to prioritize.
This conflates two things: hate speech + polarization, which aren't necessarily the same. What makes polarizing content such a challenge is it generally doesn't violate the more narrow rules around hate speech. Efforts on the latter don't excuse inaction on the former.
This is kind of an own-goal. Not sure why WhatsApp needs to be dragged into this topic. Polarization there is a separate discussion which requires looking at the dynamics of groups and forwarding. If it weren't an issue, would WhatsApp have instituted forwarding limits?
Yup, this was all very excellent work, if I do say so myself ;-)
These were indeed extremely important measures! Now, giving the public transparency into their triggering criteria would ensure these measures are activated/deactivated based on risk of actual harm rather than just PR risk.
This is a very misleading analogy. It implies that the current way that feeds are ranked is somehow "correct". The truth is that *all* ranking changes will impact the flow of benign information. Assuming the status quo is to be protected is what leads to poor decision-making.
This is where the rubber hits the road. What is the acceptable tradeoff between benign and harmful posts? To prevent X harmful posts from going viral, would you be willing to prevent Y benign posts from going viral? No easy answers. Worth the debate.
How about some intellectual consistency? You can't say earlier that you worry about collateral damage, and then say you are okay with banning all political group recommendations. It makes the criteria feel opaque, hypocritical, or even non-existent.
The work to remove organized hate networks flies under the radar but is extremely impressive and rightfully deserving of recognition/praise. (Sadly this tweet will probably never get much distribution because it applauds FB. Please prove me wrong.)
Conflation once again. I'd wager the vast majority of election delegitimization content was not from organized hate groups, but rather from more traditional political organizers. This makes it much harder to deal with and worthy of deeper research... for those who are brave.
"Squarely" is a cleverly flexible word. But the overall rhetoric here sadly shows FB doesn't have a clear sense of its own responsibility. What the world asks of the company is not to accept sole blame, but rather to respond and help where it is able. That is response-ability.
Still unable to say his name. The specter of 2024 looms...
... but at least 2020 is over!
100% agree. Now it is time for the decision-makers to honor this incredible work, not just with their words but with their actions.