Samidh
Chief Product Officer @ Groq. Tweets about AI for Societal Impact, Responsible Innovation, Disruptive Tech, and Product Mgmt. Ex-FB Civic Integrity Founder.
Apr 27, 2022
Okay, I will try to keep this as simple as possible for Trust & Safety novices out there. Plenty of speech is not strictly illegal but is a horrible experience for people on a platform. Examples: Racism, Doxxing, Nudity, Spam, etc. So what do you do about it? 🧵... Option A: Do nothing in the spirit of free speech absolutism. Then your platform becomes a cesspool that silences the voices of anyone outside the mainstream, thereby betraying your goal of being a marketplace of ideas.
Dec 13, 2021
1/ Today seems like the right day to share my "Framework for Integrity Responsibility" which is a discussion tool I created that helped us at FB figure out what we could take on. I hope it can be useful to the whole industry. Here's a quick primer on how it works... 🧵 2/ First some background: The most painful disagreements we had at FB almost always came down to differences of opinion over what we felt we had a responsibility to solve. Usually integrity teams would have a wider view of that responsibility than execs, leading to frustration.
Nov 5, 2021
1/ It's been ~6 weeks since I left FB, and since then I've tried to bring deeper understanding to the complex issues the company faces. Some tweets have been supportive and others skeptical. But I now understand why so many ex-employees avoid saying anything vaguely critical. 🧵 2/ First, Twitter asymmetrically amplifies tweets that are critical of FB over those that are in defense of the company. I'd estimate ~300x more distribution for the former. Makes people who are balanced (like me) seem much more adversarial than they are, unfortunately.
Oct 25, 2021
So much to unpack from the latest tranche of FB Papers (stay tuned). But for today I'll just tweet quotes that honor rank & file integrity workers in the arena-- their conscientiousness, creativity, and courage in the face of internal hurdles couldn't be more clear or more noble. From @claresduffy in @CNNBusiness, quoting @lessig: "The company is filled with thousands of thousands of Frances Haugens ... who are just trying to do their job. They are trying to make Facebook safe and useful and the best platform for communication that they can."
Oct 18, 2021
There are a million things I could say about this hate speech prevalence debate, but it boils down to this: it perfectly illustrates just how worrisome it is that FB's integrity efforts are managed as growth initiatives. Long 🧵 so stick with me... wsj.com/articles/faceb… First some quick background on hate speech itself. It is extremely hard to define (varies by place & time) and the FB teams working on it are absolutely world class. The fact that we can even have a debate about measurement-- based on numbers they can calculate-- is laudable.
Oct 3, 2021
Since annotating leaks seems to be in vogue these days, here are my notes on this memo. 🧵... nytimes.com/2021/10/02/tec… Maybe not a primary *cause*, but is it an accelerant? And if it were, does FB think it has a responsibility to lessen its contribution? There are many long-standing ills of humanity that tech can make worse. The builders of these technologies should want to do their part to help.
Oct 1, 2021
Here is a quick primer for external folks who are seeking to make sense of FB's internal research leaks. There is a lot of context that's critical to grasp.🧵... First recognize that researchers at FB are literally the best in the world at understanding complex issues at the interface of society and technology. They are of the highest character and typically have the most rigorous training. And they are in this to help, not for the $.
Sep 16, 2021
Today's WSJ reporting was especially difficult for me to read because it touches on a topic that probably "kept me awake" more than anything else when I was at FB. And that is, how can social networks operate responsibly in the global south? wsj.com/articles/faceb… 🧵... It can't be easily disputed that social networks' rapid expansion into the global south was at times reckless and arguably neocolonialist. And the inadequate attention both within platforms and within the media on these issues is rightly shocking. What can help? Some thoughts...
Sep 15, 2021
Was hoping for a quiet day but @JeffHorwitz strikes again. Do I have thoughts on the issues raised? You bet! I share in the spirit of trying to enhance understanding of these complex dilemmas. In short, we need to imbue feeds with a sense of morality. wsj.com/articles/faceb… When you treat all engagement equally (irrespective of content), increasing feed engagement will invariably amplify misinfo, sensationalism, hate, and other societal harms. I wish this weren't the case, but it is so predictable that it is perhaps a natural law of social networks.
Sep 14, 2021
To those whose reaction to this story involves saying "I can't believe Instagram wrote that down", would you rather they not write it down? wsj.com/articles/faceb… I see it as a testament to @mosseri's leadership that Instagram is willing to invest in understanding its impact on people-- both the good and the awful-- and spin up dedicated efforts to mitigate even the most intractable and heartbreaking harms.
Sep 13, 2021
While I had no involvement whatsoever in @JeffHorwitz's very thorough reporting in the WSJ on FB's x-check system, I was quoted in the article based on a leaked internal post, so I am compelled to give a fuller perspective. First, to state the obvious, automated moderation systems inevitably make lots of mistakes because human language is nuanced & complex. In theory, a confirmatory round of review is prudent because it is an awful experience to have your post taken down without cause.