Yoel Roth
Former Head of Trust & Safety at @Twitter. PhD from @AnnenbergPenn. @pastrami's human. @yoyoel@macaw.social
Nov 8, 2022
Verification! Impersonation! Twitter Blue! There’s a lot going on around identity on Twitter — let’s break down what our policies are, and some of the big questions we still need to answer…

First, impersonation has always been banned on Twitter. Misleading profiles make Twitter worse for everyone. Last year, we banned more than half a million accounts for impersonating people and brands. transparency.twitter.com/en/reports/rul…
Nov 4, 2022
Here are the facts about where Twitter’s Trust & Safety and moderation capacity stands today:

tl;dr: While we said goodbye to incredibly talented friends and colleagues yesterday, our core moderation capabilities remain in place. Yesterday’s reduction in force affected approximately 15% of our Trust & Safety organization (compared with cuts of approximately 50% company-wide), with our front-line moderation staff experiencing the least impact.
Oct 31, 2022
Since Saturday, we’ve been focused on addressing the surge in hateful conduct on Twitter. We’ve made measurable progress, removing more than 1,500 accounts and reducing impressions on this content to nearly zero. Here’s the latest on our work, and what’s next.

Our primary success measure for content moderation is impressions: how many times harmful content is seen by our users. The changes we’ve made have almost entirely eliminated impressions on this content in search and elsewhere across Twitter.
Oct 30, 2022
Let’s talk for a minute about slurs, hateful conduct, and trolling campaigns.

Bottom line up front: Twitter’s policies haven’t changed. Hateful conduct has no place here. And we’re taking steps to put a stop to an organized effort to make people think we have.

Our Rules prohibit Hateful Conduct. This includes targeting people with dehumanizing content and slurs.

This DOESN’T mean we have a list of words that are always banned. Context matters. For example, our policies are written to protect reclaimed speech. help.twitter.com/en/rules-and-p…
Apr 5, 2022
Since the start of the Russian invasion of Ukraine, our aim has been to remediate abuse at scale and be transparent about our work to protect the conversation happening on Twitter. Today, we’re sharing two key updates about government-affiliated accounts. blog.twitter.com/en_us/topics/c…

Beginning today, we will require the removal of Tweets posted by government or state-affiliated media accounts which share media that depict prisoners of war in the context of the war in Ukraine. help.twitter.com/en/rules-and-p…
Mar 11, 2022
We’re adding labels to accounts and Tweets sharing links of state-affiliated media outlets in Belarus after detailed reporting about their role in the war in Ukraine. This builds on our years-long work to add context to state media outlets and limit their reach on Twitter. 🧵

Last week, we launched labels on Tweets sharing links to Russian state-affiliated news media.

Early data suggests that our interventions here are working: We've seen a 30% drop in impressions on Tweets labeled under this expanded policy.
Feb 28, 2022
Today, we’re adding labels to Tweets that share links to Russian state-affiliated media websites and are taking steps to significantly reduce the circulation of this content on Twitter.

We’ll roll out these labels to other state-affiliated media outlets in the coming weeks.

As people look for credible information on Twitter regarding the Russian invasion of Ukraine, we understand and take our role seriously. Our product should make it easy to understand who’s behind the content you see, and what their motivations and intentions are.
Aug 7, 2021
Have been sitting with the Apple announcement for a couple of days to try to avoid tweeting a regrettable hot take. This thread from @alexstamos just about captures it for me — as well as @gruber’s writeup: daringfireball.net/2021/08/apple_…

More than anything, announcing these 3 fundamentally distinct features together feels like a (rare) colossal PR misstep by Apple. Grouping together anti-CSE tech (broadly good) with parental controls (somewhere between meh and dangerous) muddies the waters unproductively.
May 24, 2020
We've seen no evidence to support the claim that “nearly half of the accounts Tweeting about #COVID19 are likely bots.” 🧵 with a few thoughts on the subject... npr.org/sections/coron…

First, we should get our terms straight: “Bot” means a lot of different things to a lot of different people — and doesn't necessarily refer to coordinated, manipulative, or inauthentic behavior.
May 28, 2019
Earlier this month, we removed more than 2,800 inauthentic accounts originating in Iran. These are the accounts that FireEye, a private security firm, reported on today. We were not provided with this report or its findings.

As we conduct investigations into the wider networks and actors involved in information operations, we typically avoid making any declarative public statements until we can be sure that we have reached the end of our analyses.
Apr 8, 2019
Today, we lowered the limit on the number of accounts you can follow per day from 1000 to 400. Some people are wondering why we picked 400. Well, I’m glad you asked. Nerdy thread on rate limits and anti-spam technology 👇...

First things first: You can’t stop spam, bots, or other types of manipulation with rate limits alone. However, rate limits *do* make each spam account less effective, slower, and more expensive to operate.
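The economics described here — rate limits raising the cost of each spam account without blocking normal use — can be sketched with a simple token-bucket limiter. This is an illustrative toy, not Twitter's actual implementation; the class name, capacity, and refill rate are all assumptions chosen to mirror a 400-follows-per-day cap.

```python
import time


class TokenBucket:
    """Illustrative token-bucket rate limiter: permits short bursts of
    actions (e.g. follows), but caps the sustained rate long-term."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity            # maximum burst size
        self.tokens = float(capacity)       # start with a full bucket
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if one action is permitted right now."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# 400 follows/day ≈ one token every 216 seconds, with a small burst
# allowance so ordinary users never notice the limit.
follow_limiter = TokenBucket(capacity=10, refill_per_sec=400 / 86400)
```

The key property for anti-spam use is that a legitimate user following a handful of accounts stays inside the burst allowance, while an account trying to mass-follow quickly exhausts its tokens and is throttled to the slow refill rate — exactly the "slower and more expensive" effect the thread describes.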
Feb 21, 2019
A write-up in Politico yesterday made a lot of strong claims about potential foreign disinformation campaigns targeting presidential candidates. I want to share a few quick thoughts and provide additional clarity. politico.com/story/2019/02/…

Last fall, we reviewed the #VoterFraud report carefully, and found no substantial evidence indicating “the involvement of foreign state actors” or even malicious coordinated activity. iwr.ai/voterfraud/ind…
Nov 2, 2018
We’ve recently seen research about so-called “bots” and misinformation on Twitter and wanted to share our perspective on why findings that might seem remarkable at first are likely inaccurate. We’re working on a more detailed explanation, but some comments for now.

We continue to be excited by the research opportunities that Twitter data provides. Our service is the largest source of real-time social media data, and we make this data available to the public for free through our public API. No other major service does this.