Irenes (many) @ireneista, 26 tweets, 4 min read
Based on the breadth of reports, it appears that Twitter's policy team has designated a certain word as a slur. The word is an acronym factually describing people who hold a certain category of transmisic belief and attempt to enforce it on others.

This is bad.
What can members of the public do about this?

There's no accountability mechanism. Surely the lack of such a mechanism is an injustice, in and of itself?
We aren't personally going to make any assumptions about the motivation for this apparent designation. There's no way it's an *accident*, but it's the kind of horrifying disaster an organization can wind up with by refusing to listen to certain people.
If our cis followers have wondered why they've seen Mastodon mentioned a lot this week, it's because trans people are getting banned for talking about issues crucial to their survival, and are desperate to find a space where that won't happen, despite its flaws.
This is really bad. Twitter has never shown a willingness to listen to criticism, so we don't expect it to be reversed. For those of our followers who don't find Mastodon to be a viable platform, please start thinking about what would make it viable.
Because trans people can't allow our communities to be pushed out of the public space. Visibility is so incredibly important to any mass movement that's about people's right to exist.
This policy decision on Twitter's part is upsetting, disappointing, and terrifying. Thank you for listening.
Explicitly connecting a part of the thread where we respond to this common misconception.
By the way, for those saying "screencap the word". What? And invite people to mass-report it so that we can be banned? You do know that the policy enforcement process involves human review, right?
It occurs to us that not everybody knows this.

Trust and safety policies, in general, have multiple layers of decision making. You can think of it as a funnel.
At the top is the policy itself - the decision of what kinds of speech are allowed on a platform. This has both a public part ("no hate speech") and a private part ("the following words are automatically hate speech; hate speech may also be...")
The public part has to have fewer details than the private part, to slow down people's ability to walk right up to the line of allowable behavior.
Then there's human review - a team of people who make on-the-ground decisions about what falls within the bounds of acceptable speech and what's outside it.
And then, finally, for platforms that leverage machine learning as part of their policy enforcement, there are ML classifiers that attempt to replicate the decisions of the human reviewers.
It's important to understand that this is a funnel, with each step relying on the one before it. You can't train an ML model without the data that serves as ground truth, produced by the human reviewers.
And you can't ask the human reviewers to make decisions about what's okay without having some written guidance for them.
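For the technically inclined, here's a rough sketch of that funnel in Python. Everything in it - the names, the word list, the trivial "model" - is our own hypothetical illustration, not Twitter's actual systems.

```python
from dataclasses import dataclass

# Layer 1: the policy. The public part is deliberately broad; the private
# part holds the specifics (here, a made-up word list) so that people
# can't walk right up to the line of allowable behavior.
PUBLIC_POLICY = "No hate speech."
PRIVATE_WORD_LIST = {"fakeslur1", "fakeslur2"}  # hypothetical, for illustration

@dataclass
class ReviewDecision:
    text: str
    violates: bool  # a human reviewer's judgment; this becomes ground truth

# Layer 2: human review. A stand-in for a person reading the post against
# the written guidance and making a call.
def human_review(text: str) -> ReviewDecision:
    violates = any(word in PRIVATE_WORD_LIST for word in text.lower().split())
    return ReviewDecision(text=text, violates=violates)

# Layer 3: an ML classifier trained to replicate the reviewers' decisions.
# This trivial "model" just memorizes words from flagged posts; a real
# system would learn a proper classifier, but it still couldn't exist
# without the labeled data the reviewers produce.
def train_classifier(decisions):
    flagged = set()
    for d in decisions:
        if d.violates:
            flagged.update(d.text.lower().split())
    def classify(text: str) -> bool:
        return any(word in flagged for word in text.lower().split())
    return classify

# The funnel: written policy -> reviewer labels -> trained classifier.
decisions = [human_review(t) for t in ["hello world", "some fakeslur1 rant"]]
classifier = train_classifier(decisions)
print(classifier("another fakeslur1 post"))  # True, learned from reviewer labels
```

Note how the toy model also learned innocent words like "rant" from the flagged post - exactly the kind of overgeneralization that makes each layer only as good as the one above it.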
All steps of this process are failure-prone. For example, mass reporting is a common tactic for trying to trick both humans and machines.
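To make the mass-reporting point concrete, here's a toy example. The threshold and the triage rule are made up by us, not any platform's real logic:

```python
# Hypothetical volume-based triage. If report *count* is the signal, a
# coordinated brigade is indistinguishable from a pile of organic reports,
# and it skews both the human review queue and any model trained on it.
REVIEW_THRESHOLD = 10  # made-up number

def should_escalate(report_count: int) -> bool:
    return report_count >= REVIEW_THRESHOLD

print(should_escalate(3))    # False: a few organic reports
print(should_escalate(250))  # True: genuine outrage, or a coordinated attack?
```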
The above is based on our knowledge of how policy enforcement necessarily works, across all social media platforms. There's no better way to do it that Irenes are aware of.
So when we see an uptick in people talking about having been unjustly banned, we look at the substance of what they were banned for. And we look for common factors, with the goal of understanding which layer of the process failed and produced this bad result.
When the failure appears to be around a specific word, and it's consistent enough, we blame the highest layer of the process - the policy decisions. And that's what appears to have happened here.
That says, to us, that some decision-maker inside Twitter has been persuaded that this word is indeed a slur, despite the fact that its meaning is roughly "people who hold the position that trans people shouldn't be part of public life".
We hoped this would get attention, but it got more than we really expected it to. As such, we believe it's our responsibility to follow up by reiterating that we have no strong evidence of this, and in fact we think we over-estimated the strength of the evidence we have seen.
We still think that it's very important for everybody to focus on what sort of accountability mechanism could avoid the need for everyone in marginalized groups to be constantly stressed by this category of fear.
And we hope that our explanation of how policy-enforcement teams work, in general, was useful to people. It's based on our industry experience and we think it's stuff more people should understand.
We were pretty scared ourselves yesterday, but we had some conversations that helped us realize we probably over-reacted. But awareness and education are important, too; we just wish we hadn't scared people so much.
And, as we were reminded, there still is a serious problem in how Twitter handles this stuff, and its effect on people is largely the same. That doesn't change just because it probably isn't the exact problem we thought it was at the start of the thread.