Today, Facebook/Instagram announced “sensitive content control” for Instagram, giving users the ability to modulate how much “sensitive content” they’re shown in the “explore” recommendation page. Some things to notice: about.fb.com/news/2021/07/i…
Though the accompanying graphic implies that this will be a user-friendly slider, a graphic farther down in the post makes clear that it requires going two pages deep into the settings and choosing one of three options: Allow, Limit, or Limit More.
Notice that “Limit” is the default. So, despite being presented as a tool to manage sensitive content, it in fact gives Instagram users one additional position on either side of the status quo: a stricter standard and a looser one.
The explanation of what qualifies as sensitive content is left vague: “could potentially be upsetting to some people — such as posts that may be sexually suggestive or violent”.
It does not (yet?) let you set your levels differently for different kinds of content; in other words, you can’t make it more strict on violence and more permissive on sex, or vice versa.
The setting does not (yet?) apply to content from users you follow, only to what is recommended in the “explore” tab.
The setting is not (yet?) available on Facebook or WhatsApp. (Why not?)
This is an example of what I’ve been calling “reduction” policies, or what some platforms call “borderline content” policies, though I don’t love that term. Paper forthcoming— you know, when I finish it.
Let’s highlight what this means. Instagram not only identifies content they believe should be removed as violations of the community guidelines. They also identify content that’s “sensitive” enough not to recommend, but not so bad as to warrant taking down.
The “allow” option also makes clear that they're already limiting this sensitive content from their recommendations, and can now identify another tier of content that’s slightly less sensitive.
LOTS of platforms have some form of “reduction” policies...
Facebook/Instagram already published the list of content they don’t recommend in Aug 2020; their policy of reducing content is older than that: facebook.com/help/instagram…
Facebook now regularly points out that they “reduce the visibility” of problematic content, as in this week’s rebuke to the Biden administration’s criticism that they allow too much COVID-19 vaccine misinfo: about.fb.com/news/2021/07/s…
YouTube has had a “borderline content” policy since before Jan 2019, meant to reduce the circulation of conspiracy videos, hoaxes, and misinformation: blog.youtube/news-and-event…
Twitter says it reduces the visibility of comments that have been labeled as misleading, though not so bad as to be removed: blog.twitter.com/en_us/topics/c…
TikTok has obliquely hinted that some videos, such as particularly graphic medical scenes, “may not be eligible for recommendation” newsroom.tiktok.com/en-us/how-tikt…
Reddit accomplishes a similar kind of reduction through its quarantine policy: a quarantined subreddit is still there, but its posts will never be “recommended” to the front page of Reddit reddit.com/r/announcement…
“Reducing” sensitive content requires three things: (1) distinguishing parts of the platform as having different standards of responsibility - as Instagram has here, applying it to the Explore recommendations but not to users’ feeds.
Platforms often describe their recommendation tools as warranting greater responsibility, because they bring new content to the attention of users who weren’t even seeking it.
They feel like they nominated it, put it up for consideration, validated it; rather than it being in the feed because of the user’s actions - who they follow, what they like, or what they search for. (This is a tenuous distinction.)
The platform (2) must have a way to identify the sensitive content, which typically means yet another machine learning classifier, training data, and human evaluators helping to produce that data.
Some platforms use the same classifiers that look for content to remove, at a lower threshold; others train new ones to identify content at the borderline - an immensely difficult task, given that it’s about what’s nearly problematic, or sensitive to some and not others.
And (3) that classifier must be used to determine what gets recommended. Content identified as sensitive is excluded from being recommended at all, or it is factored in to make it less likely to be recommended.
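To make the mechanics concrete, here is a minimal, hypothetical sketch of how a single sensitivity score could feed both removal and reduction, with the new per-user setting adjusting how hard the demotion hits. The score fields, thresholds, and demotion values are my own illustrative assumptions, not anything Instagram has disclosed.

```python
# Hypothetical sketch of threshold-based "reduction" -- NOT Instagram's actual system.
# Assumes a classifier that outputs a sensitivity score in [0, 1]; all names and
# numbers below are illustrative assumptions.

REMOVE_THRESHOLD = 0.95   # above this, the post violates guidelines and is removed
REDUCE_THRESHOLD = 0.70   # above this (but below removal), the post counts as "sensitive"

# Per-user setting from the new control: "allow", "limit" (default), or "limit_more"
DEMOTION = {"allow": 1.0, "limit": 0.3, "limit_more": 0.0}

def rank_for_explore(posts, user_setting="limit"):
    """Score candidate posts for the Explore tab, demoting or excluding sensitive ones."""
    ranked = []
    for post in posts:
        score = post["relevance"]           # baseline recommendation score
        sensitivity = post["sensitivity"]   # classifier output in [0, 1]
        if sensitivity >= REMOVE_THRESHOLD:
            continue                        # handled by removal, never recommended
        if sensitivity >= REDUCE_THRESHOLD:
            score *= DEMOTION[user_setting] # reduce (or zero out) its visibility
        if score > 0:
            ranked.append((score, post["id"]))
    return [post_id for _, post_id in sorted(ranked, reverse=True)]

# Example: under the default "limit" setting a sensitive post can still surface,
# just less often; under "limit_more" it drops out of recommendations entirely.
candidates = [
    {"id": "a", "relevance": 0.9, "sensitivity": 0.2},
    {"id": "b", "relevance": 0.8, "sensitivity": 0.75},
]
print(rank_for_explore(candidates, "limit"))       # ['a', 'b']
print(rank_for_explore(candidates, "limit_more"))  # ['a']
```

In this toy version the same classifier output is read at two thresholds, a high one for removal and a lower one for reduction, which matches how some platforms describe reusing their removal classifiers; platforms that train a separate borderline classifier would swap in a different score but the gating logic would look much the same.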
With its sensitive content setting, Instagram adds a wrinkle the others haven't: the ability for individual users to adjust - to a very limited degree - how much reduction is going on for them.
Reduction policies are an underexamined part of the suite of moderation tools platforms use, and they deserve scrutiny if we want to take seriously what we hope platforms will do in response to pernicious problems like hate+misinfo. Reduction is neither an unqualified good nor an unqualified bad.
Let’s not kid ourselves, this has always been going on: when recommendation algorithms lift some kinds of things up, others fall away. We're rarely recommended content we’ve already seen, our own content, older content, spam, clickbait, etc.
Dealing with misinfo, esp. in a pandemic, we may need to set aside the sharpened knives of the “free speech debates” to admit that it’s never been true that everything gets published, nor would we actually want that. This means intermediaries always have some say in what makes it.
But there’s also enormous potential for problems: What counts as sensitive? Should Instagram decide? How are the classifiers trained? How can you tell your content has been reduced? What should you be able to do about that?
Reduction policies also let platforms safely say they aren’t censoring, because the sensitive content remains on the platform for those who can find it. This not only sidesteps responsibility, it may even let platforms be more permissive.
Most important: how do we ensure that reductions are in the public interest, accountable, and move us towards a robust, pluralistic, good faith public / away from the vortex of misinformation, cruelty, fraud, and tribalism?
If reduction is going to expand as a technique of content moderation, and platform governance more broadly, we need to immediately reimagine how platforms partner with users, trusted organizations, third party software providers, and regulators, to expand how we reduce.

