This is infuriating, and totally predictable. "In its struggle to accurately moderate a torrent of content and avoid negative attention, Facebook created invisible elite tiers within the social network." wsj.com/articles/faceb…
We're not the only ones to say it, but I'm proud that @robyncaplan and I noted this: platforms create "tiered governance" systems inside their content moderation efforts, treating categories of users differently, in ways that aren't clear from outside. journals.sagepub.com/doi/10.1177/20…
@robyncaplan But YouTube, for all its missteps, was at least treating users differently for a reason: in principle, it is reasonable to treat creators who earn ad dollars differently from other users. YT just handled it poorly. What Facebook is doing here is new-level outrageous...
@robyncaplan Facebook is overwhelmed by their own size, and will slap nearly any band-aid onto the problem. Whitelisting some accounts is either a stopgap measure when your moderation team is drowning, or it's an ill-informed idea about who violates the rules and who doesn't.
@robyncaplan Facebook also seems to have less impulse than its competitors to actually abide by its stated ideals. Whitelisting may also be a way to smooth relations with valuable or visible figures, regardless of whatever lip service they pay to being an open, democratic, impartial forum.
@robyncaplan Facebook is either woefully ill-equipped to fairly manage the platform they manage, or they're regularly and systematically deceptive about what they're doing -- or both.
@robyncaplan Whitelisting users, absolving them from the content moderation everyone else faces, is a bad idea even as a stopgap measure, even for a small group of users; letting that whitelist balloon to 5.8 million users is indefensible.
@robyncaplan A content moderation whitelist is patently unfair, on its face; and it is certainly chock full of assumptions about who is "safe" enough not to bother with.
@robyncaplan And it is a recipe for disaster, for exactly the kind of cruelty, hate, and misinfo we've been dealing with: the stuff that public figures can speak out of the corner of their mouths, that slips by because they enjoy other kinds of legitimacy: elected office, wealth, celebrity.
@robyncaplan And, if Facebook is doing it, we should probably ask which other platforms have their own whitelist of users who get some kind of a pass on the content moderation process. Facebook is certainly not the only one, though they're usually the most egregious, far too often.
@robyncaplan "Facebook designed the system to minimize what its employees have described in the documents as “PR fires”—negative media attention that comes from botched enforcement actions taken against VIPs." wsj.com/articles/faceb…
@robyncaplan "Historically, Facebook contacted some VIP users who violated platform policies and provided a “self-remediation window” of 24 hours to delete violating content on their own before Facebook took it down and applied penalties."
@robyncaplan "While the program included most government officials, it didn’t include all candidates for public office, at times effectively granting incumbents in elections an advantage over challengers."
@robyncaplan There are good reasons for platforms to have different rule sets for different tiers of users. But they should be public, bright-line, and for the right reasons. YT can require that paid creators meet a higher standard. Be clear, make it a point of pride: money = responsibility.
@robyncaplan Facebook could have said, "reach X number of friends / followers, you get MORE scrutiny." But instead it gifted this to people who would be a PR problem, and then dropped the ball in setting or imposing standards commensurate with the reach of these users.
@robyncaplan "In response to what the documents describe as chronic underinvestment in moderation efforts, many teams around FB chose not to enforce the rules with high-profile accounts at all... In some instances, whitelist status was granted with little record of who had granted it and why"
@robyncaplan "The XCheck documents show that Facebook misled the Oversight Board, said @Klonick. 'Why would they spend so much time and money setting up the Oversight Board, then lie to it? ...This is going to completely undercut it.'”
@robyncaplan @Klonick Many thanks to @JeffHorwitz for this article, it's excellent. Looking forward to the next in the series. wsj.com/articles/faceb…
@robyncaplan @Klonick @JeffHorwitz Oh god, there are more articles already. wsj.com/articles/faceb…
@robyncaplan @Klonick @JeffHorwitz Facebook has years of internal research on teens, mental health, and Instagram. "Repeatedly, the company’s researchers found that Instagram is harmful for a sizable percentage of them, most notably teenage girls."
@robyncaplan @Klonick @JeffHorwitz The possible effect is large, and is not just about social comparison in general, nor even about social media in general. The effect is worse on IG than on TikTok and Snapchat, because of the site norms that emphasize professionalized beauty.
@robyncaplan @Klonick @JeffHorwitz Facebook has misrepresented what it knew: "'The research that we've seen is that using social apps to connect with other people can have positive mental-health benefits,' CEO Mark Zuckerberg said at a congressional hearing in March 2021."
@robyncaplan @Klonick @JeffHorwitz FB did not provide this research to Congress when specifically asked for it: "'Facebook's answers were so evasive—failing to even respond to all our questions—that they really raise questions about what Facebook might be hiding,' Sen. Blumenthal said."
@robyncaplan @Klonick @JeffHorwitz Here's the kicker: "In a recent interview, VP Adam Mosseri said... he believes Facebook was late to realizing there were drawbacks to connecting people in such large numbers."
@robyncaplan @Klonick @JeffHorwitz For those of us who try to argue that the challenges platforms face aren't the fault of one company, that they're endemic to our embrace of an info architecture where everyone encounters each other on an "open" platform that's actually built for advertising + data collection...
@robyncaplan @Klonick @JeffHorwitz ...Facebook makes this argument very difficult.

More from @TarletonG

20 Jul
Today, Facebook/Instagram announced “sensitive content control” for Instagram, giving users the ability to modulate how much “sensitive content” they’re shown in the “explore” recommendation page. Some things to notice: about.fb.com/news/2021/07/i…
Though the accompanying graphic implies that this will be a user-friendly slider, a graphic farther down in the post makes clear that it requires going two pages deep into the settings and choosing one of three options: Allow, Limit, or Limit More.
Notice that "Limit" is the default. So, despite this being presented as a tool to manage sensitive content, it in fact gives Instagram users one additional position on either side of the current offering: a stricter standard, and a looser one.
20 Jul
Pretty excited to share this article, written with a pile of friends and colleagues. If you're interested in a nuanced look at how metrification shapes work in the culture industries, this is for you. "Making Sense of Metrics in the Music Industries" ijoc.org/index.php/ijoc…
We surveyed+interviewed music professionals, to see how metrics shaped their work. We did not find blind faith in numbers, or flat-out rejection. Instead, numbers had to be made sense of - narrated into something persuasive to justify making an investment or taking a risk.
Metrics are powerful, and those who have more access to data enjoy more of that power. But numbers are not by themselves enough. They are approached with skepticism, remain open to interpretation, and must be transformed into something convincing.
27 Apr
I’ve been quietly writing about the “borderline content” policies at YouTube and Facebook for a while, or failing to - it’s taking me more time than I want to get all the words in the right order. But let me drop a few takeaway thoughts:
Both YouTube and Facebook instituted policies in 2018, where they will reduce the circulation of content they judge to be not quite bad enough to remove. The content remains on the platform, but it is recommended less, or not at all.
wired.com/story/youtube-…
They're not the only ones. Tumblr blocks hashtags; Twitter reduces content to protect "conversational health"; Spotify keeps select artists off of their playlists; Reddit quarantines subreddits, keeping them off the front page. And other platforms do it without saying so.
5 Aug 19
Thread, to a reporter, about #Cloudflare dropping 8chan: what effect would it likely have, and in which layers of the Net should accountability live? Short version: Decisions matter even if they don’t have a simple effect, and our ideas about responsibility are changing. 1/16
I think in the short term, both guesses are probably right: CloudFlare’s decision certainly doesn’t end 8chan, it will probably rematerialize in some form elsewhere; AND there will probably be some attrition of users, who either don’t find the new site or don’t want to. 2/16
But I do think we can get too focused on whether a single decision will or will not have a definitive effect, and we overlook the cumulative and the symbolic value of a decision like Cloudflare’s. 3/16
6 Nov 18
I think the Gab story is one of the most important new issues in content moderation + the power of intermediaries. How will growing demands for platform responsibility extend downward into other more 'infrastructural' services? wired.com/story/how-righ…
We can see these infrastructural intermediaries, that have traditionally positioned themselves as impartial, struggling to justify supporting Gab: web and domain hosting, payment services, cloud services. Remember, social media platforms positioned themselves as neutral too.
Even those forcefully arguing to keep Gab aren't just proclaiming neutrality: they're protecting speech, defending Gab, accusing others of censorship, etc. Value judgments, cloaked as the absence of value judgments. Even here, the veneer of neutrality is flimsier than we thought.
9 Apr 18
I applaud the @SSRC_org for this initiative, and Facebook for providing their data in a responsible way. Way to leverage current controversies for progressive ends! But... 1/10 ssrc.org/programs/view/…
That said, and along with the lucid comments from @natematias , a few things that come to mind that, as this project develops, I hope the SSRC are thinking about: 2/10
Right now this only includes Facebook data. Makes sense, as a start, given their size and impact. But if the @SSRC_org initiative aims to understand "social media's impact on society," then this must include more platforms than just Facebook. 3/10
