On Networked Censorship: A Thread

For the journalists etc. attempting to understand what’s going on behind the scenes at large social media companies wrt censorship, there are a few conceptual tools you must acquire before making statements of either support or suspicion...
Recently I’ve seen debate concerning the degree to which the content of popular (and polarizing) social media users gets censored by platforms like YouTube, Twitter, etc.

Most of these approaches analyze the category of first-order signals to which the public has access...
Namely: recommendations, bans, warnings, demonetization, etc.

But this is a fool’s errand.

You will never get to the heart of what’s either happening, or not happening, by analyzing these data points.

Why not, you ask?

Well the superficial answer is plausible deniability...
When it comes to these high-level symptoms, it is invariably the case that tech companies will claim that any apparent asymmetries stem merely from an even-handed application of their assuredly apolitical and unbiased values...
But, like a magic trick, what’s visibly happening to popular users is a distraction from what’s invisibly occurring up the magician’s sleeve.

The show is not the trick itself.

To understand what tricks may be in play, we must briefly detour through the land of Network Theory...
Bear with me. I will try to keep things as simple and math-free as possible.

If a user is a node in the network, the number of people to whom they’re connected is called their “degree”.

So if you have 1k followers, your in-degree (the # of links pointing to you) is 1k...
Network scientists care a lot about this “degree”, and particularly its “distribution”: the count of users at each possible # of links, from zero up to the size of the network.

Like so: [degree-distribution chart not captured in the unroll]
As you can see, the degree distribution can tell us a lot about the behavior of a given network, or any specific subset of that network.

This is especially true when it comes to how information flows through the network in question.
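For the code-inclined, the degree bookkeeping above fits in a few lines of Python. The follower list here is invented purely for illustration:

```python
from collections import Counter

# Toy follower edge list (invented): (follower, followed) pairs.
edges = [
    ("alice", "hub"), ("bob", "hub"), ("carol", "hub"),
    ("dave", "hub"), ("alice", "bob"), ("carol", "dave"),
]

# In-degree: the number of links pointing *to* each user.
in_degree = Counter(followed for _, followed in edges)

# Degree distribution: how many users sit at each in-degree.
# (Counting users with zero in-degree would require the full node list.)
distribution = Counter(in_degree.values())

print(in_degree["hub"])  # 4
print(distribution)      # Counter({1: 2, 4: 1})
```

Here “hub” has in-degree 4, while everyone else who is followed at all has in-degree 1 — a miniature version of the skew discussed next.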

For our purposes we’ll focus on power laws...
Networks that follow power laws are also known as “scale-free” networks.

In this type of network, a small proportion of nodes (“hub” users) have far more connections (followers) than the rest of the users in the network.

As mentioned, this has ramifications for network flow...
In the case of social media, “network flow” maps to things like recommendations, inclusion of messages in feeds, etc.
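The thread doesn’t name the mechanism, but the standard toy model that grows these hub-dominated, scale-free degree distributions is preferential attachment (“rich get richer”). A minimal sketch:

```python
import random

random.seed(0)

# Preferential attachment: each new node links to an existing node chosen
# with probability proportional to its current degree. `targets` holds
# each node id once per unit of degree, so random.choice() over it is
# exactly degree-weighted.
targets = [0, 1]            # start from a single linked pair
degree = {0: 1, 1: 1}

for new_node in range(2, 2000):
    chosen = random.choice(targets)   # "rich get richer"
    degree[new_node] = 1
    degree[chosen] += 1
    targets += [chosen, new_node]

hub_degree = max(degree.values())
median_degree = sorted(degree.values())[len(degree) // 2]
print(hub_degree, median_degree)  # a few hubs dwarf the typical user
```

After 2,000 nodes, the best-connected hub has dozens of links while the median user has one or two — the power-law skew described above.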

Now, one key thing to understand is that within scale-free networks, hub-users act like signal amplifiers, rapidly spreading the content they decide to share...
So of course, if one wanted to reduce a hub’s influence, one *could* simply attempt to reduce its degree of connectedness.

Circling back to the beginning of the thread, this is what people like @timcast have tried to detect.

But this is exactly what a magician hides...
Rather, knowing about the role that hubs play in amplifying content flow, we might put forward another strategy:

Alter what hubs see, but more importantly: alter what they *don’t see*.

Now we may subtly change how content flows through the network without detection by crude analysis...
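A toy sketch of the point, on an invented mini-network: silently filtering an item out of a single hub’s feed collapses its reach, while producing none of the first-order signals (bans, strikes, demonetization) an outside observer could count:

```python
import random

# Hypothetical mini-network (all names invented): "author" is followed by
# one hub and two friends; 500 users follow the hub.
followers = {
    "author": ["hub", "friend1", "friend2"],
    "hub": [f"u{i}" for i in range(500)],
}

def spread(seed, sees=lambda user: True, p=1.0):
    """Count users exposed to a post. `sees` models silent feed filtering:
    a filtered user simply never has the item in their feed -- no ban,
    no warning, nothing for an outside observer to tally."""
    exposed, frontier = set(), [seed]
    while frontier:
        user = frontier.pop()
        for f in followers.get(user, []):
            if f in exposed or not sees(f):
                continue
            exposed.add(f)
            if random.random() < p:  # f reshares onward
                frontier.append(f)
    return len(exposed)

print(spread("author"))                             # 503: the hub amplifies
print(spread("author", sees=lambda u: u != "hub"))  # 2: hub feed filtered
```

With `p = 1.0` the run is deterministic for clarity; real resharing is probabilistic, which only makes such filtering harder to detect from outside.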
Why won’t these actions be easily detected?

Because you’d need to know how each person’s feed has evolved across time, then subject the data from those feeds to some very advanced network-statistical analysis.

This is why we shouldn’t expect journos to detect such patterns...
Another concept we must consider is “criticality”.

This property of networks describes the likelihood that a network–given the insertion of some new flow element (video / tweet / post)–will generate a cascading flow throughout the network (i.e. that the post will “go viral”)...
One interesting thing we've learned about scale-free networks is that they evolve toward this critical state on their own.

This phenomenon is known as “self-organized criticality”, and it contributes to the increasing virality and addictiveness of social media over time...
Because this phenomenon is well known within the field of network science but basically unknown elsewhere, it confers an asymmetric advantage on those who understand it: namely, those running the networks over those legislating them...
This is important because, in theory, it’s possible to alter characteristics of a network’s structure (i.e. its topology) such that criticality is brought under explicit control.

Translation: it’s possible to alter what goes viral in ways that are nearly impossible to detect...
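As a purely hypothetical illustration of that claim: imagine a per-item “flow weight” that multiplies every reshare probability (the knob, names, and numbers below are all invented). Nudging the weight below the critical threshold quietly keeps an item from going viral, with no visible enforcement action:

```python
import random

random.seed(7)

# Hypothetical knob: a per-item "flow weight" w multiplies every reshare
# probability. Organically, R = FANOUT * BASE_P = 1.2 (supercritical);
# w = 0.7 drags R down to 0.84 (subcritical), so the item quietly dies
# out -- no ban, strike, or other detectable action occurs.
BASE_P, FANOUT = 0.12, 10

def reach(w, cap=10_000):
    size, frontier = 1, 1
    while frontier and size < cap:
        frontier = sum(1 for _ in range(frontier * FANOUT)
                       if random.random() < BASE_P * w)
        size += frontier
    return size

trials = 50
print(sum(reach(1.0) for _ in range(trials)) / trials)  # often saturates the cap
print(sum(reach(0.7) for _ in range(trials)) / trials)  # quietly fizzles
```

The same content, the same network, the same users — only an invisible multiplier changed, yet one version routinely goes viral and the other never does.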
Now let’s have some fun by considering what happens when we task some very smart engineers with creating a machine learning system capable of detecting and managing these basic network properties according to some set of content-related criteria...
First, we should acknowledge that such development efforts have likely been underway for some time now, as they provide precisely the type of system a social media company would need to monitor and suppress the spread of obviously illegal content: terrorism, child porn, etc...
With these systems as our foundation, we may extend their capabilities–beyond that which is illegal–to content deemed “hateful”, more broadly.

The learning systems will then need to be fed many examples of such “hateful” content in order to be capable of detecting it w/o aid...
This is because most of these “AI” systems represent enhanced versions of what’s known as “supervised learning”, wherein a target set (e.g. cat pictures) is given to algorithms that are very good at extracting shared “features” from the collection of targets...
Next, the learning system is shown examples that may or may not fit the bill (i.e. cat / not cat).

It makes its predictions, which are then evaluated by the “supervisor”.

In Google’s case, *this is you* when you select images containing stoplights to log into a website...
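A minimal, self-contained example of supervised learning in the sense described here — a nearest-centroid classifier on invented two-feature data. Real content classifiers are vastly more complex, but the supervision loop is the same: labeled examples in, learned feature summary, predictions out:

```python
# Nearest-centroid classifier: labeled examples ("supervision") yield one
# feature summary (centroid) per class; unseen points get the label of
# whichever centroid is closest.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

training = {                      # the labeled target sets
    "cat":     [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "not_cat": [(4.0, 3.8), (4.2, 4.1), (3.9, 4.0)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

def predict(x):
    # Pick the label whose centroid minimizes squared distance to x.
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(centroids[lbl], x)))

print(predict((1.0, 1.1)))  # cat
print(predict((4.1, 3.9)))  # not_cat
```

Swap “cat / not cat” for “hateful / not hateful” and the structural question below — who chooses and labels the training examples — becomes the entire ballgame.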
But circling back to content censorship, the question very quickly becomes:

Who decides the definitions of concepts such as “hate” used to train these learning systems?

And at present, the answer is:

Whoever writes the code, or manages those who do...
Which clearly demonstrates another vector by which ideological bias, intentional or otherwise, finds its way into the systems governing the behavior and evolution of our communication networks.

Those who obsess over bias in other spheres are most likely to encode their own here...
In any case, these examples are just scratching the surface of what's possible, but I wanted to demonstrate clearly that:

- You're not going to detect these changes from the outside, using unsophisticated approaches.
- This kind of "network flow management" is all around you...
These are not "conspiracy theories". They are the logical conclusion of centralized technology companies applying modern network science to domains where the technology creators and managers are themselves incapable of removing their own political biases from the equation...
The fundamental takeaway:

Until we ensure the transparency of the processes and algorithms by which our information flow is managed, we will continue to witness the emergence of tools more powerful than humanity has ever known, capable of changing thought patterns at scale...
These distortions won't remain local. They'll seep–invisibly–into every aspect of our lives, subtly adjusting the way we communicate with one another, and by extension the way in which we think.

We're now allowing algorithms to establish the boundaries of acceptable thought...
It is absolutely imperative that we realize this before the unintended consequences lead us inevitably down the same path that China has decided to take, wherein such tools are intentionally used to control the thoughts and actions of the public, without consent...
A line often attributed to Sinclair Lewis goes:

"When fascism comes to America, it will be wrapped in the flag and carrying a cross."

But at present, it appears far more likely that our contemporary thought police will cloak themselves in dopamine-fueled clicks and carry a Network Science textbook.
Thread by Matthew Pirkowski