How can social / computational science help make sense of content moderation & platform policies? People shared ~30 questions over the last day. Over the next few days, I'll summarize scholarship & point to others doing important ongoing work
If you don't recognize the ecosystem of actors with money, connections, & influence, you can get distracted by what's visible on a single online platform.
@JessieNYC described structures of white nationalist power in her *2009* book on Cyber Racism
When tech platforms ban hate groups, banks close accounts, or people take white nationalist groups to court, the idea is to increase the cost of organizing to the point that movements' capacity to act is considerably hampered. That's about association more so than speech
Tech policy's obsession with speech/content is a mismatch. Connections & coordination are as much a concern as content.
And, as the UN Doha Declaration summarizes here, we have a rich tradition of thinking about association rights & their limits unodc.org/e4j/en/terrori…
On the limits of "content" policy, some asked if we can learn from Wikipedia. The answer? It's fundamentally different—as a shared resource, it's a "communal public good." FB, Twitter, email, Parler are "connective public goods," & they work differently.
Connective public goods include the mail & now mobile phones
While we think of free-riding as the risk to classic public goods, digital public goods struggle with bad actors & manipulation as they become more influential, as @makoshark & @aaronshaw argue: mako.cc/academic/hill_…
Because the impact and desirability of manipulation grows with scale, some like @AdrienneLaF argue that the scale of digital platforms is the root of our problems. If humanity could be less connectable, then platform power could be less dangerous
I'll end tonight's thread by noting that smaller, less connective platforms won't solve white supremacy. White supremacists opposed to inclusive democracy have run the US for most of its history, without the Internet. Change needs to be deeper & wider than tech & also include it
I think that covers about 2/30 questions. I'll return to this thread tomorrow and will likely continue throughout the week.
What can we learn from social/computational science about policies to govern coordinated actors in a world of overlapping platforms and media?
Yesterday, I summarized a few points on how to understand those actors. Tonight, let's take a closer look at the ecosystem.
Most content/behavior policy debates focus on individual platforms, because that's where governance happens. But we live in a *transmedia* world, where civic life spans many media forms
An excellent case study in transmedia is @schock's (open access) Out of the Shadows, Into the Streets, which looks at the immigrant rights movement. The book illustrates a media ecology approach to understanding media practices linked to civic action mitpress.mit.edu/books/out-shad…
As Twitter fills with opinions on content moderation, online hate, & platform policies, what open questions do you have that social/behavioral/computational scientists can help answer?
I'll compile replies and respond this evening.
As hot takes whizz around Twitter, I'm hoping this thread can be a corner to slow down and identify the hard/important questions that come more slowly.
Questions from any political or identity standpoint are welcome. If you're unsure about asking your question publicly, send a direct message. I'll wade into my DMs this evening when I compile people's questions.
If independently validated, an 8% decrease in sharing of false information is a big deal.
Someday companies will routinely be required/expected to share results of their experiments on us, rather than journalists leaking results. By @CraigSilverman
Think about how big an effect an 8% reduction in sharing would be, if real (I'm withholding judgment until we see the details).
A platform data scientist is claiming they can reduce sharing of statements by a person with a huge megaphone, whose tweets are newsworthy, & who has a committed base 😮
Debates on online discourse have a baseline problem. It's impractical & undesirable for 0% of a head of state's comments to reach the public. But 100% isn't great if they're false.
That's how policy debates get stuck on arguments that a firm could "do more" & real wins get lost.
Most large online communities have coordinated across multiple platforms for years. While quarantine/bans can disrupt recruitment, they just displace the core group elsewhere.
A few years ago, @TarletonG and I were talking about whether we need to see content moderation through the concept of assembly as well as speech. It's high time.
By focusing on speech, people have mistaken social/cultural problems for a content problem. And here we are.
In the 18th century, freedom of speech & assembly represented bundled social functions that have now become unbundled & repackaged online. To name a few:
- spreading ideas
- connecting/recruiting
- raising funds
- building relationships & group identity
- coordinating groups to act
Is support for black lives short-lived? Can movements that organize around events like the death of George Floyd lead to long-term change?
Last year, @EthanZ, @rahulbot, @fberm, @allank_o & I published research on news & social media attention to black deaths, 2013-2016. Thread:
How does an ignored, systemic issue become newsworthy? Comm scholars sometimes describe news coverage as an ocean of overlapping "news waves." Some waves, like sports, have a natural cycle. What about issues like police violence that somehow don't get much coverage?
Kepplinger & Habermeier (1995) proposed that "key events" like an earthquake or a string of deaths can "trigger waves of reporting on similar events." To test this idea, they studied German news on deaths from earthquakes, AIDS, & traffic accidents—before & after key events.
Tidying up, I found a diagram of wisdom from @xuhulk from when I was a grad student. At the Media Lab, the risk was always to err too far on the side of promotion. But many researchers under-promote.
I remember being told once that researchers should let the scientific process decide the value and attention our own work deserves and receives.
It's a valuable principle when deciding what to amplify. I wish the system worked reliably that way.
My priority in promotion is usually *utilization* - I hope my research will be useful to the people it matters to. That requires different effort from sharing findings with other scientists. Carol Weiss offers a great intro to the idea of utilization acawiki.org/The_Many_Meani…