Jess Miers 🦝
Jun 26 · 34 tweets · 9 min read
Today, the Supreme Court announced its opinion in Murthy v. Missouri.

This case illustrates the complexities of online content moderation and offers some interesting insight into how the Court might rule on the long-awaited NetChoice & CCIA cases. 🧵
supremecourt.gov/opinions/23pdf…
The COVID-19 era was as confusing as it was terrifying. It was an era of extensive mask wearing, wiping down Amazon packages, Zoom funerals, online classrooms, and lots and lots of mis- and disinformation about the disease.

Horse dewormer, bleach injections, you name it.
At the time, much of this mis/disinformation spread on various online services, Facebook and Twitter included. The sources were a mix of so-called experts, public interest groups, conspiracy theorists, and even our own government.

The truth was about as murky as the cure.
And yet, social media companies were charged with discerning the truth, creating health information policies based on an array of internal guidelines and expert opinions about health topics foreign to the companies and the moderators themselves.
As our understanding of the COVID pandemic changed, so did the platforms' policies and their differing approaches to enforcing those policies.

Inevitably, as with all content moderation decisions, some folks were unhappy.
The unhappiest seemed to be the Biden Administration. As illustrated by the record (containing ~26,000 pieces of evidence), Biden officials reached out to the platforms on numerous occasions.

Sometimes to inquire about their moderation practices...
Sometimes to suggest policy updates, and, most of the time, to chastise and berate the companies for failing to do more.

But Biden's team weren't the only ones unhappy w/ Facebook's decisions. Private individuals who felt wronged by actions taken against them also expressed discontent.
That brings us to the present case. Plaintiffs are individuals and States who claim that the government overstepped its bounds and unlawfully coerced Facebook (and others) to censor Americans.

Strikingly, the 5th Circuit upheld a broad injunction against the government.
The injunction prohibited the Biden Administration from engaging with platforms about content moderation, shuttering not just channels of communication but crucial information streams that platforms rely upon to inform their policies and moderation decisions.
SCOTUS was tasked with deciding whether the govt's conduct rose to the level of unlawful "jawboning" (a term we use to describe government coercion of private publishers) that warrants such a broad and severe remedy.

It implicates the rights of Americans and the platforms.
In a 6-3 majority opinion authored by Justice Barrett, the Court reached the right result: the state and individual plaintiffs do not have standing to bring their claims.

"We begin and end with standing." Image
The Court began with a nod to last year's Twitter v. Taamneh, stating that while billions of pieces of content are uploaded to these services, "not everything goes."

The Court recognizes that private platforms enjoy editorial discretion, the central issue in the NetChoice cases.
To answer the standing question, the Court considers whether any of the alleged content policies or subsequent removals could be traced to any single action by a Biden Administration official.

For each plaintiff, the answer was a resounding no.
And while the Court acknowledged that much of the communication from the Administration to Facebook could be described as "aggressive," there is a striking lack of evidence of any specific Facebook decision deriving from government coercion -- a burden left to the Plaintiffs.
Displaying a refreshingly robust understanding of the practical aspects of content moderation, the Court notes throughout that while the Government clamored on in emails, press releases, and addresses, it was always the platforms' decision to implement and enforce their policies.
Now, practically speaking (and as the dissent notes), perhaps there's some truth to the allegation that some of Facebook's decisions and policy updates were influenced by many outside forces, including the government.

It would be naive to suggest otherwise.
But the Court, again w/ a striking display of insight into digital editorial decision-making, notes that the extensive evidence fails to connect any specific policy or action to a *specific government actor.*

That's because content moderation is inherently multi-stakeholder.
In fact, for one of the Plaintiffs' alleged harms, the Court notes that a private organization (the Election Integrity Partnership), not CISA, alerted Twitter to the violative content.

This seemingly small fact is critical when it comes to traceability.
Now, some folks (including the dissent) claim that this means the Court created an impossible bar for future jawboning cases.

I reject that and remind folks that this is a case involving *online* publishers with issues that bring nuances foreign to offline analogs.
The Court cites cases involving census data or bookstores, neither of which deal with the kinds of intricate questions that are inherent to online content moderation.

There's an ocean's difference between bringing police to a bookstore and sending an email to Facebook.
The Court's decision in no way implies a new, heightened standing requirement for proving government coercion.

It states that if you're going to claim that the government is acting as regulator and content moderator, then you'll have to illustrate that link.
And more importantly, that circumstantial evidence of government coercion, at least for online content moderation cases, will not be enough to get Plaintiffs through the door.

Again, this is the Court simply recognizing that the Internet rightfully demands unique treatment.
That conclusion is important because otherwise, the government could realize its censorship agenda by merely speaking into the ether.

Any plaintiff could take a complicated moderation decision and attribute it to some lawmaker's tweet or email that's loosely related.
And importantly, the decision also leaves open the door for the Plaintiffs, in this case and in the future, to provide evidence where a specific government actor complains about a specific piece of content and coerces the private platform to remove that content.
It's just that in this case, that evidence was lacking.

And that's particularly striking because there was SO MUCH EVIDENCE to begin with. If it were there, it would have come up long before we got to this point.
The jawboning issue completely aside, I was quite impressed with the way in which the majority discussed content moderation (which gives me hope for the NetChoice cases). What follows are some of my favorite examples:

Throughout, we see the Court call out independent judgment.
The Court illustrates the many different ways in which a website engages in editorial discretion -- a key part of what distinguishes a common carrier from a private publisher.

(I was surprised they didn't cite @ericgoldman's remedies paper here!) papers.ssrn.com/sol3/papers.cf…
The Court recognizes that platforms were engaging in content moderation long before this even became an issue.
Importantly, the Court also recognizes that these companies are responsible for their own policies, regardless of what's being said or discussed around them.
And this quote about self-censorship, too, is a brilliant observation by the Court.

Here, the Court is saying that another reason traceability is so suspect in this case is the very existence of the platforms' own policies.
Or to state it differently, people self-censor all the time online because these platforms are clear about what is acceptable and what is not on their services.

That is a key part of the online publication ecosystem.
There's a lot more I could say, but I'll leave it with this.

Justice Alito's dissent made clear that it is abhorrent for the government to interfere with private publication decisions. With that, I hope he and his fellow dissenters carry that energy to the next case.

Because angry emails and tweets are one thing...but enacting laws that prohibit online publishers from creating, implementing, and enforcing their editorial guidelines (as both Florida and Texas did) is precisely the kind of censorship that Alito and pals should most detest.