I'm not sure what to make of how much support Justice Thomas's reading of 230 has among the other eight Justices, particularly because they've denied cert in a few high-profile 230 cases recently. But here is the fundamental flaw with the reasoning behind his interpretation.
He assumes that 230(c)(2), which protects platforms' blocking of objectionable content, might be unconstitutional because it would override a hypothetical state law requiring these platforms to carry that content. The first big problem with that is the assumption that such a law would survive a 1A challenge in the first place.
I have not seen any authority that would allow such a requirement to stand. The cases cited involve military recruiting on campuses (Rumsfeld v. FAIR) and must-carry rules for cable TV (Turner). The far more relevant case is Miami Herald v. Tornillo, which struck down a state law compelling newspapers to publish replies.
And the bigger issue is Reno v. ACLU, which held that speech on the Internet receives the full protection of the 1A.
So I think it's unlikely that such a state law would pass 1A muster. But even if it somehow did, the next question is whether (c)(2) violates the First Amendment by preempting those laws. I don't think the cases (involving union membership and cable TV) support such a conclusion.
But *even if* (c)(2) somehow violated the 1A by preempting these state laws, that does not mean that (c)(1) is also unconstitutional.
(c)(1) is the far more important part of 230 and is relied upon in the vast majority of 230 cases. As the Ninth Circuit has stated, (c)(1) and (c)(2) are two different protections.
Chris Cox, one of 230's two co-authors, also has spoken extensively about the two separate protections of 230, how they are distinct, and why they drafted each.
So in short, I think that it's very unlikely that a state must-carry law for social media would survive 1A scrutiny. Even if it did, it's unlikely that 230(c)(2) would be found to violate 1A. And even in that case, (c)(1) would not be struck down.
Of course, this analysis relies on existing interpretations of the law. We know that at least one SCOTUS justice believes otherwise; I don't know whether four others do as well.
I do think that Thomas's statement increases the chances that at least two judges on a randomly chosen circuit court panel will rule in favor of must-carry rules for social media platforms.


More from @jkosseff

7 Apr
I'm pretty certain that we'll have a lot more hearings about Section 230 and content moderation. I think we've heard enough from Dorsey, Pichai, and Zuckerberg. These are just some of the people I'd like to hear from in future hearings to build an informed record.
1. Advocates for less aggressive moderation: I'd like to know when, if at all, it would be permissible to moderate. I've encountered some who say platforms should get out of the moderation business entirely, but most say there should be *some* moderation, such as for illegal content.
Based on my conversations, there is pretty wide variation within this group, such as over whether platforms should have discretion to remove legal but harmful speech like hate speech, and if so, what the decision mechanisms should be.
6 Apr
One of the interesting questions posed in light of Justice Thomas's concurrence is whether a holding that Section 230(c)(2) is unconstitutional (and I don't think it is) would render Section 230(c)(1) unconstitutional.
(c)(1) is the 26 words and by far the more commonly cited provision of 230. (c)(2) is relied upon less frequently, and protects good faith efforts to block objectionable material. (I think it's constitutional because the 1A protects such moderation, though others disagree).
For the sake of argument, let's say that (c)(2) is unconstitutional. Does that render (c)(1) unconstitutional? I think that hinges in part on whether you look at (c)(1) and (c)(2) as inextricably linked. And that's where the legislative history is messy.
6 Apr
I've seen some profs recently tweet examples about their students' bad behavior/etiquette. We're in a pandemic, everyone is stressed out, and I don't see the value in publicly shaming students. Even when the profs don't use identifying info, the subject of the tweet may see it.
If there's any time to cut students a break, it's this year.
I'm all for proper etiquette but I think it's something that's best addressed in a personal conversation with the student and not on a public forum.
When I was a student, I certainly had my share of lapses in decorum, and there wasn't even a pandemic. I can't imagine how I'd feel if I saw a professor mocking me on social media (which fortunately did not exist back then).
6 Apr
This analysis is spot-on. Data breach notification laws are not going to solve our cybersecurity problems. It is frustrating to see states continue to modestly amend their notice laws, as if it will change anything. We need effective security regulation.
I understand why we need to rely on states, as federal proposals for serious security regulation have stalled for more than a decade. It's unfortunate, as the state efforts are not terribly effective, but they are all we have for now.
And we need to stop conflating security regulation with privacy regulation.
5 Apr
Must-carry obligations for social media sound reasonable until you look at the sort of content that platforms block at a massive scale.
The typical response is, "we'll just exclude it from the must-carry obligation." OK, so we'll exclude "illegal content." What if it's a close call? With a must-carry obligation, the platform will err on the side of leaving it up.
We probably don't want to prevent platforms from blocking spam -- which is a large portion of what their moderation systems routinely do. But do we want to penalize them if they inadvertently classify something as spam?
26 Mar
At yesterday's hearing, some suggested that Congress intended Section 230 to cover only "neutral platforms." Not true at all. Here's a thread that sets forth the facts. (I had promised to write a thread for every mention of "neutrality," but I have stuff to do).
Congress passed 230 in 1996 to correct a perverse incentive in the common law/1A rules for early online services, which suggested that platforms reduce their liability by not moderating. Congress did *not* want that outcome.
A judge dismissed a 1991 defamation lawsuit against CompuServe based on a third-party newsletter because the judge concluded that CompuServe did not exercise sufficient "editorial control" and therefore faced the limited liability standard of a newsstand.
