Ari Cohn
22 Jun, 36 tweets
1/ Florida filed its brief opposing the motion for a preliminary injunction against the state's new social media law.

I have thoughts. In this part I'll talk about what they got wrong about #Section230. In a second part, the First Amendment argument.

2/ The brief: courtlistener.com/docket/5994220…

Florida desperately wants to change the conversation to #Section230 instead of the First Amendment, because that's the conversation they've always wanted this to be about; it's the political hot button they want to feverishly mash.
3/ So they frontloaded the 230 discussion.

But they get off to a bad start by claiming that 230 was prompted only by Stratton Oakmont v. Prodigy, which held Prodigy liable for user content because it engaged in *some* content moderation.
4/ But as many do, they omit the *other* case that 230 was aimed at: Cubby, Inc. v. CompuServe: casetext.com/case/cubby-inc…
5/ Under Cubby, websites could be held liable if they knew or should have known about the nature of the content in question, which created a heavy incentive not to monitor or moderate at all.
6/ That's important because Florida is arguing Congress didn't intend to preempt laws regulating how content moderation decisions are made. But the entire *point* of Section 230 was to remove disincentives for websites to make these decisions for themselves, even imperfectly.
7/ Subjecting platforms to government regulation and enforcement of "consistency" places them right back in the moderator's dilemma: moderate at all, and you'll find plenty of people alleging that it's done inconsistently (because it's *impossible* to *do* consistently).
8/ So better, then, to just not moderate at all (which is exactly what they want) rather than risk reprisal from state regulators over their subjective feelings about your consistency and fairness. That's *exactly* what Congress *didn't* want.
9/ Florida argues that 230(c)(1) is inapplicable because it only applies to decisions to leave up content, not those to take down content. But that's wrong, and their citation to a footnote in a case that wasn't decided on 230 grounds doesn't even support the stated proposition.
10/ That footnote in Almeida, a case in which, again, the 11th Circuit decided it did not need to reach the 230 question, said that most courts have held that (c)(1) protects against liability for refraining from moderating. Not that that is "at most" what (c)(1) does. casetext.com/case/almeida-v…
11/ To be sure, saying "some courts" is a clever way to try to minimize "damn near all of them." But that's not going to fool the court. At the end of the day, 230(c)(1) applies to the editorial discretion to post or remove content.
12/ The brief goes off on what amounts to a verbatim recital of Justice Thomas's "without the benefit of briefing" statement on the denial of cert in Malwarebytes, repeating all of the same factually and legally incorrect arguments that have been refuted ad nauseam.
13/ Florida also argues that even if 230(c)(1) were relevant (and it is), SB 7072 is consistent with the provision because it doesn't mandate certain content standards; it just regulates *how* they are enforced.
14/ But that's nothing more than obfuscation. The "how" here *is* effectively the content standards. To make content moderation decisions, platforms necessarily have to analyze content subjectively and make the call about what is allowed, and those enforcement criteria don't line up 1-to-1 with any separate content standard.
15/ After going on for many pages under the incorrect belief that 230(c)(2)(A) is the correct focus, Florida tries the old "230 only protects against damages, not injunctive relief" chestnut to avoid preemption. That, too, is wrong.
16/ Injunctions requiring platforms to undo their content moderation decisions are not just mildly inconsistent with Section 230—they eviscerate its core purpose of removing obstacles that would discourage websites from developing and enforcing content rules.
17/ Ok so I lied: there will be *some* First Amendment discussion in this part, because Florida argues that Section 230 might itself be unconstitutional.
18/ Plaintiffs will no doubt say they are private actors not bound by the First Amendment, because it is true. But Section 230 doesn't create platforms' ability to "censor however they like." They have a First Amendment right to do that (as will be discussed in Part 2).
19/ Skinner v. Railway Labor Executives' Association might seem applicable at first blush, but a closer read reveals that it doesn't actually support Florida's argument that Section 230 makes social media platforms state actors. Let's take a look: casetext.com/case/skinner-v…
20/ Skinner involved a challenge to federal regulations that, in part, allowed railroads to perform breath or urine tests on employees in certain circumstances.
21/ Florida cites the "removed all legal barriers" language to draw an analogy to Section 230.

And indeed, the regulations preempted all state laws that would impair such testing. But those regs also did much more than Florida wants to tell you.
22/ They preempted collective bargaining, gave the government access to samples and test results, barred a railroad from divesting itself of the granted authority (as "inconsistent with its duty to promote the public safety"), and mandated removal of those who refused testing.
23/ That high level of government coercion, combined with rather explicit indications that railroads should be testing (and would be answerable to regulators if they did not and an accident occurred), is why the Supreme Court found state action.
24/ In contrast, Section 230 has none of the coerciveness that the FRA regulations had. Section 230 does not mandate or even strongly recommend *any* content moderation, let alone specific results. It merely facilitates websites' ability to make the decision for themselves.
25/ The Court's subsequent "permissive law" jurisprudence makes clear that Florida's argument fails.
26/ In American Manufacturers, the Court rejected the claim that a law allowing insurers to withhold workers' comp payments pending review (where doing so had previously been prohibited) created state action.
27/ The Court held that a merely permissive law wasn't sufficient "encouragement" to make the insurers' decisions state action; they were simply given an option, which they were free to take or not at their discretion. casetext.com/case/american-…
28/ So too with Section 230: websites have the option to moderate content and to decide how to do so, at their discretion, and the govt has decided to step out of the way and not provide a legal remedy. There is simply no government coercion whatsoever.
29/ And just this past January, the Northern District of California rejected Florida's exact argument, noting that Section 230 doesn't require anything of anyone, or give the government any stake whatsoever. casetext.com/case/divino-gr…?
30/ Surely, as I've said before, if the government actively leaned on a platform to remove certain speech, there could be a plausible argument that the moderation constituted state action. But the argument that Section 230's broad application creates state action is not credible.
31/ Perhaps sensing that argument's weakness, Florida rips off a Volokh article (really all its arguments here are rather unoriginal, pulled from various op-eds and musings) and cites to Railway Employees' Dept. v. Hanson: casetext.com/case/railway-e…?
32/ Hanson found state action in a union shop agreement, permissively authorized by Congress in a provision that preempted contrary state law, because while the agreement was between private parties, the law was "the source of the power and authority by which any private rights are lost or sacrificed."
33/ I think that argument is unconvincing for two reasons. First, the Hanson Court said that the union shop agreements necessarily bore the imprimatur of federal law. That rationale is less compelling when dealing with something more abstract like content moderation.
34/ Second, and more concretely, there is the matter of "the source...by which private rights are lost or sacrificed."
35/ If content moderation is protected by the First Amendment (and it is), the "private rights" that Florida claims Section 230 provides the power to curtail are not its to dole out, and Hanson is inapt.
36/ And that, friends, is why this is *really* all about the First Amendment, which I'll discuss in Part 2.

More from @AriCohn

23 Jun
1/ Yesterday I explained how Florida got #Section230 wrong in its opposition to the motion for a preliminary injunction against its social media law.

As promised, here in Part 2, I will tell you why they got the First Amendment wrong too.

2/ As I said yesterday, this case is really about the First Amendment. Florida tried to frontload Section 230, appealing to judicial restraint. But even if the court ruled on Section 230 preemption in Florida's favor, it would then still have to address the First Amendment issue.
3/ On the other hand, if the court rules on the First Amendment issue favorably to the law's challengers, it doesn't need to decide how expansively or restrictively to read Section 230, thus avoiding a landmine. The First Amendment *is* the issue, and should be the prime focus.
22 Jun
Much agitation against "big tech" is misguided & First Amendmently problematic (on both sides), but I do share two concerns:

1) Giving a govt agency regulatory power over platforms is a bad, bad idea

2) Govt communication with platforms re: what should be banned is problematic.
Damnit, give me that edit button.
Point blank: the government should not be advising social media platforms about what content they should moderate. Platforms should not be asking government. And if asked, the government should not answer (haha like the government has ever missed an opportunity to exert its will)
21 Jun
The Supreme Court pretty recently expressed its unwillingness to expand the state action doctrine in Halleck.

And Paul Domer was a student who wrote a law review article; he's not an expert. Marsh is inapt, and again, SCOTUS has been clear that it has no interest in expanding it.
Repeat after me: "traditionally and exclusively performed by the state"
Holy cow, this part of Paul Domer's "expert" law review article:

1) Actually, they do

2) Packingham suggests *nothing of the sort*
21 Jun
Maybe attorneys should refuse to represent you. (And lord knows you probably need a few, given how notoriously shitty property management companies are.)
It would surprise me if @RLpmg weren't doing this because they're engaged in some questionable practices.
Oh @pslohmann & @rlpmg, you thought you could scrub this, didn't you? Too bad the Internet is forever, and it's also...as you kindly pointed out...right there on your website, which has been archived just in case you try to weasel out of it: web.archive.org/web/2021062113…
20 Jun
How to be wrong about Section 230 in one paragraph.
To explain, I suppose:

1) No, Section 230 wasn't originally designed just to let websites remove pornography. Porn was the target of the rest of the CDA, which was held unconstitutional. 230 was intended to make it easier for sites to decide what kind of place they wanted to be.
2) There's no "serious argument" that Section 230 only applies to "obscene, violent, or equally valueless content." At all. And "equally valueless" is a phrase entirely without meaning or legal import. The point is that sites can decide for themselves what content to allow.
24 May
1/ Today the Texas House of Representatives votes on SB 12, a half-baked and unconstitutional "social media censorship" bill introduced by @SenBryanHughes after a similar bill failed in 2019.

This bill is no better than the last, and the House should vote it down.

#txlege
2/ The bill would bar platforms from removing content or banning users based on viewpoint (even viewpoints expressed *not* on the platform) and allow aggrieved parties to seek a court order (backed by mandatory contempt findings for non-compliance) to reinstate the user or content.
3/ Not for nothing, the whole premise of the bill is flawed: there is vanishingly little support for the claim that platforms are removing content for ideological reasons as opposed to violations of platforms' rules, as this NYU study found: static1.squarespace.com/static/5b6df95…
