Much agitation against "big tech" is misguided & First Amendmently problematic (on both sides), but I do share two concerns:
1) Giving a govt agency regulatory power over platforms is a bad, bad idea.
2) Govt communication with platforms re: what should be banned is problematic.
Dammit, give me that edit button.
Point blank: the government should not be advising social media platforms about what content they should moderate. Platforms should not be asking the government. And if asked, the government should not answer (haha, like the government has ever missed an opportunity to exert its will).
If a plaintiff could plausibly allege that the government actually leaned on a platform to moderate certain, specific content, you won't hear me crying about asking the courts to decide whether that's sufficient to state a claim for violating First Amendment rights.
2/ As I said yesterday, this case is really about the First Amendment. Florida tried to frontload Section 230, appealing to judicial restraint. But even if the court ruled on Section 230 preemption in Florida's favor, it would then still have to address the First Amendment issue.
3/ On the other hand, if the court rules on the First Amendment issue favorably to the law's challengers, it doesn't need to decide how expansively or restrictively to read Section 230, thus avoiding a landmine. The First Amendment *is* the issue, and should be the prime focus.
Florida desperately wants to change the conversation to #Section230 instead of the First Amendment, because that's the conversation they've always wanted this to be about; it's the political hot button they want to feverishly mash.
3/ So they frontloaded the 230 discussion.
But they get off to a bad start by claiming that 230 was prompted only by Stratton Oakmont v. Prodigy, which held Prodigy liable for user content because it engaged in *some* content moderation.
The Supreme Court pretty recently expressed its unwillingness to expand the state action doctrine in Halleck.
And Paul Domer was a student who wrote a law review article; he's not an expert. Marsh is inapt, and again, SCOTUS has been clear that it has no interest in expanding it.
It would surprise me if @RLpmg weren't doing this because they're engaged in some questionable practices.
Oh @pslohmann & @rlpmg, you thought you could scrub this, didn't you? Too bad the Internet is forever, and it's also...as you kindly pointed out...right there on your website, which has been archived just in case you try to weasel out of it: web.archive.org/web/2021062113…
1) No, Section 230 wasn't originally designed just to let websites remove pornography. Porn was the target of the rest of the CDA, which was held unconstitutional. 230 was intended to make it easier for sites to decide what kind of place they wanted to be.
2) There's no "serious argument" that Section 230 only applies to "obscene, violent, or equally valueless content." At all. And "equally valueless" is a phrase entirely without meaning or legal import. The point is that sites can decide for themselves what content to allow.
1/ Today the Texas House of Representatives votes on SB 12, a half-baked and unconstitutional "social media censorship" bill introduced by @SenBryanHughes after a similar bill failed in 2019.
This bill is no better than the last, and the House should vote it down.
2/ The bill would forbid platforms from removing content or banning users based on viewpoint (even viewpoints expressed *not* on the platform) and allow aggrieved parties to seek a court order (backed by mandatory contempt findings for non-compliance) to reinstate the user or content.
3/ Not for nothing, the whole premise of the bill is flawed: there is vanishingly little support for the claim that platforms are removing content for ideological reasons rather than for violations of platforms' rules, as this NYU study found: static1.squarespace.com/static/5b6df95…