It would surprise me if @RLpmg wasn't doing this because they're engaged in some questionable practices.
Oh @pslohmann & @rlpmg, you thought you could scrub this, didn't you? Too bad the Internet is forever, and it's also...as you kindly pointed out...right there on your website, which has been archived just in case you try to weasel out of it: web.archive.org/web/2021062113…
Guess we can add "tweeting" to the list of things he doesn't do to lawyers
Florida desperately wants to change the conversation to #Section230 instead of the First Amendment, because that's the conversation they've always wanted this to be about; it's the political hot button they want to feverishly mash.
3/ So they frontloaded the 230 discussion.
But they get off to a bad start by claiming that 230 was prompted only by Stratton Oakmont, the decision that held Prodigy liable for user content because it engaged in *some* content moderation.
Much agitation against "big tech" is misguided & First Amendmently problematic (on both sides), but I do share two concerns:
1) Giving a govt agency regulatory power over platforms is a bad, bad idea
2) Govt communication with platforms re: what should be banned is problematic.
Damnit give me that edit button.
Point blank: the government should not be advising social media platforms about what content they should moderate. Platforms should not be asking government. And if asked, the government should not answer (haha like the government has ever missed an opportunity to exert its will)
The Supreme Court pretty recently expressed its unwillingness to expand the state action doctrine in Halleck.
And Paul Domer was a student who wrote a law review article; he's not an expert. Marsh is inapt, and again, SCOTUS has been clear that it has no interest in expanding it.
1) No, Section 230 wasn't originally designed just to let websites remove pornography. Porn was the target of the rest of the CDA, which was held unconstitutional. 230 was intended to make it easier for sites to decide what kind of place they wanted to be.
2) There's no "serious argument" that Section 230 only applies to "obscene, violent, or equally valueless content." At all. And "equally valueless" is a phrase entirely without meaning or legal import. The point is that sites can decide for themselves what content to allow.
1/ Today the Texas House of Representatives votes on SB 12, a half-baked and unconstitutional "social media censorship" bill introduced by @SenBryanHughes after a similar bill failed in 2019.
This bill is no better than the last, and the House should vote it down.
2/ The bill would forbid platforms from removing content / banning users based on viewpoint (even viewpoints expressed *not* on the platform) and allow aggrieved parties to seek a court order (backed by mandatory contempt findings for non-compliance) to reinstate the user/content
3/ Not for nothing, the whole premise of the bill is flawed: there is vanishingly little support for the claim that platforms are removing content for ideological reasons as opposed to violations of platforms' rules, as this NYU Study found: static1.squarespace.com/static/5b6df95…
I am personally excited for the option I can select to make sure I see posts from Shirley Phelps-Roper that provide an opposing point of view and make sure that I am not stuck in an echo chamber and failing to properly think about whether or not God does, in fact, hate fags.
Thanks, Martha Minow.
I'm sure that QAnon folks will love the "show me things that contradict the irrational beliefs I have adopted contrary to observable reality and facts" option, or that Israel supporters will want to make sure they see the "Israel is murderous apartheid" takes.