1. This will not be a good hearing: 4 witnesses (3 of them lawyers) who want to fundamentally change #Section230 and only 1 to defend the law that made today's interactive Internet possible, who isn't a lawyer and so won't be able to debunk the misinformation about how 230 actually works
2. @SenateJudiciary failed to publish their written testimony in advance. SOP is to publish testimony 1-2 days before the hearing to foster a more informed discussion.
But, of course, that's not really the purpose of this hearing...
3. Blumenthal: we have bipartisan consensus on reining in Big Tech. We don't agree on everything but I want to thank Sen. Hawley for his leadership on this issue
Hawley be all like:
4. The time when the Internet could be regarded as a neutral or passive conduit has passed. Um:
- 230 was intended to encourage sites NOT to be neutral or passive
- Why is Blumenthal mouthing Josh Hawley's talking points?
5. Hawley: #Section230 has been completely rewritten by the courts at the behest of Big Tech
Um you mean Zeran v AOL (1997), a decision written by perhaps the most acclaimed conservative on any appeals court of the day?
7. But if 230(c)(1) really protected only against defamation & like claims, there would have been no need for 230(e). IP law, privacy law & federal criminal law are nothing like defamation law. That Congress excluded these means 230(c)(1) must be far broader than she claims
8. She claims the "Good Samaritan" heading means (c)(1) has to be interpreted not to protect sites that don't do enough to stop bad content--essentially, not against distributor liability. That's not how statutory interpretation works. Headings can be useful cues but only that...
9. It's easy to see why Congress put the "Good Samaritan" label on (c)(1). Stratton Oakmont v. Prodigy (1995) disincentivized websites from moderating content by treating them as publishers. (c)(1) overruled that, thus preventing the Moderator's Dilemma
10. Franks is making Justice Thomas's argument: (c)(1) protects only against publisher liability, not distributor liability. Google's Gonzalez brief debunks that argument succinctly.
11. Hany Farid: 230 should be interpreted to protect platforms exclusively for their *hosting* of 3rd party content
Dude doesn't realize he just made the Trump admin argument: that 230 (or at least (c)(1)) shouldn't protect content moderation (*removal* of content)
12. This is why we don't rely on non-lawyers to parse hard legal distinctions...
14. @SullivanISOC: The Internet is for people. The Internet enables everyone to talk to everyone. That interactivity can be a problem, but it also brings huge benefits. Congress intended #Section230 to protect that interactivity
15. Sullivan: Outright repeal of #Section230 would be a calamity. Ordinary features like retweeting would invite endless litigation. It would be bad to create special rules that only the richest companies could afford to meet.
16. Sullivan: Because #Section230 protects the entire Internet, including the ability of users to interact with each other, the law is a poor vehicle for addressing concerns about the Internet. Focus on competition policy, privacy law, etc.
17. Schnapper (who argued Gonzalez for the plaintiffs so badly that the Justices kept saying: "I don't understand") just opened his testimony by saying: "Section 260... I mean, Section 230..."
This guy is definitely one of Kagan's "nine greatest experts on the Internet!"
18. Schnapper: what we have learned since 1996, absolute immunity can breed absolute irresponsibility
230 doesn't immunize sites from:
- federal criminal law
- content they develop, even in part
- competition law
So, yeah, no
19. Check out Jess's livethread of this #Section230 hearing
20. Hawley: What would be the best way to amend #Section230?
Schnapper: I'd prefer not to toss out language as I sit here
So after arguing two cases before the Supreme Court, Schnapper still hasn't thought through what he'd change?
Maybe because it's not so easy?
21. Padilla introduces a letter signed by three dozen organizations across the ideological spectrum, including @TechFreedom, explaining that #Section230 protects not just big platforms, but Internet users—not business models but interactivity we all use daily static1.squarespace.com/static/5716817…
22. Franks: Internet services should be subject to the same rules as any other industry
But the Internet IS different. Billions of users interact daily. Each interaction could lead to a lawsuit. Without 230, America's uniquely expensive litigation system would kill the Internet.
23. Farid: Small platforms have small problems. We've seen in Europe that modest regulation doesn't kill small platforms.
He's talking about countries where plaintiffs don't sue because they have to pay defendants' costs if they lose, there are no class actions, etc., etc.?
24. The US is the most litigious society on earth. We have 15 times more lawyers per capita than Canada, for instance riponsociety.org/article/dont-r…
25. @ma_franks blithely asserts that changing or reinterpreting 230(c)(1) is nbd because (c)(2) protects content moderation. In fact, nearly all content moderation cases are resolved under (c)(1).
26. The Trump admin wanted to force sites to rely on (c)(2)(A) to make it easier to sue websites for moderating content "unfairly."
27. The difference between the two is key: if you have to prove that content was moderated "voluntarily in good faith," you'll never win a motion to dismiss.
28. Every suit will require expensive discovery into motive and courts to decide hard questions on motions for summary judgment or at trial.
@ma_franks doesn't seem to understand that she's making the same arguments the Trump Admin made for an Internet Fairness Doctrine
29. Klobuchar: The hypocrisy of tech companies agreeing to do in Europe things they told us would break the Internet was the final dagger forcing me to change #Section230
Total non-sequitur. EU requirements aren't remotely comparable to American law...
30. In Europe, it's easy for a regulator to mandate or ban something because everything is soft law enforced at the discretion of a regulator.
The same legal requirement filtered through America's litigation system would produce radically different results
31. In Europe, regulators have room to be pragmatic, to avoid interpretations that would break the Internet. In America, plaintiffs drive everything, especially attorneys general with political motives. And massive damages create huge risks, and thus overdeterrence.
32. A particularly low blow from @ma_franks: that users are so concerned about changes to #Section230 breaking the Internet just proves how addictive tech services are!
Yeah, so breaking the I̶n̶t̶e̶r̶n̶e̶t̶ Matrix is actually good, Neo.
33. Franks wants to amend 230 to replace "information" with "speech" and exclude sites acting with "deliberate indifference" to unlawful content. Then adds that a plaintiff should be able to sue if they can show a plausible connection between conduct and harm, which is different
34. All three ideas would have the same practical result: 230 might ultimately be a shield against liability in some cases, but it would effectively never be a shield against lawsuits. Huge numbers of lawsuits would proceed past a motion to dismiss.
35. For Franks, that's a pure win, but it would fundamentally change the Internet. Few, if any, sites would be able to handle the expense of fighting off lawsuits. They would have a perverse incentive to take down content they might get sued for--a Heckler's Veto
36. Conservatives might like the sound of the idea, but they'd find that they'd be "censored" far more often than they are today.
37. Sullivan: The problem isn't just that changing #Section230 "could break the Internet." It's not a binary. Changing the law could make it harder for users to interact with each other or make that functionality impossible.
38. Blumenthal: But what about KOSA? That law won't break the Internet. It just requires sites to let parents and teens opt out of the algorithm
39. Actually, KOSA would require age-verification, which the courts have repeatedly declared unconstitutional, as we & leading scholars explain: techfreedom.org/wp-content/upl…
40. Hawley: I worry about CSAM every day
Really? Then why do you keep lauding Parler, Gettr, Gab, Truth Social and other "free speech" sites that take ZERO proactive measures to block known images/videos of children being raped (unlike every Big Tech company)?
41. Hawley: #Section230 is a massive subsidy to Big Tech
Oh so that's why Trump relied on 230 to get a court to dismiss a lawsuit against him for retweeting someone else's content?
42. Hawley, Cruz and other MAGA nutters keep claiming that 230 is a subsidy to Big Tech. Nonsense. The law protects both providers and users of any interactive computer service, a term that includes blogs, podcast hosts, etc. Without it, most interactivity features wouldn't exist
43. 230(c)(1) has never blocked valid competition suits. If you're actually suing for a company's business practices, 230 doesn't protect them any more than the First Amendment would. (c)(2)(A) requires good faith, so courts have allowed competition suits...
44. Blumenthal abruptly adjourns the hearing with "Bipartisan unanimity that we need change"
He just pretends not to see that what Hawley and other Rs want is to break moderation of some of the worst kinds of speech online: hate speech, barely legal incitement to violence, etc.
1. I've disagreed with @gigibsohn about the biggest telecom issues for 15 years—but those issues aren't why her nomination foundered. Multiple Dem Senators feared supporting someone who had called out Fox for what it was in the Trump years: "state-sponsored propaganda"
2. In 2020, Senate Republicans summoned Twitter, Facebook & Google CEOs for a hearing on their alleged "bias" against conservatives. The Dem chair asked why broadcasters weren't there. Gigi tweeted this:
3. 🐘s claim Gigi wants to "censor" Fox News. Nonsense. She merely objected to🐘s weaponizing the hearing against new media companies they hate for nakedly political reasons, while saying nothing about old media that spew MAGA propaganda
If that were all 230 did, why did Congress spell out things 230 doesn't affect that are completely unrelated to defamation and the like?
.@ma_franks relies entirely on the "Good Samaritan" heading for 230(c) to argue that (c)(1) must require good faith efforts to block content. If Congress had intended to make (c)(1) immunity contingent on good faith, it would have said so, as it did in (c)(2)(A)
The court refused to strike down the TX law as facially unconstitutional because of overbreadth, suggesting that it would have to be challenged as to specific applications
Just like Florida's 1903 must-carry mandate was unconstitutional as applied to all newspapers all the time?
lol no
The Packingham Court referred to tech companies as "town squares" in a purely colloquial sense. The case involved a state law compelling tech companies not to host sex offenders, so the Court didn't say anything about whether they were public fora absent such compulsion
1/ Today, the #FTC will vote to issue a staff report about last year's workshop on Dark Patterns—at which Prof. @harrybr, who helped coin the term, warned that it was "vague." Let's hope the report gets a lot more specific about what kinds of cases the FTC will bring
2/ The concept of “darkness” implies that consumers are necessarily unaware of what is happening. That kind of opacity may be problematic, but, by itself, it is insufficient under Section 5(n) of the FTC Act.
3/ An unfair practice must involve harm that is not “reasonably avoidable by consumers themselves.” In other words, it is the harm, not the practice, that must be obscure to consumers.
.@SenateJudiciary is marking up #EARNITAct, which claims to crack down on child sexual abuse material but will really jeopardize prosecutions. Forcing tech firms not to use strong encryption & to monitor users makes them state actors who need a warrant 🧵
#EARNITAct's sponsors say they've fixed the bill. They haven't. Making the "best practices" "voluntary" doesn't help. The 4th Amd./privacy problem has always come from exposing tech companies to such vast liability that they *must* monitor what users say & abandon encryption
#EARNITAct was changed in 2020 to "fix" the liability it enables under federal law (by tying it to "actual knowledge"), but it then does exactly the same thing through the back door: enabling states to enforce criminal & civil laws that turn on mere recklessness or negligence