1. The 230(c)(1) immunity overrides the (c)(2) good-faith moderation provision;
2. 230 operates like a "super immunity" reaching beyond publication torts (e.g., defamation).
Except that 230(c)(2) has always served as a back-fill provision for instances where (c)(1) fails (e.g., Barnes, Fyk).
And 230 was always intended to apply beyond defamation, as signaled by the statutory exceptions in 230(e).
That content moderation line between 230(c)(1) and (c)(2) is imaginary.
"a finding for Gonzalez would render Section 230 a dead letter." @SteveDelBianco
Steve rightfully pushes back on Justice Kagan's suggestion in Gonzalez that #Section230 was a "pre-algorithm" law.
Algorithms underpin the web. There is no such thing as a "pre-algorithm" era of the Internet. Hell, TCP / IP is an algorithm in itself...
Matt Wood suggests a line between rote distributor liability versus when a platform "knows" of the harm caused by 3P content.
But as we saw in Taamneh, such a test would render future 230 case law a convoluted, unnavigable mess -- a Plaintiff's paradise.
.@ma_franks reiterating Justice Kagan: "every industry has to deal with lawsuits."
Right -- but let's be concrete. We're talking specifically about the industry of providing access to speech and expression on the Internet.
Treating the Internet like the oil & gas, automotive, or pharma industries is, respectfully, absurd.
Alex Abdo -- I'm concerned about human rights advocates and government officials who need to blow the whistle but the platforms may be discouraged from hosting that speech. #Section230
Matt Wood -- we need to take a look at #Section230 when these platforms claim "tough luck", "nothing you can do" in the face of harm.
Strawman alert. Websites have immense incentives to do the right thing / mitigate harms. Not to mention, 230 isn't limitless.
.@ma_franks brings up jawboning as a reason to curb #Section230 and notes the status quo for websites is to do whatever will make them money.
Problematic take aside, yes -- we know that websites act non-neutrally. Editorial discretion always implies bias. That's 1A for you.
.@SteveDelBianco also points out that both sides of the aisle are excited to force websites to host their own speech and block their opponents. #Section230 empowers websites to keep the balance.
Remember: anything one party bakes into 230 can be weaponized by the other.
.@rmack: Wikimedia will be affected by a ruling for Gonzalez. Such a ruling would make it impossible for us to empower volunteers to self-govern their communities.
She notes the panel keeps referring to "the platforms," forgetting that 230 also protects users and smaller services.
.@SteveDelBianco agrees, noting that whenever Wikipedia suggests a link to another source, a Gonzalez holding could render it liable for those suggestions. Same for Yelp.
Same for any website that engages with user-created content.
.@BerinSzoka presses @ma_franks on her point that 230(c)(1) only applies to publication torts. Berin notes that if we're going to take a textualist approach to 230, why would Congress have added a separate provision of exceptions having nothing to do with publication torts?
Berin rightfully calls out Prof. MAF for making identical arguments that the Trump administration makes regarding jawboning. He's not wrong. #Section230
• • •
ICYMI Texas' latest compelled-birth bill (HB 2690) enables private claimants to target websites that aid / abet abortions.
This only further raises the stakes for Twitter v. Taamneh, a case that will consider whether Twitter aided / abetted terrorism under the ATA.
How could this play out?
If SCOTUS holds that Twitter--in merely providing access to its service and enforcing its community guidelines against terrorist content--aided / abetted terrorism, the same can be said for *any* website that happens to host abortion related content.
Think about Facebook groups, subreddits, discord servers, group chats, etc., dedicated to providing a safe space for discussions about abortion resources.
Getting ready to kick off the Future of Children's Online Privacy panel at #SOTN2023
Jane Horvath suggests that more states need to implement kids' privacy legislation.
Privacy for all is an important goal. But state-by-state solutions will only make the current convoluted patchwork problem worse.
If anything, we should be focused on getting to Yes on federal privacy.
Key point from Jamie Susskind -- conversations regarding online harm to kids cloud the federal privacy discourse, making it impossible to pass legislation. Those conversations are important but separate.
Next #Section230 SOTN panel starting with @joellthayer noting that FOSTA was important for taking down Backpage...
The DOJ took Backpage down before FOSTA was enacted. But details.
Yael Eisenstat (ADL): "where does Section 230 stop? where are the lines?"
Section 230(e) is a good starting place.
@MattPerault importantly reiterating those limits. #Section230 is not a defense to federal criminal prosecution. Congress has the tools to legislate in this area if it deems that necessary.
JCT kicks it off driving at the substantial assistance Q.
Hypo: JCT's friend is a mugger and JCT loans him a gun knowing that the friend *may* use it to commit a crime. Does he need to know more to qualify as aiding / abetting?
Petitioners note that the facts in Twitter's case are much more remote than JCT's example. Twitter doesn't have any reason to know or even infer the same of its users.
Most surprising for me was Justice Thomas. Right out the gate, he essentially questioned why this case was even being heard.
Which would be totally fair had he not been begging for a 230 case to opine on since 2019, but I digress.
Another surprise:
The Court seemed to appreciate that algorithms and content moderation are essential to the way the Internet functions today and that attempts to create imprecise legal and technological distinctions could have irreparable effects on the modern web.