Reading the new big tech censorship bill from @HawleyMO right now

It's straight fire
First - who does it cover? Well - three metrics.

1) 30 million active users in the United States;
2) 300 million active users worldwide; OR
3) $500 million in global annual revenue.

Simple, right? This captures FB/Twitter/Insta/Google while avoiding startups.
I'll continue this thread in about two hours
BACK on @HawleyMO's bill and why it would likely solve the problem.

Here's the key - it makes BOTH of Section 230's liability waivers contingent on a "covered company" getting an "immunity certification" from the FTC.

See where it says "paragraphs (1) and (2) shall not apply"?
Paragraph 1 is the publisher/platform distinction, affirming that platforms are not the "publisher" of third-party content

In "The Section 230 Illusion," @RonColeman and I argued that tweaking that wasn't enough, because defamation lawsuits are nearly impossible to win.
But @HawleyMO's masterstroke is putting the liability waiver under paragraph 2 in play.

That's the one that protects the platforms' ability to remove "obscene" and "objectionable" content in good faith.

They need that one, folks. Otherwise moderation is a legal nightmare.
The lobbyists for FB/Twitter/Google know what's at stake.

They'll have to worry about content removal CONSTANTLY if they lose the protection of paragraph (2).

That's the crown jewel - not the "publisher/platform" distinction. Which is precisely what @RonColeman and I argued.
Now, these lobbyists are exaggerating the KKK problem - there is an exception for "business necessity" which could likely be read to allow banning literal Nazis and unlawful speech.
Here's the tricky part. The bill uses the FTC - which means regulators.

To get the "immunity certification," a company must prove to the FTC by "clear and convincing evidence" that it does not "moderate information...in a politically biased manner."
The bill defines "politically biased moderation" in some detail:

1) an "intent" standard - is the moderation designed to hurt a political party/candidate/viewpoint

2) a "results" standard - does the moderation disproportionately restrict access to information about the same
Specific regulations are left to the FTC - another potential place where the bill could falter. (But if you're using regulators, that's a risk you have to take.)
One benefit of the regulatory approach, though: the FTC can supervise *all* forms of moderation, including algorithmic moderation

And @HawleyMO's bill makes sure that's within the FTC's purview
Overall? @HawleyMO's bill will SCARE THE DAYLIGHTS out of FB, Google, and Twitter

Their crown jewel - the liability carveout under § 230(c)(2) protecting good faith moderation decisions - is fully in play

Will it pass? Who knows

But it's one hell of a shot across the bow

FIN