At yesterday's hearing, some suggested that Congress intended Section 230 to cover only "neutral platforms." Not true at all. Here's a thread that sets forth the facts. (I had promised to write a thread for every mention of "neutrality," but I have stuff to do).
Congress passed 230 in 1996 to correct a perverse incentive in the common law/First Amendment rules for early online services, under which platforms could reduce their liability by *not* moderating. Congress did *not* want that outcome.
In 1991, a judge dismissed a defamation lawsuit against CompuServe over a third-party newsletter (Cubby v. CompuServe) because the judge concluded that CompuServe did not exercise sufficient "editorial control" and therefore faced only the limited liability standard of a newsstand.
In a 1995 case, Stratton Oakmont v. Prodigy, a judge refused to apply the limited liability standard to Prodigy in a suit arising from a user post. Prodigy was found to be just as liable as the poster. The reason? Prodigy had more extensive user content policies and moderation.
This incentive structure did not make sense to many members of Congress. They wanted to encourage online services to provide tools to users to block harmful content, and to engage in moderation themselves. They also did not want to solve this problem by regulating the Internet.
So the House overwhelmingly added what would become known as Section 230 to the massive overhaul of the telecom laws. It contains these 26 words, which prevent online services from being treated as publishers, as Prodigy was in the Stratton Oakmont suit: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
They also included an additional protection for moderating objectionable content: Section 230(c)(2) shields good-faith actions to restrict access to material that the provider or user considers obscene, excessively violent, harassing, or otherwise objectionable.
Nothing in Section 230 is conditioned on platforms being "neutral." The point of 230 is to give the companies space to *not* be neutral and get rid of user content that they believe is harmful.
But what about congressional intent? That is the fallback for many people who push falsehoods about 230. Fortunately, we have the transcript of the Aug. 4, 1995 floor debate on 230. And it shows that there was *no* intent to require neutrality. Quite the opposite.
Chris Cox, one of 230's two authors, discussed the problems with the Stratton Oakmont case in detail and then said this.
Joe Barton of Texas said 230 was a far better proposal than the Senate's Communications Decency Act, which criminalized indecent online content (and was soon struck down by the Supreme Court).
Rick White, who represented the Washington state district that includes Microsoft's headquarters, said this.
Bob Goodlatte said this.
And in the conference report for the final telecom bill that included 230, the conferees wrote this, leaving no doubt that 230 was intended to eliminate the perverse incentive of Stratton Oakmont v. Prodigy.
There are plenty of valid criticisms of 230 and of how platforms have used its protections. The 230 discussion is long overdue. But it has to be grounded in reality.
The "230 requires neutrality" line of argument is absolutely, completely false. People who suggest otherwise either have not looked at its history, or they have done so but are willfully misrepresenting it.
A separate question is whether Congress *should* amend Section 230 to condition its protections on neutrality. Of course Congress could do that. But in addition to raising serious constitutional problems, a "neutrality" requirement would render the Internet unusable.

