Jeff Kosseff
Asst. Prof, Cybersecurity Law, U.S. Naval Academy. Righteous philosopher-academic. Author of The 26 Words That Created the Internet. Tweets don't represent DOD.
25 Apr
Not sure what started all of this talk about preparing for 1L in the summer before law school. I'd be all for it if I thought it could actually help. The big problem is that you don't know who your profs will be, and each prof has a unique understanding of the law and how to test it.
One of my 1L profs was a great lecturer but his understanding of the law was ... a bit dated. My study group puzzled over the huge gaps between what was in the commercial outlines and our notes from class.
Those who took the exam based on the prof’s less complete picture of the legal rules did far better than those who incorporated the more complete and up-to-date legal rules.
7 Apr
I'm pretty certain that we'll have a lot more hearings about Section 230 and content moderation. I think we've heard enough from Dorsey, Pichai, and Zuckerberg. These are just some of the people I'd like to hear from in future hearings to build an informed record.
1. Advocates for less aggressive moderation: I'd like to know when, if at all, they think it would be permissible to moderate. I've encountered some who say platforms should get out of the moderation business, but most say there should be *some* moderation, like for illegal content.
Based on my conversations, there is pretty wide variation within this group, such as whether platforms should have discretion to remove legal but harmful speech like hate speech. If so, what are the decision mechanisms?
6 Apr
One of the interesting questions posed in light of Justice Thomas's concurrence is whether a holding that Section 230(c)(2) is unconstitutional (and I don't think it is) would render Section 230(c)(1) unconstitutional.
(c)(1) is the 26 words and by far the more commonly cited provision of 230. (c)(2) is relied upon less frequently, and protects good faith efforts to block objectionable material. (I think it's constitutional because the 1A protects such moderation, though others disagree).
For the sake of argument, let's say that (c)(2) is unconstitutional. Does that render (c)(1) unconstitutional? I think that hinges in part on whether you look at (c)(1) and (c)(2) as inextricably linked. And that's where the legislative history is messy.
6 Apr
I've seen some profs recently tweet examples of their students' bad behavior/etiquette. We're in a pandemic, everyone is stressed out, and I don't see the value in publicly shaming students. Even when the profs don't use identifying info, the subject of the tweet may see it.
If there's any time to cut students a break, it's this year.
I'm all for proper etiquette, but I think it's something that's best addressed in a personal conversation with the student, not in a public forum.
When I was a student, I certainly had my share of lapses in decorum, and there wasn't even a pandemic. I can't imagine how I'd feel if I saw a professor mocking me on social media (which fortunately did not exist back then).
6 Apr
This analysis is spot-on. Data breach notification laws are not going to solve our cybersecurity problems. It is frustrating to see states continue to modestly amend their notice laws, as if it will change anything. We need effective security regulation.
I understand why we need to rely on states, as federal proposals for serious security regulation have stalled for more than a decade. It's unfortunate, as the state efforts are not terribly effective, but they are all we have for now.
And we need to stop conflating security regulation with privacy regulation.
5 Apr
Must-carry obligations for social media sound reasonable until you look at the sort of content that platforms block at a massive scale.
The typical response is, "we'll just exclude it from the must-carry obligation." OK, so we'll exclude "illegal content." What if it's a close call? With a must-carry obligation, the platform will err on the side of leaving it up.
We probably don't want to prevent platforms from blocking spam -- which is much of what their moderation systems routinely do. But do we want to penalize them if they inadvertently classify something as spam?
5 Apr
I'm not sure what to make of how much support Justice Thomas's reading of 230 has among the other eight Justices, particularly because they've denied cert in a few high-profile 230 cases recently. But here is the fundamental flaw with the reasoning behind his interpretation.
He assumes that 230(c)(2), which protects platforms' blocking of objectionable content, might conflict with a hypothetical state law that requires these platforms to carry that content. The first big problem with that is the assumption that such a law would survive a 1A challenge.
I have not seen any authority that would allow such a requirement to stand. The cases cited involve military recruiting on campuses and must-carry restrictions for cable TV. The far more relevant case is Tornillo, which prohibited forced publication of letters to the editor.
26 Mar
At yesterday's hearing, some suggested that Congress intended Section 230 to cover only "neutral platforms." Not true at all. Here's a thread that sets forth the facts. (I had promised to write a thread for every mention of "neutrality," but I have stuff to do).
Congress passed 230 in 1996 to correct a perverse incentive in the common law/1A rules for early online services, which suggested that platforms could reduce their liability by not moderating. Congress did *not* want that outcome.
A judge dismissed a 1991 defamation lawsuit against CompuServe based on a third-party newsletter because the judge concluded that CompuServe did not exercise sufficient "editorial control" and therefore faced the limited liability standard of a newsstand.
31 Jan
Rather than write another long Section 230 thread, I'm going to write a long thread about the topic of my next book - First Amendment protection of anonymity. I write this thread because there are rumblings about proposals to require people to use their real names online.
Of course, some platforms have such policies (of questionable efficacy). But they are free to make those choices. A federal law that would *require* real name policies could not survive First Amendment scrutiny.
People have used anonymous and pseudonymous speech since colonial times and the founding. Common Sense, Letters from a Pennsylvania Farmer, the Federalist Papers - the authors all had good (and nuanced) reasons for separating their identities from their words.
18 Jan
I try to take a lighthearted approach to the relentlessly inaccurate portrayals of Section 230, but it is a serious problem that reflects the broader struggles that we are having with misinformation.
I can't count the number of times I have heard from people who are convinced that 230 is the "publisher or platform" law that requires social media to be neutral. Most think they can sue Twitter for being biased. Some think it has criminal penalties for tech company employees.
These are largely well-intentioned people who truly believe that Section 230 requires neutrality. They aren't lawyers and they don't know how to look up Title 47 of the U.S. Code, so they trust other people.
17 Jan
Thanks for asking! This is exactly the sort of bad advice that far too many news companies and their lawyers have given since 230's passage. Editing user content does *not* remove 230 protections for that user content. Let's say there is a user post that accuses someone ...
of crimes twice within the post, and the moderator deletes one of the accusations but not the other. That doesn’t make the platform liable for the accusation that remains online. Now, if a user comments “John Doe is a murderer” and the site editor adds ...
a comment that says “Yes, John Doe murdered three people,” then the platform can be liable for the comment from the editor, because that is content the platform created. But it still would not be liable for the user post.
17 Jan
This is a good point, and one that leads to confusion about what 230 does. Congress did pass 230 to overrule a NY state trial court decision that held that an online service that maintains "editorial control" is responsible for all of the user content that it leaves up.
This decision relied on flawed legal reasoning, and the highest court in NY state would later disagree with it in another case. But it got a lot of media attention, and was one of the driving forces for 230's passage in 1996.
Had Section 230 not passed, I think that there is a very good (though not certain) chance that courts would reject the NY trial court's reasoning. If they did not, then moderation could lead to a platform being liable for the content that it failed to remove.
17 Jan
Lots of thinkpieces about Section 230 these days. More discussion of 230 is excellent, but many of the pieces contain misstatements about 230 and the First Amendment. Those takes are then repeated in an echo chamber. In this thread, I try to correct the most common inaccuracies.
1. 230 is not responsible for platforms deleting content or deactivating accounts. The First Amendment allows companies to make those decisions. 230 allows easier dismissal of lawsuits arising from these decisions, but those suits never would succeed even without 230.
2. Relatedly, nobody has a First Amendment right to force a private platform to carry their speech. The First Amendment restricts the actions of the government, not private companies. Courts have been crystal clear on this point.
16 Jan
Didn’t think that one person could do this... salon.com/2021/01/15/how…
Not again with "contrary to its official name." cc: @blakereid
I really give up.
31 Dec 20
So tired of explaining why the premise and logic behind these sorts of questions are off base. fortune.com/2020/12/30/sec…
tell me more
The bulletin boards and forums on AOL and Prodigy want an apology.
7 Dec 20
Every year, I advise a lot of recent and soon-to-be law school grads who are interested in careers in tech law. And it's consistently disappointing and concerning to hear stories about law school profs who do not make it a top priority to help their students find jobs.
I'm sure these cases are the exceptions to the rule; indeed, my professors in law school were key to my career development. And many of my friends who teach at law schools routinely reach out on behalf of their students. That's how it should work.
But I also hear stories about law profs who simply don't respond when their students ask for advice, suggestions of job leads, and references. It is deeply disappointing.
7 Dec 20
A repeal of Section 230 likely would lead to more content moderation and fewer opportunities for user-generated content. To understand why, it's helpful to look at a dispute that Eddie Haskell had with an adult bookstore 40 years ago.
After Leave it to Beaver went off the air, Ken Osmond, the actor who played Eddie Haskell, became an LAPD officer who had a small family and led a pretty quiet life. Until the early 80s, when he found out that a chain of LA porn stores was selling a film starring John Holmes.
The cover of the film's carton said that it starred "John Holmes, who played 'Little Eddie Haskell' on the Leave it to Beaver show." Ken Osmond was the only one who played Eddie Haskell, and he never was in porn.
5 Dec 20
I love Powell's Books in Portland. My favorite store in the country. If Powell's decides not to sell my books, can I sue them for censorship? No, of course not. It's just as ludicrous to think that I could sue Twitter for deleting my tweet, even in a world without Section 230.
There is not a viable cause of action to force a private company to distribute particular speech. The First Amendment does not require it. And more importantly, the First Amendment does not allow me to force a private company to distribute my speech.
Now you might point to Section 230, which explicitly provides immunity for platforms’ good-faith efforts to restrict access to objectionable content. Why did Congress pass that if the First Amendment already protects moderation? A few reasons.
4 Dec 20
Today's Section 230 thread examines the increasingly common complaint that 230 is the "censorship protection law." tl;dr: even in a world without 230, platforms could "censor" users as they want because they are not limited by the First Amendment.
I'm not exaggerating when I say that is the argument. This is a question that the Washington Examiner asked two weeks ago in its interview of an incoming U.S. Senator.
This kind of characterization is commonplace in the 230 debate, and it leaves the impression that without Section 230, platforms would have no choice but to allow users to post whatever they want.
3 Dec 20
What happens on the day after Section 230 is repealed? I'm going to prognosticate a bit in today's Section 230 thread. The tl;dr is that we'll likely see a lot less speech, especially if the speech even borders on being controversial.
Recall that Section 230's core immunity says that platforms shall not be treated as the publishers of third-party content. So unless an exception applies, a social media site, for instance, cannot be successfully sued for a user's post.
The user who posted it always could be sued, but not the platform. This provides platforms with great flexibility to set their own moderation standards.
2 Dec 20
I'm inexplicably getting a lot of DMs, emails, phone calls, telegrams, carrier pigeons about 230 this morning. So I thought that today's 230 thread could address a high-level question: what does Section 230 do? There's quite a bit of confusion on this.
The tl;dr version is: Section 230 allows the person who posted illegal/defamatory content to be sued, but it generally prohibits a successful lawsuit against the platform where the person posted that content.
230 still applies if platforms moderate content, and it protects them from liability both for keeping content up and for taking content down. But platforms do not receive 230 protections for content that they created.

Now we get into the weeds: