Ari Cohn
1 Dec, 270 tweets, 35 min read
1/ Going to be tweeting through the House Energy & Commerce "Big Tech" hearing here at 10:30 EST. You can watch at energycommerce.house.gov/committee-acti…
2/ Like most of these hearings, this looks to be another Techlash Festivus, with the parties having selected witnesses that largely align with their predetermined grievances against social media companies.
3/ Some don't understand how 230/the First Amendment work. Others have never met a civil liberty they wouldn't run over to achieve a desired outcome. Some peddle absurdly flawed theories of how things ought to work. Some simply have an agenda and don't care about collateral effects.
4/ But there are bright spots:

@ProfDanielLyons has been a sane voice who actually grapples with concerns that people have about tech companies without losing sight of the First Amendment or practical ramifications of drastic change.

His testimony: energycommerce.house.gov/sites/democrat…
5/ Coming from the "other side" is @mattfwood. I don't agree with everything in his testimony, but I found it to be thoughtful and to ask the right questions in a way that reflects a sober, realistic assessment of solutions instead of feel-good approaches: energycommerce.house.gov/sites/democrat…
6/ And we're off. Doyle kicks off by telling a story about how an addict was connected to a drug dealer on the Internet and died from a fentanyl overdose.

Moves on to talk about the facts of Herrick v. Grindr (the plaintiffs' lawyer is testifying at the hearing)
7/ Doyle says that they might not have won lawsuits, but should have been able to try.

They actually almost certainly would have lost, as many other lawsuits have found.
8/ Doyle thinks he has bipartisan consensus to reform Section 230, which is bizarre. The two sides want totally different things.

Ds want platforms to remove more Bad Posts(tm). Republicans want platforms to be forced to carry more Bad Posts(tm).
9/ Doyle launches into a "think of the children" bit, which you know always leads to good legislative ideas.
10/ Eshoo says she still believes in 230's value, but thinks that algorithmic amplification makes a platform "no longer a conduit"
11/ Latta's opening statement actually highlights the difference between Ds and Rs. He expressly says that social media companies should be forced to carry any constitutionally protected speech.
12/ Latta says that Big Tech has "abused" 230 by censoring content and algorithmically suppressing content on the basis of viewpoint.

He complains about no recourse for this. Of course, even without 230 they would have no recourse. The First Amendment would get in the way of that.
13/ I feel kind of drunk just listening to Latta slur through this.
14/ Latta, without a hint of irony says that we must protect our First Amendment rights, while expressly advocating for the infringement of First Amendment rights.
15/ Pallone speaks. More talk about bipartisan calls for reform, noting the number of hearings. Which is EXACTLY THE POINT. You can never get anything done because all of you want opposing things.
16/ Pallone thinks it's time for more regulation because he doesn't like that some speech, like COVID misinformation and conspiracy theories, appears on social media.

Are the Republicans listening?
17/ Pallone says he thinks courts have interpreted 230 more broadly than intended. Except, of course, that the authors of the bill say that's not at all true.
18/ The committee keeps talking about how these are "targeted" bills. But they aren't. Scattershot attacks on algorithms are facile, not targeted.
19/ Pallone complaining that companies say they have values and if they really believe that, they should be accountable.

But that's ridiculous. Congress has no business policing companies' adherence to their own values.
20/ McMorris Rodgers starts off spitting nonsense, saying that "Big Tech has not been a good steward of their platforms." That's also not Congress' purview.

She says "they are no longer operating as public squares." Fact check: they never have.
21/ She compares social media platforms to authoritarian governments.

Of course, Facebook can't put you in jail, so that's pretty stupid.
22/ McMorris Rodgers highlights her beef with the D bills, noting JAMAA would lead to more removal of content.

She says Republicans are fighting for free speech. Again, trying to defend free speech by violating the First Amendment is not actually defending free speech.
23/ She highlights her bill that she's co-sponsored with Jim Jordan, who famously looked the other way while his players were sexually abused.
24/ McMorris Rodgers says their bill will allow conservatives to fight back against censorship.

She should have read the preliminary injunction decision in NetChoice v. Moody, because that's gonna get your bill some strict scrutiny.

Whoops.
25/ Now for the witnesses. Haugen, who has been busy testifying before some authoritarian bodies too, goes first.
26/ Haugen says Facebook has under-invested in fixing harms and needs the incentives to change. She says they put profits before people (which literally every company does).
27/ Haugen says that the documents she handed over "speak for themselves." She isn't really saying a lot except that "Facebook causes harms." Very unspecific.
28/ Haugen says that there are ways to make platforms safer that don't require picking and choosing ideas.

She says it benefits them to "run hot" and leave up divisive material.

Does she not understand that she's actually implying picking and choosing ideas?
29/ It's like the people who think that it's viewpoint neutral to ban hate speech.

That's not how it works.
30/ Haugen, forced to wrap up, says that Congress should talk to human rights experts about what a complete fuckup SESTA-FOSTA was.

She is absolutely right about that.
31/ Steyer, head of the "won't someone think of the children" crowd, is now up.
32/ Interesting that Steyer says he has taught First Amendment law.

His testimony seems to ignore that the First Amendment hasn't looked kindly on trying to regulate the Internet for everyone to protect kids.
33/ "Kids and teens are uniquely vulnerable online...so we need to regulate."

Again, you cannot just regulate expression willy-nilly by invoking children. Further, he pretends that parents have no role to play here.
34/ Steyer says that platforms prey on kids' desire to be accepted and liked.

A criticism that is applicable to, like, literally every single thing used by children.
35/ Steyer wants revised COPPA, which is...yea
36/ Frederick is up. She talks about joining Facebook after military service (relevance?).

She says "big tech is an enemy of the people." Not hysterical or Trumpy at all. This is exactly what you'd expect from Heritage.
37/ Frederick is going to spend all this time whining about how social media platforms censor conservatives, calling the evidence irrefutable.

Not for nothing, NYU issued a report that...refuted the claim.
38/ This is just a complaint about social media platforms engaging in activity that she disagrees with.

She's talking about how the Hunter Biden laptop thing was election interference (the FEC disagrees)
39/ This is not coming off as at all intelligent; just partisan hackery.

She has no idea that the First Amendment is what protects Twitter's right to remove content it wants to, not 230.
40/ "The First Amendment should be the standard from which all Section 230 flows."

What?

"American lawmakers have a duty to protect the rights given to us BY GOD (she almost yelled it)"

Woof. This is hysterical.
41/ Now Rashad Robinson, who teamed up with Aspen Digital and Prince Harry to lecture us about how the First Amendment is a "straw man" the other week:
42/ Robinson tries to draw a comparison to tobacco companies. I really should not need to tell anyone that speech is not cigarettes.
43/ "You are responsible for what you sell," says Robinson, "and social media platforms sell content." So Congress must allow courts and juries to impose liability.

What he ignores is that there is almost never liability for the ideas contained in products.
44/ Perhaps if Robinson didn't dismiss the First Amendment issues, he would know that suing people for Bad Content is usually a losing proposition.
45/ Opening statements are done. Now on to member questions.

Doyle recognizes himself, and asks Haugen: "Facebook knew that algorithms were allowing hateful and harmful content to spread to minority groups. Setting law aside (lol), does FB have a moral duty and do they meet it?"
46/ Haugen says that they have a moral duty to be transparent and once they know the harm exists they have a duty to address it.
47/ Doyle: Can you tell us how teen girls are harmed by Instagram?

Haugen: Research shows it's more harmful than other platforms because IG is about bodies and social comparison, as opposed to tiktok/snap
48/ Doyle to Robinson: How can 230 reform address platform actions?

Robinson: we're in a position where we have to go to FB and rely on their "benevolence" to address harms. In other industries we have created tools to hold companies accountable.
49/ Again, Robinson does not comprehend the difference between speech and not speech.

He also thinks that platforms are currently not at all answerable to civil rights law. But see the Roommates.com case.
50/ Latta to Frederick: You've done research on how platforms censor. What is your response to claims that they don't censor politically?

Frederick: "believe your lying eyes," :angry bile:, talk to Rand Paul or Steven Crowder.
51/ Of course I could find instances of left people being suspended and just list them off too.
52/ Latta asks about criminal liability under federal law, but not sure if he knows that 230 doesn't immunize against federal prosecution.
53/ Listening to Frederick use the words "denying reality" while she rages is really something.
54/ Pallone: I asked Zuck if he knew about his algorithms recommending joining fringe extreme groups. But he never addressed it. (The First Amendment says hi)

So what are the most compelling examples of this, Haugen?
55/ Haugen: Facebook has pushed people more aggressively towards large groups because it lengthens your session, which makes them more money.

The algos pick the most extreme content from these groups and pushes it to drive engagement.
56/ Pallone to Robinson: as we work through these proposals, how does JAMAA protect marginalized communities?

Robinson: first of all you *have* a bill so thank you (something MUST BE DONE).
57/ Robinson: we've seen how the algorithms allow platforms to exclude black people from jobs and housing (that's not entirely accurate)
58/ Pallone: some people say that 230 changes would lead to a deluge of frivolous lawsuits. Would this help or hurt minorities?

Robinson, not a lawyer or expert in this: giving people the ability to hold platforms accountable is part of our fabric.
59/ Yes there will be more lawsuits, but that's how things change. There will be a tradeoff.

Talks about how companies are liable for the toys they sell, again ignoring the difference between physical products and speech products.
60/ McMorris Rodgers to Haugen: do you support FB censorship of conservatives?

Haugen: what do you define as censorship?

Rodgers: Removing constitutionally protected speech. YES OR NO?!
61/ Haugen: we should have fact checking
61/ McMorris Rodgers, airing her grievances, goes to Frederick, who I assume will just do the same thing over and over again:

What's the difference between misinformation and disinformation.
62/ Frederick: Goes off on a rant about the Wuhan lab leak theory, like it was impossible to talk about it without Facebook.
63/ Frederick, still ranting about Hunter Biden.

Rodgers: can you talk about government regulation of disinformation
63/ Frederick talks about the Psaki press conference about pushing platforms to remove disinformation.

I don't disagree with her, I think Psaki should never have said any of that and the government should not be pressuring platforms to remove anything.
64/ Rodgers to Frederick: what do you think of my bill? What a fucking stupid question.
65/ McNerney to Haugen: You discussed the changes in 2018 to FB's algos, to favor content more likely to be shared by others. Research showed that this favored divisive, angry content. How difficult is it to change the algos to lessen the impact?
66/ Facebook knows that there are terms that, if you remove them, you get less Bad Content, but FB doesn't remove them because it would reduce profits.

Again, FB has a First Amendment right to promote divisive content. Congress cannot legislate that, like it can't regulate bookstore shelves.
67/ McNerney to Robinson: You testified that FB isn't just a tool of discrimination, but that the algos are drivers of discrimination. Can you explain?

Robinson: algos cut people off from opportunities or lead people down rabbit holes (what does that mean)
68/ Worth noting that when the government wants to enforce civil rights, 230 doesn't typically pose much of an obstacle.

Asking the DOJ for more enforcement is a far better solution that should be used before changing 230.
69/ Guthrie: the question is how to define misinformation.

To Frederick: I have been looking at the Wuhan theory and this is a real instance of information being blocked. Goes on some long setup about Fauci and not really making a point or asking a question.
70/ Guthrie using up his five minutes on a monologue that is all over the place instead of actually asking a question. This is literally pointless.
71/ Talking about how COVID got from bats to humans. What the fuck are you doing, Guthrie.

Guthrie: Facebook took down all mentions of the lab leak theory, and then it unblocked those claims after a while.
72/ Guthrie my point is (FINALLY): ::does not make his point at all and keeps rambling::
73/ Guthrie runs out of time before he even asks a goddamn question. That was stupid.
74/ Clarke: Now is the time for action, as platforms move away from chronological to targeted displays, while the algos are opaque.

This results in promotion of harmful content (protected by the First Amendment, stop trying to legislate it) and lead to discriminatory outcomes.
75/ Clarke to Robinson: can you explain how the lack of transparency hurts marginalized communities and how the Civil Rights Modernization Act would help?

Robinson: your bill would remove 230 for targeted advertisement
76/ Robinson: when it comes to hateful content, platforms are profiting from shouting fire in a crowded theater. "I understand that there's the First Amendment, but there are limits to what you can say."

Sorry @rashadrobinson, now you get tagged in because that is nonsense
77/ You can't just wave "fire in a crowded theater", @rashadrobinson and eliminate the First Amendment.

Congress absolutely *cannot* pass a law banning hateful content. That content is constitutionally protected, and nothing that you said addressed that in any way. Meaningless.
78/ This is exactly the blithe consideration of the First Amendment issues that rendered the Aspen Digital effort so unserious.
79/ Kinzinger thinks that this is tiring, and on that I agree.
80/ Kinzinger says he previously wanted to avoid changing 230 so he introduced account validation and fraud prevention bills. But now he is more open to amending Section 230.
81/ Kinzinger says the devil is in the details, but there will be no policy solutions without bipartisanship. Yea, get used to it, because none of you want the same thing.
82/ Kinzinger to Haugen: Zuck testified that FB has a responsibility to protect users. Do you agree? Are they fulfilling it?

Haugen: I do believe there is a duty. For most countries in the world Facebook *is* the Internet so they have a higher duty. They are not living up to it.
83/ Kinzinger to Frederick: same question.

Frederick: They do. The days of blaming the addict, not the dealer are over (wut). Now she's talking about responsibilities to protect free speech, same nonsense.
84/ Kinzinger to Frederick: what are the nat sec concerns with the promotion of divisive speech?

Frederick: I want FB to be hostile towards Islamic terrorists.
85/ McEachin: When we say immunity, what we're saying is "we don't trust juries." That's the only reason we have immunity.

That's absolutely wrong. McEachin completely ignores nuisance litigation and its costs. That is a completely inane statement.
87/ McEachin to Robinson: You said laws/regs must be crystal clear. I don't even know what an algo is, but I do know how to say that immunity is not available when you violate civil rights. Will the SAFE TECH Act help?
88/ Robinson: it will. Facebook has claimed that it is exempt from civil rights law.

(again, it isn't, and again: more enforcement is more likely to get you what you want)
89/ @rashadrobinson: free speech isn't protection from repercussions. If you incite hate, you should be held accountable; that's not a free speech violation.

Except @rashadrobinson is again absolutely wrong about the First Amendment, which protects against more than jail time.
90/ @rashadrobinson is just woefully wrong about the First Amendment every time he tries to talk about it.

You can't sue someone for hate speech any more than the government can jail you for it. The First Amendment jurisprudence is clear about that.
91/ Bilirakis wants to talk about CSAM, which I don't think anyone will disagree is a scourge.

To Frederick: does 230 protect against liability for knowingly refusing to take down CSAM?
92/ Frederick doesn't know the answer.

Bilirakis: we shouldn't give platforms more protection than other companies, which just ignores the scale issue. "This despicable protection," he called it.
93/ Veasey: I think platforms are a major source of news in our lives.

That only heightens the First Amendment concerns.
94/ Veasey: literally says "we must do SOMETHING" about misinformation and harmful content.

SOMETHING MUST BE DONE
95/ Johnson: IMMEDIATELY launches into the "publisher vs. platform" canard, so that's a new record speed for dumbest thing said.

He's yelling about how social media platforms aren't the arbiters of First Amendment rights. Which is true, because it only protects against govt.
96/ Johnson: I don't want burdensome government regulation.

Then why are you proposing burdensome government regulation?
97/ Johnson: How do big tech's actions compare to the actions of the Chinese Communist Party, Ms. Frederick?

DOINK.

Frederick: I am troubled by the government's increasing entanglement in content moderation decisions.

I absolutely agree with her on that, again.
98/ Johnson: I am troubled by the thinking that govt involvement in private companies to regulate content is even on the table.

HAVE YOU READ ANY OF YOUR PARTY'S BILLS, DUDE?
99/ Soto: Deliberate lies are not free speech under New York Times v. Sullivan.

@RepDarrenSoto needs a crash course in both civics and reading comprehension, because WOW.
100/ No, @RepDarrenSoto, that's not what Sullivan says. Sullivan was a defamation case (and a defendant-friendly one at that); it has absolutely nothing to do with whether false speech is generally protected under the First Amendment (it is).

That was an extremely silly thing to say.
101/ Members of Congress should be automatically expelled for such idiocy.
102/ Soto asks about vaccine/COVID misinformation. Robinson talking about how there is no way to tell what the problem is until it's too late because algos are opaque.
103/ Soto: would you say misinformation reduces vax rates?

Irrelevant to anything, really.
104/ Long: Wants to talk about Big Tech's relationship with China. Reports suggest that TikTok collaborates with China to harm Uighur Muslims. How does TikTok's model make it ripe for Chinese censorship?
105/ Haugen: TikTok works by pushing people towards limited pieces of content. It was designed that way so that you could easily censor high distribution content.
106/ Long to Frederick: We don't know how TikTok censors. Do you have recommendations for transparency?

Frederick: Incentivize transparency. Some companies give quarterly reports. There has to be teeth to incentivize this.
107/ O'Halleran: As a father or grandfather I am outraged to learn about the inner workings of FB. Its disregard for the wellbeing of kids/teens is unacceptable.

As a father and grandfather, take some responsibility for what your kids/grandkids use, bub.
108/ O'Halleran: SOMETHING MUST BE DONE.

We need to know what they are showing our children and why, and identify how algos come together and what impact they will have. We can't have FB take advantage of our children with algorithms.
109/ "Think of the children" is such a classic avenue for attacking the First Amendment,
110/ Haugen telling him that FB's problem is its conviction that making connections is more valuable than anything, more important even than kids killing themselves.
111/ Walberg to Haugen: Big Tech lied to us when they said that they are increasing safety while really just increasing censorship.

This speaks to the fundamental divide. Ds and Rs don't agree on what "safety" means.
112/ Walberg says he doesn't want to be treated like a child.

Under a lot of this legislation, like much past unconstitutional legislation, you pretty much would be.
113/ Walberg to Haugen: how does China threaten us?

Haugen: TikTok banned content from "disabled and homosexual users"
114/ Walberg: Do you think my bill will actually help push platforms to change their ways?

Haugen: It tries to address problems, but you run into a problem with definitions, you have to tighten definitions. (lol good luck getting him to understand this)
115/ Rice: I was a D.A. before I came to Congress, and I was in a position to understand where the law fails to address antisocial behavior, and I went to Albany to ask for new criminal laws.

Go figure, the cop wants to DO SOMETHING.
116/ Rice to Steyer: Why are teen girls particularly vulnerable online?

Steyer: How your peers see you is an important part of teenagers' self esteem.

Groundbreaking.
117/ Steyer: we ALL care about kids, surely?!

Freedom of speech is not "freedom of reach," Steyer says.

See but you get in trouble there, Steyer. Government attempts to limit the reach of speech actually is a big First Amendment problem. You taught 1A law? I have doubts.
118/ Duncan: I asked tech reps if they think they are the arbiters of absolute truth.

Invokes 1984, like the wholly unserious person that he is.
119/ It actually seems like Duncan hasn't even read 1984...
120/ I love when they talk about the Big Tech "onslaught of conservative thought."

With apologies to Dril:

Go ahead. Keep talking about liberal bias. It only makes my level of constitutional scrutiny stronger.
121/ @RepJeffDuncan quotes Ben Shapiro, "facts don't care about your feelings," while whining about his feelings about private companies not wanting speech he likes on their platforms.

The crying man derides feelings.
122/ Yes Duncan, we get the gist of your whiny tirade
123/ Eshoo, thanks Haugen, pronouncing it in the 30th different way we've heard today.
124/ Eshoo to Haugen: what did the research find about FB algos leading people to extremism?

Haugen: research showed that you could take a blank account and follow centrist views, and just by clicking on suggested content, it will get more extreme.
125/ Steyer tells Eshoo that this is "an arms race for attention."

So are movies, TV, books, video games, literally EVERYTHING. Stop saying this like it's some revelatory thing. It makes you look silly.
126/ Eshoo to Robinson: can you elaborate on how the things you included in your testimony are matters of design, not user generated content?

Robinson: this is about what gets amplified (and he thinks this is not a First Amendment issue...why?)
127/ Robinson: Your bill, the Online Privacy Act, is important because we need infrastructure to meet these new needs.
128/ Curtis: I think we understand censorship (narrator: they do not understand censorship).
129/ I want to talk about attempts to control thought by feeding harmful content. It's "brainwashing, propaganda"

I have a feeling he's about to not understand censorship at us.
130/ Curtis: the problem is the content provided to people without their consent.

Ahh so speaking at people without their consent is verboten now.

I am suing every network for their commercials.
131/ Curtis to Haugen: My son is a data scientist (brags about paying for much of the PhD). Do we even have a way to present algos to lay people and have them understand it?

Haugen: there are ways to design that imbue values like privacy.
132/ Curtis: so these algos can be designed to influence how people think in a way that causes harm?

Haugen: it has happened.

Curtis: could an algo be designed to influence how someone votes?

Haugen: it could *influence it*
133/ Curtis: Should there be immunity for an algo to decide how people vote? Or influence how someone votes?

Does this guy know how...reality works? Or human beings? Like, what the hell is that question?
134/ It's complicated if I'm the mayor and choose what can be said in the town square.

THAT'S THE POINT. THE MAYOR IS THE GOVERNMENT.
135/ These people are exhaustingly stupid.

Stop voting for stupid people.
136/ Matsui: **again "something must be done"**
137/ Matsui talks about the HUD action against Facebook over housing ads.

Without a hint of recognition of how this undercuts the stated need to change 230.
138/ A constitutional amendment to bar anyone who says "as a grandparent" from holding elected office.
139/ Welch: We need more than one-off litigation. We need to create agencies.

Both of those statements ignore the government's total failure at doing both of those things.
140/ @PeterWelch wants a commission to ensure that algorithms don't amplify content that the government finds harmful. He says that this is "not a free speech issue."

Peter Welch is 100% wrong, and adds his name to the list of Congressmen ignorant of First Amendment law.
141/ No, @PeterWelch, the government cannot regulate what information platforms can promote or suggest, any more than it can regulate the staff picks section at a bookstore.

That is a fundamentally atrocious suggestion.
142/ Schrader: can we trust platforms to police themselves for our benefit?

Companies honestly don't have a responsibility to act "for your benefit."
143/ Schrader asks Frederick about how to avoid viewpoint censorship.

Frederick: you anchor legislative reforms to First Amendment standards.

None of these people understand how things work.
144/ Cardenas is up, hold on to your butts. Cardenas is openly hostile to the First Amendment. See thehill.com/blogs/blog-bri…
145/ They're talking about third party fact checkers in other languages. Go ahead and encourage that. Regulating fact checking, and the effort they put into it, to reduce disinformation, again, poses a First Amendment problem.
146/ Cardenas to Steyer: given that moderation isn't effective with humans and algorithms, can 230 reform change moderation tactics?

Uh what non-algorithmic or human options are there? Moderation by divine intervention?
147/ Carter (wearing a stupid tie) to Frederick: we want to keep free speech, all of us. But we also want to ensure that free speech is not subject to political bias, like fact checkers?

WUT.
148/ The Republicans talking about free speech really don't have any idea what free speech is.

Carter to Frederick: why is it important to pass these laws to protect free speech?
149/ Carter: "I've told the CEOs that I don't want to do this, but if you don't do what I want, I will do this!"

Ok.
150/ Frederick: Starts talking about something that has nothing to do with anything, before saying absolutely nothing.
151/ Carter: I'm a pharmacist by trade (which explains a lot). My question: would this proposal to carve out 230 liability for drug trafficking help prevent this?

Frederick thinks theoretically it should.

Scale scale scale (and not the drugs kind)
152/ Kelly: the misinformation and disinformation cannot persist.

To Steyer: why are parents unable to hold platforms accountable?

(well the First Amendment for one)
153/

Steyer: Because there's no law that allows them to
154/ @RobinLynneKelly, much as you want to, you may not legislate misinformation/disinformation. That is outside of your power.
155/ Kelly to Robinson: how do we make sure that platforms don't discriminate by moderation?

What?
156/ Content moderation doesn't create a civil rights claim. Maybe she was confused and mixed up the question?
157/ Robinson's mic is out. I suspect we aren't worse off for it.
158/ Haugen seems to want the US to regulate on behalf of the whole world. That's not workable.
159/ @RepMullin says under 230 platforms are supposed to be the town square.

That is a complete lie. 230 was intended to allow sites to decide what content they want to allow on their sites.

Stop lying, Mullin.
160/ Frederick lies as well, saying that 230 wasn't intended to allow websites to make content decisions based on their own preferences.

The authors of the law say you're wrong. The law says you're wrong.

These people are dishonest.
161/ Frederick says that 230 is being "abused," which is as dumb as everything else she's said.
162/ Craig: parents have a lack of control.

Or maybe parents just don't want to parent?
163/ ALGORITHMS LITERALLY KILL PEOPLE

Friends, Congress is full of insane people
164/ Fletcher: Is it possible to change the algos or practices to promote healthy engagement?

I'm sure it is! But the government can't mandate "healthy engagement." It simply IS NOT THE GOVERNMENT'S PLACE to regulate the "health" of what expressive platforms promote.
165/ Now it's non-subcommittee members (who should probably just STFU)

Burgess: How can Congress incentivize fair content moderation given the amount of content?
166/ Frederick, being stupidly repetitive: tie 230 to the First Amendment and mandate easy appeals.

We should let people sue them for breach of contract/fraud if they don't live up to promises.

That would also fail without 230. I believe @BerinSzoka has written as much.
167/ Burgess talks about teen suicide. He seems to be implying that platforms should inform on kids feelings to parents and doctors.

Do we care about privacy or not?
168/ Schakowsky now tags in, probably to say something asinine.
169/ Ok good she stuck to whistleblower stuff.
170/ Pence: blah blah censoring hard working hoosiers blah blah.

"These platforms are monopolies. Most people have no choice but to use their services."

Sir, every word of that is incredibly wrong.
171/ Pence asks Haugen about a proposal to eliminate 230 protection if you "generate revenue" from content.

Haugen says that's functionally impossible, which like yea, because platforms basically make money from EVERY post.
172/ Frederick: I'm also not a lawyer, but I like money.

File under: no shit.
173/ Castor: We don't allow this stuff to happen in the real world, we shouldn't let it happen in the online world.

Which is an absolutely meaningless statement that exhibits a complete lack of understanding.
174/ Castor: it's so bad that judges are asking us to revise 230.

Nobody tell her what some judges think of abortion...
175/ Steyer had to leave. And we are lucky for that.
176/ Crenshaw wants to know what criteria are used to decide what is misinformation. Do you see a problem with outsourcing fact checkers ::whines about being a "victim" of fact checkers::
177/ Crenshaw thinks platforms want to not be neutral in content moderation. That is their right, you boob. Forcing companies to host speech they don't want to is a First Amendment violation.
178/ Crenshaw banging on about how the parties actually want different things. He is right. He decries the desire to enable lawsuits over hurt feelings, which also isn't wrong.
179/ Trahan: surely we can all get behind PROTECTING THE CHILDREN
180/ Trahan wants to create a government office for platform oversight.

No. Might as well have an office for newspaper oversight too, there, Ms. Trump.
181/ Joyce: talks about the stupid Republican discussion bills that contradict each other and violate the First Amendment at best.
182/ Frederick says that big tech threatens Americans' ability to access information and cites Parler.

Except: that's not the same as social media platforms, Parler is back up, and Gab never went down. Quit your whining.
183/ Frederick again says 230 protection should be stripped when viewpoints are discriminated against.

Except there will still be no cause of action.
184/ HOLY SHIT, the mask really slipped there.

Frederick: "we should use the anti-CRT model to gin up the population."
185/ Throw me in jail, because I am in contempt of Congress.

They're bringing in the second panel with NO BREAK?
186/ The second panel is going to be even more infuriating than the first. Represented: the plaintiff's bar, law profs against civil liberties, and friends of Adam Candeub.
187/ Carrie Goldberg (the plaintiff's bar) says what is illegal offline should also be illegal online.

Um, it IS.
188/ Carrie has to tell clients, she says, that she can't help them because of Section 230. Of course that's not the whole story, because there are non-platform actors who can be held liable.
189/ "Allow me to tell you some cherry-picked emotionally manipulative sad stories."

Hard facts, bad law, etc.
190/ Goldberg seems to think that it would be reasonable to require Grindr to search for, find, and moderate impersonating profiles.

I don't think that it's nearly as simple as she thinks.
191/ Wood's opening statement sees promising ideas in the bills, but many problems with their execution. Free Press Action wants to protect the benefits of 230 that allow platforms to make moderation decisions, and allow users to post speech without cumbersome approval.
192/ Wood notes that powerful/rich people have the ability to bring lawsuits to try to get platforms to remove content they just don't like.

Wood says repealing 230 is a bad idea, and that the First Amendment is going to prohibit much of the underlying liability people want. YES
193/ Wood says that 230 wrongly has been interpreted to also eliminate distributor liability.

I disagree with this, strongly.
194/ Wood sees problem in regulating the technology of algos
195/ Kornbluh: 230(c)(1) must be clarified or we will lose certain rights.

What?
196/ Kornbluh doesn't seem to understand the operation of 230, from her c(1)/c2A tirade there.
197/ Franks (who has never met a civil liberty she didn't want to ignore): 230(c)(1) has created a DYSTOPIAN MORAL HAZARD.

She complains about the presence of speech that is entirely constitutionally protected.
198/ Franks: "everyone here would be liable if we caused or contributed to harm to others"

She launches into a list of things that are not expression. If I could sue Franks for the insane things she said, I would. But I can't.
199/ Franks: People will say that "speech is special," but newspapers and TV are in the speech business and can be liable when they cause or contribute to harm.

Actually, most cases against papers and tv/radio fail, so...
200/ Franks thinks that 230 should be limited to speech rather than information, which is just a nonsensical, unworkable distinction.
201/ Recess for floor votes, and a break for me to find alcohol for the onslaught of stupidity yet to come.
I have designed an algorithm that takes over your brain like one of those ant zombies, forces you to vote the way I want and then bring me sustenance before exploding your brain.

Regulate me, Curtis. I dare you.
202/ Now that they have voted on whatever nonsense they had to vote for, Volokh is recognized for opening statements.
203/ Volokh came with powerpoints, which aren't visible to us? You really don't need a slideshow for a 5 minute opening statement but ok.
204/ Volokh will be talking about the technical language of bills.

For JAMAA, he says it creates a strong disincentive against any personalized recommendations, because they might contribute to some kind of harm. He's right about that.
205/ Volokh: the consequence is that content from big business (more careful and vetted) will win, and typical user generated content will lose.

Fair point as well.
206/ Now he's talking about a Republican bill (not actually under consideration for this hearing) stripping protection if the platform discriminates politically or when algorithms are used.

But V points out that literally everything is algorithmic.
207/ Volokh seems to think that the bill is either ineffective or problematic.
208/ Volokh talks about the appeals and transparency requirements of the Republican bill. He doesn't know how much explanation/clarity is required.

Of course this is not going to be meaningfully addressed in the bill's language. It just can't be, adequately.
209/ For SAFE TECH, he notes paid hosting will enjoy no immunity, same with YouTube content posted by creators who monetize. He says this is not necessarily a great idea.
210/ @ProfDanielLyons is up for opening statements next. Two themes:

1) 230 provides critical infrastructure; tinker at your peril
2) Regulating algos risks doing more harm than good and risks a bevy of unrelated frivolous litigation.
211/ Lyons: it's not just "big tech," a huge array of businesses use this protection, which promotes competition by removing barriers to entry.

We know that the Internet ecosystem is complex and dynamic, which magnifies the potential effects. Incumbents can absorb more costs!
213/ Lyons: SESTA-FOSTA was a road to hell paved with good intentions. It made it harder to find offenders, hurt free speech, and made sex workers less safe.

Algos are socially beneficial even though they can have harmful effects.
214/ The genius of the web is the reduction in information costs. The downside is the increase in filtering costs.

Companies compete to sort that info for users, through algos.
215/ The vagueness of these bills reduces usefulness in the face of prospective liability for minimal harm reduction. It also creates the possibility of a ton of unrelated litigation that might fail, but will increase costs, especially for startups.
216/ Doyle to Goldberg, sucking up to plaintiff's bar: I am told that these reforms would lead to overwhelming litigation that would hurt companies. What hurdles would plaintiffs still face?
217/ Goldberg says there won't be a groundswell because it's hard to overcome pleading standards and it's sanctionable to bring frivolous suits.

LOL. That's an argument only the plaintiff's bar could love.
218/ Goldberg claims that economics and anti-SLAPP laws will prevent lawyers from taking frivolous cases.

She ignores the lack of national anti-SLAPP laws, dishonestly. She also ignores the reality of lawyers willing to bring frivolous nuisance cases to extract settlement.
219/ That was either a naive or dishonest performance by Carrie Goldberg.
220/ Doyle to Wood: What's your view on small biz exemptions?

Wood: that may not be the way to go. Says Goldberg's answer is "amazing" which is only true in opposite land.
221/ Latta, slurring to Volokh:

Complains about Twitter's new CEO for saying that they aren't bound by the First Amendment. Says the new media policy is an abuse of power and asks how the policy would be seen if it came from a govt actor, which is a DUMB QUESTION BECAUSE TWITTER ISN'T GOVT
222/ Volokh answers that govt couldn't do it but newspapers could.

He thinks that the question is whether Twitter is more like a newspaper or...the post office?

The answer to that is easy and belies the nuttiness of his position.
223/ Latta slurs about his "bad samaritan" carveout in a rather unintelligible way, and Wood gives a non-specific answer that talks about holding platforms accountable when they "know" their decision is causing harm. What does "knowing" mean?
224/ Easy for platforms to make it hard to give them notice, though
225/ McNerney asks Wood what specific reforms could be used to make sure profits aren't overcoming harm to users.

Wood says they have not endorsed any specific approach. Talk of accountability with no actual ideas as to how. Kind of unhelpful.
226/ McNerney asks Kornbluh what she thinks about the product design features that Aspen focused on (which really makes no sense).

Kornbluh thinks we should be thinking less about content and more about what causes hate and misinfo to go viral.

What she fails to understand is that recommendations are First Amendment activity. Nobody wants to grapple with the fact that product design can be speech.
227/ Kornbluh focuses on the fact that algos aren't neutral.

But THAT'S THE POINT. It is editorial discretion.
228/ Mary Anne Franks thinks 230 is what allows platforms to host "misinformation."

She is wrong. That would be the First Amendment (which she also doesn't particularly like)
229/ Guthrie is up, probably to give a monologue again.
230/ Guthrie asks Volokh: what provision of 230 gives immunity when platforms know of specific cases where opioids are up for sale on their platforms?
231/ Volokh: Doesn't think this requires 230 modification. Points out that federal prosecutions are not immunized.

Notes that civil liability is precluded, but Volokh (correctly) doesn't think there's going to be much liability there
232/ The plaintiff's bar is back, with Goldberg talking about how she represents a family whose member was killed by a single bad pill.

She thinks she could get products liability for someone selling drugs on the site?

Not fucking likely.
233/ I get it, Carrie has clients. But the plaintiff's bar should not be dictating policy. Sorry bout it.
234/ Clarke: 230 has aided in a culture that lacks accountability.

What the fuck does that even mean.
235/ Clarke: 230 already provides certain exceptions. Targeted advertisements can be used to exclude people from voting, housing, jobs, etc based on race.

Again, the federal govt has rarely been stymied by 230 in enforcing its laws/regs
236/ Clarke asks Wood why targeting law is important.

Wood correctly points out that the response to "people should be able to sue" must be "but sue for what?"

You need to look at whether there's even underlying liability.
237/ Clarke asks Mary Anne Franks to talk about collective responsibility (and presumably her disdain for civil liberties)

Franks: ::nonsense::
238/ McMorris Rodgers wants to talk about removing immunity for companies that take down protected speech. How does this impact speech online?
239/ Volokh: the advantage is states could step in requiring nondiscrimination. He thinks there's a lot to be said for that because platforms are powerful. Do Fox News next

Downsides: there would be a lot of nuisance litigation which would chill platforms from removing bad stuff
240/ Rodgers: what would JAMAA's effect be?

Volokh thinks it would silence individual voices, because personalized recommendations would be heavily discouraged.
241/ Volokh: platforms would only provide no recommendations (bad for business) or only the most generic recommendations or professional, mainstream content.

I'd like if he addressed the fact that recommendations are protected by the First Amendment, but that's not helpful to Rs
242/ Rodgers asks Lyons whether people could sue over emotional harm caused by recs.

Lyons agrees with Volokh and thinks there would be a chill on recs.
243/ McEachin still thinks all of 230 means we don't trust people.

First of all, no we don't. We also don't trust you. He is approaching this as a total luddite, and someone with no understanding of how litigious our society is.
244/ He asks Goldberg about something that is unidentifiable.

Goldberg said that there needs to be injunctive relief and a carveout for court-ordered conduct.

I literally have no idea what she is saying.
255/ Goldberg also thinks there should be a carveout for products liability.

But for physical products sold, 230 has not provided immunity to prods liability suits. That's the state law definition of "seller" that does the work.
256/ For platform as a product, courts do not like to impose products liability on the basis of the words or ideas contained in expressive products.

But unclear what she's even saying.
257/ McEachin does not seem to understand the bills that he's asking about here, kind of flummoxing Wood (understandably).

Mandatory retirement age for Congress NOW.
258/ Walberg wants to talk about his cyberbullying bill, which is not under consideration here (Rs seem to want to hijack this).
259/ He wants to strip immunity for a course of conduct that is foreseeable and places a minor in fear of death/injury or would reasonably be expected to cause suicide.
260/ Volokh thinks it would cause changes, but isn't sure they'd be good. If platforms are liable for not taking down bullying, they have to be policemen.

N.B.: the suicide rule would preclude most liability anyway, as the makers of D&D and TV stations have learned when sued.
261/ Walberg asks Wood if his carveout would help parents bring harassment claims.

Wood responds that it would but he doesn't favor carveouts.
262/ @RepDarrenSoto thinks that platforms are allowed to do illegal things that "real world" businesses couldn't do.

He's wrong.
263/ Carrie Goldberg is giving some sob story about how she can't get an injunction against Twitter (she probably couldn't get it against a newspaper either)
264/ @RepDarrenSoto thinks that an offline business could be held liable for promoting content that "harms children," like material promoting damaging self-images to girls. But that's not entirely true. Try suing Vogue for giving someone body image issues. Soto is just wrong.
265/ Rice: References that the last modification was SESTA-FOSTA. Asks Goldberg if it has affected her cases.

Goldberg: SESTA-FOSTA is problematic because it conflates child abuse with consensual sex work. That's not wrong.
266/ Rice asks about the effect of criminal carveout in SESTA-FOSTA.

Goldberg replies that there's only been one case, and wishes that...a state prosecutor would have Zuck arrested for sex trafficking.

Plaintiffs' lawyers...
267/ Eshoo asks about terrorists, because YAY PATRIOT ACT (which you voted against).

Kornbluh thinks that people should be able to sue platforms because terrorists use it and algorithms suggest people/things similar to what you've engaged with. It's a stupid argument.
268/ "But terrorists" is equally not fucking cute whether it comes from Democrats or Republicans.

Eshoo thinks platforms are *undertaking* to connect terrorists, which is patently absurd. But she's 78 so maybe she just doesn't understand how this stuff works.
269/ Honestly very tired of people who don't understand technology trying to regulate it.

I don't care if it's ageist. Fucking stop.
270/
271/ Cardenas talks about his collaboration with Lujan and Klobuchar over disinformation, which is not exactly something you should be bragging about, because both of them spread First Amendment disinformation.
272/ Cardenas asks about legislation to enforce even content moderation across languages, which highlights the problem:

People elected to Congress are Karens, who think that there is nothing they can't legislate into compliance with what they want.
273/ Cardenas seems to think that he could change Section 230 to force platforms to prevent harm to people.

That is just NOT how liability works.
274/ Kelly asks how advertising revenue could have prevented the TikTok trend of slapping teachers, a question that only someone who knows the right ideological cues will understand.
275/ Me, to every member of the committee: is it plugged in? Have you tried turning it off and then on again?
276/ Oh, you're not calling it quits now, Doyle? It's only 6pm your time, coward.
277/ We're adjourned. I'm going to go drink to forget every bit of that absolute clown show.
278/

PS: Not for nothing, but if these people were actually interested in sober conversations about the issues rather than political grandstanding, @ProfDanielLyons would have had more than like two opportunities to speak.
