1/ So I watched the Senate Commerce hearing on Instagram today, and while it was far too boring and mundane to live-tweet, there are some observations/takeaways worth sharing.
2/ Blumenthal (D-CT) (of "will you commit to ending finsta" ignominy) is your prototypical "think of the children something must be done may I speak to the manager Karen."
He likes to broadly wave his hands about some purported evil and declare it HIS JOB TO FIX IT.
3/ It's interesting that he brings up that the Surgeon General's report also talks about the impact of "video games and other media."
Those are good comparisons: hysterical moral panics over alleged impending doom to the children!
4/ Also worth noting that of course the government, to justify the need for its meddling, is going to cite to...the government.
Vivek Murthy is a doctor damnit, not a First Amendment expert.
His word is only worth so much in determining the proper scope of government power.
5/ From the horse's mouth: "our mission is to DO SOMETHING."
Something must be done. This is something. Ergo, we must do this. Et cetera et cetera et cetera.
6/ @SenBlumenthal says it directly: he wants government "overseers" to police the *content* that social media carries/displays.
That's a First Amendment red flag.
7/ "Think of the children" is not a talisman that wards off the First Amendment.
Speaking of video games, here's what the Supreme Court had to say about California's "but children" justification for its video game law in Brown v. EMA: casetext.com/case/brown-v-e…
8/ The government is not free to regulate the ideas that can be disseminated to teens just because it thinks they are harmful.
9/ Much ado is made about content supposedly promoting suicide/eating disorders/etc, but a lot of what is proposed would actually also curtail speech that *helps* teens looking for support on issues that they are too embarrassed to talk to adults/IRL friends about. It's shameful.
10/ I know that Blumenthal has a limited grasp on tech, but does he really think IG can meaningfully and accurately tell when kids have eating disorders or are self harming?
The parents he thinks IG should "warn" can't blame IG for not knowing about their own kids' problems.
11/ First of all, @SenBlumenthal created a FINSTA. Tsk tsk, Senator.
Second, "I followed accounts with a certain message and you recommended more of the same message" isn't exactly groundbreaking stuff. Again, I know Richard doesn't really get the tech.
12/ But yeah, algorithms designed to recommend content relevant to a user's interests are...surprise...going to do that based on the user's expressed interests. IG isn't programming its algorithm to show kids anorexia content. The algorithms act on what an account has followed. Basic stuff.
13/ Parents were also furious at video game makers, musicians, comic book publishers, etc.
Parents don't need to trust anyone, and they don't have a right to trust anyone. *They* are responsible for their kids' wellbeing, and have the authority to make decisions accordingly.
14/ The whole "parents are furious at this company whose product they let their kids use" thing kind of smacks of "I should have a right to let my kids do whatever without me caring and have it turn out the way I want." You--not the screen--are responsible for your kids.
15/ Imagine my surprise that @SenBlumenthal doesn't understand why all legislation passed in the UK can't be passed here...
16/ Claiming Section 230 is a "unique" immunity for "big tech" is how you know @SenBlumenthal doesn't actually know anything about 230.
Serious, sober discussions about 230 acknowledge that it protects way more than "big tech," with huge implications. But he is not serious.
17/ It's not even worth it to say anything about what Blackburn talked about, because she is a fundamentally unserious human being.
18/ @SenAmyKlobuchar comes in with this truly bizarre rant about how parents are scared because their kids are addicted to Instagram at age ten and parents just want them to do their homework instead of being on IG.
(Also let's ban legislating based on what parents want, please)
19/ Maybe @SenAmyKlobuchar should tell those parents that they don't *have* to give their 10-year-old a cell phone, tablet, or unsupervised computer access that they don't really *need* to have.
20/ Maybe @SenAmyKlobuchar should tell them that instead of saying "well we can solve your terrible parenting decisions by more government regulation."
But of course, telling people that they, not the government, are their kids' parents doesn't win votes, does it?
21/ This is actually pretty noteworthy. Thune (R) calls social media algorithms "persuasive technology."
That is a damning admission: calling something "persuasive" is more or less conceding that it is expressive, so regulating it is a First Amendment issue.
22/ I'm quite sure that there's "bipartisan consensus" about "transparency" for content moderation, @SenJohnThune. But Congress can't demand that WSJ explain its editorial decisions either, so you are SOL.
23/ Mike Lee stepped up to bat, and in characteristic fashion just got red in the face and yelled about something he doesn't actually understand.
Yes, people, the recommendations that come will be based on those follows. It's not IG saying "we want you to see this content."
24/ @SenLummis seems to be perturbed that...not everything is open source? Of course platforms' algorithms are going to be proprietary. That's kind of how these businesses work, if you bothered to like...understand them.
25/ You know what else operates outside public scrutiny, @SenLummis? What HarperCollins chooses to publish, how Fox News decides what segments to air, and how WSJ decides which op-eds to run.
The public isn't entitled to know everything about every business.
27/ This is an interesting line of questioning from Cantwell, about whether platforms are being forthcoming with advertisers about the hateful content their ads run alongside.
28/ It's interesting because the states that have tried (and failed) to pass constitutional content moderation laws, and some commentators like @VolokhC, argue in part that the First Amendment isn't a concern because platforms aren't associated with or held responsible for user content.
29/ This, along with plenty of other evidence regarding boycotts and pressure on platforms, indicates that platforms are indeed held to account for what they allow, which adds to the First Amendment problem with forcing them to host all speech indiscriminately.
30/ Ted Cruz, like Blackburn, is an unserious person unworthy of actually discussing, and with that, the issues of note in this hearing are at a close.
As always, Congress thinks its power unlimited and its intentions pure--and it is wrong about both.
PS/ The real takeaway from this is that if you're worried that you're not *smart* enough to run for Congress, just go for it. There is no floor.
Hate speech is protected by the First Amendment precisely because it is impossible to objectively define. This lawsuit asks the courts to define it, which they will not do. Creative pleading won't get them around the First Amendment's application.
2/ There is no way to resolve this lawsuit without a court assessing the subjective meaning of Facebook's terms of service, and subjectively determining whether or not Facebook has lived up to them.
3/ Like Republican attempts to regulate social media, this runs headfirst into the reality that those judgments are protected by the First Amendment.
Trying to frame it as a "consumer protection" issue is transparent. The courts aren't going to fall for it.
1/ Whoever at the @bostonherald is responsible for this: toss your stash, you got a bad batch.
Seriously, this is a mishmash of nonsense from people who thought "hey, Section 230 is hot, I should write about that even though I know nothing about it."
2/ So I don't know the age of whoever wrote this, but presumably they are old enough to realize that Section 230 predates Facebook and Twitter by, uh... a long time.
The phrase "public utility" has a meaning, and this aint it. Nor has any social media platform claimed to be one.
3/ And this has nothing to do with what Facebook/Twitter think of themselves.
Section 230 merely says that they are not *liable* as a publisher. If they weren't at risk of being deemed publishers, the law would be pointless.
If you knew anything about 230, you'd realize this.
2/ Like most of these hearings, this looks to be another Techlash Festivus, with the parties having selected witnesses that largely align with their predetermined grievances against social media companies.
3/ Some don't understand how 230/the First Amendment work. Others have never met a civil liberty they wouldn't run over to achieve a desired outcome. Some peddle absurdly flawed theories of how things ought to work. Some simply have an agenda and don't care about collateral effects.
Without commenting on the acquittal, the fact that a jury acquitted Rittenhouse does not mean it is defamatory to call him a "murderer," and it *especially* does not render any *past* statements defamatory.
2/ An acquittal is just that: the government did not meet its burden of proof for a criminal conviction. It does not establish facts for all of time; it establishes whether the jury thought the government proved its case.
3/ Even if it *did* establish fact/truth, it would still be irrelevant to 99.99% of the claims people are yammering about. Why?
Because those statements were all made before the acquittal. Defamation requires fault. And in Rittenhouse's case, he'll get public figure treatment.
2/ They spend a bunch of time on mealy-mouthed hysterics about Haugen's testimony without ever actually saying what harms are being caused--and that's going to be pretty important, as different things are, again, different.
3/ But first they try to clear a first hurdle: Section 230.
They start out by recognizing that 230 immunizes platforms from liability on the basis of third-party content, and that 230 applies to algorithmic content suggestions.