The S Ct will review the must-carry provisions of the TX and FL laws, and the requirements for "individualized notice" to users of content moderation decisions, but not other transparency requirements in the laws.
The order granting cert says it covers Questions 1 and 2 in the SG's brief. 1/
What statutory provisions does that actually encompass? The SG brief says it includes Texas's appeals provision, too. The Texas statutory sections it mentions as part of Q2 are 120.103 and 120.104, unless I am missing something. 2/
The SG's brief does not cite another section of the law that seems like it should be in scope if the S Ct is reviewing Texas's notice and appeal requirement. Section 120.101 requires platforms to build a portal to track the status of appeals -- like tracking a package. 4/
That's a pretty stupid amount of work for rather little benefit. And since the SG's reasoning is that notice/appeal burdens platforms every time they make an editorial decision, and thus chills content moderation, building and maintaining the portal seems relevant. 5/
The SG's brief also does not mention Texas's weird disparate treatment of notices alleging that content is ILLEGAL, rather than TOS-violating. For users whose content is removed based on those allegations, there is no notice and appeal. 6/
If an accuser alleges that content is illegal, then platforms must evaluate the claim within 48 hours. And then... not do anything in particular. The law is silent on this. I have a big annotation in my public copy of the law digging into whether platforms even CAN remove it. 7/
Probably platforms can remove content identified in these notices of illegality, without triggering the viewpoint-neutrality rule and having to take down all posts that are somehow of the opposite viewpoint? But the law is so badly drafted it's actually a messy question. 8/
I guess the Court can pick and choose which of these provisions are in scope, beyond the 120.103 and 120.104 rules specifically called out in the SG's brief. 9/
The Court's reasoning about the notice and action mandates will likely be extremely relevant for the other transparency rules that are not under review. So can compliance with those be stayed while the case is pending? 10/
If the 1st Am problem with notice/appeal is about the burden on editorial acts, lots of the other mandates (like vastly expanded transparency reports) are hella burdensome too.
If the 1st Am issue is about the scope of the Zauderer case, that 100% impacts those other mandates, too. 11/
My article about the 1st Am issues with the TX and FL transparency mandates is here.
I have an almost-published final version pending in a journal, and will presumably now need to revise it... 12/ papers.ssrn.com/sol3/papers.cf…
Please, please do not fall prey to the "but platforms do this already under the DSA" logic employed by the 5th Circuit. PLEASE. 13/
The "they already do this" logic is 1. Wrong factually. This expands volume & mandates are not the same 2. MASSIVELY favors big incumbents who've invested in EU already 3. Makes U.S. 1st Am protections change depending on speech compulsions in countries without 1st Am rules. 14/
I want transparency from platforms. I think laws making that transparency happen can be drafted in a constitutional manner. These are NOT THOSE LAWS. These are a sloppy, wasteful mess that will let TX and FL change platforms' actual speech rules. 15/15
The EU's database is live! In theory it should include every Statement of Reasons sent by platforms explaining content moderation decisions. I've groused about it, but it's an amazingly ambitious effort and already pretty interesting. 1/
When I first opened it half an hour or so ago, the database had 3.4 million entries. Now it's 3.5 million. 2/
TikTok has submitted 1,764,373 Statements of Reasons. X has submitted TWO.
You can hear the enforcers in Brussels salivating from all the way over here in California. 3/
The statements from Thierry Breton of the European Commission about shutting down social media during riots are shocking. They vindicate every warning about the DSA that experts from the majority world (aka global south) have been shouting throughout this process. 1/
Breton asserts authority under the DSA to make platforms remove posts calling for “revolt” or “burning of cars” immediately. If platforms don’t comply, they will be sanctioned immediately and then banned. (He says “immediately” four times in a short quote.) 2/
As someone who generally defends the DSA as an instrument that (1) has a lot of process constraining such extreme exercises of state power and (2) will be enforced by moderate and rights-respecting regulators, Breton’s take on the DSA makes me feel like a naive chump. 3/
Monday is apparently the deadline (!) for comments on one of the most under-discussed transparency measures in the DSA: the public database of every (!) content moderation action taken by platforms. 1/ digital-strategy.ec.europa.eu/en/news/digita…
Comments can be general, or can be specific to the technical specs the Commission has published. I hope this can be a longer, iterative discussion, bc the spec is (understandably) very much a first draft. 2/
Unfortunately, I’m not sure how much iteration is possible. I think the VLOPs have to start submitting information in this format Aug 25. Which means they will design larger systems around it. Which then makes it very hard to turn the ship. 3/
In the injunction against Biden administration officials "jawboning" social media companies, the judge makes a classic legal and logical error. He thinks he can protect "free expression" while leaving the govt free to restrict content he personally considers bad or dangerous. 1/
The injunction has a long list of things the government officials can and can't do. They CAN'T encourage platforms to suppress "protected free speech." But they CAN urge them to suppress content in 7 listed categories -- which include a bunch of 1st Am protected speech. 3/
One important message: SIMMER DOWN about The Algorithm, wonks. You do not actually need to speculate and make things up.
Ranking systems are actually quite well understood among CS people, who can explain things calmly and rationally if you let them.
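To make that concrete, here is a deliberately toy sketch of what "a ranking system" usually amounts to: predicted engagement signals combined into a single score, then sorted. Every signal name and weight below is hypothetical, made up purely for illustration; this is not any real platform's formula.

```python
# Toy illustration only: a simplified engagement-weighted ranking score.
# All signal names and weights are hypothetical, not any platform's real formula.
from dataclasses import dataclass

@dataclass
class PostSignals:
    predicted_like_prob: float   # model's estimate the viewer will like the post
    predicted_reply_prob: float  # model's estimate the viewer will reply
    author_followed: bool        # viewer already follows the author
    age_hours: float             # how old the post is

def rank_score(s: PostSignals) -> float:
    """Combine predicted engagement signals into one ranking score."""
    score = 1.0 * s.predicted_like_prob + 2.0 * s.predicted_reply_prob
    if s.author_followed:
        score *= 1.5                      # boost in-network content
    score /= (1.0 + s.age_hours / 24.0)   # decay older posts
    return score

# Candidate posts are then sorted by this score, highest first.
posts = [
    PostSignals(0.30, 0.05, True, 2.0),
    PostSignals(0.60, 0.01, False, 30.0),
]
ranked = sorted(posts, key=rank_score, reverse=True)
print([round(rank_score(p), 3) for p in ranked])
```

The point is not the particular weights; it's that the overall shape (scored candidates, sorted) is well documented and explainable, no mysticism required.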
@randomwalker's point about TikTok natively using vertical/portrait orientation framing is really interesting. I've mostly been tuning out the whole "future is mobile" discussion for years, but this seems like a very concrete example of why that matters.
At a quick skim, OFCOM appears to walk a very fine line in its guidance for "illegal content risk assessments."
It doesn't *require* platforms to proactively monitor users' posts. But it's hard to say how platforms could comply without doing so, at least for sample sets. 1/
The sample questions a platform might ask in risk assessments are all about something other than looking at specific user content.
Much of the guidance appears to be based on how risk assessments work in industries that are *not* in the business of carrying the individual expression of vast numbers of people, or of providing those people with access to information.
This was probably unavoidable. 3/