Daphne Keller
Platform Regulation Director, Stanford Cyber Policy Center. Former Google AGC. This is roughly my zillionth rodeo.
Aug 28 8 tweets 2 min read
I can't express how unutterably tired I feel after reading this absurd 3rd Cir ruling. It denies TikTok 230 immunity for a claim that is (very thinly) framed as liability based on algorithmic promotion, instead of liability based on user content. 1/

cases.justia.com/federal/appell…

This issue was fully briefed to the Supreme Court, with approximately infinity amicus briefs covering every possible angle, a little over a year ago. The Court decided not to decide. 2/

scotusblog.com/case-files/cas…
Aug 26 6 tweets 1 min read
The Telegram CEO arrest in France seems unsurprising, and like something that also could have happened under U.S. law. 1/

It has long been rumored (and maybe reliably reported?) that Telegram fails to remove things like unencrypted CSAM or accounts of legally designated terrorist organizations even when notified.

That could make a platform liable in most legal systems, including ours.
2/
Aug 19 11 tweets 3 min read
I’ve been playing with a special “Trust and Safety regulation expert” version of ChatGPT. It is remarkable how much it thinks that US and EU law require platforms to suppress expression that is perfectly legal in those jurisdictions. 1/

As it turns out, regular ChatGPT gives very similar answers, telling platforms to remove "harmful" user speech. This isn’t AI going rogue or hallucinating. This is AI reflecting what the laws and secondary materials it trained on actually say. 2/
Sep 29, 2023 15 tweets 3 min read
The S Ct will review the must-carry provisions of the TX and FL laws, and the requirements for "individualized notice" to users of content moderation decisions, but not other transparency requirements in the laws.

It says cert is for Questions 1 and 2 in the SG's brief. 1/

What statutory provisions does that actually encompass? The SG brief says it includes Texas's appeals provision, too. The Texas statutory sections it mentions as part of Q2 are 120.103 and 120.104, unless I am missing something. 2/
Sep 26, 2023 12 tweets 2 min read
The EU's database is live! In theory it should include every Statement of Reasons sent by platforms explaining content moderation decisions. I've groused about it, but it's an amazingly ambitious effort and already pretty interesting. 1/

When I first opened it half an hour or so ago, the database had 3.4 million entries. Now it's 3.5 million. 2/
Jul 21, 2023 15 tweets 5 min read
The statements from Thierry Breton of the European Commission about shutting down social media during riots are shocking. They vindicate every warning about the DSA that experts from the majority world (aka global south) have been shouting throughout this process. 1/

Breton asserts authority under the DSA to make platforms remove posts calling for “revolt” or “burning of cars” immediately. If platforms don’t comply, they will be sanctioned immediately and then banned. (He says “immediately” four times in a short quote.) 2/
Jul 13, 2023 16 tweets 5 min read
Monday is apparently the deadline (!) for comments on one of the most under-discussed transparency measures in the DSA: the public database of every (!) content moderation action taken by platforms.
1/ digital-strategy.ec.europa.eu/en/news/digita…

Comments can be general, or can be specific to the technical specs the Commission has published. I hope this could be a longer and iterative discussion, bc the spec is (understandably) very much a first draft.
2/
Jul 5, 2023 16 tweets 4 min read
In the injunction against Biden administration officials "jawboning" social media companies, the judge makes a classic legal and logical error. He thinks he can protect "free expression" while leaving the govt free to restrict content he personally considers bad or dangerous. 1/

Here's the Missouri v. Biden injunction. It's 7 pages. 2/
int.nyt.com/data/documentt…
Mar 23, 2023 10 tweets 3 min read
This is really good. One important message: SIMMER DOWN about The Algorithm, wonks. You do not actually need to speculate and make things up.
Ranking systems are actually quite well understood among CS people, who can explain things calmly and rationally if you let them.
Mar 16, 2023 17 tweets 6 min read
At a quick skim, OFCOM appears to walk a very fine line in its guidance for "illegal content risk assessments."

It doesn't *require* platforms to proactively monitor users' posts. But it's hard to say how platforms could comply without doing so, at least for sample sets. 1/

The sample questions a platform might ask in risk assessments are all about something other than looking at specific user content.
Mar 9, 2023 15 tweets 4 min read
In researching platform transparency, I've come across many examples of really troubling state enforcement, happening now.
AGs claim to be investigating whether platforms stated their policies correctly. But they also have clear agendas of changing platforms' speech policies. 1/

The most well-known example is Texas AG Ken Paxton's suit demanding that Twitter hand over all records about content moderation. The suit was explicitly filed in retaliation for Twitter deplatforming Trump. 2/

politico.com/news/2021/03/0…
Mar 9, 2023 25 tweets 6 min read
I have a new draft article about platform transparency mandates and the First Amendment. It is an attempt to take very seriously the questions likely headed to the Supreme Court in the NetChoice Texas/Florida law cases. 1/

papers.ssrn.com/sol3/papers.cf…

Neither the parties nor the lower courts have really taken the time to examine the transparency issues.

There is a very big iceberg here. So much case law uncited, and practical considerations unaddressed. My paper is long and it still only gets to a fraction of the issues. 2/
Mar 8, 2023 4 tweets 1 min read
I generally am trying not to engage on AI issues. But for the record, I love the passive-aggressive gloating evil AI song from the end of Portal I so, so much.

At one point, every person in my household could sing this all the way through.
Feb 22, 2023 5 tweets 1 min read
What situation *should* expose platforms to U.S. ATA liability, according to their own lawyer in the S Ct today?

Not taking down user speech when Turkish police tell them to. This was literally his example. Twitter fired all the lawyers who have actually dealt with Turkish/Erdogan government takedown demands. So I guess they forgot everything they ever knew on that topic.
Feb 21, 2023 4 tweets 1 min read
You know that natural human tendency to focus on things you're good at and can accomplish more easily?
Maybe that's why the Supreme Court in Gonzalez today kept asking about aiding and abetting. Which is the topic of *tomorrow's* case.

I wish I could lounge in a work chair like Alito. He's a master.
Nov 19, 2022 5 tweets 1 min read
I try to avoid rant mode on here, but seriously, this is so painful to watch. Trust and Safety workers right now are like surgeons whose operating room has been seized by frat boys. The patient is on the table. The boys have their scalpels out, and are confidently explaining where they plan to start cutting.
Nov 19, 2022 8 tweets 3 min read
The most important thing about the @nytimes op-ed by @yoyoel about Twitter under Musk is that it’s great and insightful and you should read it. His departure from Twitter is a huge loss for the company, but perhaps a great gain for open public discussion about Trust and Safety.

The second most important thing is that in an op-ed ostensibly about Musk, he spends eight paragraphs explaining the enormous hidden influence, capriciousness, and lack of clear standards or processes employed by another great unchecked power: APP STORES.
Sep 22, 2022 4 tweets 1 min read
Like an idiot, I took the 5th Circuit at its word when it said the NetChoice plaintiff trade associations represented every company covered by Texas’s law. That is so, so far from the truth. Read Corbin’s attachment from @ericgoldman and weep.

You should especially be weeping if you’re one of the many companies that have to comply with Texas’s wild requirements to carry hate speech, porn, etc., and you had NO IDEA and were NOT represented in a lawsuit driven by only the very biggest incumbent platforms.
Sep 20, 2022 17 tweets 4 min read
I think there are legitimate First Amendment issues with platform transparency mandates. I wish there weren't, and I think they can be addressed with good drafting.
This blog post explains one of the main issues: *state* influence over online speech.

cyberlaw.stanford.edu/blog/2022/09/s… 1/

The post lists five examples of culture war flashpoint issues that could be framed as legitimate areas for inquiry and enforcement by state AGs like Ken Paxton in Texas, or Rob Bonta in California, under transparency laws. 2/
Sep 20, 2022 8 tweets 2 min read
Confused about what legal arguments have been raised, resolved, preserved, or teed up for Supreme Court review in the NetChoice cases?

Possibly me too. An RA and I made this tracker doc, listing and linking to everything that's happened so far.

docs.google.com/document/d/1U8…

I think Florida's cert petition is due tomorrow. (Yes? Anyone know different?) 2/
Sep 17, 2022 25 tweets 4 min read
OK, on to potshots. 1/

It's kind of alarming how sloppy the 5th Circuit opinion about the Texas must-carry law is. It's like the bully who knows that what he says doesn't have to be true or make sense, because he'll get his way anyhow.
techfreedom.org/wp-content/upl…
2/