Some initial thoughts on this "JAMA" bill repealing 230 immunities for algorithmically targeted content.
energycommerce.house.gov/sites/democrat…
It's odd to me that the bill targets *personalized* ranking but not general engagement-based ranking. If the problem is that ranking is personalized, you would think the legislative fix lies in privacy rights rather than changing a content law like CDA 230. 1/
As usual, the kind of prohibited ranking sweeps very broadly, and would seem to discourage platforms from making beneficial ranking changes in response to e.g. coordinated inauthentic behavior. (Maybe the idea is to address that by covering only personalized ranking?) 2/
The scienter element is wacky. Liability comes not from knowing/reckless disregard about illegal content. It comes from knowing/etc. about the basic, omnipresent fact that the service is ranking third party content. Drafting error or deliberate? I assume the latter. 3/
Rather than tying this to existing, known, illegal things, this covers recommendations that "materially contributed to a physical or severe emotional injury to any person." That... seems to me to run into Item #1 on my list of six constitutional hurdles. 4/ cyberlaw.stanford.edu/blog/2021/01/s…
That part is especially weird paired with the scienter part, where it seems like the platform does not need to have known or had reckless disregard about the harm itself, or the content itself. 5/
I really like that it has exemptions for small businesses and for infrastructure providers. That should be part of platform regulation drafting 101. 6/
I don't know that this small business definition is the right one. I mean, I very seriously don't know. I think a real effort by economists and Internet experts to quantify this before we make any laws about it would be incredibly useful. 7/
The infrastructure carve-out has a nice list of examples. But like another bill (PACT, maybe?) it weirdly only applies if the DNS/caching/CDN/etc. is acting as a back-end service provider for a *different* interactive computer service. 8/
I assume the idea is that someone, somewhere in the technical stack should hold liability for illegal content. But that disregards other big issues, including the bluntness and inaccuracy of removals from things like AWS (taking down a whole website for one bad comment, e.g.). 9/
Anyhow... Bottom line is that I am not too surprised to see this. I'm sure you aren't either. 10/
It's also a big flashing sign for anyone who thinks both (1) amplification causes harms and (2) any regulation needs a lot of nuance: your work on (1) is done.
If you believe in (2) – the nuance part – it is time to start talking about that. Or we'll get laws like this one. 11/11
Oops. I failed to plug my article about regulating amplification. It goes into way more detail, especially on First Amendment issues. knightcolumbia.org/content/amplif…
Good point from @BerinSzoka. The "with respect to information" phrasing causes two problems. First, as he points out, stripping (c)(1) immunity 100% opens the door to more "must-carry" claims from -- and more platform concessions toward -- the far right.
Second, I think it means that the inevitable false or erroneous allegations about newsfeed content lead to removal from the entire service, because the platform loses immunity for that piece of "information" wherever it sits.
And let's not even get into what "information" means and whether this opens up claims that platforms must filter for "identical" or "equivalent" information.

If we do go there, though, man does the EU have some lessons for us. This is a whole nasty kettle of fish.
If you want to know more about must-carry claims, my take on the Con Law issue is here, in the second half: lawfareblog.com/who-do-you-sue…
And @ericgoldman has a piece, I think with @jess_miers, showing how often these are resolved on grounds of 230(c)(1). NOT (c)(2).

Pay attention, Dems. This (c)(1) / (c)(2) stuff matters for the substantive issues you care about.
If you want to know about the speech, privacy, and possibly equal protection problems with a law that de facto requires platforms to build filters...
The short version for a US audience is item #4 here: cyberlaw.stanford.edu/blog/2021/01/s…
The long version, in my best EU legalese, is here. academic.oup.com/grurint/articl…
Thread by Daphne Keller (@daphnehk)