Jess Miers 🦝
Jul 2 · 26 tweets · 7 min read
Quotes from yesterday's NetChoice opinion, organized by issue, and what I think they mean for the future of Internet regulation🧵
1. Social media platforms are entitled to First Amendment protections.

"To the extent that social media platforms create expressive products, they receive the First Amendment’s protection." Image
In other words, social media companies are *not* common carriers.

"The principle does not change because the curated compilation has gone from the physical to the virtual world." Image
"It is no job for government to decide what counts as the right balance of private expression—to “un-bias” what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others." Image
"We have repeatedly faced the question whether ordering a party to provide a forum for someone else’s views implicates the First Amendment. And we have repeatedly held that it does so if, though only if, the regulated party is engaged in its own expressive activity..." Image
"However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others." Image
Implications:
-Online services enjoy the full extent of 1A protections.

-States cannot simply classify online services as common carriers in an effort to strip their 1A protections.

-Laws that touch the expressive capabilities of services must face strict scrutiny.
2. Content moderation is an expressive activity.

"the major platforms cull and organize uploaded posts in a variety of ways." Image
"the current record indicates that the Texas law regulates speech when applied in the way the parties focused on below—when applied, that is, to prevent Facebook (or YouTube) from using its content-moderation standards to remove, alter, organize, prioritize, or disclaim posts" Image
"Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own. And that activity results in a distinctive expressive product." Image
"Like them or loathe them, the Community Standards and Community Guidelines make a wealth of user-agnostic judgments about what kinds of speech, including what viewpoints, are not worthy of promotion." Image
"The individual messages may originate with third parties, but the larger offering is the platform’s. It is the product of a wealth of choices about whether—and, if so, how—to convey posts having a certain content or viewpoint." Image
"The choice of material,” the “decisions made [as to] content,” the “treatment of public issues”—“whether fair or unfair”—all these “constitute the exercise of editorial control and judgment.” Image
"That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference." Image
Implications:
-content moderation is protected speech.

-platforms have a variety of tools they use to moderate and display third-party content, and all of those tools are expressive (a rough sketch of such a pipeline follows this list).

-state laws that interfere with any of these tools or editorial decisions are unconstitutional.
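
To make those tools concrete, here is a minimal, purely hypothetical sketch in Python. Nothing in it comes from any real platform: the rule names, terms, and "context notice" label are invented for illustration. It simply shows how a written policy becomes decisions to exclude, disclaim, or carry third-party posts.

```python
# Hypothetical sketch only: a toy "community standards" pipeline.
# The terms and labels below are invented, not any platform's actual rules.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: list[str] = field(default_factory=list)

# User-agnostic judgments written in advance about which speech to carry.
REMOVE_TERMS = {"spam-link"}       # content the publisher refuses to carry
DISCLAIM_TERMS = {"miracle cure"}  # content carried, but with a warning

def moderate(posts: list[Post]) -> list[Post]:
    """Apply the written standards to third-party posts."""
    kept = []
    for post in posts:
        text = post.text.lower()
        if any(term in text for term in REMOVE_TERMS):
            continue                              # exclude entirely
        if any(term in text for term in DISCLAIM_TERMS):
            post.labels.append("context notice")  # carry, but disclaim
        kept.append(post)
    return kept

if __name__ == "__main__":
    feed = moderate([
        Post("a", "Check out this spam-link now!!"),
        Post("b", "This miracle cure fixes everything"),
        Post("c", "Here is my take on the new opinion"),
    ])
    for p in feed:
        print(p.author, p.labels)
```

Every branch in that toy pipeline is an editorial choice the publisher made in advance about which speech to carry and how to present it, which is the kind of judgment the quoted passages describe.
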
3. Algorithmic curation is also protected expression.

(Stay with me. Many folks have claimed the opinion says otherwise. But read carefully: SCOTUS absolutely protects the algorithmic curation process, and it's not even a close call.)
"In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. Image
"[the Texas law] prevents a platform from compiling the third-party speech it wants in the way it wants, and thus from offering the expressive product that most reflects its own views and priorities." Image
"A user does not see everything—even everything from the people she follows—in reverse-chronological order. The platforms will have removed some content entirely; ranked or otherwise prioritized what remains; and sometimes added warnings or labels." Image
"So too we have held, when applying that principle, that expressive activity includes presenting a curated compilation of speech originally created by others." Image
"The key to the scheme is prioritization of content, achieved through the use of algorithms. Of the billions of posts or videos (plus advertisements) that could wind up on a user’s customized feed or recommendations list, only the tiniest fraction do." Image
"The platforms write algorithms to implement those standards—for example, to prefer content deemed particularly trustworthy or to suppress content viewed as deceptive (like videos promoting “conspiracy theor[ies]”)." Image
"When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices. And because that is true, they receive First Amendment protection." Image
Implications:
-SCOTUS articulated a test that is effectively impossible for future government actors to get around: if algorithmic curation is, even in part, driven by the expressive choices of the private publisher, it is protected.

-all curation algorithms are a reflection of expression, AI-driven curation included (see the sketch after this list).
-hence, any state law that attempts to end-run the First Amendment by attacking the service's "design" or "algorithms" will be doomed.

-laws like the New York SAFE Act or Age-Appropriate Design Codes inherently interfere with how services compile and curate third-party content.
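
As promised in the list above, here is a minimal, hypothetical sketch of algorithmic curation in Python. The weights, flags, and field names are all invented; this is not any platform's actual ranking system. The point is that the ordering a user sees is a direct function of the publisher's own judgments about trustworthiness and deception:

```python
# Hypothetical sketch only: a toy feed-ranking function in which the
# publisher's editorial judgments (boost trusted sources, demote content
# flagged as deceptive) determine what is shown and in what order.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    engagement: float        # predicted user interest
    trusted_source: bool     # publisher's judgment about the source
    flagged_deceptive: bool  # publisher's judgment about the content

TRUST_BOOST = 1.5        # invented weight: promote trusted sources
DECEPTION_PENALTY = 0.1  # invented weight: suppress flagged content

def score(c: Candidate) -> float:
    s = c.engagement
    if c.trusted_source:
        s *= TRUST_BOOST
    if c.flagged_deceptive:
        s *= DECEPTION_PENALTY
    return s

def build_feed(candidates: list[Candidate], limit: int = 3) -> list[str]:
    # Only the top-scoring fraction of eligible posts is displayed at all.
    ranked = sorted(candidates, key=score, reverse=True)
    return [c.post_id for c in ranked[:limit]]

if __name__ == "__main__":
    # The high-engagement but flagged post ("p1") falls out of the feed.
    print(build_feed([
        Candidate("p1", 0.9, False, True),
        Candidate("p2", 0.4, True, False),
        Candidate("p3", 0.7, False, False),
        Candidate("p4", 0.2, True, False),
    ]))
```

On the thread's reading, swapping a machine-learned model in for the hand-written score() changes nothing: the model is still built and tuned to implement the publisher's standards.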