Yesterday we filed an amicus brief in support of App Stores, developers, and consumers, urging the Ninth Circuit to affirm #Section230 protections for in-app payment processing.
The alternative would cause chaos for financial privacy and security and harm the creator economy.
Plaintiffs in this case rely primarily on a loophole from the HomeAway case, which abridged 230 protections for "transactions" tied to the underlying content at issue (i.e., illegal home-sharing listings).
The same result would hose small app developers.
In-app payment processing is core to app revenue for creators and the app marketplace. Holding App Stores liable for providing their in-app payment tools to developers is a surefire way to discourage in-app payments generally.
App Stores would be forced not only to scrutinize every app that uses their payment processing tools, but also to monitor every. single. update. pushed to those apps.
Now, I'm sure we can agree that there are certain apps that developers shouldn't be able to profit from.
Scams, fraud, FarmVille clones (jk).
But what about reproductive and gender-affirming healthcare apps, available in states that have criminalized autonomy?
Thanks to state-bred nonsense inspired by GOP culture wars, those apps will be among the first to go.
When App Stores provide their in-app payment tools to developers, they do so neutrally, assuming developers will adhere to the content guidelines and restrictions to which they agreed.
Obviously that's not always the case. Violating apps get removed, and developers get banned.
Affirming #Section230 preserves this balance, which in turn makes the app ecosystem thrive.
App Stores can trust their developers and provide them with the tools necessary to earn revenue from their creations.
But they can also intervene when bad actors enter the scene.
Otherwise, App Stores will be forced to assume the worst in their developers and users.
And in turn, everyone loses out.
CA9 upheld the server test in Hunley v. Instagram, which holds that an image is not "displayed" unless it is fixed in a computer's memory.
Importantly, CA9 also reiterated hyperlinking does not constitute direct infringement (a blow to CJPA and its copycats).
The Court distinguished 'embedding' from 'hyperlinking.' Hyperlinks redirect users to the original content. Embedding tells the web browser to automatically retrieve and show the content from the host website.
Neither requires local storage of protected materials.
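For the technically curious, here's a minimal DOM sketch of that distinction (illustrative TypeScript with a made-up photo URL, not anything from the opinion):

```typescript
// Hyperlink: nothing loads until the user clicks; the browser then
// navigates away to the host site. The linking page never serves the photo.
const link = document.createElement("a");
link.href = "https://photos.example.com/p/123.jpg"; // hypothetical URL
link.textContent = "View photo";

// Embed: the browser automatically fetches the photo from the host's
// server and renders it in place. The embedding site still stores no
// copy; the bits travel straight from the host to the user's browser.
const embed = document.createElement("img");
embed.src = "https://photos.example.com/p/123.jpg";

document.body.append(link, embed);
```

Either way, the protected file stays on the host's server, which is why neither act fixes a copy on the linking site's computers. That's the crux of the server test.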
Plaintiffs tried three theories to get around the server test:
(1) the test is limited to search engines: Court says no, not even close;
(2) the test is inconsistent with the Copyright Act: Court punts, suggests en banc review;
Today, I testified in front of the California Assembly Privacy Committee opposing SB 680, a bill that would broadly restrict Internet companies from using designs, algorithms, and features that could cause online "addiction" in kids.
Their response was deeply disappointing. 🧵
My testimony highlighted several unintended consequences of SB 680. The broad definition of addiction would discourage websites from hosting California youth users, cutting them off entirely from crucial resources, support, and information that teens regularly rely upon.
Further, because social media companies are in no position to judge what types of content will trigger addiction in any individual youth user, they will steer clear of any designs, algorithms, or features intended to improve online experiences for teens.
Exactly. This is a hard one for me personally. I identify as queer. My family members are queer. My best friends are queer.
But as a First Amendment lawyer, I've also seen how the government weaponizes and compels speech for their own hateful agendas. This was the right result.
Texas and Florida are actively fighting to force websites to carry and associate with hateful speech from political candidates. Texas also restricts websites from carrying reproductive health info, prohibiting the spread of abortion care information.
2016 remains a stark reminder that letting the govt decide which categories of speech are acceptable is dangerous.
No doubt, today's decision will be used to combat govt assault on information in right-wing extremist states like Texas and Florida.
(I'm already getting the #Section230 questions...)
I agree w/Prof. Volokh's take overall. I can see the complaint failing without even reaching the 230 issues. It doesn't seem OpenAI was put on notice of the alleged false output by the plaintiff + damages are suspect.
Weird paternalism aside, we need to actually talk about this because this proposal is unfortunately not unique.
In fact, many states have proposed (and some even enacted) identical legislation. These kinds of laws will harm more than help kids. Let's dig into it.🧵
First off, there has never been a "pre-algorithm" era of the web. The Internet is built on algorithms. TCP/IP, the Internet's foundational communication protocol suite, is literally a set of algorithms.
So, to technologists, legislation that "bans algorithms" is nonsense.
Social media and other online services are an amalgamation of algorithms. Google's PageRank algorithm is how we get search results tailored to our queries.
Facebook's News Feed and Twitter's timeline both use algorithms to ensure we see relevant content.
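To make "the Internet is built on algorithms" concrete, here's a toy PageRank-style ranker (a sketch of the underlying idea over a made-up three-page link graph; Google's production system is obviously far more elaborate):

```typescript
// Toy PageRank via power iteration. The link graph is invented for
// illustration; this sketches the concept, not Google's actual code.
const links: Record<string, string[]> = {
  a: ["b", "c"], // page "a" links to pages "b" and "c"
  b: ["c"],
  c: ["a"],
};

function pageRank(damping = 0.85, iterations = 50): Record<string, number> {
  const pages = Object.keys(links);
  const n = pages.length;
  // Every page starts with an equal share of rank.
  let rank: Record<string, number> = Object.fromEntries(
    pages.map((p): [string, number] => [p, 1 / n]),
  );

  for (let i = 0; i < iterations; i++) {
    // Each page keeps a small baseline, then collects rank from its inlinks.
    const next: Record<string, number> = Object.fromEntries(
      pages.map((p): [string, number] => [p, (1 - damping) / n]),
    );
    for (const p of pages) {
      for (const q of links[p]) {
        next[q] += (damping * rank[p]) / links[p].length; // split across outlinks
      }
    }
    rank = next;
  }
  return rank;
}

console.log(pageRank()); // pages with more (and better-ranked) inlinks score higher
```

Ban "algorithms" broadly and you've banned the basic machinery that decides what shows up, and in what order, on every feed and results page.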
Further, what is your end goal for Silicon Valley tech?
With the AADC, you're forcing tech companies to perform age verification on all of your constituents, further collecting sensitive identification info (largely in violation of other CA privacy laws).
You've created an environment where tech companies are discouraged from improving their products and services for kids and families.
And in fact, you're encouraging companies to block minor users entirely, disrupting their lives and frustrating their access to resources.