Getting ready to kick off the Future of Children's Online Privacy panel at #SOTN2023
Jane Horvath suggests that more states need to implement kids' privacy legislation.
Privacy for all is an important goal. But state-by-state solutions will only make the current convoluted patchwork problem worse.
If anything, we should be focused on getting to Yes on federal privacy.
Key point from Jamie Susskind -- conversations regarding online harm to kids cloud the federal privacy discourse, making it impossible to pass legislation. Those conversations are important but separate.
@lauren_feiner asks the critical Q regarding the counterproductive results of imposing age-verification on users (i.e. forcing platforms to collect more sensitive data).
Panelist responds that websites figured out how to implement COPPA...
But keep in mind -- COPPA relies on an "actual knowledge" standard while the current kids privacy proposals rely on "constructive knowledge" which is VASTLY different.
Constructive knowledge effectively mandates nothing short of verification (i.e. facial recognition / ID checks).
EPIC panelist suggests that California AADC is age assurance "instead of verification."
Make no mistake, legally, there is no distinction between assurance and verification, especially under constructive knowledge. You either figure out the age of your users or take the risk.
.@internetsociety -- how do you distinguish "age-appropriate" content? For example, kids could lose access to vital resources like LGBTQ+ information or even repro healthcare info. We have to think about kids on the margins who will be detrimentally impacted.
Surprising take that EARN IT wouldn't implicate encryption. There is absolutely nothing that stops the rule making authority from declaring "no encryption" as a "best practice."
And oh btw, failure to implement a best practice can be brought in as evidence of violation. Hmm.
EPIC suggests the best parental controls force businesses to abandon their current business models.
I'm lost as to how exactly that's a "parental control."
Discussing a blanket ban on social media for anyone under 16.
Key Q from panelist: how exactly do you define "social media"?
What I found interesting is that all of the panelists were absolutely allergic to a blanket ban.
But that's the thing. Obviously the blanket ban bills aren't serious (and constitutionally defective). Bills like AADC operate effectively like a ban for anyone under 18.
And honestly, not even just under 18s. Anyone who doesn't want to submit to intrusive age verification will be blocked from accessing legal speech.
EPIC asserts that CA AADC is practically identical to UK AADC.
This is a popular myth. The UK AADC operates like a set of guidelines. The DPA works w/ companies to bring them into compliance (instead of directly suing them like our AGs do here). Plus, the UK is a loser-pays system.
Not to mention, the UK AADC outlines how to implement the regulation, providing practical guidance to companies.
CA AADC leaves everyone in the dark, failing to define any of the key terms such as "materially detrimental to children" or "best interest of a child"
or "age assurance" or what content is "age appropriate" for certain age groups... (the list goes on).
And look -- the UK law is incredibly problematic. You'll never hear me endorse it. But it's effectively EASIER to deal with than CA's.
Suggesting otherwise signals a deep misunderstanding of the U.S. / UK litigation and regulatory systems.
EPIC: "we know self-regulation just doesn't work"
citation needed.
EPIC notes the importance of "setting a culture of privacy here in the U.S."
I don't necessarily disagree but caution that a "culture of privacy" in the EU means trading off speech.
ICYMI Texas' latest compelled-birth bill (HB 2690) enables private claimants to target websites that aid / abet abortions.
This only raises the stakes for Twitter v. Taamneh, a case that will consider whether Twitter aided / abetted terrorism under the ATA.
How could this play out?
If SCOTUS holds that Twitter--in merely providing access to its service and enforcing its community guidelines against terrorist content--aided / abetted terrorism, the same can be said for *any* website that happens to host abortion related content.
Think about Facebook groups, subreddits, discord servers, group chats, etc., dedicated to providing a safe space for discussions about abortion resources.
Next #Section230 SOTN panel starting with @joellthayer noting that FOSTA was important for taking down Backpage...
The DOJ took Backpage down before FOSTA was even enacted. But details.
Yael Eisenstat (ADL): "where does Section 230 stop? where are the lines?"
Section 230(e) is a good starting place.
@MattPerault importantly reiterating those limits. #Section230 is not a defense to federal criminal prosecution. Congress has the tools to create legislation in this area if they feel it necessary.
JCT kicks it off driving at the substantial assistance Q.
Hypo: JCT's friend is a mugger and JCT loans him a gun knowing that the friend *may* use it to commit a crime. Does he need to know more to qualify as aiding / abetting?
Petitioners note that the facts in Twitter's case are much more remote than JCT's example. Twitter doesn't have any reason to know or even infer the same of its users.
Most surprising for me was Justice Thomas. Right out the gate, he essentially questioned why this case was even being heard.
Which would be totally fair had he not been begging for a 230 case to opine on since 2019. But I digress.
Another surprise:
The Court seemed to appreciate that algorithms and content moderation are essential to the way the Internet functions today and that attempts to create imprecise legal and technological distinctions could have irreparable effects on the modern web.