Something terrible is happening in Canadian Internet law, and the people who care in the rest of the world are mostly stretched too thin to pay attention. We’re counting on people like @mgeist, @EmilyLaidlaw, @tamir_i, and @vivekdotca to somehow fix it. 1/
@mgeist @EmilyLaidlaw @tamir_i @vivekdotca This is a thread listing some of the law’s problems as identified by @mgeist, and flagging a few resources showing the law’s major human rights problems. Others who know of more that might be useful for those working on this in Canada, please add on. 2/
Many of @mgeist's recent posts are about the rushed and secretive lawmaking process. This latest one lays out the current proposals. michaelgeist.ca/2021/07/online… 3/
It's like a list of the worst ideas around the world -- the ones human rights groups like @edri, @article19org, @hrw, @accessnow and responsible tech orgs like @mozilla and @wikimediapolicy have been fighting in the EU, India, Australia, Singapore, Indonesia, and elsewhere. 4/
First, it has 24-hour notice and takedown. That’s much worse than NetzDG in Germany, which gives platforms 7 days to more carefully assess speech that isn’t obviously illegal. NetzDG got a ton of attention around the world. Canada is flying under the radar. 5/
We know that even under more lenient systems, platforms systematically err on the side of taking down lawful content in order to avoid risk to themselves. cyberlaw.stanford.edu/blog/2021/02/e… 6/
Canada's proposed law would have penalties up to three percent of global revenue or $10 million. And 24-hour takedown requirements. That's a recipe for massive overcompliance, with major consequences for users' rights to seek and impart information. 7/
Second, the Canadian proposed law requires proactive monitoring – aka filtering – for five kinds of content, including hateful content, propaganda, and violent content. 8/
This is exactly the kind of filtering mandate that has had civil society and human rights advocates ringing alarm bells in Europe for several years. A much narrower proposal in the EU Terrorist Content Reg drew condemnation from UN human rights officials and more. 9/
Here's the UN human rights officials' objection letter: spcommreports.ohchr.org/TMResultsBase/…
(I think there may have been two of those.)
Here's a thread listing other objections to filtering *just* for terrorist content. The Canadian proposal is much, much broader. 10/
Scholars and experts including @AlexandraQu, @JoanBarata, @hutko, @davidakaye, @DiaKayyali, @why0hy and many more have written about the dangers filtering raises -- not just for speech and info rights, but for rights against discrimination, rights to privacy, and more. 11/
Third, platforms must report users who *might* have violated the law to police. This kind of privatized dragnet surveillance of user speech is in Germany's new NetzDG law too. Google is challenging it there. reuters.com/technology/goo… 12/
I'm not an expert in that area of law, but I do live in the world. So I think I can spot an issue about who gets reported to the police, and how police treat them. 13/
We have every reason to expect people of color and other marginalized or vulnerable groups to get flagged more, reported to police more, and mistreated more after that happens. The problem can start with bias in AI or other filtering tools, like this: homes.cs.washington.edu/~msap/pdfs/sap… 14/
Bias in content moderation can also come from human moderators -- especially the ones being asked to make very rapid assessments. Here's a thread on that. 15/
The harms from platforms surveilling users and reporting them to police will also disproportionately hurt vulnerable groups like undocumented immigrants, parolees, or sex workers -- whether by getting them silenced, banned, and reported, or by causing self-censorship. 16/
The US has been coming to terms with this problem in its last platform law, SESTA/FOSTA -- to the point that Elizabeth Warren and other Members of Congress have called for a formal assessment of harms to sex workers. congress.gov/bill/116th-con… 17/
Amazing research and advocacy on this topic has been done by @KendraSerra, @MissLoreleiLee, @evan_greer and others. (See hackinghustling.org/wp-content/upl…) 18/
Fourth, the Canadian law would create a set of new regulatory bodies, tribunals, and a Commissioner who can order platforms to "do any act or thing... necessary to ensure compliance with any obligations imposed on [them] by or under the Act w/in the time spec'd in the order.” 19/
WOW. That is a sweeping power for a regulator to hold over online speech. I don't know Canadian Charter rights law, but that would be a blazing prior restraint problem in many countries, and likely under the Inter-American Convention on Human Rights (though Canada's not a signatory). 20/
Maybe I'll just tag some of the Inter-American human rights official accounts, though, since this Canadian law goes so powerfully against reports they've been issuing for years. @CIDH @PVacaV 21/
@CIDH @PVacaV If anyone doubts that unscrupulous people can come to, and abuse, government powers like these... I invite them to look at the past five years in the United States. 22/
Fifth, the Canadian law provides for ISPs blocking sites that don't comply with the law. The last time the US tried to do this, in SOPA/PIPA, multiple UN and regional human rights officials wrote to object. perma.cc/8RVR-HQTJ 23/
In other parts of the world, law has been shifting to tolerate site-blocking in extreme cases, like where an entire site is dedicated to counterfeiting or piracy. But this seems to be about blocking entire sites that have lots of legal speech, and just some that's illegal. 24/
Comments on this proposal are due Sept. 25th. But @mgeist, who knows what's going on on the ground, is pretty discouraged about whether the comments will make any difference. 25/
I hope some of these resources (or those in replies from others) can be useful to the people dealing with this on the ground in Canada. It's a sign of the craziness in Internet policy today that this law isn't getting huge international focus. Bonne chance, you guys. 26/26

Thread by Daphne Keller (@daphnehk)

