THREAD: some quick thoughts on @amyklobuchar's new bill, which would allow the government to define speech as "health misinformation" and then revoke platforms' Section 230 protections if they algorithmically amplify that speech theverge.com/2021/7/22/2258… (spoiler: it's a bad idea)
First: I get it. Medical misinformation, especially around COVID safety measures and vaccines, is a real problem. Lives are at stake. And there are real concerns about the ways that Big Tech companies like Facebook and YouTube artificially amplify harmful content with their algorithms.
But this bill won't address any of those problems. And in fact, it could make them even worse. It also almost certainly violates the First Amendment, and would never hold up in court. Which is frustrating, because as I just said, this is a real problem, and we need real solutions.
The Health Misinformation Act falls into two main traps: 1) the idea that the federal government dictating how platforms moderate content will make them moderate more responsibly, and 2) the idea that weakening Section 230 will incentivize platforms to moderate more responsibly.
To understand why this bill is problematic, first you need to understand that ALL content on a platform like Facebook is either algorithmically amplified or algorithmically suppressed. You're almost never "just seeing" something. You're seeing what Facebook wants you to see.
This, of course, is the core problem with platforms like Facebook, which use surveillance to deliver content to us based on our behavior. So it's good that lawmakers like @amyklobuchar are focused on algorithms and amplification instead of just the speech itself. But...
... this bill fundamentally misunderstands how platforms react in real life when threatened with liability. If this law were to go into effect, Facebook and YouTube's highly risk-averse lawyers would likely say "Do not amplify any medical content at all. We can't take the risk."
On an algorithm-driven platform like Facebook or YouTube, anything that is "not amplified" is effectively "suppressed." So creating liability concerns around amplifying health information would actually end up keeping helpful, reliable, legitimate health info out of people's feeds.
This is especially true when you create liability around a category of speech as broad as "public health info," when we know guidance can change quickly. If this law had been on the books a year ago, could FB have been sued for amplifying a NYT article about how you don't need to wear a mask?
Klobuchar's bill follows a similar logic to a bill introduced by @RepAnnaEshoo that creates a carveout in 230 for algorithmically amplifying certain types of speech "promoting terrorism." These carveout-style bills tend to fail to address the problem they target while creating new ones.
We don't have to wonder what might happen if we create content-specific carveouts in S. 230, because we can just look at what actually did happen the last time we did so: when Congress passed FOSTA/SESTA. That bill has been a disaster. It got people killed dailydot.com/debug/fosta-se…
Conditioning 230 protections on "not amplifying" a category of speech rather than "not hosting" it entirely attempts to avoid these human rights & free expression issues, but ultimately falls short because it misunderstands the ways that surveillance capitalist platforms function.
This bill would be bad enough if all it did was attempt to revoke 230 protections for algorithmic amplification of vaguely defined "medical misinformation," but unfortunately it goes a step further, and that step takes it all the way over the line into unconstitutional territory.
The very last paragraph of the bill basically tasks the Secretary of Health and Human Services with issuing guidance about what speech constitutes "medical misinformation," for the express purpose of then excluding amplification of that speech from Section 230 protections.
That's uh... that's not compatible with the First Amendment. More importantly: it's a bad idea to allow a federal government agency to effectively dictate what public-health-related speech platforms allow you to see and what speech they actively suppress to avoid liability.
That last point is so crucial & everyone needs to understand this: large corporations are REALLY GOOD at avoiding liability. They hate liability! They will do almost anything to avoid it, even if it means suppressing large amounts of legitimate speech, or harming marginalized ppl.
The reality is that if you think Big Tech platforms like Facebook and YouTube are doing a bad job moderating medical misinformation now, they will do a MUCH WORSE job if their moderation practices are dictated by corporate lawyers whose only concern is avoiding liability.
None of this means that we should just sit back and do nothing while harmful misinformation goes viral. There are a number of things lawmakers could do right now that would actually reduce this harm without opening the Pandora's box of problems that come with changing Section 230.
First, Congress could finally pass data privacy legislation, which would strike at the root of Big Tech's surveillance capitalist business model, & make it much harder for platforms like Facebook to microtarget misinformation directly into the minds of the ppl most susceptible to it.
Robust antitrust enforcement that starts to address the centralization & monopoly problems w/ Big Tech would also go a long way toward making it so each individual platform doesn't have such an outsized influence, and making it much harder for bad actors to poison the whole network.
There are some real issues to work out in the House antitrust package, but there are a bunch of good ideas embedded in those bills, some of which would at least start to chip away at the root of the problem, which is the dominance of platforms like Facebook, Instagram, & YouTube.
Then there are some helpful ideas in bills like this one, introduced by @EdMarkey and @DorisMatsui, which attempt to address problems like algorithmic discrimination and abuse without messing with Section 230 fightforthefuture.org/news/2021-05-2…
Frankly, it's beyond frustrating to see a lawmaker like @amyklobuchar, who genuinely seems to take these issues seriously, introduce a bill that's this sloppy, misguided, unhelpful, and unworkable, at a time when we urgently need actual action, not partisan messaging bills.
And it's just beyond irresponsible for lawmakers to continue introducing bills like this without even acknowledging the concerns raised by dozens of human rights, racial justice, sex worker advocacy, LGBTQ+, and civil liberties groups in this letter theverge.com/2021/1/27/2225…
So, to summarize: there are lots of things lawmakers can do right now to address the harms of Big Tech, including medical misinformation, without touching Section 230.

And if Congress wants to do something on 230, they should advance the Safe Sex Workers Study Act...
... to study the actual impact of FOSTA/SESTA, the last major change to Section 230, before they make the same mistakes again. More on that here from @RepRoKhanna
One last thought, which I mentioned elsewhere but may as well attach to this thread: how does @amyklobuchar think that, for example, Trump's HHS secretary would have defined "medical misinformation"? Cuz I guarantee it would have included health care for trans ppl.
