The EFF has a piece out on how client-side scanning ‘breaks’ end-to-end encryption. They take a pretty strong position here (one I happen to agree with). But I thought it would be helpful to explain my specific technical concerns. Thread: 1/ eff.org/deeplinks/2019…
Just to explain what we’re talking about: many current unencrypted messaging systems scan every photo sent through the service, in order to detect abusive content (CP). Encrypted messaging systems can’t do this. Hence proposals to do scanning on the client side. 2/
The service would send down some kind of list of content hashes that are problematic, and your app would check for matches before encrypting the message/photo/whatever. 3/
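The matching step described above can be sketched in a few lines of Python. This is a toy illustration using exact SHA-256 digests; real deployments use perceptual hashes, and the blocklist entries and sample bytes here are made up for demonstration:

```python
import hashlib

# Hypothetical hash list pushed down by the service. In practice these
# would be perceptual hashes of known abuse imagery, not SHA-256 digests.
blocklist = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def scan_before_encrypt(photo_bytes: bytes) -> bool:
    """Return True if the photo matches the blocklist, i.e. the client
    would flag/report it before any encryption happens."""
    digest = hashlib.sha256(photo_bytes).hexdigest()
    return digest in blocklist

print(scan_before_encrypt(b"known-bad-image-bytes"))    # True: flagged
print(scan_before_encrypt(b"a normal vacation photo"))  # False: sent normally
```

The key point is that this check runs on the client, on plaintext, before end-to-end encryption is applied.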
The problem with this approach is that it’s subject to abuse in two ways.

1. The system is designed to filter “bad” content, and “bad” means different things to different people.

2. Even if the service provider is decent, bad actors can slip inappropriate content into the DB. 4/
People tend to discount the first concern because we live in a society of laws, etc. But it’s helpful to imagine how an authoritarian government will use this system. In fact, you don’t have to imagine. Just use WeChat. 5/
But even if you live in a healthy democracy (good for you) and you basically trust that this system will be used for good, there’s still the possibility of abuse. To prevent that, someone needs to audit the database to make sure everything in it is supposed to be there. 6/
And this is where existing systems largely fall down. Today’s “sexual abuse imagery” systems rely fundamentally on keeping a *secret* database of image hashes, which are computed using a *secret* algorithm. This creates a lot of potential for undetectable abuse. 7/
A malicious provider can insert the hash of any data they want — say political content — and they’re guaranteed to get a report from any client that sends this. Normal clients can’t audit the database since it’s kept deliberately opaque. 8/
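A toy illustration of why the client can't audit the list: all it ever receives are opaque digests, so an entry derived from political content looks exactly like one derived from abuse imagery (both inputs below are hypothetical stand-ins):

```python
import hashlib

# From the provider's side, either input can be hashed into the database.
abuse_hash = hashlib.sha256(b"actual abuse image bytes").hexdigest()
political_hash = hashlib.sha256(b"banned political meme bytes").hexdigest()

# From the client's side, both entries are indistinguishable 64-character
# hex strings; nothing about a digest reveals what content produced it.
print(abuse_hash)
print(political_hash)
```

Because hashing is one-way, there is no way for a client (or an outside auditor without the preimages) to verify what any given entry actually matches.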
Even worse, if the hashing system has collisions (much more likely for ‘fuzzy’ image hashing systems), it may be possible to find legitimate abuse imagery that just happens to collide with non-abuse content. 9/
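To make the collision concern concrete, here is a deliberately crude "perceptual" hash (one bit per pixel, thresholded at the image mean) applied to two tiny fake images. Real systems like PhotoDNA are far more sophisticated, but the same structural issue applies: visually different inputs can share a hash.

```python
def average_hash(pixels):
    """Toy fuzzy hash: one bit per pixel, set if the pixel is brighter
    than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# Two 2x2 grayscale "images" with very different brightness values...
img_a = [10, 200, 15, 220]
img_b = [90, 130, 95, 140]

# ...that nevertheless produce the same hash: a collision.
print(average_hash(img_a) == average_hash(img_b))  # True
```

An attacker who can find such collisions could craft innocuous-looking content that matches a database entry, or vice versa, and the opacity of the real systems makes this hard to detect.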
In short, using today’s SAI detection systems on the client side (assuming we can even solve the ‘secret algorithm’ problem) basically means “the system is secure as long as you fully trust the provider.” That’s a pretty important security downgrade from normal e2e. /Fin