This article about end-to-end encryption and authorities’ desire to perform real-time content scanning is very well written and I hope you’ll read it. It also makes me pretty angry.
For nearly a decade, technologists have been engaged in a good-faith debate with policymakers about the need for “exceptional access” — basically a way to bypass encryption when police get a warrant. 1/
This is a really hard problem. How do you build a system that keeps your data encrypted against hackers, but still lets police (even local police) decrypt it when they want? Some co-authors wrote about this. mitpress.mit.edu/blog/keys-unde… 2/
But without solving that question, policymakers have shifted their “ask” to something much less reasonable. Now they want a system that can perform real-time surveillance on every piece of data sent through an encrypted messaging system. 3/
There is no acknowledgment that this is an insane escalation and poses a massive risk to civil liberties, to say nothing of being an even harder unsolved technical problem. “We have to look at *all* of your messages now,” and this is somehow a more reasonable ask? 4/
What makes this problem particularly challenging is that the scanning process involves a database of *secret* content. Nobody is allowed to see what governments want to scan for (for obvious reasons: it’s disturbing and private.) 5/
So now we’re trying to build a system that’s end-to-end encrypted but also has a master database of prohibited content. When you send anything that matches (including false matches) the encryption is undone, and government agents get to see it. 6/
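To make the mechanism concrete, here is a toy sketch of database matching, in Python. This is not any real deployed protocol: actual proposals use perceptual hashes (so near-duplicates match, which is where false positives come from) and cryptographic protocols meant to hide the database, and every name below is invented for illustration.

```python
import hashlib

# Toy model of scanning messages against a secret database of
# prohibited content. Real systems use perceptual hashes, not exact
# cryptographic hashes; this set and its contents are hypothetical.
PROHIBITED_HASHES = {
    hashlib.sha256(b"example-prohibited-content").hexdigest(),
}

def scan_message(plaintext: bytes) -> bool:
    """Return True if the message matches the prohibited database."""
    return hashlib.sha256(plaintext).hexdigest() in PROHIBITED_HASHES

# A match (or, in a perceptual scheme, a false match) is the point
# where the end-to-end encryption is undone and the content revealed.
print(scan_message(b"example-prohibited-content"))  # True
print(scan_message(b"an innocent message"))         # False
```

The catch the thread describes is exactly visible here: whoever controls `PROHIBITED_HASHES` controls what gets flagged, and by design nobody outside can audit what is in it.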
While I’m sure that the intentions behind this idea are good, it is hard not to notice the *extreme* interest that national law enforcement agencies are suddenly showing in it. They didn’t invest much in CSAM scanning before, but encrypted scanning is a national priority. 7/
I want to stress that once these content scanning systems are in place (for child abuse imagery) their usage will expand. They will be retasked by governments all over the world to scan for speech we consider “protected”. This is the infrastructure of authoritarianism. 8/
Even in democratic countries there will be constant pressure to add content to the list. The UK already implements a nationwide censorship regime at the ISP level. Do you think they won’t take advantage of this technology to push that into private messaging? 9/
I want to conclude by saying one thing to the folks involved in this debate. Many people who were vociferously against unsafe crypto backdoors are scared to wade into this debate, or are even enthusiastic to help build these systems. “Because of the children.” 10/
I’m a parent too. But for my kids’ sake I’m unwilling to help governments build the surveillance infrastructure of the future, until governments explain precisely how they will keep that infrastructure from being abused. I don’t think they have a plan. //
“New: In 2010, KPN commissioned a study into the behavior of Huawei in the mobile network. The findings were so serious that it was feared for the continued existence of KPN Mobiel if the conclusions were to be leaked”
I can’t access the reporting (paywall and in Dutch) or the actual report. But it sounds like Huawei retained admin access to eavesdrop on calls in the Dutch network, against explicit agreements.
I’ve seen this pattern of story, and I know that it will be hailed by some as “the smoking gun proof of malice” and others will point out that the Huawei code was just a smoking pile of sloppiness, and really: it doesn’t matter.
The extra barriers Apple is throwing up in the way of security researchers make me much more nervous about using their stuff.
It’s not entirely the case that security researchers are locked out of iOS today, but it’s definitely getting harder. The work Project Zero had to do to reverse-engineer iMessage is an example. googleprojectzero.blogspot.com/2021/01/a-look…
Does it make me nervous that Apple had to write a “firewall” to protect iMessage from malicious payloads (because they’re not confident it can be secured)? Hell yes it does. Would it be nice to have more people banging on this? Yes!
The more I read about the development of electronic payment tech from 1990-2010, the more it looks like a scam designed to ensure that only the existing banks (and the few tech companies the banks selected) were viable options.
Apropos a 2010 post by Paul Graham on why the PayPal founders were geniuses. Maybe this is true, but what did PayPal actually do brilliantly? They built anti-fraud tech so that people could use 1970s credit card tech online.
Why weren’t there dozens or hundreds of PayPals, or people doing more sophisticated cash-like payments on the Internet? Well, there were some of the latter, but their doors all got kicked in by the Feds. en.m.wikipedia.org/wiki/Liberty_R…
So it looks like NYC is deploying some half-cooked “blockchain” solution for vaccine passports. theintercept.com/2021/03/24/and…
Thank you to @samfbiddle for only using the G-rated quotes.
At one point @samfbiddle told me that IBM claimed to have a technical document explaining how their system worked, and it (in all apparent seriousness) proposed this diagram as a “system architecture” or something. I nearly blew milk out of my nose.
Me: surely everyone else has been a little slower on publishing during the pandemic.
Me: *stupidly checks the websites of my theory friends*
Also me: *vanishes into a tailspin of insecurity*
Advice to new faculty: it is very important to make a friend in your field who will reassure you about why everyone else’s work is easy and yours is both harder and uniquely important. This does not need to actually be true for it to help.
For most of my life I’ve waited for someone to post a credible claim that they’ve broken a major cryptosystem like RSA, and I’m pretty sure tomorrow I’ll still be waiting.
But that doesn’t make it any less fun to think about what a real (implemented) RSA break would look like. Imagine you were a genius who found an efficient factoring algorithm. You have so much opportunity for drama.
Obviously you could just post your algorithm but that’s boring and anyway practical people won’t be able to tell if it works, especially if it’s complicated and you’re not one of a very small number of researchers.
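One exception to “nobody can tell if it works”: a *demonstrated* factorization is trivially checkable, since anyone can multiply the claimed factors back together. A minimal sketch, using toy numbers (real RSA moduli are 2048+ bits; the values here are illustrative):

```python
# Finding factors of an RSA modulus is believed hard; verifying a
# claimed factorization is trivial. So a genius with a factoring
# algorithm could simply publish p and q for a public challenge modulus.
def verify_factoring_claim(n: int, p: int, q: int) -> bool:
    """Check that p and q are nontrivial factors of n."""
    return 1 < p < n and 1 < q < n and p * q == n

n = 3233  # toy "modulus" (61 * 53), standing in for a real RSA-2048 key
print(verify_factoring_claim(n, 61, 53))  # True
print(verify_factoring_claim(n, 7, 11))   # False
```

This asymmetry (hard to find, instant to verify) is why a real break could be announced with maximum drama: no need to reveal the algorithm at all, just the factors.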