The story here, for those who may have forgotten 2015 (it was a long time ago!) is that the NSA inserted a backdoor into a major encryption standard and then leaned on manufacturers to install it. Thread. 1/
The backdoor was in a pseudorandom number generator called Dual EC. It wasn’t terribly subtle but it was *deniable*. You could say to yourself “well, that could be horribly exploitable but nobody would do that.” Lots of serious people said that, in fact. But they did. 2/
Not only did the NSA insert this backdoor into encryption standards, but they allegedly paid and pressured firms to implement it in their products. This includes major US security firms like RSA Security and Juniper. (That we know of!) 3/
In 2013, compelling evidence confirming the existence of this backdoor leaked out in the Snowden documents. We didn’t know quite how widely it had been implemented yet, but even then it was shocking. 4/
It would be such a terribly embarrassing story if it ended there. But it gets even worse. 5/
One of the products that the US Intel agencies allegedly convinced to use the backdoor was Juniper, whose NetScreen line of firewalls is widely deployed globally and in the US government. We didn’t know about this because the company omitted it from their certification documents. 6/
Even if we’d known about this, I’m sure “serious” folks would have vociferously argued that it’s no big deal because only the NSA could possibly exploit this vulnerability (it used a special secret only they could know), so (from a very US-centric PoV) why be a big downer? 7/
But the field is called computer security; not computer optimism. We think about worst case outcomes because if we don’t do that, our opponents absolutely will. 8/
In fact, they already had. What nobody had considered was that *even if the backdoor required a special secret key* only the NSA knows, a system with such a backdoor could be easily “rekeyed.” 9/
In practice this would simply mean hacking into a major firewall manufacturer’s poorly-secured source code repository, changing 32 bytes of data, and then waiting for the windfall when a huge number of VPN connections suddenly became easy to decrypt. And that’s what happened. 10/
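For readers who want the mechanics, here is a toy analogue of how a Dual EC–style backdoor, and its rekeying, actually works. This is a sketch only: it uses exponentiation mod a small prime in place of elliptic-curve point multiplication, and every constant below is made up (the real generator uses truncated x-coordinates of points on NIST P-256).

```python
# Toy analogue of the Dual EC backdoor. All constants are illustrative;
# the real generator works over the NIST P-256 elliptic curve.

p = 2**31 - 1            # toy group modulus
P = 7                    # public generator "P"
d = 65537                # the designer's secret relating Q to P
Q = pow(P, d, p)         # the published constant "Q" (rekeying = swapping this)
e = pow(d, -1, p - 1)    # trapdoor exponent: (Q^s)^e = P^s

def dual_ec_step(state):
    """One generator step: the next state comes from P, the output from Q."""
    next_state = pow(P, state, p)   # analogue of s_{i+1} = x(s_i * P)
    output = pow(Q, state, p)       # analogue of r_i = x(s_i * Q)
    return next_state, output

# The victim generates two "random" outputs from a secret seed.
state = 0xDEADBEEF
state, r1 = dual_ec_step(state)
state, r2 = dual_ec_step(state)

# An attacker who knows e sees only r1, yet recovers the internal state
# and predicts every output that follows. Replacing Q (32 bytes in the
# real implementation) with a value whose trapdoor *you* know is the
# "rekeying" described above.
recovered = pow(r1, e, p)               # (Q^s)^e = P^s = the next state
_, predicted_r2 = dual_ec_step(recovered)
assert predicted_r2 == r2
```

The punchline is that the generator itself never changes: only the public constant Q does, which is why a 32-byte edit in a source repository was enough.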
The company was Juniper, the hack was in 2012. It is alleged (in this new reporting) to have been a Chinese group called APT 5. Untold numbers of corporate firewalls received the new backdoor, making both US and overseas systems vulnerable. 11/
The new, rekeyed backdoor remained in the NetScreen code for over *three years*, which is a shockingly long time. Eventually it was revealed around Christmas 2015. 12/
Fortunately we learned a lot from this. Everyone involved was fired and no longer works in the field of consumer-facing cryptography.
I’m kidding! Nobody was fired, it was hushed up, and everyone involved got a big promotion or lateral transfer to lucrative jobs in industry. 13/
The outcome of the Juniper hack remains hushed-up today. We don’t know who the target is. (My pet theory based on timelines is that it was OPM, but I’m just throwing darts.) Presumably the FBI has an idea, and it’s bad enough that they’re keeping it quiet. 14/
The lesson for current events is simple: bad things happen. Don’t put backdoors in your system, no matter how cryptographically clever they look or how smart you think you are. They are vulnerabilities waiting for exploitation, and if the NSA wasn’t ready for it, you aren’t. 15/
The second lesson is that “serious” people are always inclined away from worst-case predictions. In bridge building and politics you can listen to those people. But computer security is adversarial: the conscious goal of attackers is to bring about worst-case outcomes. 16/
It is very hard for people to learn this lesson, by the way. We humans aren’t equipped for it. 17/
I want to say only two more slightly “inside baseball” things about Juniper and this reporting.
First, the inclusion of Dual EC into Juniper-NetScreen wasn’t as simple as the NSA calling the company up and asking them to implement “a NIST standard.”
Juniper’s public certification documents don’t mention that Dual EC was even used in NetScreen products. They list another algorithm. The NetScreen Dual EC implementation is included *in addition* to the certified one, and without documentation. That stinks like cheese. 19/
And of course there is a very coincidental “oops” software vulnerability in the NetScreen code that allows the raw output of Dual EC to ooze out onto the wire, bypassing their official, documented algorithm. For more see: dl.acm.org/doi/pdf/10.114… 20/
I’ve told this story eight million times and it never ceases to amaze me that all this really happened, and all we’ve done about it is try to build more encryption backdoors. It makes me very, very tired. 21/21 fin
Addendum: the White House Press Secretary was asked about this story, and their answer is “please stop asking about this story.” h/t @jonathanmayer
Ok, look people: Signal as a *protocol* is excellent. As a service it’s excellent. But as an application running on your phone, it’s… an application running on your consumer-grade phone. The targeted attacks people use on those devices are well known.
There is malware that targets and compromises phones. There has been malware that targets the Signal application. It’s an app that processes many different media types, and that means there’s almost certainly a vulnerability to be exploited at any given moment in time.
If you don’t know what this means, it means that you shouldn’t expect Signal to defend against nation-state malware. (But you also shouldn’t really expect any of the other stuff here, like Chrome, to defend you in that circumstance either.)
You should use Signal. Seriously. There are other encrypted messaging apps out there, but I don’t have as much faith in their longevity. In particular I have major concerns about the sustainability of for-profit apps in our new “AI” world.
I have too many reasons to worry about this but that’s not really the point. The thing I’m worried about is that, as the only encrypted messenger people seem to *really* trust, Signal is going to end up being a target for too many people.
Signal was designed to be a consumer-grade messaging app. It’s really, really good for that purpose. And obviously “excellent consumer grade” has a lot of intersection with military-grade cryptography just because that’s how the world works. But it is being asked to do a lot!
New public statement from Apple (sent to me privately):
“As of Friday, February 21, Apple can no longer offer Advanced Data Protection as a feature to new users in the UK.”
Additionally:
"Apple can no longer offer Advanced Data Protection (ADP) in the United Kingdom to new users and current UK users will eventually need to disable this security feature. ADP protects iCloud data with end-to-end encryption, which means the data can only be decrypted by the user who owns it, and only on their trusted devices. We are gravely disappointed that the protections provided by ADP will not be available to our customers in the UK given the continuing rise of data breaches and other threats to customer privacy. Enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before. Apple remains committed to offering our users the highest level of security for their personal data and are hopeful that we will be able to do so in the future in the United Kingdom. As we have said many times before, we have never built a backdoor or master key to any of our products or services and we never will.”
This will not affect:
iMessage encryption
iCloud Keychain
FaceTime
Health data
These will remain end-to-end encrypted. Other services like iCloud Backup and Photos will not be end-to-end encrypted.
What is this new setting that sends photo data to Apple servers, and why is it “on” by default at the bottom of my settings screen?
I understand that it uses differential privacy and some fancy cryptography, but I would have loved to know what this is before it was deployed and turned on by default without my consent.
This seems to involve two separate components: one that builds an index using differential privacy (set at some budget), and another that performs a homomorphic search?
Does this work well enough that I want it on? I don’t know. I wasn’t given the time to think about it.
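For what it’s worth, the “budget” above is the epsilon parameter of differential privacy, which controls how much noise gets added before anything leaves your device. Here is a generic sketch of that idea using the standard Laplace mechanism. To be clear, this is not Apple’s actual mechanism (which layers local DP, homomorphic encryption, and anonymized transport); it only illustrates what a budget means.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace noise of scale sensitivity/epsilon.
    A smaller epsilon (a tighter privacy budget) means more noise and
    therefore more privacy, at the cost of accuracy."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

# A loose budget barely perturbs the value; a tight one drowns it in noise.
random.seed(1)
loose = [laplace_mechanism(100, 1, 5.0) for _ in range(1000)]
tight = [laplace_mechanism(100, 1, 0.05) for _ in range(1000)]
```

The choice of epsilon is exactly the kind of deployment detail users were never shown before the feature shipped.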
Most of cryptography research is developing a really nice mental model for what’s possible and impossible in the field, so you can avoid wasting time on dead ends. But every now and then someone kicks down a door and blows up that intuition, which is the best kind of result.
One of the most surprising privacy results of the last 5 years is the LMW “doubly efficient PIR” paper. The basic idea is that I can load an item from a public database without the operator seeing which item I’m loading & without it having to touch every item in the DB each time.
Short background: Private Information Retrieval isn’t a new idea. It lets me load items from a (remote) public database without the operator learning what item I’m asking for. But traditionally there’s a *huge* performance hit for doing this.
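To see where that performance hit comes from, here is a sketch of one of the simplest classic constructions, two-server XOR PIR. Note that each server must scan the *entire* database on every query; avoiding that per-query scan is exactly what makes the doubly efficient result surprising. The database contents and index here are illustrative.

```python
import secrets

def xor_pir_query(db_size, index):
    """Client side: a random bit-mask for server 1, and the same mask
    flipped only at the target index for server 2. Each mask alone
    reveals nothing about which record is wanted."""
    mask1 = [secrets.randbits(1) for _ in range(db_size)]
    mask2 = mask1.copy()
    mask2[index] ^= 1
    return mask1, mask2

def xor_pir_answer(db, mask):
    """Server side: XOR of all selected records -- a full linear scan
    over the database, the classic PIR performance cost."""
    acc = 0
    for record, bit in zip(db, mask):
        if bit:
            acc ^= record
    return acc

db = [5, 17, 42, 99, 7, 1, 88, 3]      # toy public database
m1, m2 = xor_pir_query(len(db), index=2)
a1 = xor_pir_answer(db, m1)
a2 = xor_pir_answer(db, m2)
assert a1 ^ a2 == db[2]                # client recovers record 2 privately
```

This variant also assumes two non-colluding servers; single-server schemes replace the XOR trick with cryptography but traditionally keep the full scan.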
The new, revived Chat Control regulation is back. It still appears to demand client-side scanning in encrypted messengers, but it removes “detection of new CSAM” and demands only detection of known CSAM. However: it retains the option to change this requirement back.
For those who haven’t been paying attention, the EU Council and Commission have been relentlessly pushing a regulation that would break encryption. It died last year, but it’s back again — this time with Hungary in the driver’s seat. And the timelines are short.
The goal is to require all apps to scan messages for child sexual abuse content (at first; other types of content have been proposed and will probably be added later). This is not possible for encrypted messengers without new technology that may break encryption.
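Concretely, “detection of known CSAM” in these proposals means hashing media on the sender’s device and matching it against a blocklist *before* encryption happens. The sketch below uses SHA-256 purely for simplicity; real proposals use perceptual hashes (PhotoDNA-style) that also match near-duplicates, and every name and value here is illustrative.

```python
import hashlib

# Hypothetical blocklist of known-content digests, distributed to clients.
BLOCKLIST = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def scan_before_encrypt(media_bytes):
    """Runs on the sender's device, against the *plaintext*. This is why
    critics argue client-side scanning undermines end-to-end encryption
    even though the wire format stays encrypted: the check happens
    before the cryptography does."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest in BLOCKLIST   # True would trigger a report

assert scan_before_encrypt(b"known-bad-image-bytes") is True
assert scan_before_encrypt(b"holiday-photo") is False
```

Note that nothing in this design limits what goes into the blocklist, which is the “other types of content will probably be added later” concern in a nutshell.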