The story here, for those who may have forgotten 2015 (it was a long time ago!), is that the NSA inserted a backdoor into a major encryption standard and then leaned on manufacturers to install it. Thread. 1/
The backdoor was in a pseudorandom number generator called Dual EC. It wasn’t terribly subtle but it was *deniable*. You could say to yourself “well, that could be horribly exploitable but nobody would do that.” Lots of serious people said that, in fact. But they did. 2/
Not only did the NSA insert this backdoor into encryption standards, but they allegedly paid and pressured firms to implement it in their products. This includes major US security firms like RSA Security and Juniper. (That we know of!) 3/
In 2013, compelling evidence confirming the existence of this backdoor leaked out in the Snowden documents. We didn’t know quite how widely it had been implemented yet, but even then it was shocking. 4/
It would be such a terribly embarrassing story if it ended there. But it gets even worse. 5/
One of the companies that the US intelligence agencies allegedly convinced to use the backdoor was Juniper, whose NetScreen line of firewalls is widely deployed globally and in the US government. We didn’t know about this because the company hid it in its certification documents. 6/
Even if we’d known about this, I’m sure “serious” folks would have vociferously argued that it’s no big deal because only the NSA could possibly exploit this vulnerability (it used a special secret only they could know), so (from a very US-centric PoV) why be a big downer? 7/
But the field is called computer security, not computer optimism. We think about worst case outcomes because if we don’t do that, our opponents absolutely will. 8/
In fact, they already had. What nobody had considered was that *even if the backdoor required a special secret key* only the NSA knows, a system with such a backdoor could be easily “rekeyed.” 9/
In practice this would simply mean hacking into a major firewall manufacturer’s poorly-secured source code repository, changing 32 bytes of data, and then waiting for the windfall when a huge number of VPN connections suddenly became easy to decrypt. And that’s what happened. 10/
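For the curious, here is roughly why swapping one constant is enough. This is a toy sketch, not Juniper’s code: it stands in modular exponentiation in a tiny group for the real elliptic-curve math on P-256, and every constant, name, and number in it is made up for illustration.

```python
# Toy model of the Dual EC backdoor and why changing one constant "rekeys" it.
# The real generator uses elliptic-curve point multiplication over P-256 with
# 32-byte point coordinates; this sketch swaps in exponentiation in a tiny
# prime-order group so the algebra is visible. All values below are made up.

p, q = 23, 11            # toy safe prime p = 2q + 1; real parameters are ~256-bit
G = 2                    # generator of the order-q subgroup mod p
P_const = pow(G, 7, p)   # the standard's fixed constant "P" (stand-in value)

def make_backdoored_Q(d):
    """Whoever chooses Q can choose it so that Q^d = P for a d only they know."""
    e = pow(d, -1, q)            # inverse of d modulo the group order
    return pow(P_const, e, p)    # Q = P^(1/d)

def dualec_step(state, Q):
    """One generator step (untruncated toy): emit r = Q^state, advance to P^state."""
    return pow(Q, state, p), pow(P_const, state, p)

def predict_next_state(r, d):
    """With the secret d: r^d = (Q^state)^d = (Q^d)^state = P^state = next state."""
    return pow(r, d, p)

# "Rekeying": an intruder edits the constant Q in the firmware source
# to one generated with their own secret d.
d_attacker = 9
Q_evil = make_backdoored_Q(d_attacker)

state = 6                                    # victim device's secret RNG state
r, next_state = dualec_step(state, Q_evil)   # r is what leaks onto the wire
assert predict_next_state(r, d_attacker) == next_state
```

The point of the sketch: the only “key” protecting the backdoor is d, and anyone who can overwrite Q can substitute a d of their own. The old key stops working, the new one works everywhere the product ships.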
The company was Juniper, the hack was in 2012. It is alleged (in this new reporting) to have been a Chinese group called APT 5. Untold numbers of corporate firewalls received the new backdoor, making both US and overseas systems vulnerable. 11/
The new, rekeyed backdoor remained in the NetScreen code for over *three years*, which is a shockingly long time. Eventually it was revealed around Christmas 2015. 12/
Fortunately we learned a lot from this. Everyone involved was fired and no longer works in the field of consumer-facing cryptography.
I’m kidding! Nobody was fired, it was hushed up, and everyone involved got a big promotion or lateral transfer to lucrative jobs in industry. 13/
The outcome of the Juniper hack remains hushed-up today. We don’t know who the target is. (My pet theory based on timelines is that it was OPM, but I’m just throwing darts.) Presumably the FBI has an idea, and it’s bad enough that they’re keeping it quiet. 14/
The lesson for current events is simple: bad things happen. Don’t put backdoors in your system no matter how cryptographically clever they look, and how smart you think you are. They are vulnerabilities waiting for exploitation, and if the NSA wasn’t ready for that, you aren’t either. 15/
The second lesson is that “serious” people are always inclined away from worst-case predictions. In bridge building and politics you can listen to those people. But computer security is adversarial: the conscious goal of attackers is to bring about worst-case outcomes. 16/
It is very hard for people to learn this lesson, by the way. We humans aren’t equipped for it. 17/
I want to say only two more slightly “inside baseball” things about Juniper and this reporting.
First, the inclusion of Dual EC into Juniper-NetScreen wasn’t as simple as the NSA calling the company up and asking them to implement “a NIST standard.”
Juniper’s public certification documents don’t even mention that Dual EC was used in NetScreen products. They list another algorithm. The NetScreen Dual EC implementation is included *in addition to* the certified one, and without documentation. That stinks like cheese. 19/
And of course there is a very coincidental “oops” software vulnerability in the NetScreen code that allows the raw output of Dual EC to ooze out onto the wire, bypassing their official, documented algorithm. For more see: dl.acm.org/doi/pdf/10.114… 20/
I’ve told this story eight million times and it never ceases to amaze me that all this really happened, and all we’ve done about it is try to build more encryption backdoors. It makes me very, very tired. 21/21 fin
Addendum: the White House Press Secretary was asked about this story, and their answer is “please stop asking about this story.” h/t @jonathanmayer
Most of cryptography research is developing a really nice mental model for what’s possible and impossible in the field, so you can avoid wasting time on dead ends. But every now and then someone kicks down a door and blows up that intuition, which is the best kind of result.
One of the most surprising privacy results of the last 5 years is the LMW “doubly efficient PIR” paper. The basic idea is that I can load an item from a public database without the operator seeing which item I’m loading & without it having to touch every item in the DB each time.
Short background: Private Information Retrieval isn’t a new idea. It lets me load items from a (remote) public database without the operator learning what item I’m asking for. But traditionally there’s a *huge* performance hit for doing this.
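To make that traditional performance hit concrete, here is the textbook two-server PIR construction (nothing to do with the LMW scheme, and purely a toy): the client hides its index by splitting it across two non-colluding servers, and each server still has to XOR together a linear fraction of the database on every query. The four-record database and helper names are made up for illustration.

```python
# Minimal classic two-server PIR sketch (not LMW). It shows both the privacy
# trick and the cost: each query makes each server touch roughly half of the
# database, which is exactly the linear work "doubly efficient" PIR avoids.
import secrets

DB = [b"rec0", b"rec1", b"rec2", b"rec3"]   # toy public database, equal-length records

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, subset):
    """Each (non-colluding) server XORs together the records the client named."""
    acc = bytes(len(db[0]))
    for j in subset:
        acc = xor_bytes(acc, db[j])         # linear work: the server scans its subset
    return acc

def client_query(i, n):
    """Split index i into two random-looking subsets, one per server."""
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}                           # symmetric difference: differs only at i
    return s1, s2

i = 2
s1, s2 = client_query(i, len(DB))
a1 = server_answer(DB, s1)                  # neither server learns i on its own
a2 = server_answer(DB, s2)
assert xor_bytes(a1, a2) == DB[i]           # client recovers exactly record i
```

Each subset on its own is just a uniformly random set of indices, so neither server learns which record the client wanted; only the client, holding both answers, can cancel everything except record i.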
The revived Chat Control regulation is back. It still appears to demand client-side scanning in encrypted messengers, but it removes “detection of new CSAM” and simply demands detection of known CSAM. However, it retains the option to change this requirement back.
For those who haven’t been paying attention, the EU Council and Commission have been relentlessly pushing a regulation that would break encryption. It died last year, but it’s back again — this time with Hungary in the driver’s seat. And the timelines are short.
The goal is to require all apps to scan messages for child sexual abuse content (at first; other types of content have been proposed and will probably be added later). This is not possible for encrypted messengers without new technology that may break encryption.
One of the things we need to discuss is that LLMs listening to your conversations and phone calls, reading your texts and emails — this is all going to be normalized and inevitable within seven years.
In a very short timespan it’s going to be expected that your phone can answer questions about what you did or talked about recently, what restaurants you went to. More capability is going to drive more data access, and people will grant it.
I absolutely do believe that (at least initially), vendors will try to do this privately. The models will live on your device or, like Apple Intelligence, they’ll use some kind of secure outsourcing. It’ll be required for adoption.
I hope that the arrest of Pavel Durov does not lead to him or Telegram being held up as some hero of privacy. Telegram has consistently acted to collect huge amounts of unnecessary private data on their servers, and their only measure to protect it was “trust us.”
For years people begged them to roll out even rudimentary default encryption, and they pretty aggressively did none of that. Their response was to move their data centers to various Middle Eastern countries, and to argue that this made your data safe. Somehow.
Over the years I’ve heard dozens of theories about which nation-states were gaining access to that giant mousetrap full of data they’d built. I have no idea if any of those theories were true. Maybe none were, maybe they all were.
The TL;DR here is that Telegram has an optional end-to-end encryption mode that you have to turn on manually. It only works for individual conversations, not for group chats. It’s annoying enough to turn on (and invisible to most users) that I doubt many people do.
This on paper isn’t that big a deal, but Telegram’s decision to market itself as a secure messenger means that loads of people (and policymakers) probably assume that lots of its content is end-to-end encrypted. Why wouldn’t you?
If you want to avoid disasters like the AT&T breach, there are basically only three solutions:
1. Don’t store data
2. Don’t store unencrypted data
3. Have security practices like Google
Very few companies can handle (3), certainly not AT&T.
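For what option 2 looks like in the simplest possible terms, here is a hedged sketch using the Python `cryptography` package’s Fernet recipe: records are encrypted before they are ever written, and the key lives somewhere other than the database. The record contents and helper names are made up, and real deployments live or die on the key management this glosses over.

```python
# Minimal sketch of option 2 ("don't store unencrypted data"): records are
# encrypted before they reach storage, so a dump of the database alone is
# useless without the key held elsewhere. Uses the third-party `cryptography`
# package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice: kept in a KMS/HSM, not beside the data
box = Fernet(key)

def store(record: bytes) -> bytes:
    return box.encrypt(record)   # this ciphertext is what lands in the database

def load(blob: bytes) -> bytes:
    return box.decrypt(blob)

blob = store(b"customer=alice; call-records=[...]")
assert load(blob) == b"customer=alice; call-records=[...]"
# An attacker who exfiltrates only `blob` (the stored table) learns nothing useful.
```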
One of the things policymakers refuse to understand is that securing large amounts of customer data, particularly data that needs to be “hot” and continually queried (e.g., by law enforcement), is just beyond the means of most US companies.
If you’re a policymaker and your policy requires company X \notin {Apple, Google, Microsoft, Meta}* to store “hot” databases of customer data: congrats, it’s 1941 and you just anchored all the aircraft carriers at Pearl Harbor.