Matthew Green
Sep 2, 2021
The story here, for those who may have forgotten 2015 (it was a long time ago!), is that the NSA inserted a backdoor into a major encryption standard and then leaned on manufacturers to install it. Thread. 1/
The backdoor was in a pseudorandom number generator called Dual EC. It wasn’t terribly subtle but it was *deniable*. You could say to yourself “well, that could be horribly exploitable but nobody would do that.” Lots of serious people said that, in fact. But they did. 2/
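For readers who want the mechanics: Dual EC steps its internal state with one elliptic-curve point, P, and derives output from a second point, Q. If whoever picked the constants knows a scalar e with e·Q = P, then a single output block hands them the generator's next internal state, and with it every "random" byte the victim produces from then on. Here is a minimal Python sketch on P-256; it simplifies the real generator (no 16-bit output truncation, no additional input), and the seed and trapdoor scalar are invented for the demo:

```python
# A toy Dual EC on real P-256 with a backdoored Q. Demo only.

# --- minimal P-256 arithmetic (not constant-time) ---
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551
G = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
     0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

def add(P1, P2):
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # point at infinity
    if P1 == P2:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, pt):                          # double-and-add scalar multiply
    acc = None
    while k:
        if k & 1: acc = add(acc, pt)
        pt = add(pt, pt); k >>= 1
    return acc

def lift_x(x):                           # recover a point from its x-coord
    y = pow((x * x * x + a * x + b) % p, (p + 1) // 4, p)  # p ≡ 3 (mod 4)
    return (x, y)

# --- the "standards body" picks the constants; it knows e with e*Q = P ---
P = G
e = 0x5ca1ab1e                           # the trapdoor scalar (invented)
Q = mul(pow(e, -1, n), P)                # Q = e^-1 * P, so e*Q = P

def dual_ec_round(s):
    s = mul(s, P)[0]                     # next state:  x(s*P)
    return s, mul(s, Q)[0]               # output:      x(s*Q)

s0 = 0x0123456789abcdef                  # victim's secret seed (invented)
s1, r1 = dual_ec_round(s0)               # r1 goes out on the wire
s2, r2 = dual_ec_round(s1)               # r2 should be unpredictable...

# --- attacker sees only r1 and knows e ---
R = lift_x(r1)                           # R = ±(s1*Q)
assert mul(e, R)[0] == s2                # x(e*R) = x(s1*P) = next state!
print("one output block -> full state recovery")
```

In the real generator each output block is truncated by 16 bits, so the attacker lifts a few tens of thousands of candidate points per block, a trivial cost. Roughly 30 bytes of raw output is all it takes.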
Not only did the NSA insert this backdoor into encryption standards, but they allegedly paid and pressured firms to implement it in their products. This includes major US security firms like RSA Security and Juniper. (That we know of!) 3/
In 2013, compelling evidence confirming the existence of this backdoor leaked out in the Snowden documents. We didn’t know quite how widely it had been implemented yet, but even then it was shocking. 4/
It would be such a terribly embarrassing story if it ended there. But it gets even worse. 5/
One of the companies that the US intel agencies allegedly convinced to use the backdoor was Juniper, whose NetScreen line of firewalls is widely deployed globally and in the US government. We didn’t know about this because the company omitted it from their certification documents. 6/
Even if we’d known about this, I’m sure “serious” folks would have vociferously argued that it’s no big deal because only the NSA could possibly exploit this vulnerability (it used a special secret only they could know), so (from a very US-centric PoV) why be a big downer? 7/
But the field is called computer security, not computer optimism. We think about worst-case outcomes because if we don’t do that, our opponents absolutely will. 8/
In fact, they already had. What nobody had considered was that *even if the backdoor required a special secret key* only the NSA knows, a system with such a backdoor could be easily “rekeyed.” 9/
In practice this would simply mean hacking into a major firewall manufacturer’s poorly-secured source code repository, changing 32 bytes of data, and then waiting for the windfall when a huge number of VPN connections suddenly became easy to decrypt. And that’s what happened. 10/
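Concretely, continuing the sketch above: the trapdoor lives entirely in the embedded constant Q (the 32 bytes of its x-coordinate), so “rekeying” the backdoor is a one-constant patch. The reported ScreenOS change swapped exactly that constant; the intruder’s secret below is, again, invented:

```python
# "Rekeying" the sketch above: swap in a Q whose trapdoor *you* know.
e2 = 0x0ddba11                          # intruder's own trapdoor scalar
Q  = mul(pow(e2, -1, n), P)             # overwrite the embedded constant

s1, r1 = dual_ec_round(s0)              # the generator behaves identically...
s2, _  = dual_ec_round(s1)
assert mul(e2, lift_x(r1))[0] == s2     # ...but now this key unwinds it,
assert mul(e,  lift_x(r1))[0] != s2     # and the original key is locked out
```

Nothing visible changes: the device still passes every statistical test it passed before, and only the new keyholder can tell the difference.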
The company was Juniper; the hack was in 2012. It is alleged (in this new reporting) to have been carried out by a Chinese group called APT 5. Untold numbers of corporate firewalls received the new backdoor, leaving both US and overseas systems vulnerable. 11/
The new, rekeyed backdoor remained in the NetScreen code for over *three years*, which is a shockingly long time. Eventually it was revealed around Christmas 2015. 12/
Fortunately we learned a lot from this. Everyone involved was fired and no longer works in the field of consumer-facing cryptography.

I’m kidding! Nobody was fired, it was hushed up, and everyone involved got a big promotion or a lateral transfer to a lucrative job in industry. 13/
The outcome of the Juniper hack remains hushed up today. We don’t know who the target was. (My pet theory, based on timelines, is that it was OPM, but I’m just throwing darts.) Presumably the FBI has an idea, and it’s bad enough that they’re keeping it quiet. 14/
The lesson for current events is simple: bad things happen. Don’t put backdoors in your system, no matter how cryptographically clever they look or how smart you think you are. They are vulnerabilities waiting for exploitation, and if the NSA wasn’t ready for that, you aren’t. 15/
The second lesson is that “serious” people are always inclined away from worst-case predictions. In bridge building and politics you can listen to those people. But computer security is adversarial: the conscious goal of attackers is to bring about worst-case outcomes. 16/
It is very hard for people to learn this lesson, by the way. We humans aren’t equipped for it. 17/
I want to say only two more slightly “inside baseball” things about Juniper and this reporting.

First, the inclusion of Dual EC into Juniper-NetScreen wasn’t as simple as the NSA calling the company up and asking them to implement “a NIST standard.” 18/
Juniper’s public certification documents don’t even mention that Dual EC was used in NetScreen products. They list another algorithm (ANSI X9.31). The NetScreen Dual EC implementation is included *in addition* to the certified one, and without documentation. That stinks like cheese. 19/
And of course there is a very coincidental “oops” software vulnerability in the NetScreen code that allows the raw output of Dual EC to ooze out onto the wire, bypassing their official, documented algorithm. For more see: dl.acm.org/doi/pdf/10.114… 20/
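The mechanism, as reconstructed in that paper: Dual EC was nominally just a seed source for the certified ANSI X9.31 generator, but both shared a single global output buffer, and the reseed routine left the fill counter in a state where the X9.31 pass never ran. A schematic of the bug pattern in Python (the function and variable names are paraphrases, not the actual ScreenOS symbols):

```python
# Sketch of the ScreenOS cascade failure described in the paper above.
output = bytearray(32)   # one global buffer shared by both generators
index  = 32

def dual_ec_bytes():
    # stand-in for the Dual EC generator sketched earlier; to the
    # trapdoor holder, these bytes are an open book
    return bytes(range(32))

def time_to_reseed():
    return True          # per the paper, a separate bug made ScreenOS
                         # reseed on essentially every call

def ansi_x931_byte():
    return 0             # placeholder for the certified X9.31 generator

def reseed():
    global index
    output[:] = dual_ec_bytes()       # raw Dual EC fills the buffer...
    index = 32                        # ...and the counter says "already full"

def prng_generate():
    global index
    if time_to_reseed():
        reseed()
    # intended behavior: overwrite the buffer with X9.31 output
    for i in range(index, 32):        # reseed() left index == 32,
        output[i] = ansi_x931_byte()  # so this loop body never executes
    index = 0
    return bytes(output)              # raw Dual EC is what the caller gets

print(prng_generate().hex())          # the "seed" goes out verbatim
```

Those bytes ended up in, among other places, the nonces NetScreen sent during IKE handshakes, which is exactly what a trapdoor holder needs to recover VPN session keys.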
I’ve told this story eight million times and it never ceases to amaze me that all this really happened, and all we’ve done about it is try to build more encryption backdoors. It makes me very, very tired. 21/21 fin
Addendum: the White House Press Secretary was asked about this story, and their answer was, in effect, “please stop asking about this story.” h/t @jonathanmayer
