Good morning Twitter. This post about Ledger cryptocurrency hardware wallet vulnerabilities is extremely cool, and not just for cryptocurrency people. Let me talk a bit about it. 1/ saleemrashid.com/2018/03/20/bre…
There is a common architectural theme in certain embedded devices: they incorporate a secure processor (or processor component) to protect critical secrets or ensure correct behavior. I’ve seen this in all kinds of devices, not just cryptocurrency wallets. 2/
(For an obvious example, every recent iPhone has a Secure Enclave processor that stores your fingerprint data and cryptographic keys. But these devices are used elsewhere as well. theiphonewiki.com/wiki/Secure_En…) 3/
Secure co-processors typically incorporate some kind of tamper-resistant physical casing as well as a limited interface to protect secret data. They often have some crypto functions on board, and can “attest” (prove to remote parties) that they’re running the right software. 4/
None of these processors can withstand all attacks. But let’s ignore that part and assume they can, for the moment. This still leaves a huge gaping hole in many devices. 5/
You see, the typical “secure element” isn’t powerful enough to drive your entire device (including the pretty GUI and peripherals and network communication if that’s available). So most devices have a second “insecure” processor to do all that stuff. 6/
(A very few devices make exceptions to this. For example, the iPhone SEP has a direct connection to the fingerprint reader, because the application processor isn’t trusted with that data. Weirdly FaceID departs from this but I digress.) 7/
Anyway, the upshot of this design is that even if the secure processor works correctly, it’s entirely dependent on the (relatively) insecure general processor for nearly all functions, including driving a user interface. Basically it’s a hostage. 8/
In some cases this is ok. For example, a good SEP can use crypto to establish secure communications with a trusted outside device (like a remote server). If this is done right, even a compromised app processor can only block communications, not tamper with them. 9/
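To make that concrete, here's a minimal Python sketch (my own toy illustration, not Ledger's or Apple's actual protocol) of why an untrusted relay can drop messages but not forge them, assuming the secure element and the remote server share an authentication key:

```python
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)  # provisioned into the secure element and the remote server

def secure_element_send(message: bytes) -> bytes:
    # The secure element authenticates everything it sends.
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return message + tag

def untrusted_app_processor_relay(packet: bytes, tamper: bool = False) -> bytes:
    # The app processor just shuttles bytes; if compromised it can drop or
    # modify them, but it never sees SHARED_KEY.
    if tamper:
        packet = b"send coins to attacker addr" + packet[-32:]
    return packet

def remote_server_receive(packet: bytes) -> bytes | None:
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    # Constant-time comparison; tampered packets are rejected outright.
    return message if hmac.compare_digest(tag, expected) else None

packet = secure_element_send(b"send coins to address X")
print(remote_server_receive(untrusted_app_processor_relay(packet)))               # accepted
print(remote_server_receive(untrusted_app_processor_relay(packet, tamper=True)))  # None: rejected
```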
In others it’s super bad news: if the security of the device relies on the user trusting what they see on the display, they can’t, because the app processor controls the display and it may be compromised. 10/
Solving this problem is incredibly hard. Systems like TPMs try to do it by giving the secure chip access to the same RAM as the app processor, which allows it to check which code the app processor is running. 11/
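A rough sketch of the measurement idea, simplified down to the TPM-style PCR “extend” operation (in real measured boot each stage hashes the next before running it; this isn’t any vendor’s exact scheme):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new_pcr = SHA-256(old_pcr || H(measurement)).
    # The PCR can only be extended, never set directly, so the final value
    # commits to every measurement, in order.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs reset to all zeros at boot
for stage in (b"bootloader image", b"kernel image", b"application firmware"):
    pcr = pcr_extend(pcr, stage)

# A remote verifier compares this against the PCR value produced by known-good
# firmware; any swapped-in stage changes the chain and attestation fails.
print(pcr.hex())
```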
But most secure processors don’t have even this capability. They have no way of knowing whether the app processor is running good or compromised code. 12/
Which (finally!) brings us to the brave, ambitious, scary thing Ledger did. In Ledger wallets, the secure processor *asks* the app processor (nicely) to send it a copy of the firmware that it’s running. 13/
(When I mentioned this to my grad student Gabe, he got a look on his face like I had just handed him Durian candy. Then he started muttering “no, no, that can’t possibly work”) 14/
The reason to be concerned about this approach is that *if* the app processor is compromised, why would you trust it to send over the actual (malicious) code it’s running? It could just lie and send the legit code. 15/
Ledger tries to square this circle, in a novel way. Their idea is that the device has a fixed amount of NVRAM storage. If you install compromised firmware on it, you’d need room to store that. But you’d also need to store the original firmware to satisfy the checks. 16/
If you make it hard for the attacker to find the room to do this, you win!
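Here’s a toy model of that storage argument (the sizes and the layout are made up for illustration; this is not Ledger’s actual verification protocol). The secure element demands a dump of the whole firmware region, so an attacker has to keep the legit image around to answer the check *and* the malicious image it actually runs, and the hope is that both won’t fit:

```python
import hashlib

FLASH_BYTES = 320 * 1024            # hypothetical app-processor flash size
LEGIT_FW    = b"\x90" * 300 * 1024  # hypothetical legit firmware image
EXPECTED    = hashlib.sha256(LEGIT_FW).hexdigest()

def secure_element_check(dump_firmware) -> bool:
    # The secure element asks the app processor for the *entire* firmware
    # region and hashes it against the signed image it expects.
    return hashlib.sha256(dump_firmware()).hexdigest() == EXPECTED

# Honest device: the running firmware IS the legit image.
assert secure_element_check(lambda: LEGIT_FW)

# Attacker: to pass the check they must return LEGIT_FW verbatim, so they
# have to store it somewhere -- alongside the malicious code they actually run.
MALICIOUS_FW = b"\x66" * 40 * 1024
needed = len(LEGIT_FW) + len(MALICIOUS_FW)
print(f"attacker needs {needed} bytes of flash, device has {FLASH_BYTES}")
# If needed > FLASH_BYTES, there is nowhere to hide both images -- that's the bet.
```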
This time around, Ledger did not win. Saleem writes about why that is in his post, which I linked to 9,000 tweets up this thread. 17/
But, as Garth says to young Indiana Jones in “The Last Crusade”: You lost today, kid, but that doesn't mean you have to like it. 18/
And since Ledger can’t update the hardware on their devices, presumably they’re going to have to try to harden their approach even further. I’m really interested to see whether they win this! 19/
Because if someone can make this approach work, it would have huge implications for a large class of devices beyond wallets. I’m deeply skeptical. But I’m always skeptical. Excited to see how it goes. 20/20 fin
And by the way, nothing in the post or thread above means you should freak out about these vulns, or that you should assume other wallets are better. Just be safe.
Most of cryptography research is developing a really nice mental model for what’s possible and impossible in the field, so you can avoid wasting time on dead ends. But every now and then someone kicks down a door and blows up that intuition, which is the best kind of result.
One of the most surprising privacy results of the last 5 years is the LMW “doubly efficient PIR” paper. The basic idea is that I can load an item from a public database without the operator seeing which item I’m loading & without it having to touch every item in the DB each time.
Short background: Private Information Retrieval isn’t a new idea. It lets me load items from a (remote) public database without the operator learning what item I’m asking for. But traditionally there’s a *huge* performance hit for doing this.
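To see where the traditional cost comes from, here’s a toy two-server PIR scheme (the classic XOR trick, nothing like the LMW construction): each server individually sees a uniformly random query, but answering it forces the server to scan the entire database — exactly the cost that “doubly efficient” PIR avoids.

```python
import secrets

DB = [secrets.randbits(1) for _ in range(16)]  # toy public database of bits
WANT = 5                                       # index the client wants, hidden from servers

# Client: a random bit-mask for server A, and the same mask with position WANT
# flipped for server B. Each mask alone is uniformly random -> reveals nothing,
# as long as the two servers don't collude.
mask_a = [secrets.randbits(1) for _ in DB]
mask_b = mask_a.copy()
mask_b[WANT] ^= 1

def server_answer(mask):
    # Each server XORs together the selected records -- note it has to touch
    # EVERY record to answer, which is the classic PIR performance hit.
    acc = 0
    for bit, selected in zip(DB, mask):
        if selected:
            acc ^= bit
    return acc

# Client combines the two answers; everything cancels except DB[WANT].
recovered = server_answer(mask_a) ^ server_answer(mask_b)
assert recovered == DB[WANT]
print("recovered bit:", recovered)
```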
The revived Chat Control regulation is back. It still appears to demand client-side scanning in encrypted messengers, but it drops “detection of new CSAM” and simply demands detection of known CSAM. However: it retains the option to change this requirement back.
For those who haven’t been paying attention, the EU Council and Commission have been relentlessly pushing a regulation that would break encryption. It died last year, but it’s back again — this time with Hungary in the driver’s seat. And the timelines are short.
The goal is to require all apps to scan messages for child sexual abuse content (at first: other types of content have been proposed, and will probably be added later.) This is not possible for encrypted messengers without new technology that may break encryption.
One of the things we need to discuss is that LLMs listening to your conversations and phone calls, reading your texts and emails — this is all going to be normalized and inevitable within seven years.
In a very short timespan it’s going to be expected that your phone can answer questions about what you did or talked about recently, what restaurants you went to. More capability is going to drive more data access, and people will grant it.
I absolutely do believe that (at least initially), vendors will try to do this privately. The models will live on your device or, like Apple Intelligence, they’ll use some kind of secure outsourcing. It’ll be required for adoption.
I hope that the arrest of Pavel Durov does not lead to him or Telegram being held up as some hero of privacy. Telegram has consistently acted to collect huge amounts of unnecessary private data on their servers, and their only measure to protect it was “trust us.”
For years people begged them to roll out even rudimentary default encryption, and they pretty aggressively did not do that. Their response was to move their data centers to various Middle Eastern countries, and to argue that this made your data safe. Somehow.
Over the years I’ve heard dozens of theories about which nation-states were gaining access to that giant mousetrap full of data they’d built. I have no idea if any of those theories were true. Maybe none were, maybe they all were.
The TL;DR here is that Telegram has an optional end-to-end encryption mode that you have to turn on manually. It only works for individual conversations, not for group chats. It’s annoying enough to turn on (and invisible enough to most users) that I doubt many people do.
This on paper isn’t that big a deal, but Telegram’s decision to market itself as a secure messenger means that loads of people (and policymakers) probably assume that lots of its content is end-to-end encrypted. Why wouldn’t you?
If you want to avoid disasters like the AT&T breach, there are basically only three solutions:
1. Don’t store data
2. Don’t store unencrypted data
3. Have security practices like Google
Very few companies can handle (3), certainly not AT&T.
One of the things policymakers refuse to understand is that securing large amounts of customer data, particularly data that needs to be “hot” and continually queried (e.g. by law enforcement), is just beyond the means of most US companies.
If you’re a policymaker and your policy requires company X \notin {Apple, Google, Microsoft, Meta}* to store “hot” databases of customer data: congrats, it’s 1941 and you just anchored all the aircraft carriers at Pearl Harbor.