There is a take that companies like Apple are never going to be able to stop well-resourced attackers like NSO from launching targeted attacks. At the extremes this take is probably correct. But adopting cynicism as a strategy is a bad approach. 1/
First, look at how Pegasus and other targeted exploits get onto your phone. Most approaches require some user interaction: a compromised website or a phishing link that users have to click.
iMessage, on the other hand, is an avenue for 0-click targeted infection. 2/
While we can’t have “perfect security”, closing down avenues for interactionless targeted infection sure seems like a thing we can make some progress on. 3/
And in fact we’ve seen Apple make some progress on this already. Recently, Apple added a “firewall” called BlastDoor to iMessage. It’s supposed to prevent attacks like Pegasus. Obviously it isn’t stopping them, but it at least ups the cost of these exploits. 4/
The reason Apple added a firewall is that they obviously *don’t* feel iMessage is secure by itself. There’s too much unsafe parsing code. Adding a firewall is basically an admission that the core product can’t be secured in its current form. 5/
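To make “firewall the parser” concrete: the pattern is to parse untrusted input in a disposable, unprivileged process, so a parser bug blows up the sandbox instead of the app. A minimal Python sketch of the idea (hypothetical stand-in, not Apple’s actual code):

```python
# Sketch of the "firewall the parser" pattern: run risky, attacker-
# reachable parsing in a throwaway child process so a crash or exploit
# is contained there. (Hypothetical stand-in for what BlastDoor does.)
import multiprocessing

def risky_parse(blob, out):
    # Stand-in for complex parsing of images, links, attachments, etc.
    out.put(blob.decode("utf-8", errors="strict"))

def parse_sandboxed(blob: bytes, timeout: float = 2.0):
    out = multiprocessing.Queue()
    p = multiprocessing.Process(target=risky_parse, args=(blob, out))
    p.start()
    p.join(timeout)
    if p.is_alive():        # hung parser: kill it, treat input as hostile
        p.terminate()
        return None
    if p.exitcode != 0:     # crashed parser: the damage stayed in the child
        return None
    return out.get() if not out.empty() else None

if __name__ == "__main__":
    print(parse_sandboxed(b"hello"))     # 'hello'
    print(parse_sandboxed(b"\xff\xfe"))  # None: the parser died safely
```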
So it seems fairly obvious that ripping out memory-unsafe parsing code and disabling advanced (non-plaintext) features, while not guaranteed to solve the problem, is at least a tractable engineering problem: something that Apple can devote its enormous resources to. 6/
Another area where Apple has already stepped up its game is logging. Apple’s power-monitoring telemetry records information about weird process “hang” events, which can sometimes trip up exploits. There’s a privacy tradeoff here, but Apple should lean into this. 7/
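A toy version of the kind of signal such telemetry can surface; the log records below are invented for illustration, not Apple’s actual format:

```python
# Toy anomaly scan over invented telemetry records: a process that keeps
# "hanging" (e.g., a message parser tripping over a broken exploit) is a
# signal worth a forensic look. Not Apple's real log format.
from collections import Counter

events = [
    {"proc": "message_parser", "event": "hang"},
    {"proc": "backupd",        "event": "ok"},
    {"proc": "message_parser", "event": "hang"},
    {"proc": "message_parser", "event": "hang"},
]

hangs = Counter(e["proc"] for e in events if e["event"] == "hang")
for proc, n in hangs.items():
    if n >= 3:  # a real system would compare against a per-process baseline
        print(f"anomaly: {proc} hung {n} times")
```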
Even small improvements can make these exploit attempts risky — even just a little risky — by improving the chance that a whole exploit chain gets uncovered and patched. That risk can be the difference between 10,000 targets and 100. 8/
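Rough math on why scale is the attacker’s enemy, assuming each deployment carries some small independent chance of detection (the 0.1% figure is made up):

```python
# If each infection has even a small independent probability p of being
# detected (and the whole exploit chain burned and patched), the odds of
# surviving n deployments collapse quickly. p here is an assumption.
p = 0.001  # assumed 0.1% chance any single deployment gets noticed

for n in (100, 10_000):
    exposed = 1 - (1 - p) ** n
    print(f"{n:>6} targets -> {exposed:.2%} chance the chain gets caught")
# 100 targets -> ~9.5%; 10,000 targets -> ~99.995%
```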
Apple has also been doing tons of stuff on the silicon/firmware side, like adding PAC and (soon) MTE. It looks like people have found their way around PAC (or just avoided it), but MTE may have more impact. 9/ developer.arm.com/-/media/Arm%20…
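For anyone who hasn’t dug into MTE: per Arm’s spec, memory gets a 4-bit tag per 16-byte granule, the matching tag rides in unused top pointer bits, and a mismatched load/store faults. A toy software model of the check (the real thing is in hardware):

```python
# Toy model of Arm MTE: 4-bit tags on 16-byte memory granules, with the
# expected tag carried in the pointer's (otherwise unused) top bits. A
# load whose pointer tag mismatches the memory tag faults loudly.
import secrets

GRANULE = 16
mem_tags = {}  # granule index -> 4-bit tag

def tagged_alloc(addr: int, size: int) -> int:
    tag = secrets.randbelow(16)
    for g in range(addr // GRANULE, (addr + size - 1) // GRANULE + 1):
        mem_tags[g] = tag
    return (tag << 56) | addr  # tag travels in pointer bits 56-59

def load(ptr: int) -> None:
    tag, addr = ptr >> 56, ptr & ((1 << 56) - 1)
    if mem_tags.get(addr // GRANULE) != tag:
        raise MemoryError("tag check fault")  # OOB access dies here
    print(f"ok: load at {addr:#x}")

buf = tagged_alloc(0x1000, 32)
load(buf)                # in-bounds: tags match
try:
    load(buf + 64)       # classic linear-overflow target: tag mismatch
except MemoryError as e:
    print("caught:", e)
```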
Of course none of these things help unless Apple turns them on (in all relevant code). Doing this has loads of costs: it can break stuff. You want Apple to have a fire under their ass to put in the effort and take those risks. “There’s no perfect security” is anathema to that. 10/
Also: I think people need to appreciate the *difference* between “100 high value targets” and “10,000 targets, including random journalists”. There is a big difference from society’s point of view… 11/
Right now a couple of non-US journalists I talk to have told me all their sources are clamming up. They’re afraid that reporters’ phones are tapped with Pegasus. I’m sure the scum who launched these attacks are thrilled with this. 12/
While we may never stop targeted attacks, making them expensive enough *to prevent them from being credibly mass-deployed against journalists* is a huge benefit to society. It represents a qualitative improvement. 13/
Anyway I don’t have the answer to any of this. I don’t do software exploits, I just hang around people who do. But it’s obvious that we can do better — and doing so will boost exploit costs and risk in beneficial ways. The way to get companies to do better is public pressure. //
Every article I read on (ZK) rollups almost gets to the real problem, and then misses it. The real problem is the need for storage. ZK proofs won’t solve this.
I keep reading these articles that talk about the problems with rollups. And they’re good articles! E.g.: medium.com/dragonfly-rese…
But they always reach a point where they realize that the problem is state storage, and then they handwave that the solution is going to be something like Mina or zkSync, which don’t fully solve the state storage problem.
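Back-of-envelope on why the proof doesn’t fix storage (figures are illustrative, not any specific rollup’s): a validity proof stays tiny no matter how big the state grows, but somebody still has to hold all of that state to build the next proof and serve reads.

```python
# A validity proof certifies "old_root -> new_root" in a few hundred
# bytes, but the state behind that root still has to live somewhere.
# All figures below are illustrative assumptions.
accounts       = 50_000_000
bytes_per_acct = 100     # key, balance, nonce, misc
proof_size     = 400     # bytes; roughly constant regardless of state size

state_bytes = accounts * bytes_per_acct
print(f"proof: {proof_size} B")
print(f"state someone must store: {state_bytes / 1e9:.0f} GB, and growing")
```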
This new EU legislation granting providers the right to “voluntarily” scan private messages doesn’t break encryption, or take us to a regime of mandatory mass surveillance. But it definitely sets the stage.
What’s remarkable about this stuff is that it’s phrased as “protecting children from child abuse”. And as a parent I appreciate that. But has anyone explored, empirically, whether any of this surveillance actually works to stop the problem?
Here in the US we’ve built an enormous surveillance system to detect instances of child sexual abuse material, it’s been running for years, and the number of reports is going up exponentially.
How many pedophiles are there? Isn’t it a smallish number?
I was going to laugh off this Kaspersky password manager bug, but it is *amazing*. In the sense that I’ve never seen so many broken things in one simple piece of code. donjon.ledger.com/kaspersky-pass…
Like seriously, WTF is even happening here. Why are they sampling *floats*? Why are they multiplying them together? Is this witchcraft?
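If you want to see why multiplying uniform floats is witchcraft of the wrong kind: the product piles up near zero, so using it to index a charset wildly over-selects the early characters. A sketch of the anti-pattern, not their literal code:

```python
# Why "multiply some random floats" wrecks a password generator: the
# product of two uniform [0,1) draws is heavily skewed toward 0, so
# indexing a charset with it over-picks the first characters. (Sketch of
# the anti-pattern, not Kaspersky's literal code.)
import random
from collections import Counter

charset = "abcdefghijklmnopqrstuvwxyz"

def bad_char() -> str:
    x = random.random() * random.random()  # skewed toward 0
    return charset[int(x * len(charset))]

counts = Counter(bad_char() for _ in range(100_000))
print(counts.most_common(3))  # 'a' lands ~4x more often than a fair draw
```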
And here, Kaspersky decided that instead of picking a random password, they should bias the password to be non-random and thus “less likely to be on a cracker list”. 🤦🏻‍♂️
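And the arithmetic on why that biasing backfires: an attacker who reads the same generator code knows the skew and guesses likelier characters first. The distribution below is illustrative, not Kaspersky’s actual one:

```python
# Any deliberate skew in the per-character distribution lowers entropy,
# and an attacker who knows the generator exploits exactly that skew.
# The "biased" distribution here is an illustrative assumption.
import math

n = 26
uniform = [1 / n] * n
biased  = [0.005] * 13 + [0.935 / 13] * 13  # half the chars downweighted

def bits_per_char(dist):
    return -sum(p * math.log2(p) for p in dist if p)

print(f"uniform: {bits_per_char(uniform):.2f} bits/char")  # 4.70
print(f"biased:  {bits_per_char(biased):.2f} bits/char")   # ~4.05
# Over a 12-char password, that skew silently costs ~8 bits of security.
```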
I’m struggling to understand how a 1-bit hash error can get irreversibly incorporated into CT, while all the blockchains of the world hum along happily. groups.google.com/a/chromium.org…
The problem here is not that a hash can be corrupted, because that happens. The problem is that this somehow totally “breaks” the CT log? Seems like an avoidable design error. But it’s early and I’m still drinking my coffee.
Anyway, it seems to me that every cryptographic system should be built with the assumption that something (memory, network, 56K phone modem) will introduce errors, and the system will detect those errors — but not by exploding.
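The defensive pattern I have in mind is cheap. A toy append-only hash chain (not the real CT codebase): verify a submitted entry’s hash *before* committing it, and reject on mismatch instead of merging the corruption.

```python
# Verify before you commit: a flipped bit in a submitted entry should be
# rejected at the door, never merged into the log. (Toy append-only hash
# chain, not the actual CT implementation.)
import hashlib

log = []  # list of (entry, chain_hash) pairs

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def append(entry: bytes, claimed_hash: bytes) -> bool:
    if h(entry) != claimed_hash:   # detect corruption up front
        return False               # reject; the log stays consistent
    prev = log[-1][1] if log else b"\x00" * 32
    log.append((entry, h(prev + claimed_hash)))
    return True

good = b"cert for example.com"
print(append(good, h(good)))                  # True
flipped = bytes([good[0] ^ 0x01]) + good[1:]  # one bit of damage
print(append(flipped, h(good)))               # False: caught, not fatal
```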
This is an amazing paper. It implies (with strong statistical evidence) that the design of a major mobile-data encryption algorithm — used in GPRS data — was deliberately backdoored by its designer. eprint.iacr.org/2021/819
The GPRS standards were extensions to the GSM (2G) mobile standard that allowed phones to use data over cellular networks. This was before LTE. For security, the standards included encryption to provide over-the-air security for your data. 2/
As is “normal” for telephony standards, the encryption was provided by two custom ciphers: GEA-1 and GEA-2. While there were strong export control regulations in place for crypto, there’s little overt indication that either of these ciphers was deliberately weakened. 3/
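For scale, the paper’s headline number: GEA-1 takes a 64-bit key, yet the register initialization collapses key recovery to roughly 2^40 work. Quick arithmetic on what that shortcut buys an attacker:

```python
# The gap the paper reports for GEA-1: a nominal 64-bit key, but an
# attack needing only about 2**40 work thanks to how the registers
# are initialized.
nominal   = 2 ** 64
effective = 2 ** 40

print(f"nominal keyspace:  2^64 = {nominal:.2e}")
print(f"practical attack:  2^40 = {effective:.2e}")
print(f"shortcut factor:   2^{64 - 40} = {nominal // effective:,}x")
```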