There is a take that companies like Apple are never going to be able to stop well-resourced attackers like NSO from launching targeted attacks. At the extremes this take is probably correct. But adopting cynicism as a strategy is a bad approach. 1/
First, look at how Pegasus and other targeted exploits get onto your phone. Most approaches require some user interaction: a compromised website or a phishing link that users have to click.

iMessage, on the other hand, is an avenue for 0-click targeted infection. 2/
While we can’t have “perfect security”, closing down avenues for interactionless targeted infection sure seems like a thing we can make some progress on. 3/
And in fact we’ve seen Apple make some progress on this in the past. Recently, Apple added a “firewall” called BlastDoor to iMessage. This is supposed to prevent attacks like Pegasus. Evidently it doesn’t stop them, but it at least ups the cost of these exploits. 4/
Apple added a firewall because they obviously *don’t* feel that iMessage is secure by itself. There’s too much unsafe parsing code. Adding a firewall is basically an admission that the core product can’t be secured in its current form. 5/
So it seems fairly obvious that ripping out memory-unsafe parsing code and disabling advanced (non-plaintext) features, while not guaranteed to solve the problem, is at least a tractable engineering goal, something Apple can devote its enormous resources to. 6/
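To make that concrete, here is a minimal, hypothetical sketch of what memory-safe parsing of untrusted message data looks like in Swift. None of this is Apple’s code: the function name, the length-prefixed format, and the field itself are invented for illustration.

```swift
import Foundation

// Hypothetical example (not Apple's code): parse an untrusted,
// big-endian length-prefixed UTF-8 field from a message payload.
func parseAttachmentName(_ payload: Data) -> String? {
    // Need at least 4 bytes for the length prefix.
    guard payload.count >= 4 else { return nil }

    // Assemble the 32-bit length from the first four bytes.
    let length = payload.prefix(4).reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }

    // Refuse attacker-supplied lengths that would overrun the buffer,
    // instead of trusting them the way unchecked C parsers often do.
    guard Int(length) <= payload.count - 4 else { return nil }

    let body = payload.dropFirst(4).prefix(Int(length))
    return String(data: body, encoding: .utf8)
}
```

The point isn’t this particular function; it’s that in a memory-safe language the failure mode for hostile input is a rejected message, not corrupted memory that an exploit chain can build on.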
Another area where Apple has already stepped up its game is logging. Apple’s power-monitoring telemetry records information about weird process “hang” events, the kind of anomaly that failed exploit attempts can leave behind. There’s a privacy tradeoff here, but Apple should lean into this. 7/
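Apple’s actual telemetry pipeline isn’t public, so treat this as a toy sketch of the general idea only (the function name and deadline are invented): run risky work against a deadline and record a diagnostic event when it stalls, because a failed exploit attempt often surfaces as exactly that kind of anomaly.

```swift
import Foundation

// Toy illustration of hang telemetry (not Apple's implementation):
// run a risky task with a deadline and emit a diagnostic record if
// it stalls, since failed exploit attempts often show up as hangs
// or crashes in parsing code.
func runWithHangTelemetry(label: String, deadline: TimeInterval, work: @escaping () -> Void) {
    let finished = DispatchSemaphore(value: 0)
    DispatchQueue.global().async {
        work()
        finished.signal()
    }
    if finished.wait(timeout: .now() + deadline) == .timedOut {
        // A real system would ship this to a diagnostics log, not print it.
        print("telemetry: task '\(label)' exceeded \(deadline)s deadline")
    }
}
```

The value isn’t any single record; it’s that aggregated anomaly reports raise the odds that a broken exploit chain gets noticed and investigated.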
Even small improvements can make these exploit attempts risky — even just a little risky — by improving the chance that a whole exploit chain gets uncovered and patched. That risk can be the difference between 10,000 targets and 100. 8/
Apple has also been doing tons of stuff on the silicon/firmware side, like adding PAC and (soon) MTE. It looks like people have found their way around PAC (or just avoided it), but MTE may have more impact. 9/ developer.arm.com/-/media/Arm%20…
Of course, none of these things help unless Apple turns them on (in all relevant code). Doing this has loads of costs: it can break stuff. You want Apple to have a fire under their ass to put in the effort and take those risks. “There’s no perfect security” is anathema to that. 10/
Also: I think people need to appreciate the *difference* between “100 high value targets” and “10,000 targets, including random journalists”. There is a big difference from society’s point of view… 11/
Right now a couple of non-US journalists I talk to have told me all their sources are clamming up. They’re afraid that reporters’ phones are tapped with Pegasus. I’m sure the scum who launched these attacks are thrilled with this. 12/
While we may never stop targeted attacks, making them expensive enough *to prevent them from being credibly mass-deployed against journalists* is a huge benefit to society. It represents a qualitative improvement. 13/
Anyway I don’t have the answer to any of this. I don’t do software exploits, I just hang around people who do. But it’s obvious that we can do better — and doing so will boost exploit costs and risk in beneficial ways. The way to get companies to do better is public pressure. //

