So Apple has introduced a new system called “Private Cloud Compute” that allows your phone to offload complex (typically AI) tasks to specialized secure devices in the cloud. I’m still trying to work out what I think about this. So here’s a thread. 1/
Apple, unlike most other mobile providers, has traditionally done a lot of processing on-device. For example, all of the machine learning and OCR text recognition on Photos is done right on your device. 2/
The problem is that while modern phone “neural” hardware is improving, it’s not improving fast enough to take advantage of all the crazy features Silicon Valley wants from modern AI, including generative AI and its ilk. This fundamentally requires servers. 3/
But if you send your tasks out to servers in “the cloud” (god using quotes makes me feel 80), this means sending incredibly private data off your phone and out over the Internet. That exposes you to spying, hacking, and the data hungry business model of Silicon Valley. 4/
The solution Apple has come up with is to try to build secure and trustworthy hardware in their own data centers. Your phone can then “outsource” heavy tasks to this hardware. Seems easy, right? Well: here’s the blog post: security.apple.com/blog/private-c… 5/
TL;DR: it is not easy. Building trustworthy computers is literally the hardest problem in computer security. Honestly it’s almost the only problem in computer security. But while it remains a challenging problem, we’ve made a lot of advances. Apple is using almost all of them. 6/
The first thing Apple is doing is using all the advances they’ve made in building secure phones and PCs in their new servers. This involves using Secure Boot and a Secure Enclave Processor (SEP) to hold keys. They’ve presumably turned on all the processor security features. 7/
Then they’re throwing all kinds of processes at the server hardware to make sure the hardware isn’t tampered with. I can’t tell if this prevents hardware attacks, but it seems like a start. 8/
They also use a bunch of protections to ensure that software is legitimate. One is that the software is “stateless” and allegedly doesn’t keep information between user requests. To help ensure this, each server/node reboot re-keys and wipes all storage. 9/
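To make that concrete, here’s a toy sketch (mine, not Apple’s) of the re-key-on-reboot idea: node storage is encrypted under a key that only ever exists in memory, so a reboot means a fresh key and everything previously written becomes unreadable. All names here are made up, and the real nodes do this in hardware/firmware, not Python.

```python
# Toy illustration of "re-key and wipe on every reboot": storage is encrypted
# under a key that exists only in memory, so a reboot (a brand-new key) makes
# anything previously written unrecoverable. Purely illustrative; names are made up.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EphemeralNodeStorage:
    def __init__(self):
        self._aead = AESGCM(AESGCM.generate_key(bit_length=256))  # key lives in RAM only

    def write(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + self._aead.encrypt(nonce, plaintext, None)

    def read(self, blob: bytes) -> bytes:
        return self._aead.decrypt(blob[:12], blob[12:], None)

# "Rebooting" means constructing a fresh EphemeralNodeStorage: old blobs become garbage.
```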
A second protection is that the operating system can “attest” to the software image it’s running. Specifically, it signs a hash of the software and shares this with every phone/client. If you trust this infrastructure, you’ll know it’s running a specific piece of software. 10/
Of course, knowing that the node is running a specific piece of software doesn’t help you if you don’t trust the software. So Apple plans to put each binary image into a “transparency log” and publish the software. But here’s a sticky point: not with the full source code. 11/
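To make the attestation-plus-transparency-log story concrete, here’s a hedged sketch of the two checks a client might do before trusting a node. The Ed25519 key type, the function names, and the data formats are my assumptions for illustration, not Apple’s actual protocol.

```python
# Hedged sketch of the two client-side checks: (1) does the attested measurement
# (hash of the booted software image) appear in the published transparency log,
# and (2) did the node actually sign that measurement? Names and formats are
# illustrative assumptions, not Apple's protocol.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def node_is_acceptable(node_pubkey_bytes: bytes,
                       attested_measurement: bytes,
                       signature: bytes,
                       transparency_log: set) -> bool:
    if attested_measurement not in transparency_log:
        return False   # software Apple never published -- refuse to use this node
    pubkey = ed25519.Ed25519PublicKey.from_public_bytes(node_pubkey_bytes)
    try:
        pubkey.verify(signature, attested_measurement)   # raises on a bad signature
        return True
    except InvalidSignature:
        return False
```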
Security researchers will get *some code* and a VM they can use to run the software. They’ll then have to reverse-engineer the binaries to see if they’re doing unexpected things. It’s a little suboptimal. 12/
When your phone wants to outsource a task, it will contact Apple and obtain a list of servers/nodes and their keys. It will then encrypt its request to those servers, and one will process it. They’re even using fancy anonymous credentials and a third-party relay to hide your IP. 13/
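Here’s a rough sketch of the “encrypt the request to the node” step, using a generic X25519 + HKDF + AES-GCM hybrid. Everything about it (the algorithm choices, the labels) is my assumption; the real protocol, the anonymous credentials, and the IP-hiding relay are considerably more involved.

```python
# Rough sketch: encrypt the request so that only the chosen node can read it.
# Generic hybrid construction; Apple's real protocol and key formats differ.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def encrypt_request(node_pubkey: x25519.X25519PublicKey, request: bytes):
    eph_priv = x25519.X25519PrivateKey.generate()      # fresh ephemeral key per request
    shared = eph_priv.exchange(node_pubkey)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"pcc-request-sketch").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, request, None)
    eph_pub = eph_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    # Send (eph_pub, nonce, ciphertext); only the node holding the private key can decrypt.
    return eph_pub, nonce, ciphertext
```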
Ok there are probably half a dozen more technical details in the blog post. It’s a very thoughtful design. Indeed, if you gave an excellent team a huge pile of money and told them to build the best “private” cloud in the world, it would probably look like this. 14/
But now the tough questions. Is it a good idea? And is it as secure as what Apple does today? And most importantly:
I admit that as I learned about this feature, it made me kind of sad. The thought that was going through my head was: this is going to be too much of a temptation. Once you can “safely” outsource tasks to the cloud, why bother doing them locally? Outsource everything!
As best I can tell, Apple does not have explicit plans to announce when your data is going off-device to Private Cloud Compute. You won’t opt into this, you won’t necessarily even be told it’s happening. It will just happen. Magically.
I don’t love that part. 17/
Finally, there are so many invisible sharp edges that could exist in a system like this. Hardware flaws. Issues with the cryptographic attestation framework. Clever software exploits. Many of these will be hard for security researchers to detect. That worries me too. 18/
Wrapping up on a more positive note: it’s worth keeping in mind that sometimes the perfect is the enemy of the really good.
In practice, the alternative to on-device processing is to ship your private data to OpenAI or someplace even sketchier, where who knows what might happen to it. 19/
And of course, keep in mind that super-spies aren’t your biggest adversary. For many people your biggest adversary is the company who sold you your device/software. This PCC system represents a real commitment by Apple not to “peek” at your data. That’s a big deal. 20/
In any case, this is the world we’re moving to. Your phone might seem to be in your pocket, but a part of it lives 2,000 miles away in a data center. As security folks we probably need to get used to that fact, and do the best we can to make sure all parts are secure. //fin
Ok, look people: Signal as a *protocol* is excellent. As a service it’s excellent. But as an application running on your phone, it’s… an application running on your consumer-grade phone. The targeted attacks people use on those devices are well known.
There is malware that targets and compromises phones. There has been malware that targets the Signal application. It’s an app that processes many different media types, and that means there’s almost certainly a vulnerability to be exploited at any given moment in time.
If you don’t know what this means, it means that you shouldn’t expect Signal to defend against nation-state malware. (But you also shouldn’t really expect any of the other stuff here, like Chrome, to defend you in that circumstance either.)
You should use Signal. Seriously. There are other encrypted messaging apps out there, but I don’t have as much faith in their longevity. In particular I have major concerns about the sustainability of for-profit apps in our new “AI” world.
I have too many reasons to worry about this but that’s not really the point. The thing I’m worried about is that, as the only encrypted messenger people seem to *really* trust, Signal is going to end up being a target for too many people.
Signal was designed to be a consumer-grade messaging app. It’s really, really good for that purpose. And obviously “excellent consumer grade” has a lot of intersection with military-grade cryptography just because that’s how the world works. But it is being asked to do a lot!
New public statement from Apple (sent to me privately):
“As of Friday, February 21, Apple can no longer offer Advanced Data Protection as a feature to new users in the UK.”
Additionally:
"Apple can no longer offer Advanced Data Protection (ADP) in the United Kingdom to new users and current UK users will eventually need to disable this security feature. ADP protects iCloud data with end-to-end encryption, which means the data can only be decrypted by the user who owns it, and only on their trusted devices. We are gravely disappointed that the protections provided by ADP will not be available to our customers in the UK given the continuing rise of data breaches and other threats to customer privacy. Enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before. Apple remains committed to offering our users the highest level of security for their personal data and are hopeful that we will be able to do so in the future in the United Kingdom. As we have said many times before, we have never built a backdoor or master key to any of our products or services and we never will.”
This will not affect:
iMessage encryption
iCloud Keychain
FaceTime
Health data
These will remain end-to-end encrypted. Other services like iCloud Backup and Photos will not be end-to-end encrypted.
What is this new setting that sends photo data to Apple servers and why is it default “on” at the bottom of my settings screen?
I understand that it uses differential privacy and some fancy cryptography, but I would have loved to know what this is before it was deployed and turned on by default without my consent.
This seems to involve two separate components: one that builds an index using differential privacy (with some set privacy budget), and another that performs the actual search homomorphically?
Does this work well enough that I want it on? I don’t know. I wasn’t given the time to think about it.
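For intuition, here’s what the differential-privacy half of something like this looks like in miniature (my sketch, with a made-up epsilon and a “count” framing): calibrated noise gets added on-device before any value is reported. The homomorphic-search component is a separate, much heavier piece that this doesn’t touch.

```python
# Illustrative only: add Laplace noise, calibrated to a privacy budget epsilon,
# to a device-local value before it is ever reported. Epsilon and names are mine.
import random

def noisy_count(true_count: int, epsilon: float) -> float:
    """Release a sensitivity-1 count with Laplace(1/epsilon) noise."""
    # The difference of two exponentials with rate epsilon is Laplace-distributed.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# e.g., a per-category count computed on-device, noised before it ever leaves
print(noisy_count(true_count=12, epsilon=1.0))
```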
Most of cryptography research is developing a really nice mental model for what’s possible and impossible in the field, so you can avoid wasting time on dead ends. But every now and then someone kicks down a door and blows up that intuition, which is the best kind of result.
One of the most surprising privacy results of the last 5 years is the LMW “doubly efficient PIR” paper. The basic idea is that I can load an item from a public database without the operator seeing which item I’m loading & without it having to touch every item in the DB each time.
Short background: Private Information Retrieval isn’t a new idea. It lets me load items from a (remote) public database without the operator learning what item I’m asking for. But traditionally there’s a *huge* performance hit for doing this.
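For a sense of that hit, here’s a toy version of classical two-server PIR (not the LMW scheme, and with a made-up database): neither server learns which record you asked for, but each one still has to touch every record on every query, which is exactly the cost “doubly efficient” PIR avoids.

```python
# Toy classical two-server PIR (NOT the LMW scheme). Neither server learns which
# record the client wants, but each server must still XOR over every record on
# every query. Assumes the two servers do not collude; database is made up.
import secrets

DB = [b"record00", b"record01", b"record02", b"record03"]   # equal-length records
N = len(DB)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, query_bits):
    """Each server XORs together the records its query vector selects."""
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, query_bits):
        if bit:
            acc = xor_bytes(acc, rec)
    return acc

i = 2                                            # the record we secretly want
q1 = [secrets.randbits(1) for _ in range(N)]     # uniformly random query
q2 = q1.copy()
q2[i] ^= 1                                       # differs from q1 only at position i

# Each query alone looks random; XORing the two answers cancels everything but DB[i].
assert xor_bytes(server_answer(DB, q1), server_answer(DB, q2)) == DB[i]
```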
The newly revived Chat Control regulation is back. It still appears to demand client-side scanning in encrypted messengers, but it drops “detection of new CSAM” and simply demands detection of known CSAM. However: it retains the option to change this requirement back.
For those who haven’t been paying attention, the EU Council and Commission have been relentlessly pushing a regulation that would break encryption. It died last year, but it’s back again — this time with Hungary in the driver’s seat. And the timelines are short.
The goal is to require all apps to scan messages for child sexual abuse content (at first: other types of content have been proposed, and will probably be added later.) This is not possible for encrypted messengers without new technology that may break encryption.
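Mechanically, “client-side scanning for known content” means something like the sketch below: the app compares outgoing content against a distributed fingerprint list before end-to-end encryption ever happens. Real proposals use perceptual hashes rather than SHA-256, and every name here is hypothetical; the point is only that the check has to sit outside the encryption.

```python
# Hypothetical sketch only. Real proposals use perceptual hashing, not SHA-256;
# the point is that the check runs on plaintext, before E2E encryption.
import hashlib

KNOWN_BAD_HASHES: set = set()               # fingerprint list pushed to clients

def report_match(digest: bytes) -> None:
    ...                                     # hypothetical reporting hook

def send_message(plaintext: bytes, encrypt_and_send) -> None:
    digest = hashlib.sha256(plaintext).digest()
    if digest in KNOWN_BAD_HASHES:
        report_match(digest)                # scanning happens on the plaintext
        return
    encrypt_and_send(plaintext)             # only afterwards is it encrypted
```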