Matthew Green
Jun 10 · 22 tweets · 6 min read
So Apple has introduced a new system called “Private Cloud Compute” that allows your phone to offload complex (typically AI) tasks to specialized secure devices in the cloud. I’m still trying to work out what I think about this. So here’s a thread. 1/
Apple, unlike most other mobile providers, has traditionally done a lot of processing on-device. For example, all of the machine learning and OCR text recognition in Photos runs right on your device. 2/
The problem is that while modern phone “neural” hardware is improving, it’s not improving fast enough to take advantage of all the crazy features Silicon Valley wants from modern AI, including generative AI and its ilk. This fundamentally requires servers. 3/
But if you send your tasks out to servers in “the cloud” (god using quotes makes me feel 80), this means sending incredibly private data off your phone and out over the Internet. That exposes you to spying, hacking, and the data hungry business model of Silicon Valley. 4/
The solution Apple has come up with is to try to build secure and trustworthy hardware in their own data centers. Your phone can then “outsource” heavy tasks to this hardware. Seems easy, right? Well: here’s the blog post. 5/ security.apple.com/blog/private-c…
TL;DR: it is not easy. Building trustworthy computers is literally the hardest problem in computer security. Honestly it’s almost the only problem in computer security. But while it remains a challenging problem, we’ve made a lot of advances. Apple is using almost all of them. 6/
The first thing Apple is doing is using all the advances they’ve made in building secure phones and PCs in their new servers. This involves using Secure Boot and a Secure Enclave Processor (SEP) to hold keys. They’ve presumably turned on all the processor security features. 7/
Then they’re throwing all kinds of operational processes at the server hardware to make sure it isn’t tampered with. I can’t tell if this prevents hardware attacks, but it seems like a start. 8/
They also use a bunch of protections to ensure that software is legitimate. One is that the software is “stateless” and allegedly doesn’t keep information between user requests. To help ensure this, each server/node reboot re-keys and wipes all storage. 9/
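To make that concrete, here’s a minimal sketch of the per-boot re-keying idea, in Python with the `cryptography` package. This is my illustration, not Apple’s code, and every name in it is hypothetical: the point is just that if everything written to storage is sealed under a key that exists only in memory for the current boot, then rebooting with a fresh key is as good as wiping the disk.

```python
# Hypothetical sketch of "stateless" storage via per-boot re-keying.
# Not Apple's implementation; in PCC the equivalent keys would live in the
# Secure Enclave, and the details are certainly more involved.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EphemeralStore:
    def __init__(self):
        # Generated fresh at every boot and never written to disk.
        self._boot_key = AESGCM.generate_key(bit_length=256)

    def seal(self, plaintext: bytes) -> bytes:
        # Encrypt data before it touches local storage.
        nonce = os.urandom(12)
        return nonce + AESGCM(self._boot_key).encrypt(nonce, plaintext, None)

    def unseal(self, blob: bytes) -> bytes:
        nonce, ct = blob[:12], blob[12:]
        return AESGCM(self._boot_key).decrypt(nonce, ct, None)

# After a reboot, a new EphemeralStore holds a new key, so anything sealed
# before the reboot can no longer be decrypted: the old state is effectively gone.
```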
A second protection is that the operating system can “attest” to the software image it’s running. Specifically, it signs a hash of the software and shares this with every phone/client. If you trust this infrastructure, you’ll know it’s running a specific piece of software. 10/
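In rough runnable-pseudocode form, the client-side check might look like the sketch below. This is my simplification, not Apple’s protocol: the key type, function names, and the idea of a single flat signature are all assumptions, and a real attestation chain adds certificates, freshness nonces, and device identity on top.

```python
# Hypothetical client-side attestation check: verify the node's signature over
# its reported software measurement, then compare that measurement against a
# release hash the client already trusts (e.g. one found in the transparency log).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_attestation(attest_pubkey: bytes, measurement: bytes,
                       signature: bytes, expected_release: bytes) -> bool:
    try:
        Ed25519PublicKey.from_public_bytes(attest_pubkey).verify(
            signature, measurement)
    except InvalidSignature:
        return False                         # signature doesn't check out
    return measurement == expected_release   # node runs the software we expect
```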
Of course, knowing that the phone is running a specific piece of software doesn’t help you if you don’t trust the software. So Apple plans to put each binary image into a “transparency log” and publish the software.

But here’s a sticky point: not with the full source code. 11/
Security researchers will get *some code* and a VM they can use to run the software. They’ll then have to reverse-engineer the binaries to see if they’re doing unexpected things. It’s a little suboptimal. 12/
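For intuition on what a transparency log buys you, here’s a hedged sketch of a Merkle inclusion check, the same mechanism Certificate Transparency uses. It’s a generic illustration rather than Apple’s actual log format: the client recomputes the path from a leaf (the release measurement) up to a published tree root, so the log operator can’t quietly serve one piece of software to you and a different one to everyone else.

```python
# Hypothetical Merkle-tree inclusion check (Certificate Transparency style),
# illustrating the transparency-log idea; not Apple's log format.
import hashlib

def leaf_hash(measurement: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + measurement).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(measurement: bytes,
                     proof: list[tuple[str, bytes]],
                     root: bytes) -> bool:
    """proof is a list of ("L" or "R", sibling_hash) pairs, leaf to root."""
    h = leaf_hash(measurement)
    for side, sibling in proof:
        h = node_hash(sibling, h) if side == "L" else node_hash(h, sibling)
    return h == root
```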
When your phone wants to outsource a task, it will contact Apple and obtain a list of servers/nodes and their keys. It will then encrypt its request to all servers, and one will process it. They’re even using fancy anonymous credentials and a third-party relay to hide your IP. 13/
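Here’s a rough sketch of the client side of that last step, again mine rather than Apple’s: encrypt the request under one attested node’s public key with an ephemeral key exchange, so only that node can open it. The real protocol is considerably fancier (the anonymous credentials and the IP-hiding relay are ignored here), and all names below are hypothetical.

```python
# Hypothetical "encrypt to an attested node" sketch using X25519 + AES-GCM.
# Stands in for whatever hybrid encryption PCC actually uses; not Apple's code.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def encrypt_to_node(node_pubkey: bytes, request: bytes) -> bytes:
    eph = X25519PrivateKey.generate()              # fresh key per request
    shared = eph.exchange(X25519PublicKey.from_public_bytes(node_pubkey))
    key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"pcc-sketch").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, request, None)
    eph_pub = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    # The phone would send eph_pub + nonce + ciphertext through the relay.
    return eph_pub + nonce + ciphertext
```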
Ok there are probably half a dozen more technical details in the blog post. It’s a very thoughtful design. Indeed, if you gave an excellent team a huge pile of money and told them to build the best “private” cloud in the world, it would probably look like this. 14/
But now the tough questions. Is it a good idea? Is it as secure as what Apple does today? And most importantly…
I admit that as I learned about this feature, it made me kind of sad. The thought going through my head was: this is going to be too much of a temptation. Once you can “safely” outsource tasks to the cloud, why bother doing them locally? Outsource everything!
As best I can tell, Apple does not have explicit plans to announce when your data is going off-device to Private Cloud Compute. You won’t opt into this, and you won’t necessarily even be told it’s happening. It will just happen. Magically.

I don’t love that part. 17/
Finally, there are so many invisible sharp edges that could exist in a system like this. Hardware flaws. Issues with the cryptographic attestation framework. Clever software exploits. Many of these will be hard for security researchers to detect. That worries me too. 18/
Wrapping up on a more positive note: it’s worth keeping in mind that sometimes the perfect is the enemy of the really good.

In practice, the alternative to on-device processing is to ship your private data to OpenAI or someplace sketchier, where who knows what might happen to it. 19/
And of course, keep in mind that super-spies aren’t your biggest adversary. For many people your biggest adversary is the company that sold you your device and software. This PCC system represents a real commitment by Apple not to “peek” at your data. That’s a big deal. 20/
In any case, this is the world we’re moving to. Your phone might seem to be in your pocket, but a part of it lives 2,000 miles away in a data center. As security folks we probably need to get used to that fact, and do the best we can to make sure all parts are secure. //fin

