I’m glad that Apple is feeling the heat and changing their policy. But this illustrates something important: in building this system, the *only limiting principle* is how much heat Apple can tolerate before it changes its policies. reuters.com/technology/aft…
I’m grateful that Apple has been so open and responsive to the technical community. I wish they’d done this before they launched their unpopular service, not after. Some of us have been talking about these issues for two years.
People keep asking whether clarification of the technology will help. I think the problem here is that the technology details actually *obscure* what this system does, and Apple is using this to have it both ways whenever they get criticism.
For example, when Apple initially launched this last week they were clearly worried that their system would be mistaken for server-side scanning. So they emphasized that it scans photos *on your device.*
This created a backlash because people (IMHO rightly) asked: why do you need to scan all the photos on my device? At which point Apple began to “clarify” the issue by pointing out that there was a complicated two-party protocol and so maybe matching doesn’t happen on the client?
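To see why “client-side or server-side?” is such a slippery question, here’s a toy sketch of a match decision that’s split across two parties. This is my own stand-in, not Apple’s protocol (theirs uses a threshold private set intersection construction), and every name in it is hypothetical:

```python
import hashlib
import hmac

# Toy stand-in, NOT Apple's actual cryptography. The only point is that
# the match decision is split across two parties: the client alone never
# learns whether a photo matched, and the server alone never sees the photo.

SERVER_KEY = b"server-only-secret"  # hypothetical server-side blinding key

def client_makes_voucher(photo_bytes: bytes) -> bytes:
    # Client hashes the photo (SHA-256 standing in for a perceptual hash
    # like NeuralHash) and uploads the result as a "safety voucher".
    # It holds no database, so it can't tell whether anything matched.
    return hashlib.sha256(photo_bytes).digest()

def server_checks_voucher(voucher: bytes, blinded_db: set) -> bool:
    # Server finishes the match under a key the client never sees.
    # It never saw the photo itself, only the voucher.
    tag = hmac.new(SERVER_KEY, voucher, hashlib.sha256).digest()
    return tag in blinded_db
```

So where does the “scanning” happen? The client computes the hash; the server completes the match. The honest answer is “both,” which is exactly the ambiguity that lets Apple answer criticism from either direction.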
In writing this op-ed for the Times, @alexstamos and I had to go through two days of fact-checking delays while Apple kept insisting that their system doesn’t do scanning on the client. We kept quoting their own documentation back at them. google.com/amp/s/www.nyti…
Apple has also started emphasizing that they will include “hash publications” to prevent selective targeting of individuals. This is great! None of these things were properly described in the technical/security reviews they did last week!
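For what it’s worth, here’s a minimal sketch of why publishing a database hash helps, assuming the mechanism works the way I’d expect (Apple hasn’t specified the details, and these names are mine):

```python
import hashlib

# If there is one public commitment to the database, a device can refuse
# to scan against any per-user, targeted variant of it.

def database_commitment(db_bytes: bytes) -> str:
    return hashlib.sha256(db_bytes).hexdigest()

# Apple publishes: published = database_commitment(global_db_bytes)

def device_should_scan(local_db_bytes: bytes, published: str) -> bool:
    # Every device checks its local copy against the same public value.
    return database_commitment(local_db_bytes) == published
```

If every device checks its local database against one public value, Apple can’t quietly ship a special database to one user. Of course, this only constrains *targeting*; it says nothing about what goes into the global database.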
But overall I feel like a lot of Apple’s technical clarification has been designed to obscure what they’re doing, to people who generally understand it and just don’t like it.
In cryptography we have the notion of an “ideal functionality.” It’s a way to describe what we want from a complicated protocol by ignoring the crypto and just imagining we have a magical trusted server doing the work. Here is my impression of Apple’s last-week version.
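Here’s that impression in rough pseudocode. Everything in it is illustrative: the names, the hash function, and the threshold value are my stand-ins, not Apple’s:

```python
import hashlib

THRESHOLD = 30              # illustrative; the real threshold is Apple's to set
CSAM_DATABASE: set = set()  # contents deliberately hidden from users

def perceptual_hash(photo: bytes) -> bytes:
    # stand-in for a perceptual hash such as NeuralHash
    return hashlib.sha256(photo).digest()

def report_to_apple(matches: list) -> None:
    print(f"flagging account: {len(matches)} matches for human review")

def ideal_functionality(device_photo_library: list) -> None:
    # A magical trusted party that sees EVERY photo on the device and
    # counts matches against a database the user can't inspect.
    matches = [p for p in device_photo_library
               if perceptual_hash(p) in CSAM_DATABASE]
    if len(matches) >= THRESHOLD:
        report_to_apple(matches)
```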
This all assumes the crypto works perfectly, but the point here is that “is the scanning happening server- or client-side?” isn’t the right question. The right question is: does Apple scan only the photos you chose to upload to the server? The answer is no: the functionality scans everything.
But even this description isn’t accurate, because the new threat model Apple outlined this week actually expands the ideal functionality as follows. apple.com/child-safety/p…
It now acknowledges that there are millions of devices and they all need to be scanned using the same database. (Not shown here: devices’ photo libraries can change over time too.) It also adds NCMEC, slightly breaking the formalism.
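Roughly, the expanded version looks like this. Again, this is my own sketch of their threat-model document, not Apple’s spec; it reuses perceptual_hash, THRESHOLD, and report_to_apple from the sketch above, and assumes a hypothetical device record with a .photos list:

```python
def expanded_functionality(all_devices: list, org_hash_lists: list) -> None:
    # The database is now built from hash lists supplied by multiple
    # child-safety organizations, NCMEC among them; adding a named real
    # party is what slightly breaks the clean trusted-server formalism.
    database = set.intersection(*org_hash_lists)
    for device in all_devices:  # millions of devices, one shared database
        matches = [p for p in device.photos
                   if perceptual_hash(p) in database]
        if len(matches) >= THRESHOLD:
            report_to_apple(matches)
```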
Everyone keeps writing these doomed takes about how “the US government is going to force tech companies to comply with surveillance, so they might as well just give in preemptively.” As if it’s inevitable, and we should just hope to keep whatever scraps of privacy we can.
Even I was pessimistic last week. What I’ve seen in the past week has renewed my faith in my fellow countrymen, or at least made me realize how tired and fed up with invasive tech surveillance they really are.
People are really mad. They know that they used to be able to have private family photo albums and letters, and they could use computers without thinking about who else had their information. And they’re looking for someone to blame for the fact that this has changed.
I’m not denying that there’s a CSAM problem in the sense that there is a certain small population of users who promote this terrible stuff, and that there is awful abuse that drives it. But when we say there’s a “problem”, we’re implying it’s getting rapidly worse.
The actual truth here is that we have no idea how bad the underlying problem is. What we have are increasingly powerful automated tools that detect the stuff. As those tools get better, they generate overwhelming numbers of reports.
Someone pointed out that Apple’s Intel Macs probably can’t run their client-side scanning software, because they lack a Neural Engine coprocessor. Real-time scanning on Macs is going to require an upgrade to newer M1 hardware (or beyond).
It sure is a weird thing to pay a ton of money for Apple’s latest hardware, only to have the first thing it does be scanning your personal files.
Some other folks have asked whether corporate and enterprise-managed devices will be subject to scanning. What I’ve heard is that enterprise customers are *very* surprised and upset. Apple hasn’t announced if there will be an MDM setting to disable it.
It’s gradually dawning on me how badly Apple screwed up with this content scanning announcement.
If Apple had announced that they were scanning text messages sent through their systems, or photo libraries shared with outside users — well, I wouldn’t have been happy with that. But I think the public would have accepted it.
But they didn’t do that. They announced that they’re going to do real-time scanning of individuals’ *private photo libraries* on their own phones.
That’s… something different. And new. And uncomfortable.
Yesterday we were gradually headed towards a future where less and less of our information had to be under the control and review of anyone but ourselves. For the first time since the 1990s we were taking our privacy back. Today we’re on a different path.
I know the people who did this have good intentions. They think this was inevitable, that we can control it. That it’ll be used only for good, and if it isn’t used for good then that would have happened anyway.
I was alive in the 1990s. I remember we had things like computers that weren’t connected to the Internet, and photo albums that weren’t subject to continuous real-time scanning. Society seemed… stable?
Reading through the analysis. This is not… a security review.
“If we assume there is no adversarial behavior in the security system, then the system will almost never malfunction. Since confidentiality is only broken when this system malfunctions, the system is secure.”
Don’t worry though. There is absolutely no way you can learn which photos the system is scanning for. Why is this good? Doesn’t this mean the system can literally scan for anything with no accountability? Not addressed.