@claudiorlandi The claim that the protocol is “auditable.” This is a strong claim that is being made to consumers and politicians. What does it mean? I think it means that pdata [the first protocol message from the server] is a secure commitment to the scanning database X. 1/
@claudiorlandi In other words, under the assumption of *a malicious server*, clients can be assured (provided they check that their pdata is what Apple intended to publish) that Apple cannot scan for items outside of a committed database. And this is (at least privately) verifiable. 2/
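To make the claim concrete, here is a minimal sketch of the *shape* of that property, using a plain hash in place of whatever commitment construction Apple actually intends. This is my own toy illustration, not anything from Apple's spec: if pdata really commits to X, an auditor who is shown X should be able to recheck it.

```python
# Toy illustration only, not Apple's construction: the shape of the property
# being claimed. If pdata is a binding commitment to X, then anyone who is
# shown X can recompute the commitment and check it against what was published.
import hashlib

def commit(X: list[bytes]) -> bytes:
    """Hash the sorted database entries into a single commitment value."""
    h = hashlib.sha256()
    for entry in sorted(X):
        h.update(hashlib.sha256(entry).digest())
    return h.digest()

def audit(pdata: bytes, X: list[bytes]) -> bool:
    """An auditor who is shown the database X checks the published pdata."""
    return commit(X) == pdata
```

A plain hash like this is binding but does nothing to hide X from clients; the real pdata has to do both, which is part of what makes the claim worth pinning down.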
@claudiorlandi My first observation is that while this “auditability” property exists in Apple’s public claims, no corresponding “dishonest server” properties exist anywhere in the formal description of the protocol. Check for yourself. 3/
@claudiorlandi Moreover, based on the protocol as described in the paper I can’t see how it *could* apply. For example, when a slot in the Cuckoo hash table is empty the server fills it with a “random element.” But what if it doesn’t choose that element honestly at random? 4/
@claudiorlandi For example: what if Apple wants to embed other meaningful elements in those so-called “random elements”? Under DDH one can construct structured elements that are indistinguishable from random. 5/
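To see why this matters, here is a toy stand-in: a hash-based sketch rather than the DDH construction alluded to above, and nothing like Apple's actual encoding. The two fillers below look identical to a client, but one of them silently encodes an extra, hidden target.

```python
# Toy stand-in, not the deployed protocol and not the DDH construction itself:
# two ways to fill an "empty" slot that a client cannot tell apart, even though
# one of them silently encodes an extra, hidden target.
import hashlib, secrets

def honest_filler() -> bytes:
    """Honest server: a genuinely random 32-byte filler element."""
    return secrets.token_bytes(32)

def malicious_filler(extra_target: bytes) -> bytes:
    """Malicious server: a filler derived from a hidden extra target. Without
    knowing extra_target, this looks just like the honest output."""
    return hashlib.sha256(b"slot-filler:" + extra_target).digest()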
@claudiorlandi Similarly, if pdata is to be “auditable” (meaning a secure commitment to X), then this treatment of the hash functions seems extremely casual. 6/
@claudiorlandi An actual auditability property would assume that the server is malicious and that there is (at minimum) some trusted Verifier that can take (X, pdata, W), where W is a secret witness, and verify that pdata was constructed correctly. There’s more required, but that’s the basics. 7/
@claudiorlandi It goes without saying that you could probably tighten this protocol to have that property, for example by generating every random coin from a PRG on a known seed and carefully specifying how the “random points” are verifiably sampled. But it’s not specified. 8/
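Here is roughly what I mean, as a toy sketch of my own (not anything in Apple's spec): if every filler value is derived from a PRG on a seed, then the seed plays the role of the witness W above, and an auditor holding (X, seed) can recompute pdata and compare.

```python
# Minimal sketch of my own, assuming the PRG-on-a-seed approach described
# above; this is not Apple's construction. Every "random" filler is derived
# deterministically from the seed, so an auditor holding (X, seed) can
# recompute pdata and compare it against what the server published.
import hashlib, hmac

def prg(seed: bytes, label: bytes, n: int) -> bytes:
    """Deterministic pseudorandom bytes: HMAC-SHA256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(seed, label + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def build_pdata(X: list[bytes], seed: bytes, table_size: int) -> list[bytes]:
    """Toy 'pdata': a single-hash table (not a real cuckoo table), where every
    empty slot gets a PRG-derived filler instead of ad-hoc randomness."""
    table = [None] * table_size
    for entry in X:
        slot = int.from_bytes(hashlib.sha256(entry).digest(), "big") % table_size
        table[slot] = hashlib.sha256(b"item" + entry).digest()   # stand-in encoding
    return [
        cell if cell is not None else prg(seed, b"filler" + i.to_bytes(4, "big"), 32)
        for i, cell in enumerate(table)
    ]

def verify_pdata(pdata: list[bytes], X: list[bytes], seed: bytes) -> bool:
    """The auditor's check: recompute pdata from (X, seed) and compare."""
    return build_pdata(X, seed, len(pdata)) == pdata
```

The point isn't this particular construction; it's that "auditable" needs a concrete verification procedure like this, spelled out and analyzed against a malicious server.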
@claudiorlandi And all of this *might* be ok if we were discussing some random academic preprint. But this is a real system that’s being deployed into a billion devices, one with very specific cryptographic failure modes. If Apple is going to make the claim, they can’t omit the proofs. //
So this indictment is puzzling. It concerns Michael Sussman, a lawyer who allegedly organized the collection of DNS data from hosting providers for political purposes. Many of the companies are anonymized; can we tell who they are? (Thread) context-cdn.washingtonpost.com/notes/prod/def…
So we begin with “Internet Company-1”, which is a (major?) DNS resolver.
The executive in question (Tech Executive-1) claims to have been offered a position as Hillary Clinton’s cyberczar if she won, so maybe that’s a clue?
A lot of pro-CSAM-scanning arguments take the following form: “phones already have lots of opportunities (real and potential) for privacy abuse, so you’re stupid for minding when we try to introduce a new and highly-scalable one.”
And to some extent this argument is correct! It’s hard to believe this, but the iPhone (and mass-market smartphones) only launched in 2007. In 14 short years these devices have done more to erode user privacy than 114 years of all other electronic technologies combined.
From my perspective the only reasonable reaction for any technologist observing this incredible privacy collapse is: to wake up in the morning trying to fix it, and collapse into bed at night having spent the entire day trying to undo the damage.
The thesis of this article is that Britain “tamed big tech,” but the actual legislation seems to require a few privacy switches for kids — switches that should be on for everyone under a reasonable regulatory regime. wired.co.uk/article/age-ap…
“Strange women lying in ponds is no basis for a system of government.”
The major observation here is that tech firms will do all sorts of things to “protect children” as long as those things are (1) relatively inexpensive and (2) don’t substantially harm their own financial interests. Which generally means doing ineffective things.
The story here, for those who may have forgotten 2015 (it was a long time ago!), is that the NSA inserted a backdoor into a major encryption standard and then leaned on manufacturers to install it. Thread. 1/
The backdoor was in a pseudorandom number generator called Dual EC. It wasn’t terribly subtle but it was *deniable*. You could say to yourself “well, that could be horribly exploitable but nobody would do that.” Lots of serious people said that, in fact. But they did. 2/
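For the curious, here is a stripped-down sketch of how Dual EC works and why a hidden relationship between its two curve points is a backdoor. This is not the NIST-specified generator: it skips the output truncation the standard uses, and it runs on secp256k1 simply because that curve's parameters are short enough to paste here (the standard used the NIST P-256/P-384/P-521 curves).

```python
# Simplified Dual EC sketch (no output truncation, toy API); not the
# NIST-specified generator. Curve: secp256k1, y^2 = x^3 + 7 over GF(p).
p  = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def ec_add(A, B):
    """Affine point addition (None is the point at infinity)."""
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if A == B:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, point):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, point)
        point = ec_add(point, point)
        k >>= 1
    return R

# The "standards body" picks Q, a secret scalar d, and publishes P = d*Q.
# Knowing d is the backdoor.
Q = (Gx, Gy)
d = 0xC0FFEE                        # toy secret value
P = ec_mul(d, Q)

def dual_ec_step(s):
    """One simplified Dual EC step: update the state, emit x(s*Q) untruncated."""
    s = ec_mul(s, P)[0]             # new internal state
    return s, ec_mul(s, Q)[0]       # (state, output)

s0 = 123456789                      # victim's secret initial state
s1, out1 = dual_ec_step(s0)         # output the attacker observes
s2, out2 = dual_ec_step(s1)         # output the attacker wants to predict

# Attacker: lift out1 back to a point R = +/- s1*Q, multiply by d to get
# +/- s1*P, whose x-coordinate is exactly the victim's next internal state.
rhs = (pow(out1, 3, p) + 7) % p
R = (out1, pow(rhs, (p + 1) // 4, p))   # p = 3 mod 4, so this is a square root
recovered_state = ec_mul(d, R)[0]
predicted_out = ec_mul(recovered_state, Q)[0]
assert (recovered_state, predicted_out) == (s2, out2)
```

In the real generator the output is truncated (the top bits of the x-coordinate are dropped), so the attacker has to brute-force a small number of candidate points before multiplying by d; everything else works the same way.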
My 11 y/o is making friendship bracelets to support K9 Partners for Life. Also happy to take donations direct or at the Venmo below, since it would make her happy and we’re getting zero foot traffic on this 95 degree Baltimore day :) k94life.org
You can also send Zcash to zs1ztg7nnjqr99k4xn0g8fjw24at3nm95w864hlfk2ujq9mpumrwal2mtqe54985774whk9vvv9js8 but I can’t promise it will be tax deductible :)
Wow. $464.33 total raised for Canine Partners For Life. Thanks to everyone who donated, including the Zcash folks :)
I’m glad that Apple is feeling the heat and changing their policy. But this illustrates something important: in building this system, the *only limiting principle* is how much heat Apple can tolerate before it changes its policies. reuters.com/technology/aft…
I’m grateful that Apple has been so open and positive to the technical community. I wish they’d done this before they launched their unpopular service, not after. Some of us have been talking about these issues for two years.