We are at least on Web 7.0 by now and it is all still terrible.
Remember when "mashups" were a thing? That was a dark time.
The semantic web was a dream too rich for our corporeal realm.
Imagine the idealism necessary to think people would mark up the content they produced such that it might be useful to other people outside of the immediate context in which it was created.
They will instead obfuscate it as much as possible with fucked-up JavaScript.
The future visions of the web used to focus on how amazing it would be to have all the proprietary data of the world liberated and accessible to locally run agents.
It could still be that way if we wanted it enough.
I fear instead that we will pave over what little is left of the decentralized paradise, though not before capturing it in an NFT for posterity.
The attack improvements come from considering temporal relationships (the probability of receiving messages over a given threshold in a period of time) instead of just over the lifetime of the system.
This can be devastating if false positive rates are poorly selected.
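To make the windowed-vs-lifetime distinction concrete, here is a minimal sketch under assumed, illustrative numbers (the rates, threshold, and window below are mine, not from any published analysis): under a simple Poisson model, the probability of crossing a match threshold within a bounded time window separates users by their underlying rate far more sharply than the same threshold applied over the whole lifetime of the system.

```python
from math import exp

def p_at_least(t: int, mean: float) -> float:
    """P(Poisson(mean) >= t), computed as the complement of the lower tail."""
    term, cdf = exp(-mean), 0.0
    for k in range(t):
        cdf += term
        term *= mean / (k + 1)  # next Poisson pmf term
    return 1.0 - cdf

# Two hypothetical users with different expected match rates per day.
rate_a, rate_b = 0.1, 1.0
t, window = 5, 30  # threshold and observation window (days) -- assumptions

for rate in (rate_a, rate_b):
    print(f"rate {rate}/day: P(>= {t} matches in {window} days) = "
          f"{p_at_least(t, rate * window):.3f}")
```

The point of the sketch: with a windowed view an observer gets a fresh, high-contrast statistical test every period, which is exactly why poorly chosen false positive rates become devastating.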
I think the main takeaway is that there hasn't been enough push back and that this now seems depressingly inevitable.
I expect we will see more calls for surveillance like this in the coming months heavily remixed into the ongoing "online harms" narrative.
Without a strong stance from other tech companies, in particular device manufacturers and OS developers, we will look back on the last few weeks as the beginning of the end of generally available consumer devices that don't conduct constant algorithmic surveillance.
Someone asked me in a reddit thread the other day what value t would have to take if NeuralHash had a false acceptance rate similar to other perceptual hashes, and I ballparked it at between 20 and 60... so yeah.
Some quick calculations with the new numbers:
3-4 photos/day: 1 match every 286 days.
50 photos/day: 1 match every 20 days.
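Both figures above are consistent with a per-photo false-positive rate of roughly 1 in 1,000 — that rate is inferred from the numbers themselves, not a published figure. A quick back-of-the-envelope check:

```python
# Assumed per-photo false-positive rate, implied by the figures above.
fpr = 1 / 1000

for photos_per_day in (3.5, 50):
    days_per_match = 1 / (photos_per_day * fpr)
    print(f"{photos_per_day} photos/day -> 1 false match every "
          f"{days_per_match:.0f} days")
# 3.5/day gives ~286 days; 50/day gives 20 days.
```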
As an appendix/follow-up to my previous article (a probabilistic analysis of the high-level operation of a system like the one Apple has proposed), here are some thoughts, notes, and analysis of the actual protocol.
Honestly, given the intent of this system, I think the weirdest thing is how susceptible this protocol seems to be to malicious clients, who can easily make the server do extra work and can probably also just legitimately DoS the human-review step with enough contrived matches.
Daily Affirmation: End-to-end encryption provides some safety, but it doesn't go far enough.
For decades our tools have failed to combat bulk metadata surveillance; it's time to push forward and support radical privacy initiatives.
Watching actual cryptographers debate about whether or not we should be voluntarily *weakening* encryption instead of radically strengthening threat models makes my skin crawl.
I don't think I can say this enough, right? Some of you are under the weird impression that some systems are "too secure for the general public to be allowed access to," and it just constantly blows my fucking mind.