We’re pretty rapidly and consciously heading towards a future where everything you do on the Internet requires government ID, with basically no attention paid to the consequences (indeed, the consequences may be the whole point.)
I’ve become a little despairing that we can fight this. The pressure on all sides seems much too intense. But we also have very little tech in place to make this safe, and realistically the only people who can develop it work in Cupertino and Mountain View.
So what does a future involving age verification look like? As a first step it’s going to involve installing government ID on your phone. The ID will be tied to your biometrics (face). Apple is already deploying something like this, but it can’t be used for web browsing — yet.
Once that’s widely deployed, there will need to be protocols deployed to perform age verification directly. The best possible outcome is that we’ll get something privacy-preserving (some kind of anonymous credential) where random websites will learn your age and not your name.
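To make that concrete, here is a minimal sketch of one privacy-preserving approach: salted-hash selective disclosure (the style used by formats like SD-JWT), where an issuer signs hashed claims and the holder reveals only the over-18 attribute. The issuer/verifier names and data are made up, and a real deployment would use unlinkable credentials (e.g. BBS-style signatures) so repeated presentations can’t be correlated by the signature value.

```python
# Sketch of salted-hash selective disclosure for age verification.
# Hypothetical names/data; real systems would add unlinkability.
import hashlib, json, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def salted_hash(name: str, value: str, salt: bytes) -> str:
    return hashlib.sha256(salt + name.encode() + value.encode()).hexdigest()

# --- Issuer: signs only the hashes of the holder's claims ---
issuer_key = Ed25519PrivateKey.generate()
claims = {"name": "Alice Example", "over_18": "true"}
salts = {k: os.urandom(16) for k in claims}
digests = sorted(salted_hash(k, v, salts[k]) for k, v in claims.items())
credential = json.dumps(digests).encode()
signature = issuer_key.sign(credential)

# --- Holder: presents ONLY the over_18 claim, with its salt ---
presentation = {
    "claim": ("over_18", "true"),
    "salt": salts["over_18"].hex(),
    "credential": digests,
    "signature": signature.hex(),
}

# --- Verifier: checks the issuer's signature and the disclosed hash,
# learning the age attribute but never the holder's name ---
issuer_pub = issuer_key.public_key()
issuer_pub.verify(bytes.fromhex(presentation["signature"]),
                  json.dumps(presentation["credential"]).encode())
claim_name, claim_value = presentation["claim"]
assert salted_hash(claim_name, claim_value,
                   bytes.fromhex(presentation["salt"])) in presentation["credential"]
print("verified: visitor is over 18; name stays hidden")
```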
But I’m pessimistic that real privacy-preserving protocols will be allowed. Once this is in place, law enforcement will want to use this tech to precisely identify site visitors (using warrants if we’re lucky.) We’ll be told it’s necessary to stop terrorism and child abuse.
We’ll be told that because the data exists it’s immoral for random tech firms to prevent us from reading it. We’ll have a whole debate about it in which every participant will be forced to pretend that “presenting your ID to read a website” has always been a thing in the US.
I’m hoping this will be a fight for the next generation because I’m tired of fighting a creeping surveillance regime that never gives up. But I expect this debate to kick off around 2028-29 and maybe sooner if we’re unlucky.
Europe is maybe two months from passing laws that end private communication as we know it, and folks are looking the other way (understandably.) You’re not going to get a do-over once these laws are passed.
The plan, to repeat, is to mandate that every phone contains software that receives a list of illicit material (photos, keywords, AI models that can determine the sentiment of conversations) and scans your data for matches *before* it is encrypted, and alerts the police directly.
This will initially be used to target CSAM (child sexual abuse material) but it will also target conversations that contain “grooming behavior”, which clearly involves some kind of AI recognition of content. Once these systems are in your phone, of course, this can be expanded.
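For readers who haven’t followed the proposals, here is a minimal sketch of the “scan before encrypt” architecture being described, using exact SHA-256 matching against a blocklist purely for illustration (the actual proposals involve perceptual hashes and ML classifiers, which is exactly what makes expansion so easy). All names here are hypothetical.

```python
# Sketch of client-side scanning: the client checks outgoing content
# against a provider-supplied blocklist *before* end-to-end encryption
# and reports matches. Exact hashing is for illustration only.
import hashlib
from typing import Callable

# sha256("test"), standing in for a list of known illicit-content hashes
BLOCKLIST = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def scan_then_encrypt(plaintext: bytes,
                      encrypt: Callable[[bytes], bytes],
                      report: Callable[[str], None]) -> bytes:
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        report(digest)          # the alert happens before encryption
    return encrypt(plaintext)   # e2e encryption proceeds regardless

ciphertext = scan_then_encrypt(
    b"test",
    encrypt=lambda m: m[::-1],                       # stand-in for real e2e encryption
    report=lambda d: print("reported match:", d),
)
```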
This thing Facebook did — running an MITM on Snapchat and other competitors’ TLS connections via their Onavo VPN — is so deeply messed up and evil that it completely changes my perspective on what that company is willing to do to its users.
I don’t come from a place of deep trust in big tech corporations. But this stuff seems like it crosses a pretty clear red line, maybe even a criminal one.
I would say: I’d like to see some very performative firings before I trust Meta again, but let’s be honest. This almost certainly went right to the top. Nobody is going to do something this unethical unless they know management has their back 100%.
Google has a blog up discussing their threat modeling when deploying “post-quantum” (PQC) cryptographic algorithms. It’s an interesting read. bughunters.google.com/blog/510874798…
To elaborate a bit on what’s in the blog post, we know that quantum algorithms exist, in principle, that can break many of the cryptographic algorithms we routinely use. All we’re waiting for now is a capable enough quantum computer to run them. (And this seems hard.) 1/
But technology development isn’t linear. Sometimes problems seem impossible until a big breakthrough changes everything. Think about the development of classical computers before and after semiconductors. The same could happen with QC. 2/
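The usual way to hedge against that uncertainty is a hybrid key exchange: run a classical and a post-quantum key agreement in parallel and combine the two shared secrets, so the session stays secure unless both are broken. A minimal sketch of the combination step, with the ML-KEM shared secret replaced by a placeholder since no post-quantum library is assumed to be installed here:

```python
# Sketch of hybrid key agreement: derive the session key from BOTH an
# X25519 shared secret and a post-quantum KEM shared secret. The ML-KEM
# value below is a placeholder (random bytes standing in for the KEM
# output that both parties would share in a real exchange).
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Classical part: a real X25519 exchange between two parties.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_ss = alice.exchange(bob.public_key())

# Post-quantum part: placeholder for the ML-KEM-768 shared secret.
pq_ss = os.urandom(32)

# Combine: the derived key depends on both secrets.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid x25519+mlkem sketch",
).derive(classical_ss + pq_ss)
print("derived", len(session_key), "byte session key")
```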
A thing I worry about in the (academic) privacy field is that our work isn’t really improving privacy globally. If anything it would be more accurate to say we’re finding ways to encourage the collection and synthesis of more data, by applying a thin veneer of local “privacy.”
I’m referring to the rise of “private” federated machine learning and model-building work, where the end result is to give corporations new ways to build models from confidential user data. This data was previously inaccessible (by law or customer revulsion) but now is fair game.
A typical pitch here is that, by applying techniques like Differential Privacy, we can keep any individual user’s data “out of the model.” The claim: the use of your private data is harmless, since the model “based on your data” will be statistically close to one without it.
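For readers unfamiliar with the mechanism behind that claim, here is a tiny sketch of the Laplace mechanism: noise calibrated to the query’s sensitivity is added to a released count, so the output is statistically close whether or not any one person’s record is included. The dataset and epsilon are made up.

```python
# Epsilon-differential privacy via the Laplace mechanism on a count.
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1  # adding/removing one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

people = [{"age": a} for a in (17, 22, 35, 41, 68)]
with_me = dp_count(people, lambda r: r["age"] >= 18)
without_me = dp_count(people[:-1], lambda r: r["age"] >= 18)  # one record removed
print(round(with_me, 2), round(without_me, 2))  # close to each other, by design
```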
So Apple has gone and updated the iMessage protocol to incorporate both forward security (very good!) and post-quantum cryptography. security.apple.com/blog/imessage-…
This is a big deal because iMessage (which gets no real attention from anyone) is one of the most widely-adopted secure communications protocols in the world, with at least a billion users across the globe. It’s the only widely-available encrypted messaging app in China.
The original iMessage protocol was launched in 2011 and was really amazing for the time, since it instantly provided e2e messaging to huge numbers of people. But cryptographically, it wasn’t very good. My students broke it in 2015: washingtonpost.com/world/national…
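On the forward-security point: the core idea is a key ratchet, where each message key is derived one-way from the previous chain key and the old key is deleted, so compromising today’s keys doesn’t expose yesterday’s messages. A generic sketch, not Apple’s actual PQ3 construction:

```python
# Minimal symmetric key ratchet illustrating forward security: each step
# derives the next chain key one-way from the current one and discards it,
# so a later compromise can't decrypt earlier messages.
import hmac, hashlib, os

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Return (message_key, next_chain_key), each derived one-way from chain_key."""
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    next_chain  = hmac.new(chain_key, b"chain",   hashlib.sha256).digest()
    return message_key, next_chain

chain = os.urandom(32)               # shared root key from the handshake
for i in range(3):
    msg_key, chain = ratchet(chain)  # old chain key is overwritten (deleted)
    print(f"message {i}: key {msg_key.hex()[:16]}...")
```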
Article on some new research that finds ways to balance privacy and stalker detection for AirTags and other location trackers. This is a collaboration with my students @gabrie_beck, Harry Eldridge and colleagues Abhishek Jain and Nadia Heninger. wired.com/story/apple-ai…
TL;DR thread. When Apple launched their “Find My” system for lost devices in 2019, they designed a clever solution to keep bad actors (including Apple) from tracking users. This works by making devices change their broadcast identifier every 15 minutes. blog.cryptographyengineering.com/2019/06/05/how…
Two years later, Apple introduced the AirTag. At this point they noticed a problem: people were using location trackers to stalk victims, by placing them on victims’ possessions or cars. This led to several murders. arstechnica.com/tech-policy/20…
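To make the rotation idea from the thread concrete: a tracker can derive its broadcast identifier from a device secret and the current 15-minute epoch, so a passive observer can’t link two broadcasts, while the owner (who holds the secret) can recompute every identifier to locate the device. This is a simplified sketch; the real Find My design rotates derived public keys rather than HMAC outputs.

```python
# Simplified sketch of broadcast-identifier rotation per 15-minute epoch.
import hmac, hashlib, os, time

EPOCH_SECONDS = 15 * 60

def broadcast_id(device_secret: bytes, now: float) -> str:
    epoch = int(now // EPOCH_SECONDS)
    return hmac.new(device_secret, epoch.to_bytes(8, "big"),
                    hashlib.sha256).hexdigest()[:16]

secret = os.urandom(32)
now = time.time()
print(broadcast_id(secret, now))                   # current identifier
print(broadcast_id(secret, now + EPOCH_SECONDS))   # unlinkable next-epoch identifier
```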