I’m going to forget about TLS here for a moment, and point out that the best way to mitigate a lot of these attacks is just to replace cookies entirely.
Maybe we shouldn’t have browsers upload a bunch of passwords to the server every time you request resources from a random website.
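To make that concrete, here's a minimal sketch of the alternative (the endpoint and token names are hypothetical, not any specific proposal): instead of a session cookie that the browser silently attaches to every request a page triggers, the client code sends a short-lived token only when it deliberately chooses to.

```ts
// Hypothetical API endpoint; the point is that nothing is attached implicitly.
async function fetchProfile(token: string): Promise<unknown> {
  const res = await fetch("https://api.example.com/profile", {
    // No cookies ride along automatically; the credential travels only
    // because this code explicitly puts it in the request.
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```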
Also I think it’s amazing that in five years we’ve gone from “if you haven’t committed a crime you don’t need encryption” to “US opposition lawmakers have their texts searched.”
Quick reminder: Apple could fix this in a heartbeat by adding an “end-to-end encryption for iCloud backup” setting (the tech is already in place), but they don’t. Even for those who want it.
I know this is a bit of a stereotype, but why is Russian crypto always so weird?
“We don’t use a normal random number generator, we use a gerbil connected to a hot cup of tea. Also, use our ciphers, where the S-boxes are ‘random,’ meaning they actually aren’t.”
Dear researchers: the hard part of problems like “traceability” is not the part where you build a mass surveillance system. Building mass surveillance systems is *easy*.
The hard part is building systems that don’t utterly shatter the security guarantees that the private system offered, and don’t have caveats like “obviously this can be abused, stopping that is future work.”
When I go out to see what our research community has been doing in this area, I expect to find that they understand what makes this research problem hard, not slides like this one.
The post makes this point informally, but it really seems like there’s an impossibility result lurking in this problem: you can’t have privacy and traceability at the same time unless some very specific requirements are met.
There’s this idea that you can have content sent among small groups where there’s privacy of who is forwarding what, but when a piece of content goes “viral” suddenly we can trace the content back to its originator.
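To see why that capability is the surveillance system, here's a rough sketch of how such “traceback” designs tend to work. This is my own illustration, not any particular published scheme, and the names (PLATFORM_KEY, makeTag, trace) are invented: every forward carries a tag that encrypts the previous hop under a key the platform holds, so the moment the platform decides a message is worth tracing, it can unwind the entire forwarding path back to the originator.

```ts
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Key held by the platform, not by users. Users can't read each other's tags,
// so forwarding *looks* private -- until the platform chooses to trace.
const PLATFORM_KEY = randomBytes(32);

interface Tag {
  iv: Buffer;
  authTag: Buffer;
  ciphertext: Buffer; // encrypts { senderId, previous tag }
}

// Each forward wraps the sender's identity plus the previous hop's tag
// under the platform's key.
function makeTag(senderId: string, previous: Tag | null): Tag {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", PLATFORM_KEY, iv);
  const payload = JSON.stringify({
    senderId,
    previous: previous && {
      iv: previous.iv.toString("base64"),
      authTag: previous.authTag.toString("base64"),
      ciphertext: previous.ciphertext.toString("base64"),
    },
  });
  const ciphertext = Buffer.concat([cipher.update(payload, "utf8"), cipher.final()]);
  return { iv, authTag: cipher.getAuthTag(), ciphertext };
}

// The platform's side: once a message is reported, peel tags one by one
// and recover the full forwarding chain, originator first.
function trace(tag: Tag): string[] {
  const decipher = createDecipheriv("aes-256-gcm", PLATFORM_KEY, tag.iv);
  decipher.setAuthTag(tag.authTag);
  const plaintext = Buffer.concat([decipher.update(tag.ciphertext), decipher.final()]).toString("utf8");
  const { senderId, previous } = JSON.parse(plaintext);
  if (!previous) return [senderId];
  const prevTag: Tag = {
    iv: Buffer.from(previous.iv, "base64"),
    authTag: Buffer.from(previous.authTag, "base64"),
    ciphertext: Buffer.from(previous.ciphertext, "base64"),
  };
  return [...trace(prevTag), senderId];
}

// Usage: alice originates, bob and carol forward. From carol's tag alone the
// platform recovers ["alice", "bob", "carol"] -- the entire path.
const t1 = makeTag("alice", null);
const t2 = makeTag("bob", t1);
const t3 = makeTag("carol", t2);
console.log(trace(t3));
```

The “viral” threshold doesn't change the picture: the tracing capability has to exist in every tag from the start, which is exactly the guarantee-shattering part.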