I was going to laugh off this Kaspersky password manager bug, but it is *amazing*. In the sense that I’ve never seen so many broken things in one simple piece of code. donjon.ledger.com/kaspersky-pass…
Like seriously, WTF is even happening here. Why are they sampling *floats*? Why are they multiplying them together? Is this witchcraft?
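To see why that's deadly, here's a toy sketch (mine, not their actual code): the product of independent uniform floats piles up near zero, so anything you derive from it is nowhere near uniform.

```python
import random

random.seed(1234)  # fixed seed so the demo is reproducible

# Multiply two independent uniform floats in [0, 1): the product
# is heavily skewed toward zero instead of staying uniform.
samples = [random.random() * random.random() for _ in range(100_000)]
frac = sum(x < 0.25 for x in samples) / len(samples)

# For a uniform variable this would be 0.25; for the product it is
# t*(1 - ln t) at t = 0.25, i.e. about 0.60.
print(f"P(product < 0.25) ≈ {frac:.2f}  (uniform: 0.25)")
```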
And here, Kaspersky decided that instead of picking a random password, they should bias the password to be non-random and thus “less likely to be on a cracker list”. 🤦🏻‍♂️
Then they used a non-cryptographic PRNG (Mersenne Twister). Amusingly, this is probably the *least* bad thing Kaspersky did, even though it’s terribly bad.
And in case you thought that after doing everything else wrong, they were going to do the next part right: nope. They then proceed to seed the whole damn thing with time(0).
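Here's roughly how cheap that is to break. A minimal sketch in Python (whose `random` module also happens to be MT19937); the generator function is hypothetical, not Kaspersky's actual code:

```python
import random
import string
import time

ALPHABET = string.ascii_letters + string.digits

def generate_password(seed: int, length: int = 12) -> str:
    # Hypothetical victim generator: Mersenne Twister seeded with a
    # timestamp, mirroring the time(0) pattern.
    rng = random.Random(seed)
    return "".join(rng.choice(ALPHABET) for _ in range(length))

# A password generated "now", with a time-based seed.
creation_time = int(time.time())
password = generate_password(creation_time)

# An attacker who knows the password was created sometime in the last
# week has only ~604,800 candidate seeds to try. Here we test each
# candidate against the known password; in practice you'd test against
# a login prompt or a leaked hash.
for seed in range(creation_time - 7 * 86400, creation_time + 1):
    if generate_password(seed) == password:
        print(f"Recovered seed {seed} -> '{password}'")
        break
```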
I have to admire the combination of needless complexity and absolutely breathtaking incompetence.
Anyway, before anyone kills me for being mean to developers doing the best they can… The real takeaway here is that (obviously) nobody with even modest cryptographic knowledge ever audited, thought about, or came near this product.
And in case you’re of the opinion that bad implementations are unique to Kaspersky: it’s entirely possible to make some other mainstream password managers “hang forever” by setting the password charset constraints too high, indicating that they haven’t figured this out either.
Some actual constructive lessons:
* Always use a real RNG to generate unpredictable seeds, never time(0)
* Always use a cryptographic RNG
* Never ever use floats in cryptography (I suspect some Javascript nonsense here)
* To convert from bits to an alphabet of symbols…
(Rewriting this because now I’m afraid people will take advice from tweets)
You should use rejection sampling, which you can find articles about online. Be careful that your rejection loop doesn’t run forever.
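Here's a minimal sketch of the pattern (my own illustration, assuming a 62-symbol alphabet and single random bytes; real code should lean on your platform's vetted primitives):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols

def random_password(length: int) -> str:
    """Uniform sampling from an alphabet via rejection sampling."""
    # Largest multiple of len(ALPHABET) that fits in one byte. Bytes at
    # or above this limit are rejected, which removes the modulo bias
    # you'd get from a bare `byte % len(ALPHABET)`.
    limit = 256 - (256 % len(ALPHABET))
    chars = []
    while len(chars) < length:
        b = secrets.randbits(8)
        if b < limit:
            chars.append(ALPHABET[b % len(ALPHABET)])
        # else: reject and redraw. For 62 symbols the rejection
        # probability is 8/256 ≈ 3%, so the loop terminates quickly.
    return "".join(chars)

print(random_password(16))
```

(In Python, `secrets.choice` already does unbiased selection internally; the pattern above matters when you're building from raw random bytes in a lower-level language. Note the rejection probability here is small and bounded, so the loop can't realistically run forever.)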
And please, get someone to look at your code. Especially if it’s going to be in a mainstream product. You cannot ever ship anything bespoke like this without having an expert glance over it. Even an hour of review would have flagged all this stuff.
Oh gosh.
Anyway I recently had a discussion with a group of expert cryptographers/cryptographic engineers about whether “don’t roll your own crypto” is a helpful rule, or if it’s non-inclusive.
I don’t know the answer, but stuff like this is why the phrase was invented.
New public statement from Apple (sent to me privately):
“As of Friday, February 21, Apple can no longer offer Advanced Data Protection as a feature to new users in the UK.”
Additionally:
"Apple can no longer offer Advanced Data Protection (ADP) in the United Kingdom to new users and current UK users will eventually need to disable this security feature. ADP protects iCloud data with end-to-end encryption, which means the data can only be decrypted by the user who owns it, and only on their trusted devices. We are gravely disappointed that the protections provided by ADP will not be available to our customers in the UK given the continuing rise of data breaches and other threats to customer privacy. Enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before. Apple remains committed to offering our users the highest level of security for their personal data and are hopeful that we will be able to do so in the future in the United Kingdom. As we have said many times before, we have never built a backdoor or master key to any of our products or services and we never will.”
This will not affect:
* iMessage encryption
* iCloud Keychain
* FaceTime
* Health data
These will remain end-to-end encrypted. Other services like iCloud Backup and Photos will not be end-to-end encrypted.
What is this new setting that sends photo data to Apple servers and why is it default “on” at the bottom of my settings screen?
I understand that it uses differential privacy and some fancy cryptography, but I would have loved to know what this is before it was deployed and turned on by default without my consent.
This seems to involve two separate components: one that builds an index using differential privacy (set at some privacy budget), and another that does a homomorphic search?
Does this work well enough that I want it on? I don’t know. I wasn’t given the time to think about it.
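For what it's worth, the “budget” part is easy to state in the abstract, even though Apple's actual parameters and mechanism aren't something I can vouch for. A generic Laplace-mechanism sketch (my illustration, not Apple's implementation):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sample from Laplace(0, scale).
    # (Ignoring the measure-zero u == -0.5 edge case.)
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: noise scaled to sensitivity / epsilon.
    # Smaller epsilon = more noise = stronger privacy; each release
    # spends epsilon from the total privacy budget.
    return true_count + laplace_noise(sensitivity / epsilon)

# One user's presence changes a count by at most 1 (sensitivity = 1).
print(dp_count(true_count=120, epsilon=0.5))
```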
Most of cryptography research is developing a really nice mental model for what’s possible and impossible in the field, so you can avoid wasting time on dead ends. But every now and then someone kicks down a door and blows up that intuition, which is the best kind of result.
One of the most surprising privacy results of the last 5 years is the LMW “doubly efficient PIR” paper. The basic idea is that I can load an item from a public database without the operator seeing which item I’m loading & without it having to touch every item in the DB each time.
Short background: Private Information Retrieval isn’t a new idea. It lets me load items from a (remote) public database without the operator learning what item I’m asking for. But traditionally there’s a *huge* performance hit for doing this.
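To make that cost concrete, here's a toy version of the classic two-server scheme (my own illustration, emphatically not the LMW construction): the query hides the index, but each server still has to scan the whole database on every query, which is exactly the linear cost that “doubly efficient” PIR gets rid of.

```python
import secrets

# A public database of 16 one-byte records.
DB = secrets.token_bytes(16)

def server_answer(db: bytes, subset: list) -> int:
    # Note: the server must touch EVERY record on every query. This
    # linear scan is the traditional PIR performance hit that doubly
    # efficient PIR eliminates (after preprocessing the database).
    acc = 0
    for record, selected in zip(db, subset):
        if selected:
            acc ^= record
    return acc

def make_queries(n: int, i: int):
    # Server A gets a uniformly random subset; server B gets the same
    # subset with position i flipped. Each subset alone is uniformly
    # random, so neither server learns i (assuming no collusion).
    q_a = [secrets.randbits(1) == 1 for _ in range(n)]
    q_b = list(q_a)
    q_b[i] = not q_b[i]
    return q_a, q_b

i = 5
q_a, q_b = make_queries(len(DB), i)
recovered = server_answer(DB, q_a) ^ server_answer(DB, q_b)
assert recovered == DB[i]
print(f"Privately fetched record {i}: {recovered}")
```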
The new and revived Chat Control regulation is back. It still appears to demand client-side scanning in encrypted messengers, but it removes “detection of new CSAM” and simply demands detection of known CSAM. However: it retains the option to change this requirement back.
For those who haven’t been paying attention, the EU Council and Commission have been relentlessly pushing a regulation that would break encryption. It died last year, but it’s back again — this time with Hungary in the driver’s seat. And the timelines are short.
The goal is to require all apps to scan messages for child sexual abuse content (at first: other types of content have been proposed, and will probably be added later). This is not possible for encrypted messengers without new technology that may break encryption.
One of the things we need to discuss is that LLMs listening to your conversations and phone calls, reading your texts and emails — this is all going to be normalized and inevitable within seven years.
In a very short timespan it’s going to be expected that your phone can answer questions about what you did or talked about recently, what restaurants you went to. More capability is going to drive more data access, and people will grant it.
I absolutely do believe that (at least initially), vendors will try to do this privately. The models will live on your device or, like Apple Intelligence, they’ll use some kind of secure outsourcing. It’ll be required for adoption.
I hope that the arrest of Pavel Durov does not lead to him or Telegram being held up as some hero of privacy. Telegram has consistently acted to collect huge amounts of unnecessary private data on their servers, and their only measure to protect it was “trust us.”
For years people begged them to roll out even rudimentary default encryption, and they pretty aggressively did none of that. Their response was to move their data centers to various Middle Eastern countries, and to argue that this made your data safe. Somehow.
Over the years I’ve heard dozens of theories about which nation-states were gaining access to that giant mousetrap full of data they’d built. I have no idea if any of those theories were true. Maybe none were, maybe they all were.