I was going to laugh off this Kaspersky password manager bug, but it is *amazing*. In the sense that I’ve never seen so many broken things in one simple piece of code. donjon.ledger.com/kaspersky-pass…
Like seriously, WTF is even happening here. Why are they sampling *floats*? Why are they multiplying them together? Is this witchcraft?
And here, Kaspersky decided that instead of picking a random password, they should bias the password to be non-random and thus “less likely to be on a cracker list”. 🤦🏻‍♂️
Then they used a non-cryptographic PRNG (Mersenne Twister). Amusingly, this is probably the *least* bad thing Kaspersky did, even though it’s terribly bad.
And in case you thought that after doing everything else wrong, they were going to do the next part right: nope. They then proceed to seed the whole damn thing with time(0).
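To make the time(0) point concrete, here’s a toy sketch (my code, not Kaspersky’s) of why a timestamp seed is fatal: anyone who can guess roughly when a password was generated can just replay every candidate seed.

```python
# Toy demo (not Kaspersky's actual code): a Mersenne Twister seeded with a
# Unix timestamp leaves only ~86,400 possible seeds per day to search.
import random
import string
import time

ALPHABET = string.ascii_letters + string.digits

def generate_password(seed: int, length: int = 12) -> str:
    rng = random.Random(seed)  # CPython's Random is MT19937, same family Kaspersky used
    return "".join(rng.choice(ALPHABET) for _ in range(length))

# The victim generates a password, seeding with the current time.
creation_time = int(time.time())
victim_password = generate_password(creation_time)

# The attacker brute-forces every second in a one-day window around the
# estimated creation time: trivial to enumerate.
now = int(time.time())
for seed in range(now - 86_400, now + 1):
    if generate_password(seed) == victim_password:
        print(f"recovered seed {seed}: {generate_password(seed)!r}")
        break
```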
I have to admire the combination of needless complexity combined with absolutely breathtaking incompetence.
Anyway, before anyone kills me for being mean to developers doing the best they can… The real takeaway here is that (obviously) nobody with even modest cryptographic knowledge ever audited, thought about, or came near this product.
And in case you’re of the opinion that bad implementations are unique to Kaspersky: it’s entirely possible to make some other mainstream password managers “hang forever” by setting the password charset constraints too restrictively, indicating that they haven’t figured this out either.
Some actual constructive lessons:
* Always use a real RNG to generate unpredictable seeds, never time(0)
* Always use a cryptographic RNG
* Never ever use floats in cryptography (I suspect some JavaScript nonsense here)
* To convert from bits to an alphabet of symbols… 1/
(Rewriting this because now I’m afraid people will take advice from tweets)
You should use rejection sampling, which you can find articles about online. Be careful that your rejection loop doesn’t run forever.
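Something like this minimal sketch, drawing from the OS CSPRNG (my own toy code; nothing here is from any particular product):

```python
# Minimal sketch of unbiased bit-to-alphabet conversion via rejection
# sampling, using the stdlib CSPRNG. Names and bounds are my choices.
import secrets

def random_symbol(alphabet: str, max_tries: int = 1000) -> str:
    """Draw one uniformly random symbol from `alphabet`."""
    n = len(alphabet)
    limit = 256 - (256 % n)  # reject bytes >= limit to avoid modulo bias
    for _ in range(max_tries):
        b = secrets.randbits(8)
        if b < limit:
            return alphabet[b % n]
    # Bounding the loop keeps impossible constraints from hanging forever.
    raise RuntimeError("rejection loop exceeded max_tries; check your charset")

def random_password(alphabet: str, length: int = 16) -> str:
    return "".join(random_symbol(alphabet) for _ in range(length))

print(random_password("abcdefghjkmnpqrstuvwxyz23456789"))
```

(In practice Python’s secrets.choice already does this rejection internally; the point is the pattern: uniform rejection plus a bound so the loop terminates.)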
And please, get someone to look at your code. Especially if it’s going to be in a mainstream product. You cannot ever ship anything bespoke like this without having an expert glance over it. Even an hour would have flagged all this stuff.
Oh gosh.
Anyway I recently had a discussion with a group of expert cryptographers/cryptographic engineers about whether “don’t roll your own crypto” is a helpful rule, or if it’s non-inclusive.
I don’t know the answer, but stuff like this is why the phrase was invented.
I’m a sucker for crypto papers that do insane things like build ciphertexts out of garbled circuits, and then use the garbled circuit to do stuff that only shows up in the security reduction. Eg: eprint.iacr.org/2023/1058
So what’s fun about this paper is that it’s trying to do something weirdly hard: build cryptosystems that allow you to encrypt (functions of) secret keys. This can be encrypting your own secret key, or eg I can encrypt your secret key and you can encrypt mine to form a “cycle”.
The reason this is hard is that our standard definitions of security (eg semantic security) say that encryption must be safe for any possible messages an adversary can come up with. But adversaries don’t know my secret key, so the definition says nothing about that.
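To put the definitional gap in symbols (my notation, not the paper’s): semantic security only quantifies over messages the adversary itself supplies, so a message like f(sk) is out of scope, and key-dependent-message (KDM) security has to be defined separately:

```latex
% IND-CPA: the adversary picks m_0, m_1 itself, so it can never submit f(sk).
\[
  \Big|\Pr\big[\mathcal{A}\big(pk,\ \mathsf{Enc}_{pk}(m_b)\big)=b\big]-\tfrac12\Big|
  \le \mathsf{negl}(\lambda),
  \qquad m_0, m_1 \text{ chosen by } \mathcal{A}.
\]
% KDM security: the adversary may instead request encryptions of functions
% of the secret keys; a "key cycle" is the two-key special case.
\[
  \mathcal{A} \text{ receives } \mathsf{Enc}_{pk_i}\big(f(sk_1,\dots,sk_n)\big),
  \qquad \text{e.g. } \mathsf{Enc}_{pk_1}(sk_2),\ \mathsf{Enc}_{pk_2}(sk_1).
\]
```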
So Apple deployed an entire key transparency thing for iMessage and it literally seems to be documented in a blog post. What the heck is the point of key transparency if you don’t document things, and (critically) provide open source ID verification tools?
Key transparency is about deterring attacks. But it doesn’t deter them if you keep it all secret, Apple!
Here’s the blog post. TLDR every device shares (?) an ECDSA signing key synced by iCloud key vault, all public keys go into CONIKS, encryption keys are authenticated by signing keys. So many little details unknown. security.apple.com/blog/imessage-…
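Just to illustrate what public tooling could look like, here’s a rough sketch pieced together from the blog post’s description. Every name, type, and function below is hypothetical; Apple has published nothing like it, which is exactly my complaint.

```python
# Hypothetical client-side check, inferred from the blog post: the account's
# ECDSA signing key should appear in the CONIKS-style log, and each device's
# encryption key should carry a valid signature from that signing key.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DeviceRecord:
    encryption_key: bytes  # per-device encryption public key
    signature: bytes       # signature over encryption_key
    signing_key: bytes     # account ECDSA verifying key, as served to us

def verify_contact(user_id: str,
                   record: DeviceRecord,
                   log_lookup: Callable[[str], bytes],
                   ecdsa_verify: Callable[[bytes, bytes, bytes], bool]) -> bool:
    # The key the server handed us must match what the transparency log saw.
    if log_lookup(user_id) != record.signing_key:
        return False  # disagreement = possible key-substitution attack
    # Per the post, encryption keys are authenticated by the signing key.
    return ecdsa_verify(record.signing_key, record.encryption_key,
                        record.signature)
```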
If anyone thought that the EU legislation on content scanning would be limited, you can forget about that. Europol has demanded unfiltered access to all data produced by these systems. balkaninsight.com/2023/09/29/eur…
To be clear what this means: these scanning systems may produce huge numbers of false positives. That means your private, encrypted messages get decrypted and handed over to the police *even if you haven’t sent anything illegal.*
A lot of people have justified the deployment of these systems (which will scan images, text and maybe audio) by claiming there are “safeguards.” This usually means employees check to see if there’s a crime before they report you to the cops. This would remove those checks.
Like if I was in the adtech or data brokerage industry, I’d sure love these ads. Encryption is bad! Apple is too private. Let’s pass some laws to “protect the children.”
If there’s one thing that makes me deeply suspicious, it’s scrappy child-safety organizations suddenly having huge piles of money to spend on hyper-specific, tech-focused political pressure campaigns as opposed to, say, children.
To give some context, here are the contents of an initial Snowden leak from September 2013. Cavium was a leading manufacturer of cryptographic co-processors for VPN devices at that time. archive.nytimes.com/www.nytimes.co…
Just to give a sense of how important these chips are to VPN security (and without making any specific claims about this hardware) here’s the FIPS security policy for Cisco’s ASA crypto module, showing how much crypto the Cavium Nitrox chip implements. csrc.nist.gov/CSRC/media/pro…