I was going to laugh off this Kaspersky password manager bug, but it is *amazing*. In the sense that I’ve never seen so many broken things in one simple piece of code. donjon.ledger.com/kaspersky-pass…
Like seriously, WTF is even happening here. Why are they sampling *floats*? Why are they multiplying them together? Is this witchcraft?
And here, Kaspersky decided that instead of picking a random password, they should bias the password to be non-random and thus “less likely to be on a cracker list”. 🤦🏻‍♂️
Then they used a non-cryptographic PRNG (Mersenne Twister). Amusingly, this is probably the *least* bad thing Kaspersky did, even though it’s terribly bad.
And in case you thought that after doing everything else wrong, they were going to do the next part right: nope. They then proceeded to seed the whole damn thing with time(0).
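To make the “why” concrete: seeding with time(0) means the entire password is determined by the second it was generated. Here’s a toy sketch in Python — whose random module is, conveniently, also MT19937 — with an illustrative charset and length, not Kaspersky’s actual parameters:

```python
import random
import time

# Illustrative alphabet and length -- not Kaspersky's actual parameters.
CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"

def weak_password(seed: int, length: int = 12) -> str:
    """Toy model of the flaw: MT19937 output, seeded with a timestamp."""
    rng = random.Random(seed)  # Python's Random is the Mersenne Twister
    return "".join(rng.choice(CHARSET) for _ in range(length))

# The victim generates a password at some second in time...
password = weak_password(int(time.time()))

# ...so an attacker who can bound the creation time to a one-day window
# has only 86,400 candidate seeds to try. This finishes in seconds.
now = int(time.time())
for seed in range(now - 86_400, now + 1):
    if weak_password(seed) == password:
        print(f"recovered password: {weak_password(seed)!r} (seed {seed})")
        break
```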
I have to admire the combination of needless complexity and absolutely breathtaking incompetence.
Anyway, before anyone kills me for being mean to developers doing the best they can… The real takeaway here is that (obviously) nobody with even modest cryptographic knowledge ever audited, thought about, or came near this product.
And in case you’re of the opinion that bad implementations are unique to Kaspersky: it’s entirely possible to make some other mainstream password managers “hang forever” by setting the password charset constraints too high, indicating that they haven’t figured this out either.
Some actual constructive lessons:
* Always use a real RNG to generate unpredictable seeds, never time(0)
* Always use a cryptographic RNG
* Never ever use floats in cryptography (I suspect some JavaScript nonsense here)
* To convert from bits to an alphabet of symbols… 1/
(Rewriting this because now I’m afraid people will take advice from tweets)
You should use rejection sampling, which you can find articles about online. Be careful that your rejection loop doesn’t run forever.
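Here’s a minimal sketch of those last two points together — a CSPRNG plus an explicit rejection loop. The function name and parameters are mine for illustration, not from any particular product:

```python
import secrets

def random_password(alphabet: str, length: int) -> str:
    """Draw `length` symbols uniformly from `alphabet` with a CSPRNG.

    Rejection sampling: a random byte is accepted only if it falls
    below the largest multiple of len(alphabet) that fits in 0..255;
    the biased leftover values are discarded and redrawn.
    """
    n = len(alphabet)
    if not 1 < n <= 256:
        raise ValueError("alphabet must contain 2..256 symbols")
    limit = 256 - (256 % n)  # largest multiple of n that fits in a byte
    out = []
    while len(out) < length:
        b = secrets.token_bytes(1)[0]  # one byte from the OS CSPRNG
        if b < limit:                  # reject the biased tail, redraw
            out.append(alphabet[b % n])
    return "".join(out)

print(random_password("abcdefghijklmnopqrstuvwxyz0123456789", 16))
```

(In Python, secrets.choice already does this internally; the explicit loop just shows where the footgun lives. The acceptance rate here is always at least 50%, so this particular loop can’t hang — the “hang forever” bugs show up when people bolt on extra constraints like “must contain 3 digits” as an outer retry loop.)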
And please, get someone to look at your code. Especially if it’s going to be in a mainstream product. You cannot ever ship anything bespoke like this without having an expert look it over. Even an hour of review would have flagged all this stuff.
Oh gosh.
Anyway I recently had a discussion with a group of expert cryptographers/cryptographic engineers about whether “don’t roll your own crypto” is a helpful rule, or if it’s non-inclusive.
I don’t know the answer, but stuff like this is why the phrase was invented.
I’ve been watching some early 90s movies recently, and being reminded of the privacy expectations we all used to take for granted in a world that basically worked fine. People paying in cash; using telephones; having important conversations in person.
The idea that a corporation might track you routinely (even if just to push you ads) was barely on the radar. The idea that we needed to add that feature to keep us safe, that was laughable. The past is like a foreign country.
Someone asked me to explain why the early cypherpunks were such a weird alliance of pro-privacy hippies and more right-wing gun nuts. Well, that’s easy. The cypherpunk folks were an alliance of weirdos. It was a time when most of these ideas didn’t have mainstream support: the mainstream took most privacy for granted, and saw no need to think about weird ideas like “digital money that works like cash” when everyone already used cash.
I really do think context is important here. Some of these age verification laws are based on good-faith concerns. But a lot of them are really designed to censor big chunks of the Internet, making them less accessible to both kids and adults.
If you’re thinking that some kind of privacy-preserving age verification system is the answer, that’s great! But you need to make sure your goals (easy access for adults, real privacy, no risk of credentials being stolen) actually overlap with the legislators’ goals.
These systems have loads of sharp edges, and even if you do a perfect job you’re already going to chill access to sites that require age verification. But of course *nobody* comes close to getting it right. For example: 404media.co/id-verificatio…
I want to agree with the idea that mass scanning “breaks encryption” but I think the entire question is a category error. Any law that installs surveillance software directly on your phone isn’t “breaking” or “not breaking” encryption, it’s doing exactly what it promises to do.
For decades we (in the West) had no mass surveillance of any communications. Starting in the 2010s, some folks came up with the idea of scanning content uploaded in plaintext to servers for illicit material like CSAM. (With apparently relatively little effect on the overall problem.)
I don’t think many people realize how new and unproven this scanning tech is: they just assume it’s always been there and that it works. It really hasn’t: it’s only a few years old, and it doesn’t seem to have any noticeable impact on the sharing of CSAM.
So Apple has introduced a new system called “Private Cloud Compute” that allows your phone to offload complex (typically AI) tasks to specialized secure devices in the cloud. I’m still trying to work out what I think about this. So here’s a thread. 1/
Apple, unlike most other mobile providers, has traditionally done a lot of processing on-device. For example, all of the machine learning and OCR text recognition on Photos is done right on your device. 2/
The problem is that while modern phone “neural” hardware is improving, it’s not improving fast enough to take advantage of all the crazy features Silicon Valley wants from modern AI, including generative AI and its ilk. This fundamentally requires servers. 3/
Some folks are discussing what it means to be a “secure encrypted messaging app.” I think a lot of this discussion is shallow and in bad faith, but let’s talk about it a bit. Here’s a thread. 1/
First: the most critical element that (good) secure messengers protect is the content of your conversations in flight. This is usually done with end-to-end encryption. Messengers like Signal, WhatsApp, Matrix etc. encrypt this data using keys that only the end-devices know. 2/
Encrypting the content of your conversations, preferably by default, is “table stakes.” It isn’t perfect, but it’s required for a messenger even to flirt with the word “secure.” But security and privacy are hard, deep problems. Solving encrypted messaging is just the start. 3/
Several people have suggested that the EU’s mandatory chat scanning proposal was dead. In fact it seems that Belgium has resurrected it as a “compromise,” and many EU member states are in favor. There’s a real chance this becomes law. dropbox.com/scl/fi/9w611f2…
The basic idea of this proposal is to scan private (and encrypted) messages for child sexual abuse material. That now means just images and videos: previous versions also covered text and audio, but the new proposal has, for the moment, set those aside because that was too creepy.
Previous versions of this idea ran into opposition from some EU member states. Apparently these modest changes have been enough to bring France and Poland around. Because “compromise”.