Imagine creating a social media company and rigging the stock so nobody can ever depose you, and then *not* creating a giant candy factory staffed with weird and magical helpers.
Whenever I read about the exploits of Zuck I’m like SMH that’s what people who actually worry about their jobs do, you dumbass.
“Oh no, promoting voter info might make idiots think my company is politically biased, then we’d have a 4% drop in weekly engagement…”
Seriously, you could invent chewing gum that never loses its flavor and this is what you choose.
Even @jack doesn’t actually care what happens to Twitter, and I think he’s basically living in a camper van.
Yes, moderation is going to be harder in end-to-end encrypted spaces. You know what else is going to be harder? Algorithm-driven content amplification. And trust me, one of these things is doing way more damage.
The thing about end-to-end encryption (E2EE) is that it’s absolutely tractable to moderate conversations *if* participants report problems. This voluntary reporting capability is already baked into some systems through “message franking” 1/
So when we say “moderation of E2EE conversations is hard” we’re basically saying “moderation is hard if we’re talking about small(ish) closed groups where not one single participant hits the ‘report abuse’ button.” 2/
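For anyone who hasn't seen message franking before, here's a toy Python sketch of the reporting flow. It's a simplification, not any platform's actual construction: a plain HMAC commitment stands in for the committing AEADs real deployments use, and every name and field below is made up for illustration.

```python
import hmac, hashlib, json, os

# Toy message-franking sketch. The sender commits to the plaintext with a
# one-time key, the platform stamps the commitment without ever seeing the
# plaintext, and the recipient can later open the commitment to report abuse.

SERVER_KEY = os.urandom(32)  # the platform's MAC key (illustrative only)

def sender_frank(plaintext: bytes):
    franking_key = os.urandom(32)
    commitment = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    # The franking_key travels to the recipient inside the E2EE ciphertext;
    # only the commitment is visible to the platform.
    return franking_key, commitment

def platform_stamp(commitment: bytes, sender: str, recipient: str) -> bytes:
    context = json.dumps({"c": commitment.hex(), "from": sender, "to": recipient}).encode()
    return hmac.new(SERVER_KEY, context, hashlib.sha256).digest()

def platform_verify_report(plaintext, franking_key, commitment, stamp, sender, recipient):
    # 1) the reported plaintext really opens the commitment
    ok_commit = hmac.compare_digest(
        hmac.new(franking_key, plaintext, hashlib.sha256).digest(), commitment)
    # 2) the platform really did forward this commitment between these two users
    context = json.dumps({"c": commitment.hex(), "from": sender, "to": recipient}).encode()
    ok_stamp = hmac.compare_digest(
        hmac.new(SERVER_KEY, context, hashlib.sha256).digest(), stamp)
    return ok_commit and ok_stamp

# Usage: the recipient decrypts and checks the commitment as usual; the
# platform only ever learns the plaintext if the recipient chooses to report.
fk, c = sender_frank(b"abusive message")
tag = platform_stamp(c, "alice", "bob")
assert platform_verify_report(b"abusive message", fk, c, tag, "alice", "bob")
```

The point is that the platform never sees plaintext unless a recipient deliberately opens the commitment, which is exactly the voluntary-reporting property described above.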
I don’t know what to make of the accusations re: Chrome logins in the revised antitrust complaint against Google, but I’m now really looking forward to learning more.
A few years back, Google activated a feature that would automatically log you into the Chrome browser anytime you logged into a Google site. This made it basically impossible to be logged out of Chrome if you used Google accounts.
The Chrome engineers said that they had to do this because users with multiple accounts were getting confused — apparently the idea that some people might not want Chrome to be logged in was not contemplated.
Twitter is being sued over the Saudi spies they hired in customer service and SRE roles, the ones who used their access to collect information on Saudi dissidents. protocol.com/bulletins/saud…
A bunch of people have been telling me that it’s ok to relax end-to-end encryption to fight crime, as long as there are protections and data never leaves the company. Stuff like this shows why it’s not.
“But this was an isolated incident!” Or alternatively, maybe being caught was the isolated incident. How many companies (startups, particularly) have internal controls sufficient to withstand even devops folks with admin credentials?
The NSA guidelines for configuring VPNs continue to require IPsec rather than WireGuard. I understand why this is (too much DJB cryptography in WireGuard), but IPsec is really a terrible mess of a protocol, which makes this bad advice. media.defense.gov/2020/Jul/02/20…
The number of footguns in IPsec is really high, and they mostly show up as implementation errors in VPN devices and software. It’s those implementation errors that put private data at risk, not some abstract concern about cipher cryptanalysis.
To be clear, there’s nothing wrong with DJB cryptography. The problem here is that the NSA only approves a very specific list of algorithms (see attached) and that list hasn’t been updated since 2016. It doesn’t even list SHA-3 yet! cnss.gov/CNSS/openDoc.c…
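To make the “DJB cryptography” point concrete, here’s a rough side-by-side of the two primitive sets using the pyca/cryptography package. The first set (P-384 ECDH, AES-256-GCM, SHA-384) mirrors the approved list; the second (X25519, ChaCha20-Poly1305, BLAKE2s) is what WireGuard fixes by design. This is just an illustration of the primitives, not either protocol’s actual handshake or key schedule.

```python
# Both suites are one import away in mainstream libraries; the gap is in what
# the 2016-era approved list includes, not in availability or quality.
# Requires: pip install cryptography
import hashlib, os
from cryptography.hazmat.primitives.asymmetric import ec, x25519
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

# CNSA-style suite: ECDH over P-384, SHA-384, AES-256-GCM.
cnsa_priv = ec.generate_private_key(ec.SECP384R1())
cnsa_peer = ec.generate_private_key(ec.SECP384R1()).public_key()
cnsa_shared = cnsa_priv.exchange(ec.ECDH(), cnsa_peer)
cnsa_key = hashlib.sha384(cnsa_shared).digest()[:32]  # truncated hash as a stand-in KDF
ct1 = AESGCM(cnsa_key).encrypt(os.urandom(12), b"hello", None)

# WireGuard's fixed suite: X25519, BLAKE2s, ChaCha20-Poly1305.
wg_priv = x25519.X25519PrivateKey.generate()
wg_peer = x25519.X25519PrivateKey.generate().public_key()
wg_shared = wg_priv.exchange(wg_peer)
wg_key = hashlib.blake2s(wg_shared).digest()  # stand-in KDF, 32-byte key
ct2 = ChaCha20Poly1305(wg_key).encrypt(os.urandom(12), b"hello", None)
```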
Everyone on HN is puzzling over how to ensure open access to papers. The answer seems very simple: just have funding agencies (NSF/NIH/DARPA etc.) require a link to an arXiv/ePrint version for each paper mentioned in an annual report.
For those who haven’t seen the current NSF system: for each paper you’ve published in a given year, you need to convert it into PDF/A (!!) and upload it to a private archival service run by the DoE, one that (I think) taxpayers can’t access.
(This PDF/A thing, as best I can tell, is just a subsidy for Adobe Creative Cloud. Every researcher I know converts their PDFs using a sketchy .ru website, so that DoE server must be a haven of malware.)
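For what it’s worth, Ghostscript can do this conversion locally; here’s a minimal sketch, with the caveat that strict PDF/A compliance generally also needs a PDFA_def.ps with an embedded ICC profile, so validate the output (e.g. with veraPDF) before uploading.

```python
# Local alternative to the sketchy .ru converters: ask Ghostscript for
# (approximate) PDF/A-2 output. Treat this as a starting point, not a
# guaranteed-compliant pipeline.
import subprocess

def to_pdfa(src: str, dst: str) -> None:
    subprocess.run(
        [
            "gs",
            "-dPDFA=2",                    # target PDF/A-2
            "-dBATCH", "-dNOPAUSE",
            "-sDEVICE=pdfwrite",
            "-dPDFACompatibilityPolicy=1",
            "-sColorConversionStrategy=RGB",
            f"-sOutputFile={dst}",
            src,
        ],
        check=True,
    )

to_pdfa("paper.pdf", "paper-pdfa.pdf")
```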