Yes, moderation is going to be harder in end-to-end encrypted spaces. You know what else is going to be harder? Algorithm-driven content amplification. And trust me, one of these things is doing way more damage.
The thing about end-to-end encryption (E2EE) is that it’s absolutely tractable to moderate conversations *if* participants report problems. This voluntary reporting capability is already baked into some systems through “message franking.” 1/
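For the curious: message franking is basically a commitment-plus-stamp pattern. Here is a toy Python sketch of the idea, heavily simplified (real designs, like Facebook’s, use committing AEAD and careful key handling); all function names and the metadata format are purely illustrative:

```python
import hmac, hashlib, os

def sender_frank(plaintext: bytes):
    # Sender: commit to the message under a fresh "franking key".
    # In the real protocol the franking key travels inside the encrypted
    # payload to the recipient; the provider only sees the opaque commitment.
    fk = os.urandom(32)
    commitment = hmac.new(fk, plaintext, hashlib.sha256).digest()
    return fk, commitment

def provider_stamp(server_key: bytes, commitment: bytes, metadata: bytes) -> bytes:
    # Provider: bind the commitment to sender/recipient/time metadata,
    # proving later that this exact commitment really passed through the service.
    return hmac.new(server_key, commitment + metadata, hashlib.sha256).digest()

def report_abuse(server_key: bytes, plaintext: bytes, fk: bytes,
                 commitment: bytes, metadata: bytes, stamp: bytes) -> bool:
    # Recipient *chooses* to reveal (plaintext, franking key) when reporting.
    # Provider verifies both the commitment and its own stamp before acting.
    ok_commit = hmac.compare_digest(
        hmac.new(fk, plaintext, hashlib.sha256).digest(), commitment)
    ok_stamp = hmac.compare_digest(
        hmac.new(server_key, commitment + metadata, hashlib.sha256).digest(), stamp)
    return ok_commit and ok_stamp

# Tiny demo of the flow:
server_key = os.urandom(32)
msg, meta = b"abusive message", b"alice->bob|2020-10-20"
fk, commitment = sender_frank(msg)
stamp = provider_stamp(server_key, commitment, meta)
assert report_abuse(server_key, msg, fk, commitment, meta, stamp)
```

The point is that the provider learns nothing about the plaintext unless a participant decides to hit “report,” and when someone does, the report is verifiably tied to a message that actually transited the service.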
So when we say “moderation of E2EE conversations is hard” we’re basically saying “moderation is hard if we’re talking about small(ish) closed groups where not one single participant hits the ‘report abuse’ button.” 2/
But that’s a normal expectation! We don’t expect private companies to actively moderate email threads or group SMS texts or private conversations. And I hope we don’t start expecting that! I don’t want to live in that society! 3/
There are a few potential exceptions to this: mostly involving people who can’t report content by themselves and need assistance, like kids. There’s some room for flexibility here, but I don’t think that’s what these concerns are about. 4/
When people say “encryption will make moderation harder” they’re mostly talking about the spread of disinformation and hate speech. For these to spread widely, they rely on large, open groups — and algorithmic amplification. 5/
Encrypted messaging can scale from small private conversations to large groups with open membership. But obviously these don’t provide the same privacy properties. If anyone can join, then anyone (including the provider) can moderate and report. 6/
And while moderation gets harder in small groups (and easier in large ones), the same features of E2EE will also make it more difficult for providers to *promote* specific content that they can’t see. 7/
This doesn’t mean we’re going to wind up in a perfect, beautiful world. People are terrible. WhatsApp, for example, has had serious problems with disinformation even without algorithmic promotion. 8/
But at the end of the day, taking away algorithms’ ability to view all content is probably a good thing. And making providers’ visibility something that participants grant voluntarily isn’t the end of the world either. //
Imagine creating a social media company and rigging the stock so nobody can ever depose you, and then *not* creating a giant candy factory staffed with weird and magical helpers.
Whenever I read about the exploits of Zuck I’m like SMH that’s what people who actually worry about their jobs do, you dumbass.
“Oh no, promoting voter info might make idiots think my company is politically biased, then we’d have a 4% drop in weekly engagement…”
Seriously, you could invent chewing gum that never loses its flavor and this is what you choose.
I don’t know what to make of the accusations re: Chrome logins in the revised antitrust complaint against Google, but I’m now really looking forward to learning more.
A few years back, Google activated a feature that would automatically log you into the Chrome browser anytime you logged into a Google site. This made it basically impossible to be logged out of Chrome if you used Google accounts.
The Chrome engineers said that they had to do this because users with multiple accounts were getting confused — apparently the idea that some people might not want Chrome to be logged in was not contemplated.
Twitter is being sued over the Saudi spies they hired in customer service and SRE roles, the ones who used their access to collect information on Saudi dissidents. protocol.com/bulletins/saud…
A bunch of people have been telling me that it’s ok to relax end-to-end encryption to fight crime, as long as there are protections and data never leaves the company. Stuff like this shows why it’s not.
“But this was an isolated incident!” Or alternatively, maybe being caught was the isolated incident. How many companies (startups, particularly) have internal controls sufficient to withstand even devops folks with admin credentials?
The NSA guidelines for configuring VPNs continue to require IPsec rather than WireGuard. I understand why this is (too much DJB cryptography in WireGuard) but IPsec is really a terrible mess of a protocol, which makes this bad advice. media.defense.gov/2020/Jul/02/20…
The number of footguns in IPsec is really high, and they mostly express themselves in terms of implementation errors in VPN devices/software. It’s these implementation errors that risk private data, not some abstract concern about cipher cryptanalysis.
To be clear, there’s nothing wrong with DJB cryptography. The problem here is that the NSA only approves a very specific list of algorithms (see attached) and that list hasn’t been updated since 2016. It doesn’t even list SHA-3 yet! cnss.gov/CNSS/openDoc.c…
Everyone on HN is puzzling over how to ensure open access to papers. The answer seems very simple: just have funding agencies (NSF/NIH/DARPA etc.) require a link to an arXiv/ePrint version for each paper mentioned in an annual report.
For those who haven’t seen the current NSF system: for each paper you’ve published in a given year, you need to convert it into PDF/A (!!) and upload it to a private archival service run by the DoE, one that (I think) taxpayers can’t access.
(This PDF/A thing, as best I can tell, is just a subsidy for Adobe Creative Cloud. Every researcher I know converts their PDFs using a sketchy .ru website, so that DoE server must be a haven of malware.)
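For what it’s worth, you don’t need Adobe or a sketchy website for this: Ghostscript can produce PDF/A locally. Here is a rough Python wrapper as a sketch, assuming `gs` is on your PATH and that `pdfa_def` points at the PDFA_def.ps file that ships with Ghostscript; exact flags vary by Ghostscript version, and the output should still be checked with a validator like veraPDF.

```python
import subprocess

def to_pdfa(src: str, dst: str, pdfa_def: str = "PDFA_def.ps") -> None:
    """Convert src to (approximately) PDF/A-2 using Ghostscript."""
    subprocess.run([
        "gs",
        "-dPDFA=2",                     # target PDF/A-2
        "-dBATCH", "-dNOPAUSE",
        "-sDEVICE=pdfwrite",
        "-dPDFACompatibilityPolicy=1",  # discard features PDF/A forbids rather than failing
        "-sColorConversionStrategy=RGB",
        f"-sOutputFile={dst}",
        pdfa_def,                       # defines the PDF/A output intent (ICC profile)
        src,
    ], check=True)

to_pdfa("paper.pdf", "paper_pdfa.pdf")
```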