Since today is the blue check apocalypse, here’s a new research paper on Twitter’s paid account “verification.” What we learned:
1) Most people don’t understand what blue checks now mean.
2) Paid accounts are disproportionately crypto bros and Elon stans, and they skew new and conservative.
In a nationally representative survey (n=300), most people believe a Twitter blue check requires actual verification of a person’s identity, not just a working phone number and payment. That misunderstanding is more common among older people & people with lower digital literacy.
Twitter’s misleading “verification” is causing real harms. The Eli Lilly debacle, for example, appears to have cost shareholders billions of dollars. There are countless reports of scammers abusing Twitter paid “verification” to hoodwink credulous victims.
Our research calls into question whether Twitter Blue is legal. Continued use of the blue check and the term verification, when they are so misleading and harmful to consumers, could constitute a deceptive business practice under the FTC Act and state consumer protection laws.
What’s more, Twitter may be violating its 2022 consent order with the FTC. Twitter “must not misrepresent…the extent to which [it] maintains and protects the…integrity of Covered Information.” Covered Information includes names, and integrity presumably includes accuracy.
There are many more insights about social media account verification and Twitter Blue in the paper, which you can read at the link below. Credit for the project goes to stellar @PrincetonCITP researchers @madxiaodisease, @m0namon, and @anunaykul.
Good: Apple CSAM detection will mitigate hash set manipulation by using hashes known to 2+ child safety groups.
Less good: The design still depends on trusting Apple or (TBD) occasional third-party onsite audits.
Can we do better? I think so. Here’s some Twitter cryptography…
Goals: A user can verify that every hash in the blinded hash set was contributed by 2+ child safety groups. Every update to the set includes enough information to run that check.
Constraints: A group shouldn’t learn what’s in the hash set & should be able to contribute remotely.
Here’s a protocol that satisfies the goals and constraints.
Setup: The groups generate threshold BLS signature key shares, where 2 shares are needed to sign.
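A minimal sketch of the key-sharing math behind that setup, under stated assumptions: a trusted dealer stands in for the distributed key generation a real deployment would run, and I use BLS12-381’s scalar-field order. Actual partial signing and verification would need a pairing library (e.g., py_ecc); this only shows the 2-of-n Shamir structure:

```python
# Sketch of 2-of-n key sharing for threshold BLS. ASSUMPTIONS: trusted
# dealer (a real system would use a DKG so no one ever holds the full key);
# BLS12-381 scalar-field order as the modulus.
import secrets

# Order of the BLS12-381 scalar field (the group order for BLS signatures).
R = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def make_shares(secret_key: int, n: int) -> list[tuple[int, int]]:
    """Split secret_key into n Shamir shares with threshold 2 (degree-1 polynomial)."""
    a1 = secrets.randbelow(R)  # random slope
    return [(i, (secret_key + a1 * i) % R) for i in range(1, n + 1)]

def lagrange_at_zero(i: int, j: int) -> int:
    """Lagrange coefficient for share i when combined with share j, evaluated at x=0."""
    return (j * pow(j - i, -1, R)) % R

def combine(share_i: tuple[int, int], share_j: tuple[int, int]) -> int:
    """Recover the secret from any 2 shares. In threshold BLS, the same
    coefficients are applied to *partial signatures*, so the full signing
    key never has to exist in one place."""
    (i, si), (j, sj) = share_i, share_j
    return (si * lagrange_at_zero(i, j) + sj * lagrange_at_zero(j, i)) % R

sk = secrets.randbelow(R)
shares = make_shares(sk, n=5)
assert combine(shares[0], shares[3]) == sk  # any 2 of the 5 shares suffice
```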
Step 1: The groups and Apple jointly compute a random CSPRNG seed. Each generates a large set of indexed random values.
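One plausible way to realize Step 1 is commit-then-reveal: the seed is unbiased as long as at least one party is honest. The hash-based construction and function names below are my assumptions, not the protocol’s actual design:

```python
# Sketch of joint seed generation via commit-reveal, plus deterministic
# derivation of indexed random values. ASSUMPTION: SHA-256 as both the
# commitment and the PRF; names are illustrative.
import hashlib
import secrets

def commit(contribution: bytes) -> bytes:
    return hashlib.sha256(contribution).digest()

# Round 1: every party (each group and Apple) broadcasts a commitment.
contributions = [secrets.token_bytes(32) for _ in range(3)]
commitments = [commit(c) for c in contributions]

# Round 2: parties reveal; everyone checks reveals against commitments.
assert all(commit(c) == d for c, d in zip(contributions, commitments))
seed = hashlib.sha256(b"".join(contributions)).digest()

# Every party can now derive the same large set of indexed random values.
def indexed_value(seed: bytes, index: int) -> bytes:
    return hashlib.sha256(seed + index.to_bytes(8, "big")).digest()

values = [indexed_value(seed, i) for i in range(10)]
```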
Apple just posted technical details for their new privacy-preserving system to detect child sexual abuse material (CSAM) on iPhones. Conveniently, @anunaykul and I have a publication at next week’s @USENIXSecurity on this very topic. Some thoughts… usenix.org/conference/use…
There’s a critical and subtle step in Apple’s design. Most perceptual hash functions represent approximate matches as proximity in the hash space. Similar images hash to similar hashes, where hash similarity is typically measured with Hamming distance (except PhotoDNA).
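To make the proximity idea concrete, here’s a toy Hamming-distance match (the 16-bit hashes and the threshold are illustrative; real perceptual hashes are far longer):

```python
# Toy approximate matching by Hamming distance, as most perceptual hash
# functions do. Hash values and threshold are made up for illustration.
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two equal-length hashes."""
    return (a ^ b).bit_count()  # Python 3.10+; else bin(a ^ b).count("1")

h_original = 0b1011_0110_0100_1101
h_recompressed = 0b1011_0111_0100_1001  # slightly edited image, nearby hash

THRESHOLD = 4
print(hamming(h_original, h_recompressed) <= THRESHOLD)  # True: treated as a match
```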
Apple’s NeuralHash works differently. Similar images have the *same* hash. Checking for an exact match with privacy guarantees, rather than an approximate match, is radically easier. @anunaykul and I spent ~5x longer devising an approximate-match protocol than an exact-match protocol.
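A non-private toy that shows the structural gap (hash values made up): exact matching is a single set-membership test, which maps cleanly onto private-set-intersection-style protocols, while approximate matching needs a distance comparison against every element, all of which a private protocol must evaluate without revealing the set:

```python
# Exact vs. approximate matching, ignoring privacy, just to show why the
# former is structurally easier to protect. Values are illustrative.
known_hashes = {0x3F21AB, 0x9C01D4, 0x77E0B2}

def exact_match(h: int) -> bool:
    """One set-membership test: the shape PSI-style protocols handle well."""
    return h in known_hashes

def approx_match(h: int, threshold: int = 4) -> bool:
    """A distance check against every element, each of which a private
    protocol would have to compute under encryption."""
    return any((h ^ k).bit_count() <= threshold for k in known_hashes)
```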
New research: Can misinformation warnings work? The conventional wisdom is no, backed up by dozens of studies and the proliferation of falsehoods online. In a new publication, we challenge that perspective, showing warnings can be effective—if designed well. lawfareblog.com/warnings-work-…
The design of misinformation warnings, we argue, fails to account for hard-learned lessons from browser and OS security warnings. A decade ago, security warnings were mostly ineffective. But now, security warnings are a key safeguard, protecting millions of users every day.
What changed? Software vendors and academics closely collaborated to standardize evaluation criteria, refine research methods, and rapidly experiment with new designs. The key insight was that “contextual” warnings, which just passively tack on information, are rarely effective.
A few thoughts on the new California privacy law, AB 375 (1/5).
This law won’t have much impact on major online platforms, like Google and Facebook. They already enable users to download a data archive or delete their data, and they aren’t in the business of selling user data.
(2/5) AB 375 likely won’t have much near-term impact on the third-party online advertising ecosystem. The law has an ambiguous exception for “deidentified” data, and the advertising sector will argue that it exempts tracking cookies, email hashes, and other common ad identifiers.
(3/5) It will be up to the California Attorney General to determine whether AB 375 regulates the online advertising ecosystem, then to defend that position in inevitable litigation.