Jonathan Mayer
@Princeton prof and tech + law person. Previously at the Senate, @FCC, @Stanford, and CalDOJ.
Apr 20, 2023 6 tweets 3 min read
Since today is the blue check apocalypse, here’s a new research paper on Twitter’s paid account “verification.” What we learned:

1) Most people don’t understand what blue checks now mean.

2) Paid accounts are disproportionately crypto bros, Elon stans, new, and conservative.

In a nationally representative survey (n=300), most people believe a Twitter blue check requires actual verification of a person’s identity, not just a working phone number and payment. That misunderstanding is more common among older people & people with lower digital literacy.
Aug 14, 2021 10 tweets 2 min read
Good: Apple CSAM detection will mitigate hash set manipulation by using hashes known to 2+ child safety groups.

Less good: The design still depends on trusting Apple or (TBD) occasional third-party onsite audits.

Can we do better? I think so. Here’s some Twitter cryptography…

Goals: A user can verify that every hash in the blinded hash set was contributed by 2+ child safety groups. Every update to the set includes enough information to run that check.

Constraints: A group shouldn’t learn what’s in the hash set & should be able to contribute remotely.
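To make the "2+ groups per entry" goal concrete, here is a toy sketch. It is not Apple's design and not the full construction hinted at in the thread; the signature-based check, key names, and threshold are assumptions for illustration only, and it does not address the blinding or remote-contribution constraints.

```python
# Toy sketch (not Apple's design): each child safety group signs every
# blinded hash it contributes, a set update ships each hash with its
# signatures, and the client accepts an entry only if it verifies under
# at least two distinct groups' public keys.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

MIN_GROUPS = 2  # the "2+ child safety groups" requirement

def contribute(group_key: Ed25519PrivateKey, blinded_hash: bytes) -> bytes:
    """A group's contribution: a signature over the blinded hash."""
    return group_key.sign(blinded_hash)

def verify_entry(blinded_hash: bytes,
                 signatures: dict[str, bytes],
                 group_pubkeys: dict[str, Ed25519PublicKey]) -> bool:
    """Client-side check: the entry must verify under >= MIN_GROUPS groups."""
    valid = 0
    for group_id, sig in signatures.items():
        pub = group_pubkeys.get(group_id)
        if pub is None:
            continue
        try:
            pub.verify(sig, blinded_hash)
            valid += 1
        except InvalidSignature:
            pass
    return valid >= MIN_GROUPS

if __name__ == "__main__":
    # Two hypothetical groups contribute the same (already blinded) hash.
    keys = {g: Ed25519PrivateKey.generate() for g in ("group_a", "group_b")}
    pubs = {g: k.public_key() for g, k in keys.items()}
    blinded = b"\x13" * 32  # placeholder blinded hash value
    sigs = {g: contribute(k, blinded) for g, k in keys.items()}
    assert verify_entry(blinded, sigs, pubs)                              # 2 groups -> accepted
    assert not verify_entry(blinded, {"group_a": sigs["group_a"]}, pubs)  # 1 group -> rejected
```

A real scheme would also need the property the constraints call for, that no single group learns the full contents of the hash set; the sketch above only covers the per-entry threshold check.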
Aug 5, 2021 11 tweets 4 min read
Apple just posted technical details for their new privacy-preserving system to detect child sexual abuse material (CSAM) on iPhones. Conveniently, @anunaykul and I have a publication at next week’s @USENIXSecurity on this very topic. Some thoughts… usenix.org/conference/use…

There’s a critical and subtle step in Apple’s design. Most perceptual hash functions represent approximate matches as proximity in the hash space. Similar images hash to similar hashes, where hash similarity is typically measured with Hamming distance (except PhotoDNA).
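Concretely, a minimal sketch of Hamming-distance matching (not Apple's NeuralHash pipeline; the hash values and threshold are illustrative):

```python
# Most perceptual hash schemes call two images an approximate match when
# the Hamming distance between their hashes is small.
def hamming_distance(h1: int, h2: int) -> int:
    """Count the differing bits between two fixed-length hash values."""
    return bin(h1 ^ h2).count("1")

def is_match(h1: int, h2: int, threshold: int = 4) -> bool:
    """Approximate match = hashes within `threshold` bits of each other."""
    return hamming_distance(h1, h2) <= threshold

# Toy 64-bit hashes: a re-encoded near-duplicate differs in only a few bits.
original  = 0xF0F0F0F0F0F0F0F0
near_copy = original ^ 0b101            # 2 bits flipped
unrelated = 0x0F0F0F0F0F0F0F0F

assert is_match(original, near_copy)        # small distance -> match
assert not is_match(original, unrelated)    # large distance -> no match
```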
Jul 23, 2021 10 tweets 3 min read
New research: Can misinformation warnings work? The conventional wisdom is no, backed up by dozens of studies and the proliferation of falsehoods online. In a new publication, we challenge that perspective, showing warnings can be effective—if designed well. lawfareblog.com/warnings-work-…

The design of misinformation warnings, we argue, fails to account for hard-learned lessons from browser and OS security warnings. A decade ago, security warnings were mostly ineffective. But now, security warnings are a key safeguard, protecting millions of users every day.
Jun 29, 2018 5 tweets 1 min read
A few thoughts on the new California privacy law, AB 375 (1/5).

This law won’t have much impact on major online platforms, like Google and Facebook. They already enable users to download a data archive or delete their data, and they aren’t in the business of selling user data. (2/5)

AB 375 likely won’t have much near-term impact on the third-party online advertising ecosystem. The law has an ambiguous exception for “deidentified” data, and the advertising sector will argue that it exempts tracking cookies, email hashes, and other common ad identifiers.
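For context on why the “deidentified” argument is contested, here is a hedged sketch of how a hashed email becomes a stable identifier; the function name and normalization rules are illustrative, not any vendor's actual implementation:

```python
# A common industry practice: hash a normalized email address and use the
# digest as a cross-site identifier. The hash hides the raw address, but it
# is stable, so it still links activity back to the same person.
import hashlib

def email_to_ad_id(email: str) -> str:
    """Normalize and hash an email address; exact rules vary by vendor."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same inbox always yields the same identifier, across sites and vendors.
assert email_to_ad_id("Jane.Doe@example.com ") == email_to_ad_id("jane.doe@example.com")
```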