Jonathan Mayer · Apr 20 · 6 tweets
Since today is the blue check apocalypse, here’s a new research paper on Twitter’s paid account “verification.” What we learned:

1) Most people don’t understand what blue checks now mean.

2) Paid accounts are disproportionately crypto bros, Elon stans, new, and conservative.
In a nationally representative survey (n=300), most people believe a Twitter blue check requires actual verification of a person’s identity, not just a working phone number and payment. That misunderstanding is more common among older people & people with lower digital literacy.
Twitter’s misleading “verification” is causing real harms. The Eli Lilly debacle, for example, appears to have cost shareholders billions of dollars. There are countless reports of scammers abusing Twitter paid “verification” to hoodwink credulous victims.
Our research calls into question whether Twitter Blue is legal. Continued use of the blue check and the term verification, when they are so misleading and harmful to consumers, could constitute a deceptive business practice under the FTC Act and state consumer protection laws.
What’s more, Twitter may be violating its 2022 consent order with the FTC. Twitter “must not misrepresent…the extent to which [it] maintains and protects the…integrity of Covered Information.” Covered Information includes names, and integrity presumably includes accuracy.
There are many more insights about social media account verification and Twitter Blue in the paper, which you can read at the link below. Credit for the project goes to stellar @PrincetonCITP researchers @madxiaodisease, @m0namon, and @anunaykul.

cs.princeton.edu/~jrmayer/paper…


More from @jonathanmayer

Aug 14, 2021
Good: Apple CSAM detection will mitigate hash set manipulation by using hashes known to 2+ child safety groups.

Less good: The design still depends on trusting Apple or (TBD) occasional third-party onsite audits.

Can we do better? I think so. Here’s some Twitter cryptography…
Goals: A user can verify that every hash in the blinded hash set was contributed by 2+ child safety groups. Every update to the set includes enough information to run that check.

Constraints: A group shouldn’t learn what’s in the hash set & should be able to contribute remotely.
Here’s a protocol that satisfies the goals and constraints.

Setup: The groups generate threshold BLS signature key shares, where 2 shares are needed to sign.

Step 1: The groups and Apple jointly compute a random CSPRNG seed. Each generates a large set of indexed random values.
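The setup step relies on a threshold property: any two key shares can act as the signing key, while a single share reveals nothing. A real implementation would use threshold BLS over a pairing-friendly curve; the toy sketch below instead uses 2-of-n Shamir secret sharing over a prime field, purely to illustrate that threshold property (all values and the modulus here are illustrative, not from the thread):

```python
# Toy 2-of-n threshold sketch via Shamir secret sharing.
# NOT threshold BLS (which needs pairing-friendly curves); this only
# demonstrates the core property: any 2 shares reconstruct the secret,
# while 1 share alone is uniformly random and reveals nothing.
import random

P = 2**127 - 1  # Mersenne prime used as the field modulus (illustrative)

def make_shares(secret, n):
    """Split `secret` into n shares with threshold 2 (degree-1 polynomial)."""
    slope = random.randrange(1, P)  # random coefficient hides the secret
    return [(x, (secret + slope * x) % P) for x in range(1, n + 1)]

def reconstruct(share_a, share_b):
    """Lagrange interpolation at x=0 from any two distinct shares."""
    (xa, ya), (xb, yb) = share_a, share_b
    la = xb * pow(xb - xa, -1, P) % P  # L_a(0) = xb / (xb - xa)
    lb = xa * pow(xa - xb, -1, P) % P  # L_b(0) = xa / (xa - xb)
    return (ya * la + yb * lb) % P

secret = random.randrange(P)
shares = make_shares(secret, 5)
assert reconstruct(shares[0], shares[3]) == secret  # any 2 shares suffice
```

In the actual proposal the "secret" would be a BLS signing key and reconstruction would happen implicitly by combining partial signatures, so no party ever holds the full key.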
Aug 5, 2021
Apple just posted technical details for their new privacy-preserving system to detect child sexual abuse material (CSAM) on iPhones. Conveniently, @anunaykul and I have a publication at next week’s @USENIXSecurity on this very topic. Some thoughts… usenix.org/conference/use…
There’s a critical and subtle step in Apple’s design. Most perceptual hash functions represent approximate matches as proximity in the hash space. Similar images hash to similar hashes, where hash similarity is typically measured with Hamming distance (except PhotoDNA).
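The approximate-match convention described above can be made concrete. In this sketch the hash values, bit width, and threshold are all made up for illustration; real perceptual hashes are typically 64+ bits:

```python
# Perceptual-hash matching by Hamming distance: similar images hash to
# *nearby* values, so a match is a distance threshold, not an equality test.

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bit positions between two equal-width hashes."""
    return bin(a ^ b).count("1")

# Hypothetical 16-bit perceptual hashes (illustrative values only).
original   = 0b1011001110001101
re_encoded = 0b1011001010001101  # one bit flipped by recompression
unrelated  = 0b1110010001011010

THRESHOLD = 3  # "similar" if at most 3 bits differ (illustrative)
assert hamming_distance(original, re_encoded) <= THRESHOLD  # match
assert hamming_distance(original, unrelated) > THRESHOLD    # no match
```

Checking this kind of proximity under cryptographic privacy guarantees is what makes approximate matching hard: the server must learn whether two values are *close* without learning either value.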
Apple’s NeuralMatch works differently. Similar images have the *same* hash. Checking for an exact match with privacy guarantees, rather than an approximate match, is radically easier. @anunaykul and I spent ~5x longer devising an approximate match protocol than an exact protocol.
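To see why exact matching is structurally easier, consider this toy sketch. It is emphatically not Apple's protocol (which uses private set intersection); it only shows that once similar inputs map to the same value, matching collapses to set membership, which composes cleanly with standard cryptographic tools, whereas approximate matching requires computing distances across pairs:

```python
# Toy illustration: exact matching is a set-membership lookup.
# (Not Apple's PSI construction; just the structural point.)
import hashlib

def h(value: bytes) -> str:
    """Cryptographic digest standing in for an exact-match hash."""
    return hashlib.sha256(value).hexdigest()

# Hypothetical blocklist of known-bad values (illustrative placeholders).
blocklist = {h(b"known-bad-1"), h(b"known-bad-2")}

assert h(b"known-bad-1") in blocklist      # exact match: O(1) lookup
assert h(b"harmless") not in blocklist     # any other input misses
```

With approximate matching there is no such lookup structure: a candidate must be compared against every entry (or an index tolerant of near-misses), and doing that obliviously is far harder.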
Jul 23, 2021
New research: Can misinformation warnings work? The conventional wisdom is no, backed by dozens of studies and the proliferation of falsehoods online. In a new publication, we challenge that perspective, showing warnings can be effective—if designed well. lawfareblog.com/warnings-work-…
The design of misinformation warnings, we argue, fails to account for hard-learned lessons from browser and OS security warnings. A decade ago, security warnings were mostly ineffective. But now, security warnings are a key safeguard, protecting millions of users every day.
What changed? Software vendors and academics closely collaborated to standardize evaluation criteria, refine research methods, and rapidly experiment with new designs. The key insight was that “contextual” warnings, which just passively tack on information, are rarely effective.
Jun 29, 2018
A few thoughts on the new California privacy law, AB 375 (1/5).

This law won’t have much impact on major online platforms, like Google and Facebook. They already enable users to download a data archive or delete their data, and they aren’t in the business of selling user data.
(2/5) AB 375 likely won’t have much near-term impact on the third-party online advertising ecosystem. The law has an ambiguous exception for “deidentified” data, and the advertising sector will argue that it exempts tracking cookies, email hashes, and other common ad identifiers.
(3/5) It will be up to the California Attorney General to determine whether AB 375 regulates the online advertising ecosystem, then to defend that position in inevitable litigation.
