At Princeton CITP, we were concerned by media reports that political candidates use psychological tricks in their emails to get supporters to donate. So we collected 250,000 emails from 3,000 senders during the 2020 U.S. election cycle. Here’s what we found. electionemails2020.org
Let me back up: this is a study by @aruneshmathur, Angelina Wang, @c_schwemmer, Maia Hamin, @b_m_stewart, and me. We started last year by buying a list of all candidates running for federal and state elections in the U.S. We also acquired lists of PACs and other orgs.
Next, the key bit for data collection: we built a bot that finds these candidates’ websites through search engines, looks for email sign-up forms, fills them in, and collects the emails in a giant inbox. We manually verified that each step works accurately.
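To make the pipeline concrete, here’s a minimal sketch of the form-finding-and-filling step, assuming a simple requests + BeautifulSoup setup. The heuristics, helper names, and inbox address are illustrative assumptions, not our actual production code, which also has to handle JavaScript-rendered forms and CAPTCHAs.

```python
# Sketch: find an email sign-up form on a candidate page and submit an address.
# Hypothetical simplification of the study's bot; selectors and the address
# below are illustrative assumptions.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

INBOX_ADDRESS = "study-inbox@example.org"  # hypothetical collection address

def find_signup_form(page_url: str):
    """Fetch a page and return (action_url, method, email_field_name) or None."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for form in soup.find_all("form"):
        # Look for an email input, either by type or by field name.
        email_field = form.find("input", attrs={"type": "email"}) or form.find(
            "input", attrs={"name": lambda n: n and "email" in n.lower()}
        )
        if email_field:
            action = urljoin(page_url, form.get("action", page_url))
            method = form.get("method", "get").lower()
            return action, method, email_field.get("name", "email")
    return None

def sign_up(page_url: str) -> bool:
    """Submit the inbox address to the first email form found, if any."""
    found = find_signup_form(page_url)
    if not found:
        return False
    action, method, field_name = found
    data = {field_name: INBOX_ADDRESS}
    if method == "post":
        requests.post(action, data=data, timeout=10)
    else:
        requests.get(action, params=data, timeout=10)
    return True
```

A static-HTML approach like this misses forms rendered by JavaScript, which is one reason the manual verification step in the previous tweet matters.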
We analyzed a snapshot of about 100,000 emails collected up through June 25. While the emails are nominally about different topics, most are clearly about fundraising. There is a common pattern: draw readers in with a deceptive subject line and then move to a fundraising request.
Manipulative tactics that nudge people to open the emails are the norm. We classify six types ranging from sensationalism to faking the From: field. The typical sender used at least one such tactic in about 43% of emails. Most senders — 99% — use them at least occasionally.
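For clarity, here’s how those two per-sender statistics relate, sketched in pandas. The input file and column names are hypothetical; assume one row per labeled email, with a boolean marking whether it used at least one of the six tactics.

```python
# Sketch of the per-sender statistics, assuming a labeled table with columns
# "sender" and "manipulative" (True if the email used at least one of the six
# tactics). File and column names are illustrative assumptions.
import pandas as pd

emails = pd.read_csv("labeled_emails.csv")  # hypothetical input

# Fraction of each sender's emails that used at least one tactic.
per_sender_rate = emails.groupby("sender")["manipulative"].mean()

print("median sender's rate:", per_sender_rate.median())          # ~0.43 in our data
print("share of senders who ever use one:", (per_sender_rate > 0).mean())  # ~0.99
```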
Manipulative tactics in donation requests are also common. Vague claims of donation “matching” are ubiquitous. More devious: asking you to fill out a survey and, once you've filled it out, making it look like a donation is required to actually submit what you’ve typed.
You may wonder who falls for these tactics. We hypothesize that older Americans are particularly vulnerable to the kinds of user interface manipulations that we see here. Academic research clearly indicates that digital literacy declines with age.
We’re used to seeing these tricks on shopping and travel websites but manipulation in the political sphere is a threat to democracy. While there’s a lot of worry about targeted advertising, manipulative emails — despite being extremely effective — have flown under the radar.
How do campaigns get your email? When you sign up, your address might also be shared or sold to other campaigns. We found 348 instances of email sharing by 200 entities. The majority of these entities had no privacy policy, and only 25% disclosed their email-sharing practices.
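One standard way to trace sharing, sketched below, is to register a unique address with each entity at signup; if mail for that address later arrives from someone else, the address was shared. The function and data layout here are hypothetical, not necessarily the study’s exact method.

```python
# Sketch: detect email sharing via unique per-entity signup addresses.
# Assumes each incoming message can be reduced to a (signup_entity, sender)
# pair, where signup_entity is recovered from the unique address the mail was
# sent to. Names and data layout are illustrative assumptions.
from collections import defaultdict

def sharing_instances(messages):
    """messages: iterable of (signup_entity, actual_sender) pairs."""
    shared = defaultdict(set)
    for signup_entity, sender in messages:
        if sender != signup_entity:
            # Someone other than the entity we signed up with emailed this
            # address, so the address must have been shared or sold.
            shared[signup_entity].add(sender)
    return shared  # entity -> set of other entities that emailed its address
```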
We focused on manipulative practices because we felt there was a gap in the research, but we have built a rich resource that can be used to answer questions about political communication, campaign strategy, A/B testing, and even the spread of misinformation.
The intellectual superiority of depth over breadth is a pervasive fiction in academia that sustains the culture of fetishizing specialization. I tried to fight this culture early in my career, but realized it was like punching a bag of sand.
An amazing benefit of my privilege is being able to say "I didn't understand that. Could you explain it again?" as many times as necessary without having to worry that people will think I'm stupid.
If you didn't understand something I said, please ask me as many times as necessary. In fact, I'm delighted when this happens. As a professor, knowing when something I explained didn't make sense is extremely valuable feedback that helps me do better.
I'm a tenured computer science professor who looks like what many people expect a tenured computer science professor to look like. The follow-up I get after someone asks "So what do you do?" is nearly always "Oh, you must be really smart."
By the same token, it should be a sobering moment for computer science academia. With few exceptions, work that tries to bring accountability to big tech companies is relegated to the fringes of our discipline. CS these days cozies up to power far more than it speaks truth to it.
There's a lot of concern today about industry funding of specific researchers. That's important, but a 100x deeper problem is that the tech industry warps CS academia's concept of what is even considered a legitimate research topic. This influence is both pervasive and invisible.
Most of the industry influence happens without any money changing hands. Academia's dependence on industry data is one way. Another is that most grad students go on to industry jobs and naturally prefer to work on topics that increase their employability.
Academia forces you to pay a "cleverness tax" if you want to succeed—a tax on your time that goes towards constantly convincing others that your work is clever enough for publication, a PhD, tenure, and promotion. It's one of the things that pushes people out.
Reviewer 3: I see you’ve solved global hunger, but it was always obvious that you could do that by working really hard, so we haven’t learned anything from your paper. Perhaps you could try solving global hunger using only purple foods? That would be novel.
The cleverness tax is higher for scholars whose work doesn’t fit their discipline’s stereotyped notions of what clever work is supposed to look like. You’re often forced to pick between having a real impact on the world and just staying in the game.
I often criticize Twitter, but there are a few things I really appreciate about it, and one of them is threads. I think threads are a pretty cool way to write. Yes, it’s a form of lazy blogging, but I’ve found the laziness to be a virtue more often than it is a sin.
If you’ve tried blogging you know the feeling of staring at a blank page and searching for motivation to write with no idea of whether anyone will find your thoughts interesting. It’s much easier to write a tweet or two and decide to expand the thread if people are interested.
140 characters was a bit silly, but 280 is a decent length for a well-crafted paragraph. Twitter forces me to practice making my text succinct, which has made me a better writer in general. That's great because I write for a living, like many others—even if we don't call ourselves writers.
We like to complain that lawmakers don’t understand tech, but let’s talk for a minute about technologists who don’t understand the law. Actually, it’s much worse than that—many prominent technologists use their platforms to regularly spout willful misinformation about the law.
When any legislation is proposed, a popular game is to claim that it will destroy the Internet, or make machine learning illegal, or something equally implausible. After many fruitless attempts at gentle correction, I’ve realized that this kind of misinfo is deliberate.
Of course there are some dumb laws, rules, and court decisions about Internet tech (and everything else). But that’s no excuse for ignorance. Most of the techies griping about the GDPR, for example, haven’t bothered to read the GDPR or anything authoritative on the topic.