Something that you have to understand about Russian interference is that it’s highly unlikely that they would actually try to manipulate votes or vote tallies. Why? Because they know they can achieve the same or even better outcomes by manipulating voters instead. 🧵
I wrote about this 5 years ago.
As I said then, changing the vote count in one election would yield limited returns. But convincing voters to doubt the legitimacy of election outcomes for the foreseeable future? That’s a return on investment.
There is a strange tendency to talk about Russian interference as if the impact must either be direct — i.e. changing vote totals — or nonexistent. But that’s not the reality of how influence and information operations work, which is through subtle & indirect effects.
Consider, for example, our discourse around contentious issues like transgender rights, immigration, and abortion. How did we get to a point where anyone who even asks questions is liable to be canceled and labeled as a bigot (L) or a Marxist extremist (R)?
I bet most of you can’t really trace how we got to this point. I also bet most of you aren’t entirely comfortable being here. It’s *almost* as if some outside influence shaped how Americans engage with these issues by exaggerating the extremes & erasing the middle ground.
By manipulating our discourse, it becomes very easy to manipulate public perceptions. Social norms can be skewed; unpopular positions can be made to look popular; perceived support for a candidate can be artificially inflated; extreme positions can become normalized; etc.
Over time, you could even convince political candidates & parties to adopt new positions, enforce new ideological purity tests, reject reasonable compromises, ostracize those who don’t conform, etc — decisions that would unknowingly be based on externally-manipulated perceptions.
In other words, you could entirely change the issues that drive people to the polls and convince people to vote for certain candidates and not others. These effects may be indirect, but this is exactly how you influence an election without touching a single voting machine.
Until we start seeing how this happens & recognize the subtle ways in which influence operations exert their effects — and stop allowing ourselves to be used as conduits — we’ll remain highly vulnerable targets and should expect Russia and others to continue exploiting that.
If you’re genuinely surprised that Trump won, may I gently suggest that you reevaluate where you are getting your information from, and be honest with yourself about whether you are willing to listen to people who tell you things you don’t necessarily want to hear.
The information environment on the left is broken, too, just in different ways than on the right. Too many people choose who to follow and who to listen to based on who makes them feel good, not who tells them the truth. In fact, those who told the truth were often ostracized.
I know this because it happened to me. Over & over & over again. I could’ve just chosen to tell you comfortable lies, like many influencers do. It’s scandalously easy to go viral doing that. But unlike them, I wasn’t willing to light our country on fire for clicks & ad revenue.
A viral claim emerged from pro-Trump Twitter on Friday, alleging that locals in NC had assaulted a FEMA director. By Saturday, it was a top Google trend. But it never actually happened.
There were a lot of striking aspects of this story, but more than anything, this was among the clearest examples I’ve seen of how online storytelling can be used to motivate and guide offline violence through the reframing of political violence as a necessary act of survival.
The rumor first emerged on Friday, but really picked up steam later that day and into the early hours of Saturday morning, when it ranked among the top 10 Google searches. The initial tweet was retweeted 20,000+ times & got 100,000+ “likes” & 6.4 million views in the first 19 hours.
He did. Trump & his allies spent years weaponizing the narrative around antifa in order to preemptively justify using violence and force to crack down on anyone who opposed Trump — thus paving the way for Trump to invoke the Insurrection Act on 1/6.
This went on for YEARS; I was one of very few people talking about it for a long, long time. It was in the works since at least 2017 (likely earlier) and it involved politicians, media, think tanks, govt officials, & more.
Trump and his allies were so determined to get antifascists to come out and fight on 1/6 (to cause enough chaos to justify a militarized crackdown) that there was even a plan to have right-wing extremists impersonate antifascists, infiltrate 1/6 protests, and incite violence.
I wrote about cognitive warfare and how the contrived panic over Haitian immigrants hijacked our algorithms, our brains, and our national discourse. weaponizedspaces.substack.com/p/how-the-cont…
During the 2-hr-long presidential debate this week, abortion was the top political topic searched in 49 states. The only exception was Ohio, where immigration was the top-searched issue — a trend driven by searches for topics related to the false claims about Haitian immigrants.
But despite being the top search topic in 49 states, abortion wasn’t the top search topic overall. Immigration — specifically, a false story about Haitian immigrants in Ohio — displaced abortion as the top search topic overall for nearly the entire 2-hour time window.
This was always the inevitable endpoint of the wildly false claims about Haitian immigrants eating dogs & cats. As this person literally admits, it doesn’t matter to them if it’s factually true or not — it only matters that (to them) it *feels* like it *could* be true.
It’s REALLY easy to get people to spread absurd lies about immigrants (or anyone else) if those people already believe terrible things about immigrants *and* are politically/ideologically motivated to persuade others to believe terrible things about immigrants.
We see this all the time; it’s one of the main reasons that fact-checking, at least on its own, so often fails — because people don’t believe lies & rumors simply based on the facts presented, but rather based on their own prior beliefs, motives, identity, emotions, and more.
The CEO of Google — one of the five largest tech companies in existence today — says he has no solution for the company’s AI providing wildly inaccurate information to users.
We need a totally different incentive structure here. We shouldn’t celebrate companies for releasing things faster, or making the most dramatic changes to the status quo. Instead, we should reward those who prioritize rigorous safety testing & built-in guardrails.
Ultimately, the usefulness of AI tools is inherently contingent on being able to use them without producing new and bigger problems along the way. Companies that are rushing just to put things in users’ hands are not producing useful tech; they’re just trying to stay relevant.