The “controversy” over Sydney Sweeney is absurd and largely fake, but there’s one thing worth paying attention to — the tried and tested formula used by the right-wing outrage machine to manufacture liberal fury and then bait the left into making it a reality.
Here’s how it works:
First, invent the outrage. This usually involves picking a neutral or mildly provocative event and finding something about it to frame as offensive to the left. In this case, the trigger was the slogan (“Sydney Sweeney has great jeans”).
Second, flood the zone. Carry out a social media blitz and manufacture the appearance of outrage by gaming the algorithm with repetitive content, which will then get pushed into trending feeds and recommended videos — creating the perception that people actually care about it.
Third, bait the reaction. Tie the “outrage” to a hot-button topic (in this case, fascism and white supremacy) that will provoke a response. Then, when a few people inevitably respond, screenshot their posts and circulate them as “evidence” that liberals really are outraged.
Finally, close the loop. Manufacture the reality that you are claiming already exists. In this case, it centers around a narrative that liberals are humorless, hypertensive, and obsessed with identity politics. With enough bait, you can make the narrative look like reality.
This works in part because it activates two systems at once: 1) the algorithmic brain of the internet, and 2) the emotional brain of the audience.
If you learn how to hack these two systems, you can pretty much manufacture any bullsh*t into reality.
The key here is defensiveness. That’s the trap that you want to bait the other side into. If you can get your opponent to always stay on defense, then you are setting the agenda and they are simply responding to it.
This probably sounds quite familiar, and for good reason. We have seen this cycle play out over and over again: when Mr. Potato Head was supposedly “canceled”; when Dr. Seuss books allegedly got “banned” by the “woke mob”; when Starbucks apparently “declared war on Christmas.”
The next time you see something that looks like this, pause and ask yourself a few questions:
- Where is the original outrage? Can you find it?
- Who benefits from this narrative?
- Is this recycled?
- Does it seem too perfect or too good to be true? If yes, it probably is.
Ultimately, these manufactured outrage cycles are really a battle over control — control over what you see, how you feel, and which stories dominate your mental bandwidth.
Every time the cycle repeats, you lose a little more control over these things.
The only winning move here is not to play the game at all.
You don’t have to be a participant in the right-wing cycle. In fact, you can stop the cycle altogether. But you have to stop responding to the bullsh*t that is manufactured specifically to provoke a response from you.
The full article laying out how the right-wing outrage machine manufactures liberal fury is available here, free to read. This is a lesson that needs to be learned, so please share it. weaponizedspaces.substack.com/p/how-the-righ…
If you find my work helpful, please like, share, and subscribe to my Substack. And if you have suggestions for future article topics, I’m always open to hearing them.
I don’t know why this is so funny to me but I’m cry-laughing at the idea of Fox News running a 5-day media blitz trying to convince us of a secret blood pressure crisis on the left.
I just published the 2nd major piece in my series about algorithmic tyranny — this time, revealing how Trump & the right-wing outrage machine are not just gaming algorithms, but rewriting the rules so they can keep gaming them indefinitely. I call it the Feedback Loop Coup.
Last week, I introduced the concept of Reverse Algorithmic Capture, a tactic used to force platforms to rewrite their rules through political & legal pressure. Feedback Loop Coups are similar, except they exploit *existing* rules to rewire algorithms & seize control of your feed.
We all know by now that platforms operate on the same fundamental principle: the more engagement a post receives, the more the algorithm pushes it into other people's feeds. The faster this engagement occurs, the more "urgent" the algorithm considers it, and the wider it spreads.
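The engagement-velocity principle described above can be sketched as a toy scoring function. This is purely illustrative (no platform publishes its real ranking formula, and `feed_score` is a hypothetical name): the point is simply that identical engagement counts for more when it arrives faster.

```python
# Hypothetical illustration of engagement-velocity ranking.
# Not any platform's actual formula -- just the principle:
# same engagement, faster arrival => higher "urgency" => wider spread.

def feed_score(engagements: int, age_hours: float) -> float:
    """Score a post by engagement per hour, so front-loaded
    engagement (a coordinated blitz) outranks a slow burn."""
    return engagements / max(age_hours, 0.1)  # floor avoids divide-by-zero

# Two posts with the same total engagement:
slow_burn = feed_score(engagements=1000, age_hours=24)
sudden_blitz = feed_score(engagements=1000, age_hours=1)
print(sudden_blitz > slow_burn)  # True: the blitz wins the feed
```

Under this kind of scoring, a small group posting repetitive content in a short burst can out-rank organic conversation, which is exactly the exploit the thread describes.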
I have reported on and studied some incredibly dark topics, but there’s a rabbit hole underneath the practice of AI resurrections (ie, trying to recreate dead people through AI personas) that makes QAnon look like a fun walk through the park. And no one is paying attention.
The practice of creating AI personas to represent dead people — including using their likeness, image, voice, and words — is disturbing enough on its face, and of course very rarely involves consent. But it’s also totally unregulated. Because no one is paying attention.
Researchers are warning that they have seen AI resurrections stalk family members online past the point where those family members want to interact with them. Imagine an AI persona of your dead loved one begging to talk to you… and having to say no.
You’ve probably heard by now that Jim Acosta interviewed an AI depiction of a dead school shooting victim on Monday. Beyond the uncanny valley stuff, there are actual harms associated with so-called griefbots — some researchers even warn of “digital hauntings” that prolong grief.
Beyond the implications for individuals, there are also profound societal implications that are not being addressed, as tech companies push these disturbing creations out there with no policies to regulate them or deal with accompanying harms.
There are also serious questions about how these AI creations are being used. For example, the father of Joaquin Oliver said he plans to create social media accounts for his AI son so he can use his voice to advocate for gun control. This is a totally new form of influence.
To follow up on my recent article about Trump’s covert manipulation of the algorithms that curate our reality, I published a 10-step guide for resisting the tyranny of the algorithm.
You have a lot of power here, but you have to learn to use it.
This is the first piece of advice I offer in the survival guide for resisting algorithmic tyranny. Until you learn the difference between entertainment and education vs covert psychological manipulation, you will remain a slave to the algorithm.
Learn the difference. Act on it.
Algorithms serve up content you like b/c they know that’s how they can hook you and get you to keep clicking, keep scrolling, and keep making them more money. They’re like digital drug dealers — and I’d argue that the effects are more insidious and corrosive than actual drugs.
Just to put a fine point on it, remember that Trump dismantled our entire cybersecurity workforce, so there’s no one around to stop his algorithmic manipulation — and those who are around and might consider doing so know that their job is on the line if they do.
As a reminder, within 2 months of taking office, Trump had fired over half of the government’s AI workforce. The ones who were there to put up guardrails.
And then there’s DOGE. They were given essentially unrestricted access to government systems, including artificial intelligence systems. The way they used those systems was conceptually indistinguishable from the way an AI system would behave. It’s a worst-case scenario.
Today, I published what is probably my most important article in quite a long time. In this piece, I revealed how the Trump administration is covertly manipulating social media platforms & algorithms to boost the administration’s narrative and suppress opposing lines of thought.
Now, there’s no secret program or code that they’re using to do this. It’s being done through a process that I call reverse algorithmic capture™.
Instead of calling for censorship or bans directly, the Trump admin is reshaping the architecture of digital platforms in its favor.
By engineering incentives that guide algorithms to amplify preferred narratives and suppress dissent, they don’t have to issue any direct orders. They can stay at a seemingly safe distance, maintaining plausible deniability, all the while pulling the strings of public algorithms.