You’ve probably heard by now that Jim Acosta interviewed an AI depiction of a dead school shooting victim on Monday. Beyond the uncanny valley stuff, there are actual harms associated with so-called griefbots — some researchers even warn of “digital hauntings” that prolong grief.
Beyond what this does to individuals, there are profound societal implications that aren't being addressed, as tech companies push these disturbing creations into the world with no policies to regulate them or deal with the accompanying harms.
There are also serious questions about how these AI creations are being used. For example, the father of Joaquin Oliver said he plans to create social media accounts for his AI son so the AI can advocate for gun control in Joaquin's voice. This is a totally new form of influence.
One of the few studies to examine the influence of AI resurrection on public perception found that videos of resurrected victims boosted policy support by 25% more than text-only testimonials did, and increased the credibility of the message. That's kind of terrifying.
This isn't even the first time this has happened. Last year, a group of Parkland parents launched a robocalling campaign that used their dead children's voices to lobby Congress. Some of the AI voices even issued what sounded like threats to harass members of Congress.
And this doesn't even touch on privacy, data ownership, the right to be forgotten, the right to own your own legacy, and the other questions raised when someone uses your likeness, including your image and voice, without your consent after your death.
And that brings us to another very uncomfortable question. If, at some point, Joaquin’s parents decide that the AI persona they created is worsening their grief or no longer what they need, what will the experience be like when they have to kill off their dead son’s AI persona?
It gets grim fast. At the very least, we should slow down the production of such profoundly disturbing creations until we have the policies and laws in place to deal with the many uncomfortable questions that come along with a practice like AI revival.
Given current events, it bears repeating that when the Trump admin spreads disinformation, they’re not doing it because they expect you to accept their lies as truth. They do it to erode the notion of truth and destroy our ability to distinguish between truth & falsehood.
The act of lying is bad enough. But selling the idea that truth doesn't matter, or doesn't even exist, is far more corrosive. Democracy rests upon a shared understanding of basic facts. We can't debate issues or hold leaders accountable without these agreed-upon facts.
If the Trump administration can cast doubt on the very existence of an objective truth, they can also undermine the external mechanisms that we rely on to hold government officials accountable & prevent abuses of power.
You’ve probably seen Nick Shirley’s video accusing Somali-run daycares in Minnesota of fraud. Hopefully you’ve also seen some of the follow-ups showing that security footage & operating hours disprove his central claim of “no children.”
X's new "About this account" feature just accidentally revealed a vast network of foreign influence accounts posing as Americans but operating from overseas, the most sweeping public exposure of covert influence on a major platform since 2016. Story is linked below.
Some of these accounts have hundreds of thousands of followers. They present themselves as American patriots, veterans, moms, truck drivers, or lifelong Republicans. Many are explicitly MAGA. But their operators are posting from overseas while shaping U.S. political narratives.
It's not only MAGA accounts, though they make up most of it. Several large anti-Trump accounts were also revealed as foreign-run, as were public health networks. The common denominator is deception: pretending to be American participants in US politics while pushing highly divisive content.
I wrote about a secret tactic shaping what you see online — one almost no one’s talking about. It’s called Moderation Sabotage, and it’s how political digital operatives overwhelm social media defenses so lies go viral before truth can catch up. Link is posted below.🧵
Imagine flooding the system so completely that moderators can’t respond in time. That’s the playbook: swamp the filters, delay enforcement, and let false or incendiary content live long enough to trend.
By the time platforms react, the damage is done.
This isn’t random chaos. It’s deliberate. Trump’s digital allies — the same architects behind Stop the Steal — have refined Moderation Sabotage into an election-year weapon. Rather than hacking the code, they’re hacking the people who keep the code honest.
NEW: AI campaigns are learning to run themselves, and they're using our data to do it. Without stricter safeguards, we may soon see AI controlling the very governing bodies that could enforce those safeguards in the first place.
(Link in next tweet).
I took 2 months off due to health problems, and when I returned, I expected to see the normal disinformation playbook in action. Indeed, that was waiting for me. But so was something else: AI is now running for office & pushing humans out of the process.
We've already seen AI playing a big role in politics, including several attempts to get an AI system elected to office to serve as the decision-maker, with humans simply acting as the vessel for the AI's policies and initiatives. weaponizedspaces.substack.com/p/ai-political…
The “controversy” over Sydney Sweeney is absurd and largely fake, but there’s one thing worth paying attention to — the tried and tested formula used by the right-wing outrage machine to manufacture liberal fury and then bait the left into making it a reality.
Here’s how it works:
First, invent the outrage. This usually involves picking a neutral or mildly provocative event and finding something about it to frame as offensive to the left. In this case, it was the slogan ("Sydney Sweeney has great jeans").
Second, flood the zone. Carry out a social media blitz and manufacture the appearance of outrage by gaming the algorithm with repetitive content, which will then get pushed into trending feeds and recommended videos — creating the perception that people actually care about it.