Good Monday morning to you, curious minds.

What can a year-old report tell us about the evolution of troll farms?

Quite a lot, actually.

This one shows us how AI + social engineering replaced old-school spam with something quieter and harder to trace.

Let’s take a look.
This is a case study from 2024, published by Clemson University’s Media Forensics Hub.

It documents a coordinated bot network using AI to reply, not post — shaping U.S. political discourse from inside the replies.
Digital Yard Signs

By Darren Linvill & Patrick Warren

open.clemson.edu/mfh_reports/7
There were at least 686 accounts. All bots.

Most posed as conservative, Christian, or “relatable” Americans.

They didn’t go viral. They weren’t loud.

They replied to real users — inserting pro-Trump, anti-Biden, and crypto-aligned messages into everyday threads.
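That reply-heavy footprint is itself measurable. As an illustrative sketch (not the report's actual methodology — the field names and threshold here are hypothetical), an account whose activity is almost entirely replies stands out from ordinary users:

```python
# Illustrative sketch: flag accounts whose activity is almost all replies.
# Field names ("in_reply_to") and the 0.9 threshold are hypothetical,
# not taken from the Clemson report.

def reply_ratio(posts):
    """Fraction of an account's posts that are replies to someone else."""
    if not posts:
        return 0.0
    replies = sum(1 for p in posts if p.get("in_reply_to") is not None)
    return replies / len(posts)

def flag_reply_heavy(accounts, threshold=0.9):
    """Return handles whose activity is overwhelmingly replies."""
    return [handle for handle, posts in accounts.items()
            if reply_ratio(posts) >= threshold]

accounts = {
    "girl_mom_patriot": [{"in_reply_to": "123"}, {"in_reply_to": "456"}],
    "normal_user": [{"in_reply_to": None}, {"in_reply_to": "789"}],
}
print(flag_reply_heavy(accounts))  # ['girl_mom_patriot']
```

A real analysis would need far more signals (account age, posting cadence, text similarity), but the basic asymmetry — replies vastly outnumbering original posts — is exactly the pattern the thread describes.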
What made this different was the use of AI.

Early replies were likely written using OpenAI models.

Later ones used Dolphin, an "uncensored" open-source LLM fine-tune with fewer built-in restrictions.

Prompts were tuned to bypass ethical guardrails and generate persuasive text.
These weren’t political accounts in the usual sense.

They looked like this:

– “Girl Mom, 💄 Patriot”
– “Christ is King”
– “Mama of 2, crypto curious”
– “Artist, dog lover, small biz”

Their posts talked about family, prayer, inflation, America.

Then they nudged support for Trump.
This is what the researchers call persona engineering.

The bios. The tone. The verified badges.

Even the profile pictures (some stolen, some AI-generated) were designed to say:

“This person is real. This person is just like you.”
What did they talk about?

– Democratic Senate candidates
– Harris as a potential president
– The WHO’s Pandemic Treaty
– Voter ID laws in North Carolina
– Crypto memes and NFT jokes

Nothing extreme. Just repetition. Soft amplification.
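Repetition across many accounts leaves a fingerprint too. As a hedged sketch (again, not the report's method — the normalization and sample messages are invented for illustration), near-identical talking points posted by multiple handles can be grouped with simple text normalization:

```python
# Illustrative sketch: group near-identical messages across accounts.
# The normalization scheme and sample data are hypothetical, not the
# report's actual detection pipeline.
import re
from collections import defaultdict

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def repeated_messages(messages, min_accounts=3):
    """Return normalized texts posted by at least `min_accounts` accounts."""
    by_text = defaultdict(set)
    for account, text in messages:
        by_text[normalize(text)].add(account)
    return {t: accts for t, accts in by_text.items()
            if len(accts) >= min_accounts}

msgs = [
    ("bot_a", "Biden's economy is failing us!"),
    ("bot_b", "Biden's economy is failing us"),
    ("bot_c", "biden's economy is FAILING us!!"),
    ("real_user", "Gas prices again..."),
]
print(repeated_messages(msgs))
```

One organic user saying something is noise; three hundred accounts saying it in slightly different wrappers is the "soft amplification" the researchers describe.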
Here’s the real shift:

These bots didn’t seek attention.
They weren’t trying to win the timeline.
They were trying to look normal.

They replied to you. They blended in.

They signaled a political mood — not a hard position.
The researchers call this: digital yard signs.

Like the signs people put on their lawns — not to argue, but to show what side they’re on.

To make you feel surrounded. To normalize the message.

That’s what these replies did.
This is social engineering.

Built with AI. At scale.

No sweatshops. No human troll farms.

Just code that knows how to sound American enough and pick a target.

This is the evolution.
The campaign had low engagement.

Almost no likes or reposts.

But that’s not how visibility works anymore.

Replying to someone guarantees you’re seen — by the original user and anyone reading the thread.

Quiet impact. Passive reach.
If you want to understand what influence ops look like now — this is it.

No rageposting. No massive fake follower counts.

Just a believable reply under your tweet, echoing the same talking point again and again.

Until you think it’s common sense.
The full report is worth your time:

Digital Yard Signs: Analysis of an AI Bot Political Influence Campaign on X.

Published Sept 30, 2024. By Darren Linvill & Patrick Warren.

open.clemson.edu/mfh_reports/7
Why does this matter?

Because it didn’t feel like a threat.

It felt like background noise.

A reply here. A comment there. Something about inflation. Or God. Or gas prices.

But it was designed to shift perception — quietly.
The people behind this campaign didn’t want to debate.

They wanted to blend.

That’s how modern influence works:

It doesn’t tell you what to think.

It makes you feel like you already thought it.
That’s why education matters.

We need to teach how these tactics work.

Where they show up. And why they’re effective.

Because if we don’t?

The next wave won’t just influence elections — it’ll rewrite the norms we take for granted.

• • •

Thread by Gavril Ducu 🇷🇴🇩🇪🇺🇲🇳🇱🇪🇺@🇺🇦