We came across part of this botnet in the summer, when it was boosting the pro-Chinese network "Spamouflage."
This, from @conspirator0, is a typical profile. Note the broken sentence and word in the bio. No human typed that... at least not on that Twitter account.
Now compare the bio with the version of Dracula that's online at Tallinn Technical University: lap.ttu.ee/erki/failid/ra…
Coincidence?
Nope, not a coincidence. Here's another from the same collection by @conspirator0, together with a line from the same e-book.
In fact, every bot from this September batch shown in the image has a bio that's a single line from the same e-book version.
Looks like we know what text the botnet was set to scrape.
So did the July batch, but with a + sign in place of each space, presumably because the automation wasn't all it should have been.
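The matching described above can be sketched in a few lines: normalize each bio (undoing the "+" artifact from the July batch) and check it against the lines of the source e-book. This is a minimal illustration, not the actual detection tooling; the bios and book lines below are made up for the example.

```python
# Sketch: flag accounts whose bios are verbatim lines from a known source text.
# All data here is hypothetical, for illustration only.

def normalize(bio: str) -> str:
    """Undo the July batch's artifact ('+' where spaces should be) and
    collapse whitespace so both batches match the same way."""
    return " ".join(bio.replace("+", " ").split()).lower()

def bios_from_source(bios, source_lines):
    """Return the bios that match a line of the source text after normalization."""
    lines = {" ".join(line.split()).lower() for line in source_lines if line.strip()}
    return [b for b in bios if normalize(b) in lines]

# Illustrative usage with made-up data (not real account bios):
book = ["The castle is on the very edge", "of a terrible precipice"]
bios = ["The+castle+is+on+the+very+edge", "Crypto to the moon"]
print(bios_from_source(bios, book))  # → ['The+castle+is+on+the+very+edge']
```

An exact-match check like this only catches verbatim scraping; in practice you'd also want fuzzy matching to survive truncation or minor mangling.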
The one difference between these sets is what they were posting.
@conspirator0 found that the first batch were pornbots, while the later ones were cryptocurrency-themed.
The ones we found in August, which used the same e-book version for their bios, were amplifying the Spamouflage network, which is pro-China / anti-US / anti-Hong Kong protests.
That's the thing with botnets. They can be vacuous, and then switch to propaganda if someone pays. So it's worth being able to spot them.
I wrote this a few years ago, after a little unpleasantness with around 80,000 bots.
BREAKING: @Facebook just took down two foreign influence ops that it discovered going head to head in the Central African Republic, as well as targeting other countries.
There have been other times when multiple foreign ops have targeted the same country.
But this is the first time we’ve had the chance to watch two foreign operations focused on the same country target *each other*.
In the red corner, individuals associated w/ past activity by the Internet Research Agency & previous ops attributed to entities associated w/ Prigozhin.
In the blue corner, individuals associated w/ the French military.
ELECTION THREAD: Today and tonight are going to be a wild time online.
Remember: disinformation actors will try to spread anger or fear any way they can, because they know that people who are angry or scared are easier to manipulate.
Today above all, keep calm.
A couple of things in particular. First, watch out for perception hacking: influence ops that claim to be massively viral even if they’re not.
Trolls lie, and it’s much easier to pretend an op was viral than to make a viral op.
Having studied IO for longer than I care to remember, one of the most frequent comments I’ve heard, and agreed with, is that we need better ways to assess impact on multiple levels and timescales.
As part of that, we need a way to assess live IO in real time.
This paper suggests a way to approximate impact in the moment, when we don't have the full picture (including the IO operators' strategic objectives), or the luxury of taking the time to run polls to measure the effect on public sentiment (hard even in normal circumstances).
This field is rapidly developing, but we need to start somewhere. Without clear context and a comparative scale, there's a danger of IO operators capitalising on fear and confusion to claim an impact they never had.