It appears that foreign influence operations on this platform are picking up, as expected. So here are a few high-level observations. Under normal circumstances I would write a proper longer piece. But in the interest of time, here you go. A few trends, questions, and hypotheses:
Most of the exposed Russian tradecraft is sloppy, and often the engagement on X is fake. But not always. One day after this remarkable WIRED story came out, the U.S. IC confirmed the attribution to Russia to reporters (Confirmation npr.org/2024/10/22/nx-…) wired.com/story/russian-…
The U.S. IC is reacting very fast. It exposes content as foreign malign influence without amplifying it at the same time. That is excellent. It would be even better if there were one central reference point for all announcements, including press-call drops, perhaps with a delay.
Foreign malign influence actors appear to use X as their disinformation platform of choice, for both seeding and amplification. Users then amplify from X to other platforms. If that is correct, then Meta, Google, and Microsoft will have degraded telemetry and a degraded ability to counter these operations.
Also note that X's reportedly weak threat intelligence team, the platform's libertarian moderation approach, and even small changes like hiding which accounts "liked" a post, combined with its retention of many influential users, journalists, and official accounts, make it an ideal proving ground for active measures.
The IC and platforms other than X regularly face a hard choice: expose foreign influence ops or not? Exposure adds amplification, as some large but genuine fringe accounts will then seek out the posts on X to amplify them, claiming to "counter censorship."
My hunch, e.g.: once a fake foreign video gets significant genuine amplification, the IC and even private firms should expose it, thus putting a damper on uncritical mainstream coverage, on the assumption that the follow-on fringe amplification will only reach echo chambers, changing no or very few minds.
Finally, we should not expect foreign (especially Russian) influence operations to stop after the election. We see two types right now: smearing of one side, and undermining the legitimacy of the vote. Depending on what happens next, attacks on legitimacy might increase after the vote.
One more: the main target audience of Russian IO operators is not the U.S. voting public, it's their own funders. They need contracts & budgets. They care more about pretending to be influential than about actual influence. Don't overestimate them. foreignaffairs.com/russia/lies-ru…
• • •
"Influence and Cyber Operations: An Update," the new OpenAI threat intelligence report, out a few hours ago. The document is interesting for one specific reason that hasn't been mentioned in public reporting so far cdn.openai.com/threat-intelli…
This is the money paragraph, from today's OpenAI report "Influence and Cyber Operations: An Update."
tl;dr: AI labs sit in the middle section of adversary kill chains. If staffed and equipped properly, the labs are potentially uniquely well positioned for threat intelligence insights.
The report also has some interesting examples of LLM TTPs (tactics, techniques, and procedures).
JUST OUT — September was a wild month for scholars of modern covert influence operations. No longer do we have to rely on a campaign's digital footprints alone. My first analysis of ~3K leaked internal files and fresh FBI evidence on "Doppelganger."
This video was an internal production by the Social Design Agency, a disinformation firm in Moscow, produced in early August 2023, likely to be viewed by Vladimir Putin. Note the memo reproduced in the description, discussing the video.
Several weeks ago, German media (WDR, NDR, SZ) received a leak of internal files from the biggest Russian disinformation contractor, the Social Design Agency, often referred to as Doppelganger. "Western security officials" confirmed their authenticity. First story by @FlorianFlade et al.
Another exclusive by @tagesschau; this one is excellent. I wish they would excerpt or screenshot the source documents, though. tagesschau.de/investigativ/n…
If I taught my DISINFORMATION class again, and if I wanted to include a session on the most self-defeating, the most unethical, really just the dumbest influence campaigns in history, this one would be close to the top of the list. reuters.com/investigates/s…
Okay, first: the DoD at least deserves some credit for openly admitting, when asked by Reuters, that it was engaged in this kind of covert influence activity.
This is pretty much the textbook example for an unethical influence operation: calling into question the effectiveness of a vaccine (that was later WHO-approved), without evidence, during a deadly pandemic, at a moment of global uncertainty, lockdowns, even panic.
An observation on the Taurus leak that I have not seen elsewhere (could have missed it):
The intercepted recording starts with BG Frank Graefe, in Singapore, saying "Hallo" ("Hello"), to which the response is "Moin Moin Herr General, Hauptmann Irrgang hier" ("Morning, General, Captain Irrgang here"). "Servus." (Both "Moin Moin" and "Servus" are common regional German greetings.)
Irrgang: "I would add you now, if you like."
Graefe: "Thank you."
Then the automated Webex voice: "You are accessing the conference now."
My interpretation: the general, in a hotel room in Singapore, likely did not join via the meeting URL, but called a staff officer who phone-connected him into the meeting. The intercept likely started before he entered the Webex session. That leaves us with the two most probable scenarios: