This past Saturday (August 12th, 2021), a couple thousand accounts tweeted "This is the truth of this world" accompanied by a brief video containing the phrase "Corona virus fake" at more or less the same time. #AstroturfedBullcrap #Spam
2882 tweets from 2875 accounts containing the "Corona virus fake" video and the text "This is the truth of this world" were posted over the span of 13 minutes. All but the first tweet end with a random six-character code, and all were (allegedly) sent via "Twitter for iPhone".
Interestingly, for 1754 of these 2875 accounts (61%), the duplicated tweet was their first tweet ever sent via iPhone (many are Android users). This suggests that some entity other than the account owners posted the video tweets, using iPhones (or emulators) under its own control.
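The detection logic described above (many distinct accounts posting near-identical text from the same source app within minutes, with a random trailing code appended to defeat duplicate filters) can be sketched as a simple grouping pass. The record layout, function name, and thresholds below are illustrative assumptions, not part of any real pipeline:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical tweet records: (account, text, source_app, timestamp).
def find_coordinated_bursts(tweets, window=timedelta(minutes=15), min_accounts=50):
    """Group tweets by (normalized text, source app) and flag groups where
    many distinct accounts posted within a short time window."""
    groups = defaultdict(list)
    for account, text, source, ts in tweets:
        # Strip a trailing six-character alphanumeric code, a common
        # spam fingerprint used to evade duplicate-content filters.
        words = text.split()
        if words and len(words[-1]) == 6 and words[-1].isalnum():
            words = words[:-1]
        groups[(" ".join(words), source)].append((account, ts))
    flagged = []
    for key, posts in groups.items():
        times = sorted(ts for _, ts in posts)
        accounts = {acct for acct, _ in posts}
        if len(accounts) >= min_accounts and times[-1] - times[0] <= window:
            flagged.append((key, len(accounts)))
    return flagged
```

Grouping on the source app as well as the text matters here: 2875 accounts all allegedly tweeting from "Twitter for iPhone" is itself a strong coordination signal.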
The "Corona virus fake" video was first tweeted by @cxie (permanent ID 11176362), an account created in 2007 with only three tweets (all recent). An Internet Archive snapshot from 2014 (web.archive.org/web/2014100304…) shows 256 tweets, however, so the account's earlier tweets have since been purged.
Despite being created in 2007, almost all of @cxie's followers were picked up in July 2021 or later. Interestingly, almost all of the other accounts that posted the "Corona virus fake" video tweet (2843 of 2875, 98.9%) followed it en masse.
These replies about being hacked are consistent with the theory that the spammed video tweets were posted by someone other than the legitimate account holders (possibly the operator of the @cxie account):
In addition to the duplicate video tweets, 490 of those accounts sent identical replies in Chinese to @JHANDS08 in just five minutes. As with the video tweet spam, the first reply is from @cxie via "Twitter Web App" and the remainder were all sent via "Twitter for iPhone".
Some observations regarding @Botted_Likes (permanent ID 1459592225952649221)...
First, "viral posts which don't result in follower growth and have very little engagement in the reply section" is not a useful heuristic for detecting botted likes. Why not?
cc: @ZellaQuixote
"Viral posts that do not result in follower growth" is not a valid test for botting, because posts from large accounts often go viral among those accounts' existing followers without reaching new audiences, resulting in high like/repost counts but little or no follower growth.
"Very little engagement in the reply section" doesn't work for multiple reasons (some topics spur debate and some don't, some people restrict replies, etc.).
Hilariously, @Botted_Likes seems to be ignoring their own criteria, as many of the posts they feature have tons of replies.
As with the banned @emywinst account, the @kamala_wins47 account farms engagement by reposting other people's videos, accompanied by bogus claims that the videos have been deleted from Twitter. These video posts frequently garner massive view counts.
The operator of the @kamala_wins47 account generally follows up these viral video posts with one or more replies advertising T-shirts sold on bestusatee(dot)com. This strategy is identical to that used by the banned @emywinst account.
What's up with all these similarly-worded enthusiastic posts about a Pierre Poilievre rally in Kirkland Lake, and are they all from accounts that are less than a month old? (Spoiler: yes, they are.) #Spamtastic
cc: @ZellaQuixote
An X search for "Pierre Poilievre", "Kirkland Lake", and "refreshing" performed on August 4th, 2024 turned up 151 posts from 151 accounts. All are new accounts, with the oldest having been created less than a month ago, on July 7th, 2024. (Some have since been suspended by X.)
The most intense period of activity for this group of accounts was on August 3rd, 2024, when the repetitive posts about the Poilievre rally were posted. Each account also has at least one earlier post on a random topic; some of these older posts seem to cut off abruptly.
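The account-age check described above (every account in the search results created within the past month) reduces to a one-pass computation over creation dates. The record layout and function name below are assumptions for illustration:

```python
from datetime import datetime

# Hypothetical account records: (handle, created_at). The 30-day
# threshold mirrors the observation that every account in the search
# results was created on or after July 7th, 2024.
def newest_cluster_share(accounts, as_of, max_age_days=30):
    """Return the fraction of accounts created within max_age_days of as_of.

    A share near 1.0 for a large set of accounts posting similar content
    is a strong signal of a batch-created spam network.
    """
    recent = sum(
        1 for _, created in accounts
        if (as_of - created).days <= max_age_days
    )
    return recent / len(accounts)
```

For the Poilievre cluster, this share would be 151/151 = 1.0, far beyond what a search on an organic topic would produce.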
THE GOOD:
• Community Notes successfully placed fact checks on some of the most viral false posts about the shooting
• ~42% of noted posts were subsequently deleted by their authors
• An effort to spread a misidentification of the shooter via Community Notes failed
THE BAD:
• Community Notes fact checks take several hours to show up, which doesn't help much in the initial "breaking news" phase after a violent event
• Many notes never accumulate enough ratings to determine their fate
12 questions for @TheDailyBeast regarding @JakeLahut's false April 2023 story, "How Ron DeSantis Is Taking a Page Out of Nixon’s Playbook", which (among other things) falsely portrays an AI-generated face as a "sexually graphic meme" of a real child.
@JoannaColes @TracyConnor
First, some background and a couple debunks of the false article, for those unfamiliar with the situation:
1. How did the decision to use serial fabulist Steven Jarvis as a source for this article come about?
2. Was anyone employed by or affiliated with The Daily Beast at the time the article was published aware of Steven Jarvis's extensive history of making false claims?
Meet @LovewinnLove (permanent ID 2707213009), a blue-check verified account with a GAN-generated face and a few additional odd characteristics. Despite being created in 2014, this account has no posts prior to October 2023.
cc: @ZellaQuixote
There are multiple indicators that @LovewinnLove's "face" is GAN-generated:
• unrealistic teeth (visible portion of bottom teeth is especially bizarre)
• odd texturing and seams in shirt fabric
• telltale eye positioning (more info in next post)
All unmodified StyleGAN-generated face images share the property that the major facial features (particularly the eyes) appear in the same position in every image. Blending @LovewinnLove's profile image with 99 other GAN-generated faces demonstrates this nicely.
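The blending technique above is a pixel-wise average: if all faces share StyleGAN's canonical alignment, the averaged image shows crisp, well-defined eyes against a blurred background, whereas averaging real photos blurs everything. A minimal sketch, assuming the images have already been decoded into same-size NumPy arrays (the function name is hypothetical):

```python
import numpy as np

def blend_faces(images):
    """Average a list of same-shape HxWx3 uint8 arrays pixel-wise.

    Accumulates in float64 to avoid uint8 overflow, then converts back.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0).astype(np.uint8)
```

With real profile pictures, one would first load and resize each file to a common resolution (e.g. via Pillow: `np.asarray(Image.open(path).resize((1024, 1024)))`) before blending.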