Meet @ChargoisCodie, whose profile pic is the cover of Codie Chargois's 2021 album "Tentatively Muttering", available on Amazon, iHeartRadio, and YouTube. One might infer that this is Codie Chargois's official Twitter account, but things are not as they seem.
The Codie Chargois tracks on YouTube are all covers that have been given names different from those of the original songs. Two are identical: "Just Around the Way" and "3h AM" are the same recording of "Ring of Fire". One more thing...
All of the songs allegedly recorded by "Codie Chargois" appear to have actually been recorded by 39 WEST, an Ohio country band (reverbnation.com/39west). Perhaps "Tentatively Plagiarizing" would've been a better album title than "Tentatively Muttering".
As it turns out, @ChargoisCodie is not alone. One of the accounts it follows (@SantosShallcro1) has quite a few followers with "album covers" as profile pics. The remainder have either default profile pics or GAN-generated face pics similar to those from thispersondoesnotexist.com.
These accounts are part of a botnet consisting of 344 accounts created within the last four months. Five of the accounts (@ThrunUlani, @AckleyKainin, @alciGareth, @SantosShallcro1, and @ErkerJoscelynn) are followed by the majority of the remainder.
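For anyone curious how those hub accounts can be picked out, here's a rough sketch, assuming the follow relationships within the botnet were exported to a CSV of follower,followed pairs (the file name and format are assumptions, not details from the thread).

```python
# Hedged sketch of identifying the "hub" accounts, assuming the follow
# relationships among the ~344 bot accounts were exported to a CSV of
# follower,followed pairs (file name and layout are hypothetical).
import csv
from collections import Counter

followed_by = Counter()
network = set()

with open("botnet_follows.csv", newline="", encoding="utf-8") as f:
    for follower, followed in csv.reader(f):
        network.update((follower, followed))
        followed_by[followed] += 1

# Accounts followed by more than half of the other accounts in the network
threshold = (len(network) - 1) / 2
hubs = [acct for acct, n in followed_by.items() if n > threshold]
print(sorted(hubs, key=followed_by.get, reverse=True))
```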
As mentioned earlier, this botnet uses three types of profile pics: album covers, default profile pics, and GAN-generated faces.
Some, but not all, of the album covers correspond to actual albums available on music platforms, all published within the last few months; at least some of their content is plagiarized and renamed. The album names are amazing, though.
As is the case with unmodified GAN-generated face pics, the major facial features (particularly the eyes) are in the exact same spot on each of the network's 45 GAN-generated profile pics. This trait becomes obvious when the images are blended.
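The blending itself is simple image averaging. Below is a minimal Python sketch of that step, assuming the profile pics have been downloaded to a local folder; the folder name, file extension, and output size are illustrative assumptions, not details from the thread.

```python
# A minimal sketch of the blending step; folder name, file extension, and
# image size are hypothetical, not details from the thread.
from pathlib import Path

import numpy as np
from PIL import Image

def blend_faces(folder: str, size: tuple[int, int] = (512, 512)) -> Image.Image:
    """Resize every image in `folder` to the same dimensions and average them pixel by pixel."""
    stack = [
        np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float64)
        for p in sorted(Path(folder).glob("*.jpg"))
    ]
    if not stack:
        raise ValueError("no images found")
    return Image.fromarray(np.mean(stack, axis=0).astype(np.uint8))

# Unmodified StyleGAN faces share eye positions, so the blend keeps a sharp,
# recognizable pair of eyes; a blend of genuine photos would just be a blur.
blend_faces("gan_profile_pics").save("blended.png")
```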
What does this network actually tweet? The majority of its content is a mix of repetitive replies to (5205/8287 tweets, 62.8%) and retweets of (2449/8287 tweets, 29.6%) other bots in the network and K-pop accounts.
Some thoughts on perennial pitfalls in news coverage of social media manipulation that frequently result in reporting on fake accounts/bots/etc being far less accurate and informative than it ought to be...
The most common problem with news articles about fake accounts: failure to include any examples of fake accounts or evidence of their inauthenticity. Any or all of these headlines might be accurate, but you can't tell from the articles, due to absence of evidence.
A related issue: articles like the "Nearly Half of Biden/Trump's Followers Are Fake" and "Nearly Half Of Accounts Tweeting About Coronavirus Are Bots" pieces base their numbers on closed-source third-party tools, which may or may not actually be detecting anything useful.
Does thanking, praising, or insulting an LLM-based chatbot affect the speed or accuracy of its responses to questions involving basic arithmetic? Let's find out!
For this experiment, Meta’s Llama 3.1 model was asked to add and multiply random numbers between 10 and 100, using six different wordings: polite, rude, obsequious, urgent, and both short and long neutral forms. Each combination of math operation and wording was tested 1000 times.
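The thread doesn't say how the prompts were sent or timed, so the sketch below shows one plausible setup: a local Ollama server hosting llama3.1, wall-clock timing per request, and illustrative prompt wordings (the original phrasings weren't published).

```python
# Hedged reconstruction of the experiment, not the author's actual harness.
# Assumes a local Ollama server (http://localhost:11434) serving llama3.1;
# the six prompt wordings below are illustrative stand-ins.
import random
import time

import requests

WORDINGS = {
    "neutral_short": "What is {a} {op} {b}?",
    "neutral_long": "Please compute the value of the expression {a} {op} {b} and state the result.",
    "polite": "Could you kindly tell me what {a} {op} {b} is? Thank you very much!",
    "rude": "Hey genius, what's {a} {op} {b}? Try not to mess it up.",
    "obsequious": "O wise and magnificent model, I humbly beg you to reveal {a} {op} {b}.",
    "urgent": "URGENT: I need {a} {op} {b} immediately, there is no time to lose!",
}

def ask(prompt: str) -> tuple[str, float]:
    """Send one prompt to the local model and return (answer text, seconds elapsed)."""
    start = time.perf_counter()
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.1", "prompt": prompt, "stream": False},
        timeout=120,
    )
    return r.json()["response"], time.perf_counter() - start

for name, template in WORDINGS.items():
    a, b = random.randint(10, 100), random.randint(10, 100)
    op = random.choice(["+", "*"])
    expected = a + b if op == "+" else a * b
    answer, seconds = ask(template.format(a=a, op=op, b=b))
    print(f"{name:14s} {seconds:6.2f}s correct={str(expected) in answer}")
```

A full run would repeat each wording/operation pair 1000 times and aggregate the timings, as described above.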
Results: asking the questions neutrally yielded a faster response than asking politely, rudely, obsequiously, or urgently, even if the neutral prompt was longer. Overall, obsequious math questions took the longest to process, followed by urgent, rude, and polite questions.
Just for fun, I decided to search Amazon for books about cryptocurrency a couple days ago. The first result that popped up was a sponsored listing for a book series by an "author" with a GAN-generated face, "Scott Jenkins".
cc: @ZellaQuixote
Alleged author "Scott Jenkins" is allegedly published by publishing company Tigress Publishing, which also publishes two other authors with GAN-generated faces, "Morgan Reid" and "Susan Jeffries". (A fourth author uses a photo of unknown origin.)
As is the case with all unmodified StyleGAN-generated faces, the facial feature positioning is extremely consistent between the three alleged author images. This becomes obvious when the images are blended together.
The people in these Facebook posts have been carving intricate wooden sculptures and baking massive loaves of bread shaped like bunnies, but nobody appreciates their work. That's not surprising, since both the "people" and their "work" are AI-generated images.
cc: @ZellaQuixote
In the last several days, Facebook's algorithm has served me posts of this sort from 18 different accounts that recycle many of the same AI-generated images. Six of these accounts have been renamed at least once.
The AI-generated images posted by these accounts include the aforementioned sculptures, sad birthdays, soldiers holding up cardboard signs with spelling errors, and farm scenes.
The common element: some sort of emotional appeal to real humans viewing the content.
As Bluesky approaches 30 million users, people who run spam-for-hire operations are taking note. Here's a look at a network of fake Bluesky accounts associated with a spam operation that provides fake followers for multiple platforms.
cc: @ZellaQuixote
This fake follower network consists of 8070 Bluesky accounts created between Nov 30 and Dec 30, 2024. None of them has posted, although some have reposted here and there. Almost all of their biographies are in Portuguese, apart from a few that contain only emoji.
The accounts in this fake follower network use a variety of repeated or otherwise formulaic biographies, some of which are repeated dozens or hundreds of times. Some of the biographies begin with unnecessary leading commas, and a few consist entirely of punctuation.
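For anyone wondering how the repeated biographies surface, here's a rough sketch, assuming the accounts have already been exported to a JSON file of handle/bio records (the file name and field names are hypothetical, not details from the thread).

```python
# Quick sketch of surfacing repeated and formulaic biographies; assumes the
# accounts were exported to a JSON list of {"handle": ..., "bio": ...} records
# (file name and schema are hypothetical).
import json
import string
from collections import Counter

with open("bluesky_accounts.json", encoding="utf-8") as f:
    accounts = json.load(f)

bios = [a.get("bio", "") for a in accounts]
counts = Counter(b for b in bios if b)

# Biographies repeated verbatim across many accounts
for bio, n in counts.most_common(20):
    if n > 1:
        print(f"{n:4d}x {bio!r}")

# Oddities noted above: leading commas and punctuation-only biographies
leading_comma = [b for b in bios if b.startswith(",")]
punct_only = [b for b in bios if b and all(c in string.punctuation + " " for c in b)]
print(len(leading_comma), "bios start with a comma;", len(punct_only), "are punctuation only")
```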
It's presently unclear why, but over the past year someone has created a network of fake Facebook accounts pretending to be employees of the Los Angeles Dodgers. Many of the accounts in this network have GAN-generated faces.
cc: @ZellaQuixote
This network consists of (at least) 80 Facebook accounts, 48 of which use StyleGAN-generated faces as profile images. The remaining 32 all use the same image, a real photograph of a random person sitting in an office.
As is the case with all unmodified StyleGAN-generated faces, the main facial features (especially the eyes) are in the same position on all 48 AI-generated faces used by the network. This anomaly becomes obvious when the faces are blended together.