Unmodified GAN-generated face pics have the telltale trait that the major facial features (particularly the eyes) are in the same position on every image, and @Gabby_ucm's profile pic is no exception. There are also anomalies in the teeth, clothing, and hair of @Gabby_ucm's pic.
More on GAN-generated images and their use on Twitter in this set of threads:
Although the use of a GAN-generated face is not necessarily deceptive, it becomes clear when looking at @Gabby_ucm's replies to appearance-based compliments that the operator of the account is quite willing to deceive its followers into believing the image depicts a real person.
In addition to the fake profile pic, @Gabby_ucm also posted this image and encouraged their followers to believe that it depicts the person running the account. However, the image is an altered version of a photo snagged from a Japanese hair salon's Instagram page.
Although @Gabby_ucm claimed in late 2021 that the plagiarized photo was taken "recently", it is in fact from 2019 or earlier - and the allegedly US-based @Gabby_ucm's claim to have never visited Japan would seem to preclude appearing in a photo shoot at a Japanese hair salon.
How did @Gabby_ucm get 20,000 followers so quickly? There are at least two reasons, one of which is follow trains. The @Gabby_ucm account has been listed on at least 56 follow trains since December 1st, 2021, and has posted several trains of its own.
The second reason for @Gabby_ucm's rapid growth: it's the recreation of a banned account with many followers (@gabby_UCMaroon) that used the same GAN-generated face pic. @Gabby_ucm says the old account was banned for platform manipulation, but claims not to know what that means.
The suspension for platform manipulation isn't particularly surprising, however, as the operator of the @Gabby_ucm account was running a second account (@GabbyRedux, now suspended) at the same time as the suspended @gabby_UCMaroon, and that account used a stolen profile pic.
As with the current @Gabby_ucm account, the old @gabby_UCMaroon account falsely presented a stolen photo as an image of the person running the account on at least one occasion.
This account has now been suspended.
Just for fun, I decided to search Amazon for books about cryptocurrency a couple days ago. The first result that popped up was a sponsored listing for a book series by an "author" with a GAN-generated face, "Scott Jenkins".
cc: @ZellaQuixote
Alleged author "Scott Jenkins" is allegedly published by publishing company Tigress Publishing, which also publishes two other authors with GAN-generated faces, "Morgan Reid" and "Susan Jeffries". (A fourth author uses a photo of unknown origin.)
As is the case with all unmodified StyleGAN-generated faces, the facial feature positioning is extremely consistent between the three alleged author images. This becomes obvious when the images are blended together.
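For illustration, here's a rough sketch of the blending step: averaging several unmodified StyleGAN face images pixel by pixel. Because StyleGAN places the major facial features (especially the eyes) at the same coordinates, the eyes stay sharp in the composite while everything else blurs. The filenames below are hypothetical placeholders, not the actual images from the listing.

```python
# Minimal sketch: blend several StyleGAN-generated face images by averaging them.
# The consistent eye placement makes the eyes remain crisp in the composite.
import numpy as np
from PIL import Image

paths = ["scott_jenkins.jpg", "morgan_reid.jpg", "susan_jeffries.jpg"]  # hypothetical filenames

# Resize to a common size, convert to float arrays, and average pixel-by-pixel
arrays = [
    np.asarray(Image.open(p).convert("RGB").resize((1024, 1024)), dtype=np.float64)
    for p in paths
]
blend = np.mean(arrays, axis=0).astype(np.uint8)

Image.fromarray(blend).save("blended_faces.png")
```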
The people in these Facebook posts have been carving intricate wooden sculptures and baking massive loaves of bread shaped like bunnies, but nobody appreciates their work. That's not surprising, since both the "people" and their "work" are AI-generated images.
cc: @ZellaQuixote
In the last several days, Facebook's algorithm has served me posts of this sort from 18 different accounts that recycle many of the same AI-generated images. Six of these accounts have been renamed at least once.
The AI-generated images posted by these accounts include the aforementioned sculptures, sad birthdays, soldiers holding up cardboard signs with spelling errors, and farm scenes.
The common element: some sort of emotional appeal to real humans viewing the content.
As Bluesky approaches 30 million users, people who run spam-for-hire operations are taking note. Here's a look at a network of fake Bluesky accounts associated with a spam operation that provides fake followers for multiple platforms.
cc: @ZellaQuixote
This fake follower network consists of 8070 Bluesky accounts created between Nov 30 and Dec 30, 2024. None has posted, although some have reposted here and there. Almost all of their biographies are in Portuguese, with the exception of a few whose biographies only contain emoji.
The accounts in this fake follower network use a variety of repeated or otherwise formulaic biographies, some of which are repeated dozens or hundreds of times. Some of the biographies begin with unnecessary leading commas, and a few consist entirely of punctuation.
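As a rough sketch of how repeated biographies like these can be surfaced, the snippet below tallies biography text across the collected accounts. It assumes the 8070 account records have already been gathered into a JSON file with a "description" field; the file name and structure are assumptions for illustration, not the actual dataset.

```python
# Count how often each biography string is reused across the collected accounts.
import json
from collections import Counter

with open("bluesky_fake_followers.json", encoding="utf-8") as f:  # hypothetical file
    accounts = json.load(f)

bio_counts = Counter(a.get("description", "").strip() for a in accounts)

# Biographies shared by many accounts are a strong formulaic-content signal
for bio, count in bio_counts.most_common(20):
    if count > 1 and bio:
        print(f"{count:5d}  {bio[:60]}")
```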
It's presently unclear why, but over the past year someone has created a network of fake Facebook accounts pretending to be employees of the Los Angeles Dodgers. Many of the accounts in this network have GAN-generated faces.
cc: @ZellaQuixote
This network consists of (at least) 80 Facebook accounts, 48 of which use StyleGAN-generated faces as profile images. The remaining 32 all use the same image, a real photograph of a random person sitting in an office.
As is the case with all unmodified StyleGAN-generated faces, the main facial features (especially the eyes) are in the same position on all 48 AI-generated faces used by the network. This anomaly becomes obvious when the faces are blended together.
None of these chefs exist, as they're all AI-generated images. This hasn't stopped them from racking up lots of engagement on Facebook by posting AI-generated images of food (and occasional thoughts and prayers), however.
cc: @ZellaQuixote
These "chefs" are part of a network of 18 Facebook pages with names like "Cook Fastly" and "Emily Recipes" that continually post AI-generated images of food. While many of these pages claim to be US-based, they are have admins in Morocco per Facebook's Page Transparency feature.
Between them, these 18 Facebook "chef" pages have posted AI-generated images of food at least 36,000 times in the last five months. Not all of the images are unique; many have been posted repeatedly, sometimes by more than one of the alleged chefs.
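One way to spot images recycled across pages like these is perceptual hashing; this is an illustrative approach, not necessarily how the duplicates in this thread were found. The sketch assumes the posted images have been downloaded into one folder per page, a layout chosen here purely for the example.

```python
# Group downloaded images by perceptual hash and flag hashes shared by multiple pages.
from pathlib import Path
from collections import defaultdict

import imagehash
from PIL import Image

hashes = defaultdict(list)  # phash string -> list of (page, filename)
for path in Path("chef_pages").glob("*/*.jpg"):  # hypothetical layout: chef_pages/<page>/<image>.jpg
    h = imagehash.phash(Image.open(path))
    hashes[str(h)].append((path.parent.name, path.name))

# Report images that appear on more than one of the "chef" pages
for h, posts in hashes.items():
    pages = {page for page, _ in posts}
    if len(pages) > 1:
        print(h, sorted(pages))
```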
Can simple text generation bots keep sophisticated LLM chatbots like ChatGPT engaged indefinitely? The answer is yes, which has some potentially interesting implications for distinguishing between conversational chatbots and humans.
For this experiment, four simple chatbots were created:
• a bot that asks the same question over and over
• a bot that replies with random fragments of a work of fiction
• a bot that asks randomly generated questions
• a bot that repeatedly asks "what do you mean by <X>?"
The output of these chatbots was used as input to an LLM chatbot based on the 8B version of the Llama 3.1 model. Three of the four bots were successful at engaging the LLM chatbot in a 1000-message exchange; the only one that failed was the repetitive question bot.
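Here's a minimal sketch of one of the simple bots, the one that repeatedly asks "what do you mean by <X>?", wired up to a Llama 3.1 8B model. The thread doesn't specify how the model was served; the Ollama Python client used below is an assumption for illustration only.

```python
# Sketch of the "what do you mean by <X>?" bot conversing with a Llama 3.1 8B chatbot.
import random
import re

import ollama  # assumed serving stack; the thread doesn't say how the model was hosted

MODEL = "llama3.1:8b"
history = [{"role": "user", "content": "Hello! What are you thinking about today?"}]

for turn in range(1000):
    # The LLM chatbot replies to the simple bot's latest message
    reply = ollama.chat(model=MODEL, messages=history)["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    # (In practice the history may need trimming to fit the model's context window.)

    # The simple bot grabs a random three-word fragment of the reply and asks about it
    words = re.findall(r"[A-Za-z']+", reply)
    if len(words) >= 3:
        start = random.randrange(len(words) - 2)
        fragment = " ".join(words[start:start + 3])
    else:
        fragment = reply.strip() or "that"
    history.append({"role": "user", "content": f'What do you mean by "{fragment}"?'})
```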