First, what is a bot? The Oxford English Dictionary defines "bot" as "an autonomous program on the internet... that can interact with systems or users". A Twitter bot is simply an automated Twitter account (operated by a piece of computer software rather than a human).
Although much public discussion of "bots" centers on malicious or spammy accounts, there are plenty of legitimate uses of automation. Many news outlets use automation tools to automatically share their articles and videos on Twitter, for example.
There are a variety of fun and useful Twitter bots that freely disclose that they're automated. Some examples:
Although forbidden by Twitter's automation rules (help.twitter.com/en/rules-and-p…), spam networks (large groups of accounts operated by a single entity tweeting the same stuff) are a frequent use of automation.
What do spambots spam? Cryptocurrency has been a hot topic for automated spam networks in recent months, with networks ranging in size from a few dozen accounts to tens of thousands. Some examples:
Sometimes automated spam exists alongside organic activity on the same group of accounts. An example of this is the now-defunct Power10 automation tool, which caused its users to automatically retweet large numbers of pro-Trump tweets. businessinsider.com/power10-activi…
Services that sell retweets, likes, and follows (all of which are TOS violations) frequently use botnets to provide the aforementioned retweets, likes, and follows. A couple of examples:
The above is by no means a comprehensive survey of every bot or every type of bot on Twitter, but is a decent rough overview of common uses of automation, both legitimate and illicit.
Onward to the next topic: types of accounts that people think are bots, but aren't.
Folks who participate in retweet rooms (where everyone retweets every tweet shared in the room) often get mistaken for bots due to their high tweet volume, as users who are in multiple rooms often retweet hundreds of tweets a day. politico.eu/article/twitte…
Copypastas (cases where real humans copy and paste the same block of text with few or no alterations) frequently get mistaken for bot activity, as identical tweets appear on multiple accounts.
In a similar vein, accounts that share a lot of news articles or YouTube videos by using the "Share" buttons on the respective sites get mistaken for bots because the article/video title is generally used as the tweet text, resulting in identical (but not automated) tweets.
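The identical-tweet patterns described in the last two posts can be surfaced by simple grouping. A minimal sketch (the tweet records and field names here are hypothetical; and remember, identical text alone doesn't prove automation):

```python
from collections import defaultdict

def find_identical_tweets(tweets, min_accounts=3):
    """Group tweets by normalized text and return texts posted
    verbatim by several distinct accounts. Identical text may be
    a copypasta or a "Share" button artifact, not automation."""
    groups = defaultdict(set)
    for tweet in tweets:
        # collapse whitespace and lowercase to catch trivial variations
        text = " ".join(tweet["text"].split()).lower()
        groups[text].add(tweet["account"])
    return {text: accounts for text, accounts in groups.items()
            if len(accounts) >= min_accounts}

# hypothetical sample data
tweets = [
    {"account": "a1", "text": "Breaking: Example headline"},
    {"account": "a2", "text": "Breaking:  example headline"},
    {"account": "a3", "text": "breaking: example headline"},
    {"account": "a4", "text": "something else entirely"},
]
print(find_identical_tweets(tweets))
```

Whatever this turns up still needs human judgment: the whole point of the last two posts is that identical tweets are often real people.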
Impostor accounts and fake personas also frequently get erroneously referred to as "bots", although many of these are human-operated rather than automated. buzzfeednews.com/article/craigs…
How does one know if a given account is a bot? Unfortunately, there's no quick way to tell, and in many cases it may be impossible to be certain without finding a large number of accounts that belong to the same network. Here are a couple of things that sometimes work...
Every tweet is labeled with the name of the app it was posted with, which can be used to identify automated tweets. Most human tweets are sent via Twitter Web App, Twitter for iPhone/iPad/Android, or TweetDeck, and most tweets sent with other apps are automated.
Anomalies in timing can sometimes indicate automation as well: constant activity without breaks for sleep, for example. The accounts described in the linked thread are examples (and have other timing anomalies as well). That said, a few caveats:
• some 24/7 accounts are run by multiple people rather than being automated
• web browsers and phones can be automated, so some accounts that post via web/smartphone are actually bots
• etc
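The sleep-gap heuristic above can be sketched by binning an account's posts by hour of day (the timestamps here are hypothetical):

```python
from collections import Counter
from datetime import datetime, timedelta

def active_hours(timestamps):
    """Count posts per hour of day; a human account usually shows a
    multi-hour quiet window, while round-the-clock activity with no
    break *may* indicate automation (see the caveats above)."""
    return Counter(ts.hour for ts in timestamps)

def has_sleep_gap(hour_counts, gap_hours=4):
    """True if there's a run of at least `gap_hours` consecutive
    post-free hours, treating the clock as circular past midnight."""
    quiet = sorted(h for h in range(24) if hour_counts.get(h, 0) == 0)
    if len(quiet) < gap_hours:
        return False
    doubled = quiet + [h + 24 for h in quiet]  # handle wraparound
    run = best = 1
    for prev, cur in zip(doubled, doubled[1:]):
        run = run + 1 if cur == prev + 1 else 1
        best = max(best, run)
    return best >= gap_hours

# hypothetical account posting every 30 minutes, around the clock
stamps = [datetime(2024, 1, 1) + timedelta(minutes=30 * i) for i in range(96)]
print(has_sleep_gap(active_hours(stamps)))  # no quiet window -> False
```

Keep the caveats in mind: a round-the-clock account may simply be run in shifts by multiple people.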
(just to clarify - the articles linked in this tweet aren't erroneously claiming that the fake accounts described are bots; they're accurate articles about fake/impostor accounts that have sometimes been mistaken for bots.)
Also, be wary of overly simplistic checklists that purport to be advice on "bot detection". Most of the stuff on this list has nothing to do with automation, and will be of little to no use in determining whether or not a given account is a bot.
It's New Year's Eve, and a bunch of politics enthusiasts with GAN-generated faces are enthusiastically replying to a variety of posts with similarly-worded replies. #NewYearShenaniGANs
cc: @ZellaQuixote
The politics enthusiasts are part of a spam network consisting of (at least) 575 accounts created between May and December 2023 with GAN-generated faces. Many of their handles, such as @Maairiuieinaaa and @eJooeiaAoneueer, contain long strings of vowels.
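A quick heuristic for flagging the vowel-heavy handles in this network might look like the following sketch (the thresholds are arbitrary, and this obviously won't generalize to other spam networks):

```python
import re

def vowel_heavy(handle, min_run=4, min_ratio=0.6):
    """Flag handles that contain a long run of consecutive vowels
    or that are mostly vowels overall."""
    letters = re.sub(r"[^A-Za-z]", "", handle).lower()
    if not letters:
        return False
    longest_run = max(
        (len(m.group()) for m in re.finditer(r"[aeiou]+", letters)),
        default=0)
    ratio = sum(c in "aeiou" for c in letters) / len(letters)
    return longest_run >= min_run or ratio >= min_ratio

# the first two handles are from the network described above
print([h for h in ["Maairiuieinaaa", "eJooeiaAoneueer", "normal_user42"]
       if vowel_heavy(h)])
```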
@Maairiuieinaaa @eJooeiaAoneueer All 575 of these accounts use StyleGAN-generated faces as profile images. Some of these, such as @MauMoiagaia's profile image, contain a tiny "StyleGAN 2 (Karras et al.)" watermark in the lower right corner.
It's a great day to look at a network of inauthentic accounts that post identical AI art images (with a side of good old fashioned T-shirt spam).
cc: @ZellaQuixote
This network consists of 24 X accounts. 12 of these accounts were created in the latter half of 2023 and have female avatars, while the other 12 were created in 2013 or earlier and have male avatars.
The 12 accounts with female avatars and 2023 creation dates regularly post AI-generated art images, and these image posts are quickly reposted by other accounts in the network (both female and male). The AI-generated images are often duplicated across accounts.
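Byte-identical copies of the reposted images can be found by hashing file contents; a minimal sketch (the image payloads here are stand-ins for real image data):

```python
import hashlib
from collections import defaultdict

def find_duplicate_images(images):
    """Group images by SHA-256 of their raw bytes and return groups
    seen on more than one account. This only catches byte-identical
    copies; near-duplicates would need a perceptual hash instead."""
    groups = defaultdict(list)
    for account, data in images:
        groups[hashlib.sha256(data).hexdigest()].append(account)
    return [accounts for accounts in groups.values() if len(accounts) > 1]

# hypothetical image payloads
images = [
    ("artist_account_1", b"\x89PNG...image-A"),
    ("reposter_account", b"\x89PNG...image-A"),  # byte-identical repost
    ("artist_account_2", b"\x89PNG...image-B"),
]
print(find_duplicate_images(images))
```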
Meet @ImJamesMiller (permanent ID 1371651462153994242), an account with a GAN-generated face, 172K followers, and no tweets prior to two days ago. What's up with that?
cc: @ZellaQuixote
As it turns out, @ImJamesMiller wasn't always named @ImJamesMiller. In June, the account was named @/IamJimCaviezel in an apparent attempt to impersonate Sound of Freedom actor Jim Caviezel.
@ImJamesMiller Multiple prominent users appear to have accepted the fake Jim Caviezel account as legitimate, including Texas Congressman Brian Babin, right-wing influencer/ex-Game of Thrones blogger Jack Posobiec, and recently indicted ex-Assistant Attorney General Jeff Clark.
It's a great day to look at a network of Bluesky spam accounts with randomized names. #SundaySpam
cc: @ZellaQuixote
This spam network consists of (at least) 401 accounts, all of which were created (or added to the Bluesky app view) in August 2023. These accounts do not follow each other; rather, each one follows a small number of popular Bluesky accounts.
The accounts in this network cycle rhythmically between posting three types of content:
• reposts
• posts containing links to news articles
• posts containing links to news articles accompanied by images
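The rhythmic cycling between the three post types can be checked by labeling each post and testing for a repeating sequence. A sketch with hypothetical post records:

```python
def post_type(post):
    """Classify a post into one of the three types described above."""
    if post.get("is_repost"):
        return "repost"
    if post.get("has_link") and post.get("has_image"):
        return "link+image"
    if post.get("has_link"):
        return "link"
    return "other"

def is_cyclic(sequence, period=3):
    """True if the sequence of post types repeats with the given period."""
    return len(sequence) > period and all(
        sequence[i] == sequence[i % period] for i in range(len(sequence)))

# a hypothetical account cycling through the three types, three times
posts = [
    {"is_repost": True},
    {"has_link": True},
    {"has_link": True, "has_image": True},
] * 3
labels = [post_type(p) for p in posts]
print(is_cyclic(labels))  # True
```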
Meet @thisisorange, a Twitter account created in February 2022 with a gold "verified organization" badge, thousands of batch-created fake followers, and a couple of other interesting traits.
Verified organizations on Twitter can verify affiliated accounts (employees, teams, brand names, etc), which receive blue checkmarks as well as an organization badge (help.twitter.com/en/using-twitt…). The @thisisorange account has thousands of affiliates, mostly cryptocurrency accounts.
How did this come about? The website linked on @thisisorange's profile (orange dot associates) apparently allows one to become an affiliate simply by providing a Twitter account and a cryptocurrency wallet.