First, what is a bot? The Oxford English Dictionary defines "bot" as "an autonomous program on the internet... that can interact with systems or users". A Twitter bot is simply an automated Twitter account (operated by a piece of computer software rather than a human).
Although much public discussion of "bots" centers on malicious or spammy accounts, there are plenty of legitimate uses of automation. Many news outlets use automation tools to automatically share their articles and videos on Twitter, for example.
There are a variety of fun and useful Twitter bots that freely disclose that they're automated. Some examples:
Although forbidden by Twitter's automation rules (help.twitter.com/en/rules-and-p…), spam networks (large groups of accounts operated by a single entity tweeting the same stuff) are a frequent use of automation.
What do spambots spam? Cryptocurrency has been a hot topic for automated spam networks in recent months, with networks ranging in size from a few dozen accounts to tens of thousands. Some examples:
Sometimes automated spam exists alongside organic activity on the same group of accounts. An example of this is the now-defunct Power10 automation tool, which caused its users to automatically retweet large numbers of pro-Trump tweets. businessinsider.com/power10-activi…
Services that sell retweets, likes, and follows (all of which are TOS violations) frequently use botnets to provide the aforementioned retweets, likes, and follows. A couple of examples:
The above is by no means a comprehensive survey of every bot or every type of bot on Twitter, but is a decent rough overview of common uses of automation, both legitimate and illicit.
Onward to the next topic: types of accounts that people think are bots, but aren't.
Folks who participate in retweet rooms (where everyone retweets every tweet shared in the room) often get mistaken for bots due to their high tweet volume, as users who are in multiple rooms often retweet hundreds of tweets a day. politico.eu/article/twitte…
Copypastas (cases where real humans copy and paste the same piece of text with few or no alterations) frequently get mistaken for bot activity, as identical tweets appear on multiple accounts.
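Copypasta detection is straightforward in principle: group tweets by (lightly normalized) text and look for the same text on many distinct accounts. Here's a minimal sketch; the `(account, text)` input shape, the normalization, and the `min_accounts` threshold are all illustrative choices, not from any specific API.

```python
from collections import defaultdict

def find_copypastas(tweets, min_accounts=3):
    """Return texts that appear verbatim (modulo whitespace/case)
    on at least `min_accounts` distinct accounts.

    `tweets` is a list of (account, text) pairs.
    """
    accounts_by_text = defaultdict(set)
    for account, text in tweets:
        # Light normalization: collapse whitespace, ignore case
        key = " ".join(text.split()).lower()
        accounts_by_text[key].add(account)
    return {
        text: sorted(accs)
        for text, accs in accounts_by_text.items()
        if len(accs) >= min_accounts
    }
```

Note that this only tells you the same text was posted repeatedly; as the thread points out, that's consistent with both bots and humans pasting the same thing.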
In a similar vein, accounts that share a lot of news articles or YouTube videos by using the "Share" buttons on the respective sites get mistaken for bots because the article/video title is generally used as the tweet text, resulting in identical (but not automated) tweets.
Impostor accounts and fake personas also frequently get erroneously referred to as "bots", although many of these are human-operated rather than automated. buzzfeednews.com/article/craigs…
How does one know if a given account is a bot? Unfortunately, there's no quick way to tell, and in many cases it may be impossible to be certain without finding a large number of accounts that belong to the same network. Here are a couple of things that sometimes work...
Every tweet is labeled with the software used to send it, which can help identify automated tweets. Most human tweets are sent with Twitter Web App, Twitter for iPhone/iPad/Android, or TweetDeck, and most tweets sent with other apps are automated.
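One way to apply the source-label heuristic is to tally the source apps across an account's tweets and see what share comes from outside the common human-operated clients. A minimal sketch, assuming you already have the per-tweet source labels in hand (the app list below is illustrative, not exhaustive):

```python
from collections import Counter

# Apps that overwhelmingly represent manual human tweeting.
# Illustrative list; anything outside it merits a closer look.
COMMON_HUMAN_APPS = {
    "Twitter Web App",
    "Twitter for iPhone",
    "Twitter for iPad",
    "Twitter for Android",
    "TweetDeck",
}

def source_app_summary(sources):
    """Tally source apps for an account's tweets and report the
    fraction sent from apps outside the common human clients.

    `sources` is a list of source-app labels, one per tweet.
    """
    counts = Counter(sources)
    other = sum(n for app, n in counts.items() if app not in COMMON_HUMAN_APPS)
    return counts, other / max(len(sources), 1)
```

A high "other" fraction is a hint, not a verdict; plenty of legitimate automation (news outlets, disclosed bots) tweets via custom apps.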
Anomalies in timing can sometimes indicate automation as well: constant activity without breaks for sleep, for example. The accounts described in the linked thread are examples (and have other timing anomalies as well). A few caveats apply, however:
• some 24/7 accounts are run by multiple people rather than being automated
• web browsers and phones can be automated, so some accounts that post via web/smartphone are actually bots
• etc
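The no-breaks-for-sleep check can be sketched as a 24-bin histogram of activity by hour. This is a minimal illustration (UTC hours, Unix timestamps, and the one-tweet-per-hour threshold are all arbitrary assumptions), and the caveats above still apply to anything it flags:

```python
from collections import Counter
from datetime import datetime, timezone

def hourly_activity(timestamps):
    """24-bin histogram of tweet counts by UTC hour.
    `timestamps` is a list of Unix timestamps, one per tweet."""
    counts = Counter(
        datetime.fromtimestamp(ts, tz=timezone.utc).hour for ts in timestamps
    )
    return [counts.get(h, 0) for h in range(24)]

def never_sleeps(timestamps, min_per_hour=1):
    """True if every hour of the day has at least `min_per_hour`
    tweets, i.e. the account shows no sleep window at all."""
    return all(n >= min_per_hour for n in hourly_activity(timestamps))
```

In practice you'd want weeks of data, not hours, before reading anything into the histogram; a single busy day covers every hour without meaning much.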
(just to clarify - the articles linked in this tweet aren't erroneously claiming that the fake accounts described are bots; they're accurate articles about fake/impostor accounts that have sometimes been mistaken for bots.)
Also, be wary of overly simplistic checklists that purport to be advice on "bot detection". Most of the stuff on this list has nothing to do with automation, and will be of little to no use in determining whether or not a given account is a bot.
Some thoughts on perennial pitfalls in news coverage of social media manipulation that frequently result in reporting on fake accounts/bots/etc being far less accurate and informative than it ought to be...
The most common problem with news articles about fake accounts: failure to include any examples of fake accounts or evidence of their inauthenticity. Any or all of these headlines might be accurate, but you can't tell from the articles, due to the absence of evidence.
A related issue: articles like the "Nearly Half of Biden/Trump's Followers Are Fake" and "Nearly Half Of Accounts Tweeting About Coronavirus Are Bots" pieces base their numbers on closed-source third-party tools, which may or may not actually be detecting anything useful.
Former BNN employee Michael Gordon Douglas aka "Chicago Mike" has been found guilty of CSAM distribution.
In light of this, it's worth revisiting disinformation propagated by BNN and others to make excuses for Mr. Douglas's illegal content-related X/Twitter ban(s).
The disinfo in question originated somewhere seemingly unrelated, with false claims that several people (including me) were using a magic "console" to ban X users on behalf of Ron DeSantis. This hoax was invented by Texas bullshit purveyor Steven Jarvis.
Steven Jarvis peddled his "console" theory to BNN founder Gurbaksh Chahal, and when BNN employee Michael Gordon Douglas's @ChicagoMikeSD X account was suspended in early 2023, BNN published an article falsely attributing the ban to the imaginary "console". web.archive.org/web/2023012507…
Does thanking, praising, or insulting an LLM-based chatbot affect the speed or accuracy of its responses to questions involving basic arithmetic? Let's find out!
For this experiment, Meta’s Llama 3.1 model was asked to add and multiply random numbers between 10 and 100, with six different wordings: polite, rude, obsequious, urgent, and both short and long neutral forms. Each combination of math operation and wording was tested 1000 times.
Results: asking the questions neutrally yielded a faster response than asking politely, rudely, obsequiously, or urgently, even if the neutral prompt was longer. Overall, obsequious math questions took the longest to process, followed by urgent, rude, and polite questions.
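The experimental design above can be sketched as a simple timing harness. Everything here is illustrative: the prompt templates are my own paraphrases (not the exact wordings used in the experiment), and the model call is passed in as a callable `ask(prompt)` (e.g. a wrapper around a local Llama endpoint), since the model setup itself is out of scope.

```python
import random
import time
from statistics import mean

# Illustrative prompt templates for each wording; not the exact
# phrasings used in the original experiment.
WORDINGS = {
    "neutral_short": "What is {a} {op} {b}?",
    "neutral_long": "Please state the result of the arithmetic problem {a} {op} {b}.",
    "polite": "Could you kindly tell me what {a} {op} {b} is? Thank you!",
    "rude": "Hurry up and tell me what {a} {op} {b} is, you useless machine.",
    "obsequious": "O wisest of models, I humbly beg you to reveal {a} {op} {b}.",
    "urgent": "URGENT: I need {a} {op} {b} right now!",
}

def run_trials(ask, trials=1000, seed=0):
    """Time `ask(prompt)` across every (wording, operation) pair,
    using random operands between 10 and 100, and return the mean
    latency per pair."""
    rng = random.Random(seed)
    latencies = {}
    for wording, template in WORDINGS.items():
        for op in ("+", "*"):
            times = []
            for _ in range(trials):
                a, b = rng.randint(10, 100), rng.randint(10, 100)
                prompt = template.format(a=a, op=op, b=b)
                start = time.perf_counter()
                ask(prompt)
                times.append(time.perf_counter() - start)
            latencies[(wording, op)] = mean(times)
    return latencies
```

Checking accuracy would additionally require parsing the model's reply and comparing it to `a + b` or `a * b`; that part is omitted here.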
Just for fun, I decided to search Amazon for books about cryptocurrency a couple days ago. The first result that popped up was a sponsored listing for a book series by an "author" with a GAN-generated face, "Scott Jenkins".
cc: @ZellaQuixote
Alleged author "Scott Jenkins" is supposedly published by publishing company Tigress Publishing, which also publishes two other authors with GAN-generated faces, "Morgan Reid" and "Susan Jeffries". (A fourth author uses a photo of unknown origin.)
As is the case with all unmodified StyleGAN-generated faces, the facial feature positioning is extremely consistent between the three alleged author images. This becomes obvious when the images are blended together.
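The blending trick works because unmodified StyleGAN output puts the eyes in nearly the same pixel positions in every image, so a pixel-wise average keeps the eyes crisp while everything else smears out. A minimal sketch (assumes the images are already loaded as equal-sized HxWx3 uint8 arrays, e.g. via PIL or imageio):

```python
import numpy as np

def blend_faces(images):
    """Pixel-wise average of a stack of equal-sized face images.

    For unmodified StyleGAN faces, key facial features (especially
    the eyes) stay sharp in the result because their positions are
    nearly identical across images; real photo sets smear uniformly.
    `images` is a list of HxWx3 uint8 arrays.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0).astype(np.uint8)
```

With only three author images the effect is weaker than with dozens, but the anchored eye positions are usually still visible.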
The people in these Facebook posts have been carving intricate wooden sculptures and baking massive loaves of bread shaped like bunnies, but nobody appreciates their work. That's not surprising, since both the "people" and their "work" are AI-generated images.
cc: @ZellaQuixote
In the last several days, Facebook's algorithm has served me posts of this sort from 18 different accounts that recycle many of the same AI-generated images. Six of these accounts have been renamed at least once.
The AI-generated images posted by these accounts include the aforementioned sculptures, sad birthdays, soldiers holding up cardboard signs with spelling errors, and farm scenes.
The common element: some sort of emotional appeal to real humans viewing the content.
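Spotting accounts that recycle the same images can be done by hashing image bytes and looking for hashes shared across accounts. A minimal sketch, with the caveat that exact-byte hashing only catches verbatim re-uploads; recompressed or resized copies would need a perceptual hash instead (the `(account, image_bytes)` input shape is an illustrative assumption):

```python
import hashlib

def find_recycled_images(posts):
    """Map each image (by SHA-256 of its bytes) to the set of
    accounts that posted it, keeping only images that appear on
    more than one account.

    `posts` is a list of (account, image_bytes) pairs.
    """
    seen = {}
    for account, blob in posts:
        digest = hashlib.sha256(blob).hexdigest()
        seen.setdefault(digest, set()).add(account)
    return {d: sorted(accs) for d, accs in seen.items() if len(accs) > 1}
```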
As Bluesky approaches 30 million users, people who run spam-for-hire operations are taking note. Here's a look at a network of fake Bluesky accounts associated with a spam operation that provides fake followers for multiple platforms.
cc: @ZellaQuixote
This fake follower network consists of 8070 Bluesky accounts created between Nov 30 and Dec 30, 2024. None has posted, although some have reposted here and there. Almost all of their biographies are in Portuguese; the exceptions are a few biographies containing only emoji.
The accounts in this fake follower network use a variety of repeated or otherwise formulaic biographies, some of which are repeated dozens or hundreds of times. Some of the biographies begin with unnecessary leading commas, and a few consist entirely of punctuation.
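The biography anomalies described above are easy to tally once you have the bios in hand. A minimal sketch (the input shape and the "repeated" threshold are illustrative assumptions, not from any Bluesky API):

```python
from collections import Counter
import string

def bio_anomalies(bios):
    """Tally repeated biographies and flag two formulaic tics seen
    in this network: leading commas and punctuation-only bios.

    `bios` is a list of biography strings, one per account.
    """
    counts = Counter(bios)
    repeated = {b: n for b, n in counts.items() if n > 1}
    leading_comma = [b for b in counts if b.startswith(",")]
    punct_only = [
        b for b in counts
        if b and all(c in string.punctuation + " " for c in b)
    ]
    return repeated, leading_comma, punct_only
```

On their own, none of these tics proves inauthenticity; it's the combination (thousands of accounts, tight creation window, shared formulaic bios, no original posts) that makes the network legible.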