Thread by Conspirador Norteño, 11 tweets
As machine learning continues to evolve, so too do its malicious uses, such as creating harder-to-detect social media bots. We experimented with using GPT-2 (the text generation tool described in this article) to generate tweets.

cc: @ZellaQuixote
arstechnica.com/information-te…
First, what is GPT-2 and what does it do? Briefly, it's a text generation tool trained on a vast quantity of text linked from Reddit posts. You give it an example of text you want it to emulate, and it spits out a piece of text similar in style and content.
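This prompt-and-continue workflow can be sketched with the Hugging Face `transformers` port of GPT-2 (an assumption for illustration; the original experiments used OpenAI's own GPT-2 release and code):

```python
# Sketch of prompt-and-continue text generation with GPT-2.
# Assumes the Hugging Face `transformers` port of the model rather than
# OpenAI's original release, which the thread's experiments used.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Seed text whose style and content the model will try to continue.
prompt = "The river waits for"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])  # the prompt followed by a continuation
```

The model samples a continuation, so repeated runs on the same prompt give different outputs - which is why the thread's experiments produced a spread of results from one input.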
Before trying out GPT-2 on Twitter content, we tested it on a section of the Mueller report. The results ain't bad - there are some weird spots ("the river waits for 2016?") but it correctly associated Donna Brazile with the DNC despite her not being mentioned in the input text.
Next we fed GPT-2 samples of our own tweets as input, with each tweet formatted as a separate paragraph. The machine-generated "tweets" vary in quality, with many being unintentionally hilarious (and wouldn't "Wikileaks of Schweikhauser" be a good name for a bar or restaurant?)
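The input formatting described above - one tweet per paragraph - can be sketched in plain Python (the function names and the blank-line delimiter are our assumptions, for illustration):

```python
def tweets_to_prompt(tweets):
    """Format a list of tweets as a GPT-2 prompt, one tweet per paragraph."""
    return "\n\n".join(t.strip() for t in tweets)

def split_output_into_tweets(generated_text, limit=280):
    """Split generated text back into candidate "tweets" on blank lines,
    discarding empty chunks and anything over Twitter's character limit."""
    candidates = [p.strip() for p in generated_text.split("\n\n")]
    return [c for c in candidates if c and len(c) <= limit]

sample = ["First example tweet.", "Second example tweet."]
prompt = tweets_to_prompt(sample)
# prompt == "First example tweet.\n\nSecond example tweet."
```

Because each input tweet sits in its own paragraph, the model tends to emit blank-line-separated chunks too, which is what makes the output splittable back into tweet-sized pieces.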
We repeated the experiment with a couple of prominent accounts, @ScottAdamsSays and @RealJamesWoods. With both the celebrity tweets and our own, we had difficulty telling what in the input tweets led to either the style or content of the "tweets" generated by the software.
Next, we tried feeding our tweets into GPT-2 as longer-form writing by combining tweets into longer paragraphs. Much of the output is again silly, but the generated "tweets" are much closer to our real tweets in both style and subject matter than those from the first experiment.
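The longer-form variant just concatenates runs of consecutive tweets into paragraphs before prompting the model. A minimal sketch (the grouping size is our assumption; the thread doesn't say how many tweets went into each paragraph):

```python
def tweets_to_paragraphs(tweets, per_paragraph=5):
    """Combine consecutive tweets into longer paragraphs, so GPT-2 sees
    flowing prose rather than one isolated tweet per paragraph."""
    paragraphs = []
    for i in range(0, len(tweets), per_paragraph):
        chunk = tweets[i:i + per_paragraph]
        paragraphs.append(" ".join(t.strip() for t in chunk))
    return "\n\n".join(paragraphs)

sample = ["Tweet one.", "Tweet two.", "Tweet three."]
print(tweets_to_paragraphs(sample, per_paragraph=2))
# Tweet one. Tweet two.
#
# Tweet three.
```

Giving the model multi-sentence paragraphs instead of isolated one-line tweets is plausibly why this variant tracked the authors' style and subject matter more closely.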
Same thing for @ScottAdamsSays and @RealJamesWoods; the presence of #Dilbert-related hashtags in several of the synthetic "tweets" based on Adams' tweets is particularly noticeable.
Finally, we decided to give the GPT-2 text generation tool a shot at writing a Twitter thread. As input, we used the text of one of our recent threads.
Here are the results. We ran the test three times, resulting in three synthetic "threads." They all contain a fair amount of nonsense if one looks closely (don't turn to this ML tool for actual analysis), but the style is reasonably similar to our own threads.
Belatedly, we performed the same tweet generation experiment on the tweets of three of our not-quite-fans (since the "trumplantic" account is currently offline, we've helpfully included some examples of its original tweets).
Just for a bit of overly meta fun, we fed the tweets we generated with GPT-2 in previous phases of this experiment back into GPT-2 as input. The phrase "united birther conspiracy theories about 9/11" pretty much says it all...