We evaluate various metrics for predicting whether an account will be a superspreader in future months and compare which ones perform best. Then we do a qualitative review of the worst superspreaders we find and report what we learned.
The four major results are…
1) How we find them:
We extend the classic h-index metric to the misinformation domain and propose the False Information Broadcaster index (FIB index).
This metric surfaces the worst offenders: accounts that consistently share low-credibility content that gets reshared widely.
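For intuition, here is a minimal sketch of an h-index-style calculation of this kind. It is illustrative only, not our exact pipeline, and it assumes the input is the list of reshare counts for one account's low-credibility posts.

```python
from typing import Iterable

def fib_index(reshare_counts: Iterable[int]) -> int:
    """Illustrative h-index-style score: the largest f such that the account
    posted at least f low-credibility posts that each received at least
    f reshares. (Sketch only; assumes `reshare_counts` holds the reshare
    totals of a single account's low-credibility posts.)"""
    counts = sorted(reshare_counts, reverse=True)
    f = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            f = rank
        else:
            break
    return f

# Example (hypothetical account): low-credibility posts reshared
# [50, 9, 8, 3, 2] times -> score of 3, since three posts each
# received at least 3 reshares.
print(fib_index([50, 9, 8, 3, 2]))  # 3
```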
2) Who they are:
Many (not all) of the superspreaders we find are verified!
We find pundits with large followings, low-credibility media outlets, personal accounts affiliated with those media outlets, and a range of less popular influencers.
Most (not all) are conservative.
3a) What they do:
Our analysis found that 10 superspreaders (0.003% of accounts) were responsible for originating over 34% of the misinformation shared during the eight months that followed their identification, and 1,000 accounts (0.25%) were responsible for more than 70%!
3b) What they do:
We also find that superspreaders use more toxic language than the average misinformation sharer on Twitter.
4a) What is Twitter doing about it?
Unsurprisingly, many of the accounts we found were suspended by Twitter. Good, right? Maybe not…
4b) Our analysis also suggests that Twitter may be more lenient with prominent superspreaders.
Of the superspreaders who were suspended, less than 3% were verified and less than 10% had more than 150k followers.
We hope this work (1) spurs more research into superspreaders and (2) sheds light on one of our key concerns:
The more prominent misinformation superspreaders become, the greater their negative impact will be, and the more difficult they become to rein in.