My colleague, epidemiologist @joel_c_miller, has done a great job of debunking mis- and disinformation throughout the pandemic. In this great thread, he takes on the claim that COVID is basically harmless, and any excess deaths are due to fear and stress from social precautions.
Instead of calling the person an idiot, he does a nice job of explaining how you might test such a hypothesis — and then looks to the data to show that this story about fear and stress is entirely unsupported. The whole thing is well worth a read.
But there's something else interesting here.

The fear-and-stress argument is introduced with a historical account of an experiment supposedly conducted by the medieval Persian philosopher Avicenna / Ibn Sīnā.

The story is *total bullshit.*

Avicenna did no such experiment.
Avicenna did speculate about how sheep know to fear wolves in his treatise De Anima (On the Soul).

But he didn't do this experiment, or write about anything like it.

iep.utm.edu/avicenna/
In the English language, I can trace this particular bit of misinformation back to a trite pop-psych blog post from April 2020.

The post references an entry about the aforementioned De Anima.

kn-ow.com/article/ibn-si…
But the story is at least a few weeks older. Octav-Sorin Candel actually wrote a paper tracing this false story back to a March 19, 2020 Facebook post in Romanian.

mrjournal.ro/docs/R2/37JMR3…
Courtesy of Google Translate:

"You heard about Avicenna's experiment"

"The experiment involves a lamb and a wolf placed in a cage next to it. The lamb died shortly thereafter from the stress caused by fear."
"Scientists say that when we are afraid, the body no longer secretes the necessary chemistry and a cell dies. And if the fear is great, then all the cells in the body die."

(Ed. note: Scientists say no such thing.)
I guess the lesson here is that bullshit clumps.

Here we have a false and disingenuous argument that COVID is no worse than the common cold, gift-wrapped in a fabricated pop-psych tale about a medieval polymath.
In our course we define bullshit as "language, statistical figures, data graphics, and other forms of presentation intended to persuade by impressing and overwhelming a reader or listener, with a blatant disregard for truth and logical coherence."
The Avicenna story is classic bullshit. Most readers don't know a lot of medieval Persian philosophy. The creators are counting on it.

And they pull a little trick, as well, trying to make you think that this is common knowledge:

"You heard about Avicenna's experiment..."
The author of the thread about deaths from fear rather than COVID uses the same trick. I'm not going to go through it exhaustively, but let's look at one tweet.



It starts with an appeal to the fabricated story (which would be poor evidence even if it were true).
Then we get entirely gratuitous technical language, which I suppose is meant to sound the way the author or his readers imagine scientists sound.

Myself, I say "people" instead of "the mammal homo [sic] sapiens."
Then we get a scientifical graphic that I suppose is intended to bolster some of the other endocrinebabble in the thread.

The post concludes with the same old trick we saw in the Avicenna story. "This is general knowledge."

Who are YOU to question the emperor's clothes, after all?
Pretty impressive for 280 characters, actually.

In any case, the take-homes are three:

· Bullshit isn't so hard to spot when you know what you're looking for.

· When in doubt, trace back to the source.

· Don't try to learn medieval Persian philosophy from COVID grifters.

