Tyler Black, MD Profile picture
Suicidologist, emergency psychiatrist and pharmacologist. Data geek, ok, lots of other geek. Views expressed are my own and not my employers'. he/him/his

Dec 31, 2021, 12 tweets

The editorial process at @FrontiersIn makes a blunder. A study on "Developmental delays in children born during the pandemic" claims that fine-motor delay and communication delay increased when comparing 2015-2019 with 2020.

This is very misleading. I see this mistake a lot.

/1

In fact, it is true that comparing 2020 to 2015-19 flags these two delays as anomalous. But if I compare 2016 to (2015, 2017-20), I get the SAME significance result. 2016 is worse than 2020 for fine motor and on par for communication.

/2

This is a case of the "cherry-picking" fallacy.

The authors compared 2015-19 to 2020 but NOT:

2015 to 2016-20
2016 to 2015,2017-20
2017 to 2015-16, 2018-20
2018 to 2015-17, 2019-20
2019 to 2015-2018, 2020

And intentionally so, due to the cherry-picked "pandemic" comparison.

/3
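To make the leave-one-year-out point concrete, here is a toy sketch. The counts below are invented for illustration (they are NOT the paper's data), and I'm assuming a simple pooled two-proportion z-test as the significance check:

```python
from math import sqrt
from statistics import NormalDist

# year -> (children flagged with fine-motor delay, children screened)
# These counts are INVENTED for illustration; they are not the paper's data.
counts = {
    2015: (40, 1000),
    2016: (62, 1000),  # a hypothetical high pre-pandemic year
    2017: (38, 1000),
    2018: (41, 1000),
    2019: (43, 1000),
    2020: (60, 1000),  # the hypothetical "pandemic" year
}

def two_prop_z(x1, n1, x2, n2):
    """Two-sided p-value for H0: p1 == p2 (pooled two-proportion z-test)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Compare EVERY year against the pool of all the other years, not just 2020.
for year in sorted(counts):
    x1, n1 = counts[year]
    x2 = sum(x for y, (x, _) in counts.items() if y != year)
    n2 = sum(n for y, (_, n) in counts.items() if y != year)
    p = two_prop_z(x1, n1, x2, n2)
    flag = "  <-- 'significant'" if p < 0.05 else ""
    print(f"{year}: rate={x1 / n1:.3f}  p={p:.4f}{flag}")
```

With plausible numbers like these, both 2016 and 2020 come out "significant" against the pool of the other years, which is exactly why running the test only for 2020 proves nothing.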

Had they done proper statistical tests, it would be completely obvious that 2016 and 2020 had similar rates of both delays.

Instead, cherry picking + selection bias leads to an erroneous association.

/4

The authors suggest they *controlled* for this by pooling 2015-2019, but in doing so they committed another fallacy!!

This is called statistical underfitting. The pooled average is simply an inappropriate comparator.

You can clearly see the underfit here. By averaging 2015-2019, they created an average line that is supposed to represent all years "on average". But it's clearly underfit, and 2016 sticks out like a middle finger to statistical decency!

/5
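A minimal sketch of why the pooled mean underfits, again with made-up rates: measure each year's distance from the 2015-2019 average in units of the year-to-year standard deviation that pooling throws away.

```python
from statistics import mean, stdev

# Hypothetical fine-motor delay rates (made up, not the paper's data).
rates = {2015: 0.040, 2016: 0.062, 2017: 0.038, 2018: 0.041, 2019: 0.043}
rate_2020 = 0.060

baseline = mean(rates.values())   # the pooled 2015-2019 "average line"
spread = stdev(rates.values())    # the year-to-year variability pooling hides

# distance from the baseline, in SD units
z_2016 = (rates[2016] - baseline) / spread
z_2020 = (rate_2020 - baseline) / spread

print(f"baseline={baseline:.4f}  year-to-year SD={spread:.4f}")
print(f"2016 sits {z_2016:+.2f} SD from baseline; 2020 sits {z_2020:+.2f} SD")
```

Once you account for the spread the pooling discards, the hypothetical 2020 is no further from the baseline than the hypothetical 2016, so nothing here can be attributed to the pandemic.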

Fortunately, the careful critical reader can see just how variable these numbers are, in the **FIRST FIGURE**. The peer reviewers failed the editorial process by not pointing out how this figure elevates a "possible limitation" to a "statistical failure."

Noisy numbers!

/6

The communication number is even shakier.

Here the problem isn't an underfit average: 2020 would NOT be significantly different when compared to 2016-2019, 2015-2018, 2018-2019, or ANY combination that *excludes* 2017, which seems anomalously low.

/7

In fact, very obviously, the KEY to 2020 being "statistically increased" is not 2020's elevation, but rather 2017's small stature. A simple eyeball test shows this, and yet the reviewers missed it.

/8
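The same toy z-test makes the 2017 dependence visible. These communication-delay counts are again invented (deliberately shaped so 2017 is the low year), not taken from the paper:

```python
from math import sqrt
from statistics import NormalDist

def two_prop_z(x1, n1, x2, n2):
    """Two-sided p-value for H0: p1 == p2 (pooled two-proportion z-test)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# year -> (children flagged with communication delay, children screened)
# INVENTED counts, shaped so that significance hinges on the low 2017 year.
comm = {
    2015: (50, 1000),
    2016: (52, 1000),
    2017: (28, 1000),  # the anomalously LOW year
    2018: (49, 1000),
    2019: (51, 1000),
    2020: (62, 1000),
}

def pool(years):
    return (sum(comm[y][0] for y in years), sum(comm[y][1] for y in years))

# 2020 vs the full 2015-2019 pool (2017 included): comes out "significant"
p_full = two_prop_z(*comm[2020], *pool(range(2015, 2020)))

# 2020 vs the same pool with the low 2017 dropped: significance vanishes
p_excl_2017 = two_prop_z(*comm[2020], *pool([2015, 2016, 2018, 2019]))

print(f"vs 2015-2019:        p={p_full:.3f}")
print(f"vs pool minus 2017:  p={p_excl_2017:.3f}")
```

Drop the one low comparator year and the "pandemic effect" disappears: the result is driven by 2017's small stature, not 2020's elevation.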

If we look at other measures of delay that didn't test significantly, we can see how fluctuations played such an important role.

Sorry for the scratchy comments; it's late as I compose this, and it's irritating how obvious this is.

This type of error is *critical* during a pandemic, and undoubtedly adds fuel to the type of misattributed "cause" that drives so much covid-denialism activism.

These aren't challenging statistics, either; this is exactly what peer review is supposed to catch.

/fin

The paper in question, which *should* have concluded, had either of the two reviewers considered the obvious statistical issue, that "delay rates were within normal year-to-year fluctuation":

frontiersin.org/articles/10.33…
