The editorial process at @FrontiersIn has made a blunder. A study on "Developmental delays in children born during the pandemic" claims that excess fine motor delay and communication delay appear when comparing 2015-2019 to 2020.
This is very misleading. I see this mistake a lot.
/1
It is true that comparing 2020 to 2015-19 shows elevated rates of these two delays. But if I compare 2016 to (2015, 2017-20), the SAME tests come back significant. 2016 is worse than 2020 for fine motor delay and on par for communication delay.
/2
This is a textbook case of the "cherry-picking" fallacy.
The authors compared 2015-19 to 2020 but NOT:
2015 to 2016-20
2016 to 2015,2017-20
2017 to 2015-16, 2018-20
2018 to 2015-17, 2019-20
2019 to 2015-2018, 2020
And intentionally so, because of the cherry-picked "pandemic" framing.
/3
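The missing comparisons above can be sketched in a few lines. To be clear, the counts below are hypothetical, invented purely for illustration (they are NOT the paper's data); the point is the *procedure*, a one-year-vs-rest two-proportion z-test run for every year, not just 2020:

```python
import math

# Hypothetical counts (NOT the paper's data): children flagged with
# fine-motor delay, out of n screened per year.
n = 1000
delays = {2015: 50, 2016: 80, 2017: 48, 2018: 52, 2019: 48, 2020: 78}

def one_vs_rest_z(year):
    """Two-proportion z-test: `year` vs all other years pooled."""
    x1 = delays[year]
    x2 = sum(v for y, v in delays.items() if y != year)
    n1, n2 = n, n * (len(delays) - 1)
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

for year in delays:
    z = one_vs_rest_z(year)
    flag = "significant" if abs(z) > 1.96 else "ns"
    print(f"{year} vs rest: z = {z:+.2f} ({flag})")
```

With these hypothetical numbers, both 2016-vs-rest and 2020-vs-rest cross z = 1.96. Reporting only the 2020 split hides the 2016 result, which is exactly the cherry-pick.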
Had they done proper statistical tests, it would be completely obvious that 2016 and 2020 had similar rates of both delays.
Instead, cherry picking + selection bias leads to an erroneous association.
/4
The authors suggest they *controlled for* this by pooling 2015-2019, but that introduces another error!!
This is statistical underfitting: the pooled average is simply an inappropriate comparator, because it erases the year-to-year variance.
You can clearly see the underfit here. Pooling 2015-2019 produces a flat line that is supposed to represent every year at once. But it's clearly underfit, and 2016 sticks out like a middle finger to statistical decency!
/5
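The underfitting complaint can be made concrete: judge 2020 against the ordinary year-to-year spread, not just against the pooled mean. Again, the rates below are hypothetical, invented for illustration and NOT taken from the paper:

```python
import statistics

# Hypothetical annual fine-motor-delay rates (%), illustrative only.
rates = {2015: 5.0, 2016: 8.0, 2017: 4.8, 2018: 5.2, 2019: 4.8}
rate_2020 = 7.8

baseline = statistics.mean(rates.values())     # the pooled "average line"
sd_between = statistics.stdev(rates.values())  # year-to-year fluctuation

# How unusual is a year, measured against normal fluctuation?
z_2020 = (rate_2020 - baseline) / sd_between
z_2016 = (rates[2016] - baseline) / sd_between
print(f"baseline {baseline:.2f}%, between-year SD {sd_between:.2f}")
print(f"2020: {z_2020:+.2f} SDs from baseline; 2016: {z_2016:+.2f} SDs")
```

With these invented numbers, 2020 sits under 2 SDs from the baseline and 2016 sits even further out: once between-year variance is in the denominator, 2020 stops looking exceptional. A pooled binomial test ignores that variance entirely.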
Fortunately, the careful critical reader can see just how variable these numbers are in the **FIRST FIGURE**. The peer reviewers failed the editorial process by not noting that this figure elevates a "possible limitation" into a "statistical failure."
Noisy numbers!
/6
The communication number is even shakier.
Here the problem isn't underfitting of the average; it's that 2020 would NOT be significantly different when compared to 2016-2019, 2015-2018, 2018-2019, or ANY combination that *excluded* 2017, which looks anomalously low.
/7
In fact, the KEY to 2020 being "statistically increased" is not 2020's elevation but 2017's small stature. A simple eyeball test shows this, and yet the reviewers missed it.
/8
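This sensitivity to 2017 is easy to demonstrate. The counts below are, once more, hypothetical and invented for illustration (NOT the paper's data); the exercise is simply re-running the 2020 comparison with and without the suspiciously low year in the baseline:

```python
import math

# Hypothetical communication-delay counts per 1000 screened, illustrative only.
N_PER_YEAR = 1000
comm = {2015: 30, 2016: 32, 2017: 18, 2018: 31, 2019: 29, 2020: 40}

def z_2020_vs(baseline_years):
    """Two-proportion z-test: 2020 vs the pooled counts of `baseline_years`."""
    x1, n1 = comm[2020], N_PER_YEAR
    x2 = sum(comm[y] for y in baseline_years)
    n2 = N_PER_YEAR * len(baseline_years)
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se

with_2017 = z_2020_vs([2015, 2016, 2017, 2018, 2019])
without_2017 = z_2020_vs([2015, 2016, 2018, 2019])
print(f"2020 vs 2015-19:          z = {with_2017:.2f}")
print(f"2020 vs 2015-19 sans 2017: z = {without_2017:.2f}")
```

With these invented counts, the comparison crosses z = 1.96 only when 2017 is in the baseline. If one low year is load-bearing for significance, the finding is about that year, not about 2020.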
If we look at other measures of delay that didn't test significantly, we can see how fluctuations played such an important role.
Sorry for the scratchy comments; it's late as I compose this, and it's irritating how obvious this is.
This type of error is *critical* during a pandemic, and undoubtedly adds fuel to the type of misattributed "cause" that drives so much covid-denialism activism.
It's not challenging statistics either, and this is what peer review is supposed to correct.
/fin
The paper in question, which *should* have concluded, had either of the two reviewers caught the obvious statistical issue, that "delay rates were within normal year-to-year fluctuation."
The core trick: he treats prescription prevalence as self-evidently bad. But high rates only signal a problem if the meds don't work, are given to people who don't need them, or cause net harm. He establishes none of this. He just gestures at numbers.
/2
The same rhetorical structure would indict insulin prescribing, or asthma inhalers. Prevalence is not pathology. The question is whether treatment matches need — and whether the alternative (untreated illness) is better or worse.
/3
The way we treat people with disabilities in Canada makes no sense. Canada has the full apparatus to implement adjusted payments, yet we typically support disabled people WELL under the poverty line.
/1
Canada has an official poverty line: the Market Basket Measure. It's regionally calibrated, methodologically sound, and updated by StatCan.
A single person on BC PWD receives ~$18.4k/year. The Vancouver MBM is ~$29k.
That's not a rounding error. It's a structural choice.
/2
PWD recipients in Vancouver sit at roughly 63% of the poverty line ($18.4k against $29k) and below the Deep Income Poverty threshold (75% of MBM), which is the level StatCan uses to flag the worst material deprivation in the country.
/3
To be clear, my first answer is "well we know they are supposed to block serotonin reuptake, but it's not that simple and we don't really know."
But, if you want the best 2026 science...
/1
For a few particularly science-interested patients, I walk them through what we currently have for the 'best evidence' even though we're still not sure.
This is the "best story" I can tell about SSRIs right now.
(nb, this is NOT locked in, this is MY best synthesis)
/2
1) SSRIs BLOCK the Serotonin Transporter
The protein that pulls serotonin back into the neuron after it's released is blocked, so serotonin lingers longer in the synapse, the gap where neurons signal each other.
This is very well established, & it's how SSRIs were designed.
The Ihben story is making the rounds. "Judge forced 18 vaccines, child got autism." It's being treated as a smoking gun. It is not a smoking gun. It is barely a story.
Sourcing: one father, one advocacy org (CHD), one GiveSendGo. Records sealed. No filings. No named physicians. Every outlet repeating it cites the same Defender article. This is a closed loop, not corroboration.
/2
"18 vaccines in one day" is not a thing. That number counts antigens as doses to make the headline scream. Real catch-up schedules don't work this way and you can verify that in five minutes on the CDC site.
/3
Ask anyone who has even been *suggested* to have BPD; they will uniformly tell you they've been told to try DBT (Dialectical Behavioural Therapy). Reflexively recommended. "Gold standard."
This is not supported by the science.
/1
Quick history: Marsha Linehan developed DBT in the late 1980s, published the foundational manual in 1993. She drew on CBT, Zen Buddhism, and dialectical philosophy. Brilliant clinician, brilliant marketer. Her institute has trained tens of thousands of therapists worldwide.
/2
That marketing machine is the reason DBT is "the BPD treatment." It is not the reason DBT works better than alternatives, because it does not.
The faint superiority signals in older trials evaporate once you adjust for allegiance bias (DBT researchers studying DBT).
/3
The McCullough Foundation's @NicHulscher — who posts garbage medical misinformation — styles himself an "independent epidemiologist."
His entire career has been spent publishing with, and working for, McCullough.
No academic post, no health agency, no clinical role, no pre-Foundation experience. Hired straight out of his 2024 MPH by the senior author on nearly every paper bearing his name.
/2
He publishes almost exclusively with McCullough, overwhelmingly in predatory or fringe journals, and has already been retracted twice — plus an Expression of Concern — in a career that's barely two years old.
/3