Today our letter was published, responding to a paper in Nature Scientific Reports that claimed to find no evidence that staying at home reduced Covid-19 deaths
2/n The original paper came out in March, in the midst of huge epidemics worldwide, and was immediately a massive hit. Nine months on, it has been accessed nearly 400k times and has one of the highest Altmetric scores of any paper ever
3/n The paper has also been, I think it's fair to say, one of the more impactful pieces of work during the pandemic. It is still regularly cited everywhere to support the idea that government restrictions against Covid-19 don't work
4/n However, there are clear issues with this paper. When it first came out, we wrote a series of Twitter threads about these problems, as well as an OSF preprint
6/n @goescarlos proved that the model the authors use will always produce what is essentially noise. @RaphaelWimmer then simulated this and showed that, regardless of how closely Covid-19 deaths were linked with staying-at-home behaviour, the model finds no correlation
7/n Indeed, even if you create a dataset where staying at home increases Covid-19 deaths in a 1:1 ratio, the authors' model spits out non-significant results
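(For anyone who wants to poke at this themselves: below is a minimal Python sketch of the kind of falsification test described above. It is not the authors' actual specification, just an illustration: build a synthetic dataset where deaths track staying at home in a 1:1 ratio, then check that a sensible regression recovers the relationship. Any pipeline that reports "no association" on data like these is telling you about the pipeline, not the world. All variable names and numbers are made up.)

```python
# Illustrative falsification check (not the paper's model): simulate data with
# a known, near-perfect 1:1 link between staying at home and deaths, then see
# whether a straightforward regression recovers it. A method that returns
# "no association" on data like these cannot be informative about reality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_regions = 100
stay_home = rng.uniform(0.2, 0.8, size=n_regions)            # share of time spent at home
deaths = 1.0 * stay_home + rng.normal(0, 0.01, n_regions)    # 1:1 relationship, tiny noise

slope, intercept, r, p, se = stats.linregress(stay_home, deaths)
print(f"slope = {slope:.2f}, p-value = {p:.1e}")              # slope ≈ 1, p effectively zero
```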
8/n There are also numerous other methodological concerns in the paper. For example, the authors uncritically use Belarusian death data, which is notoriously fake and fails the simplest of sense-checks
9/n Basically, the paper is entirely useless. It proves absolutely nothing about Covid-19 or staying at home
And yet, remember, HUGELY impactful. Used in decision-making in countries across the globe
10/n The thing is, the editors of Scientific Reports did everything right here, according to the usual academic norms. Shortly after we went public with our concerns via Twitter, they posted a notice of concern on the paper
11/n The editors have been very responsive to us throughout. They have listened to the problems, had them peer-reviewed to make sure these are real issues, and generally done everything that we expect editors to do in this situation
12/n And yet, the entire process is a monumental failure
Why?
Because it all takes FAR TOO LONG
13/n The study went viral overnight, and was read by hundreds of thousands of people within a month. While the notice of concern went up quickly, it said very little and did not really contradict the main arguments of the paper
14/n This paper, which was always useless as evidence, has been used as exactly that for nearly a year while we slowly confirmed that the enormous, serious criticisms of it were accurate
That is simply not fast enough
15/n In a pandemic, decisions are made overnight. A paper which goes viral today may end up in the hands of the Albanian head of state the next day. A formal response that takes 9+ months to arrive is simply inadequate to mitigate any harms
16/n I don't have any good solutions for this. Peer-review, flawed as it is, will always allow some terrible research through. With the new online world, that research will sometimes go viral and have severe negative impacts before we can correct it
17/n I can say, however, that if it takes the better part of a year to make any major comment on a paper that mathematically cannot provide any evidence on the subject it examines, then our system for error-checking has serious issues
The final large published trial on ivermectin for COVID-19, PRINCIPLE, is now out. Main findings:
1. A clinically unimportant (~1-2 day) reduction in time to resolution of symptoms. 2. No benefit for hospitalization/death.
Now, you may be asking "why does anyone care at all any more about ivermectin for COVID?" to which I would respond "yes"
We already knew pretty much everything this study shows. That being said, always good to have more data!
The study is here: pubmed.ncbi.nlm.nih.gov/38431155/
For me, the main finding is pretty simple - ivermectin didn't impact the likelihood of people going to hospital or dying from COVID-19. This has now been shown in every high-quality study out there.
What's particularly interesting is a finding that the authors don't really discuss in their conclusion. These results appear to show that gender-affirming care is associated with a reduction in suicide risk 1/n
2/n The paper is a retrospective cohort study that compares young adults and some teens who were referred for gender-related services in Finland with a cohort matched on age and sex. The median age in the study was 19, so the majority of the population are adults.
3/n The study is very limited. The authors had access to the Finnish registries, which include a wide range of data, but chose to adjust their cohorts only for age, sex, and the number of psychiatric appointments prior to inclusion in the cohort.
These headlines have to be some of the most ridiculous I've seen in a while
The study tested 18 different PFAS in a tiny sample of 176 people. Of those 18, one had a barely significant association with thyroid cancer
This is genuinely just not news at all
Here's the study. I'm somewhat surprised it even got published, if I'm honest. A tiny case-control study: they looked at 88 people with thyroid cancer and 88 controls thelancet.com/journals/ebiom…
Here are the main results. There was a single measured PFAS with a 'significant' association with the cancer; the others just look a bit like noise to me
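(Quick back-of-envelope here, using my own numbers rather than anything in the paper: if you test 18 associations at the usual 0.05 threshold and none of them are real, you'd still expect at least one 'significant' hit about 60% of the time, assuming independent tests.)

```python
# Rough multiple-comparisons arithmetic: chance of at least one "significant"
# result when testing 18 truly null associations at p < 0.05. Assumes the
# tests are independent, which correlated PFAS measures won't satisfy exactly.
n_tests = 18
alpha = 0.05
p_at_least_one_hit = 1 - (1 - alpha) ** n_tests
print(f"P(at least one false positive) = {p_at_least_one_hit:.2f}")  # ~0.60
```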
A new study has gone viral for purportedly showing that running therapy had similar efficacy to medication for depression
Which is weird, because a) it's not a very good study and b) it seems not to show that at all 1/n
2/n The study is here. The authors describe it as a "partially randomized patient preference design", which is a wildly misleading term. In practice, this is simply a cohort study, where ~90% of the patients self-selected into their preferred treatment sciencedirect.com/science/articl…
3/n This is a big problem, because it means that there are likely confounding factors between the two groups (i.e. who is likely to choose running therapy over meds?). Instead of a useful, randomized trial, this is a very small (n=141) non-randomized paper
The study showed that COVID-19 caused, if anything, very few long-term issues in children! As a new father, I find this data very reassuring regarding #LongCovid in kids 1/n
2/n The study is here, it's a retrospective cohort comparing children aged 0-14 who had COVID-19 to a matched control group, using a database of primary care visits in Italy onlinelibrary.wiley.com/doi/10.1111/ap…
3/ The authors found that there was an increased risk of a range of diagnoses for the kids with COVID-19 after their acute disease, including things like runny noses, anxiety/depression, diarrhoea, etc
This study has recently gone viral, with people saying that it shows that nearly 20% of highly vaccinated people get Long COVID
I don't think it's reasonable to draw these conclusions based on this research. Let's talk about bias 1/n
2/n The study is here. It is a survey of people who tested positive for COVID-19 in Western Australia from July-Aug 2022 medrxiv.org/content/10.110…
3/n This immediately gives us our first source of bias
We KNOW that most cases of COVID-19 were missed at this point in the pandemic, so we're only getting a sample of the people who were sick enough to go and get tested
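(To make the direction of that bias concrete, here's a toy simulation with entirely made-up numbers, not an estimate of any real Long COVID rate: if people with a rougher acute illness are both more likely to get tested and more likely to report lingering symptoms, a survey of test-positives will overstate the rate among everyone infected.)

```python
# Toy selection-bias simulation (all numbers invented, illustration only):
# sicker people are more likely to get tested AND more likely to report
# lingering symptoms, so the tested subgroup overstates the true prevalence.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                                       # simulated infections

severe = rng.random(n) < 0.20                       # 20% have a rougher acute illness
p_tested = np.where(severe, 0.70, 0.20)             # severe cases test far more often
p_lingering = np.where(severe, 0.30, 0.05)          # and report lingering symptoms more

tested = rng.random(n) < p_tested
lingering = rng.random(n) < p_lingering

print(f"prevalence among all infected:    {lingering.mean():.1%}")          # ~10%
print(f"prevalence among tested (survey): {lingering[tested].mean():.1%}")  # ~17%
```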