3/n The design was very simple: take routine data on people who either had or had not elected to be part of an ivermectin distribution program, and control for a small number of confounding variables using either a propensity-score or regression model
4/n The authors found a very modest benefit for ivermectin on the risk of having a recorded infection, but a very large relative benefit for mortality from COVID-19
5/n So, on to the problems. There are quite a few.
Firstly, the author group. While this is not disclosed in the paper, several authors are members of the FLCCC, an ivermectin promotion organisation that we might expect to have some interest in the outcome of the research
6/n The corresponding author, Dr. Cadegiani, has been accused of, and I kid you not, "crimes against humanity" due to ethics breaches in his previous research on COVID-19 bmj.com/content/375/bm…
7/n On top of this, two of the authors report a direct financial conflict of interest, in that they say they work for a pharmaceutical company that makes money off ivermectin
8/n So, on to the study itself. In general, it's a fairly simple example of observational research that you'd do on routine medical data. The authors took an intervention (primary care doctor giving ivermectin), divided people into 2 groups based on this, and compared them
9/n The controls for confounding are obviously pretty inadequate given the purpose. The paper aims to see whether ivermectin has an impact on COVID-19 risk, but they don't control for any confounders that might increase your risk of catching COVID
10/n For example, there's no control for occupation, nothing about income, no analysis of the results looking at many well-known risk factors for COVID-19 infection and death
11/n To their credit, the authors do control for some major comorbidities, but since these are rarely related to the risk of CATCHING the disease (as opposed to dying from it if you get it), the causal chain is quite messy
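To see why this matters, here's a minimal toy simulation of the kind of confounding described above. All the numbers are invented for illustration (they are NOT from the paper): a hypothetical confounder, say a high-exposure occupation, makes people both less likely to join the ivermectin program and more likely to catch COVID. Ivermectin has no effect at all in this simulation, yet the crude comparison shows an apparent "benefit":

```python
import random

random.seed(0)

# Simulate 100,000 people. The drug has NO true effect here; only the
# confounder (high-exposure occupation) drives infection risk.
rows = []
for _ in range(100_000):
    high_risk = random.random() < 0.5
    # High-risk people are less likely to opt into the program...
    took_ivm = random.random() < (0.2 if high_risk else 0.6)
    # ...and more likely to get infected, regardless of the drug.
    infected = random.random() < (0.20 if high_risk else 0.05)
    rows.append((high_risk, took_ivm, infected))

def risk(subset):
    return sum(inf for _, _, inf in subset) / len(subset)

# Crude comparison: looks like ivermectin roughly halves infection risk.
crude_rr = risk([r for r in rows if r[1]]) / risk([r for r in rows if not r[1]])

# Stratifying on the confounder recovers the true (null) effect in each stratum.
stratified = {}
for hr in (True, False):
    stratum = [r for r in rows if r[0] == hr]
    stratified[hr] = (risk([r for r in stratum if r[1]])
                      / risk([r for r in stratum if not r[1]]))
```

With these made-up numbers the crude risk ratio comes out around 0.6 while both stratum-specific risk ratios sit near 1.0, despite the drug doing literally nothing. If the study doesn't measure the confounder, no amount of modelling can fix this.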
12/n So the potential for uncontrolled confounding is high, and this cannot possibly be described as "quasi-randomized". Quasi-randomized studies are usually either natural experiments with some randomness to them, or prospective studies where randomization was done poorly
13/n Moreover, there are some pretty obvious challenges with ascertaining causality here. The intervention was a doctor prescribing ivermectin at some point, but as far as I can tell that's not followed up on at all in the paper
14/n We do not know, for example, how many in either group were taking ivermectin BEFORE the study took place (given the heavy use of the drug in Brazil, it is likely to be a non-zero figure)
15/n There's also quite strong evidence that many of the "ivermectin" group did not pick up the medication, and stopped taking it almost immediately
16/n This means that there may have been a large proportion of people in the control group taking ivermectin, and a similar proportion in the intervention group NOT taking ivermectin. There's no analysis of this issue in the paper that I can see
17/n There are other issues with the document, although to be fair here it's only a preprint so you expect some mistakes. For example, the main result presented is the unadjusted risk ratio, but for the adjustments the authors just give a p-value
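For what it's worth, reporting an adjusted effect estimate with a confidence interval instead of a bare p-value is not hard. Here's a sketch of a standard risk ratio with a 95% CI from a 2x2 table, using purely hypothetical counts (invented for illustration, not taken from the paper):

```python
import math

# Hypothetical 2x2 table (illustrative numbers only):
# events / non-events among exposed and unexposed
a, b = 30, 970   # exposed: 30 events out of 1000
c, d = 60, 940   # unexposed: 60 events out of 1000

rr = (a / (a + b)) / (c / (c + d))

# Standard error of log(RR), then a 95% Wald confidence interval
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)
```

With these made-up counts you get RR = 0.50 with a 95% CI of roughly 0.33 to 0.77, which tells a reader far more than "p < 0.05" ever could.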
18/n But overall, it's just not a very convincing paper. There is a huge potential for uncontrolled confounding, there are issues with the delineation of intervention vs control groups, and it's generally just a very weak observational study
19/n All of this makes the "limitations" and conclusions sections absolutely bizarre. Turns out you can just declare things are causal as long as you believe it hard enough, I suppose?
20/n Ultimately, this newest piece of research is a very weak paper. It might be useful as the very first piece of research into a medication, but it gives us no useful information at this point in the ivermectin literature
21/n It's also worth noting that the people who've been complaining about low ivermectin doses, people not taking drugs etc in NEGATIVE trials are very happy to ignore these issues when it comes to a POSITIVE result
I wonder why that is 🤔🤔🤔
22/n Turns out there are even more issues with the study. Very worrying stuff
23/n In a rather astonishing move, after numerical errors were discovered, the paper is now only available from the preprint server "on request", despite having been promoted heavily by the authors for weeks
24/n I spoke too soon! The paper is up, with more concerning issues yet again
The final large published trial on ivermectin for COVID-19, PRINCIPLE, is now out. Main findings:
1. A clinically unimportant (~1-2 day) reduction in time to resolution of symptoms. 2. No benefit for hospitalization/death.
Now, you may be asking "why does anyone care at all any more about ivermectin for COVID?" to which I would respond "yes"
We already knew pretty much everything this study shows. That being said, always good to have more data!
The study is here:
For me, the main finding is pretty simple - ivermectin didn't impact the likelihood of people going to hospital or dying from COVID-19. This has now been shown in every high-quality study out there. pubmed.ncbi.nlm.nih.gov/38431155/
What's particularly interesting is a finding that the authors don't really discuss in their conclusion. These results appear to show that gender affirming care is associated with a reduction in suicide risk 1/n
2/n The paper is a retrospective cohort study that compares young adults and some teens who were referred for gender-related services in Finland with a cohort matched on age and sex. The median age in the study was 19, so the majority of the population are adults.
3/n The study is very limited. The authors had access to the Finnish registries, which include a wide range of data, but chose to adjust their cohorts only for age, sex, and number of psychiatric appointments prior to inclusion in the cohort.
These headlines have to be some of the most ridiculous I've seen in a while
The study tested 18 different PFAS in a tiny sample of 176 people. Of those, one had a barely significant association with thyroid cancer
This is genuinely just not news at all
Here's the study. I'm somewhat surprised it even got published, if I'm honest. It's a tiny case-control study comparing 88 people with thyroid cancer to 88 controls thelancet.com/journals/ebiom…
Here are the main results. There was a single measured PFAS which had a 'significant' association with the cancer, the others just look a bit like noise to me
A new study has gone viral for purportedly showing that running therapy had similar efficacy to medication for depression
Which is weird, because a) it's not a very good study and b) seems not to show that at all 1/n
2/n The study is here. The authors describe it as a "partially randomized patient preference design", which is a wildly misleading term. In practice, this is simply a cohort study, where ~90% of the patients self-selected into their preferred treatment sciencedirect.com/science/articl…
3/n This is a big problem, because it means that there are likely confounding factors between the two groups (i.e. who is likely to choose running therapy over meds?). Instead of a useful, randomized trial, this is a very small (n=141) non-randomized paper
The study showed that COVID-19 caused, if anything, very few long-term issues for children! As a new father, I find this data very reassuring regarding #LongCovid in kids 1/n
2/n The study is here, it's a retrospective cohort comparing children aged 0-14 who had COVID-19 to a matched control using a database of primary care visits in Italy onlinelibrary.wiley.com/doi/10.1111/ap…
3/n The authors found that there was an increased risk of a range of diagnoses for the kids with COVID-19 after their acute disease, including things like runny noses, anxiety/depression, diarrhoea, etc
This study has recently gone viral, with people saying that it shows that nearly 20% of highly vaccinated people get Long COVID
I don't think it's reasonable to draw these conclusions based on this research. Let's talk about bias 1/n
2/n The study is here. It is a survey of people who tested positive to COVID-19 in Western Australia from July-Aug 2022 medrxiv.org/content/10.110…
3/n This immediately gives us our first source of bias
We KNOW that most cases of COVID-19 were missed at this point in the pandemic, so we're only getting the sample of those people who were sick enough to go and get tested