2/n Firstly, we've got studies that probably or definitely did not take place as described. I'd not include these in any analysis, certainly not an aggregate model
3/n We've got a few case series that are just a bit of a waste of time - without even controlling for age, these provide no useful evidence even as part of an aggregate model
4/n Similarly, we've got these awful analyses of mass drug administration programs in Africa, where both studies did an almost identical analysis that also ignored that these programs were halted during COVID-19. Definitely not useful as evidence
5/n (There are, of course, way more issues with both the ecological trials and case series, but let's just look at the most obvious stuff here)
6/n Then, we've got some more studies that should probably be excluded due to their low quality. We've got this study, in which there were only 34 people taking ivermectin, and thus little hope of controlling for confounding
7/n We've got quite a few observational studies that didn't control for really key confounding variables, for example this paper where the control group was clearly much sicker at baseline. Also, mistakes in the tables
8/n We've got randomized trials that have really serious errors, such as Babalola with numerous numeric mistakes or Shoumann where the authors admitted to breaking randomization in the methods, and Okumus which 'randomized' by alternating
9/n We've also got stuff in here that doesn't give us nearly enough information to include in a meta-analysis. In one memorable instance this includes a slide tweeted by a single, anonymous account
10/n If you were to ask which trials I would definitely not include in any meta-analysis due to extremely low quality, serious issues with bias, or potential fabrication, you'd end up with this
11/n But even beyond this, there's so much weirdness in this website. For example, the authors take studies where there's a tiny relative risk difference between groups, but use the inverse outcome to show a tiny point estimate (i.e. a huge apparent benefit)
12/n For instance, in this study 108/110 people in the ivermectin group were discharged compared to 124/144 in the control, or a 14% increase in the relative likelihood of being discharged if you took ivermectin
13/n Instead, based on a pretty dubious logistic regression in the initial paper, the authors have taken the ratio of NOT being discharged, which massively exaggerates the effect, turning a 14% benefit into an 87% one
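To make the arithmetic concrete, here's a quick sketch of the two calculations. The counts are the ones quoted above; the variable names are mine:

```python
# Discharge counts quoted in the thread:
# ivermectin: 108 of 110 discharged; control: 124 of 144 discharged
ivm_discharged, ivm_total = 108, 110
ctl_discharged, ctl_total = 124, 144

# Relative risk of the GOOD outcome (being discharged)
rr_discharge = (ivm_discharged / ivm_total) / (ctl_discharged / ctl_total)
print(f"RR of discharge: {rr_discharge:.2f}")  # ~1.14, a 14% relative increase

# Relative risk of the INVERTED outcome (not being discharged)
rr_not_discharged = ((ivm_total - ivm_discharged) / ivm_total) / \
                    ((ctl_total - ctl_discharged) / ctl_total)
print(f"RR of non-discharge: {rr_not_discharged:.2f}")  # ~0.13, the '87% benefit'
```

Same data, same direction of effect, but flipping which event you count turns a modest 14% relative difference into a dramatic-looking 87% "reduction".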
14/n And then there are the numerous examples where the authors contradict their own methodology (always in favour of ivermectin, of course)
15/n The authors say that they take "most serious" events. But here's an example where they took the risk of not having a fever at day 7 instead of length of hospitalization. Why? Well, the placebo group did better on hospitalization than ivermectin
16/n So firstly, if we wanted to make ivmmeta non-useless we'd have to exclude A LOT more of these garbage 'studies'. We'd also have to run the calculations in an honest way, and apply the criteria evenly
17/n I doubt that any of this will be done, because the purpose of the website is not to look at the evidence reasonably, but to prove that ivermectin works regardless of what the evidence shows 🤷‍♂️
The final large published trial on ivermectin for COVID-19, PRINCIPLE, is now out. Main findings:
1. A clinically unimportant (~1-2 day) reduction in time to resolution of symptoms. 2. No benefit for hospitalization/death.
Now, you may be asking "why does anyone care at all any more about ivermectin for COVID?" to which I would respond "yes"
We already knew pretty much everything this study shows. That being said, always good to have more data!
The study is here:
For me, the main finding is pretty simple - ivermectin didn't impact the likelihood of people going to hospital or dying from COVID-19. This has now been shown in every high-quality study out there. pubmed.ncbi.nlm.nih.gov/38431155/
What's particularly interesting is a finding that the authors don't really discuss in their conclusion. These results appear to show that gender affirming care is associated with a reduction in suicide risk 1/n
2/n The paper is a retrospective cohort study that compares young adults and some teens who were referred for gender related services in Finland with a cohort that was matched using age and sex. The median age in the study was 19, so the majority of the population are adults.
3/n The study is very limited. The authors had access to the Finnish registries which include a wide range of data, but chose to only correct their cohorts for age, sex, and number of psychiatric appointments prior to their inclusion in the cohort.
These headlines have to be some of the most ridiculous I've seen in a while
The study tested 18 different PFAS in a tiny sample of 176 people. Of those, one had a barely significant association with thyroid cancer
This is genuinely just not news at all
Here's the study. I'm somewhat surprised it even got published if I'm honest. A tiny case-control study, they looked at 88 people with thyroid cancer and 88 controls thelancet.com/journals/ebiom…
Here are the main results. There was a single measured PFAS which had a 'significant' association with the cancer, the others just look a bit like noise to me
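That "one 'significant' result out of 18" pattern is exactly what multiple testing predicts. A back-of-envelope sketch, assuming independent tests at α = 0.05 (a simplification, since PFAS levels are likely correlated):

```python
alpha = 0.05   # conventional significance threshold
n_tests = 18   # number of PFAS compounds tested in the study

# If none of the 18 PFAS were truly associated with thyroid cancer,
# the chance of at least one 'significant' result by luck alone is:
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"P(>=1 false positive): {p_at_least_one:.2f}")  # ~0.60
```

In other words, even if every association were pure noise, you'd expect a "significant" hit in a study like this more often than not.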
A new study has gone viral for purportedly showing that running therapy had similar efficacy to medication for depression
Which is weird, because a) it's not a very good study and b) seems not to show that at all 1/n
2/n The study is here. The authors describe it as a "partially randomized patient preference design", which is a wildly misleading term. In practice, this is simply a cohort study, where ~90% of the patients self-selected into their preferred treatment sciencedirect.com/science/articl…
3/n This is a big problem, because it means that there are likely confounding factors between the two groups (i.e. who is likely to choose running therapy over meds?). Instead of a useful, randomized trial, this is a very small (n=141) non-randomized paper
This study showed that COVID-19 caused, if anything, very few long-term issues for children! As a new father, I find this data very reassuring regarding #LongCovid in kids 1/n
2/n The study is here, it's a retrospective cohort comparing children aged 0-14 who had COVID-19 to a matched control using a database of primary care visits in Italy onlinelibrary.wiley.com/doi/10.1111/ap…
3/n The authors found that there was an increased risk of a range of diagnoses for the kids with COVID-19 after their acute disease, including things like runny noses, anxiety/depression, diarrhoea, etc
This study has recently gone viral, with people saying that it shows that nearly 20% of highly vaccinated people get Long COVID
I don't think it's reasonable to draw these conclusions based on this research. Let's talk about bias 1/n
2/n The study is here. It is a survey of people who tested positive for COVID-19 in Western Australia from July-Aug 2022 medrxiv.org/content/10.110…
3/n This immediately gives us our first source of bias
We KNOW that most cases of COVID-19 were missed at this point in the pandemic, so we're only getting the sample of those people who were sick enough to go and get tested
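A toy calculation shows how this kind of ascertainment bias inflates a prevalence estimate. Every number below except the ~20% survey figure is invented purely for illustration, not taken from the study:

```python
# Hypothetical scenario: only sicker people get tested, and more severe
# acute illness goes with a higher chance of lingering symptoms.
tested_fraction = 0.3      # assumed share of infections that got tested
lc_rate_tested = 0.20      # long-COVID rate observed among the tested (~the viral figure)
lc_rate_untested = 0.05    # assumed (lower) rate among the missed, milder cases

# The true population rate is a weighted average over both groups
true_rate = (tested_fraction * lc_rate_tested
             + (1 - tested_fraction) * lc_rate_untested)
print(f"Survey estimate: {lc_rate_tested:.0%}, plausible true rate: {true_rate:.1%}")
```

Under these made-up but not implausible assumptions, a survey of tested cases would report ~20% while the population rate is closer to 10% - surveying only the people sick enough to get tested can double the apparent prevalence.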