Well, the aim is to estimate the infection-fatality rate (IFR) of #COVID19 using seroprevalence (antibody test) studies
The methodology here is not ideal at first glance
4/n What's the issue?
Well, if you want to estimate a number like this from published data you want your search and appraisal methods to be SYSTEMATIC
Hence, systematic review
5/n Instead, what we appear to have here is an opaque search methodology, little information on how inclusion/exclusion criteria were applied (and no real justification for those criteria)
6/n For example, seroprevalence studies including healthcare workers were excluded, because the samples are biased, but studies including blood donors were not, even though these are arguably even more biased
That's a strange inconsistency
7/n Studies only described in the media were excluded, but this appears to have included government reports as well
Again, there's no justification for this and it is REALLY WEIRD to exclude government reports (they're doing most of the testing!)
8/n Moving on, the study then calculated an inferred IFR where the original authors hadn't already done so. The calculation is crude, but not entirely wrong
However, there's an issue - the estimates were then 'adjusted'
9/n Specifically, the IFR estimates were cut by 10-20% depending on which antibody tests the included studies used
I had a look at the reference here, and it definitely doesn't support such a blanket judgement
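The crude inference the preprint relies on, plus the blanket adjustment criticised above, can be sketched in a few lines. All numbers here are invented for illustration and are not taken from the preprint:

```python
# Illustrative sketch of a crude inferred-IFR calculation.
# All figures are made up for demonstration purposes.

def inferred_ifr(deaths, seroprevalence, population):
    """Naive IFR: deaths divided by the estimated number ever infected."""
    infected = seroprevalence * population
    return deaths / infected

ifr = inferred_ifr(deaths=500, seroprevalence=0.05, population=1_000_000)
# 500 deaths / 50,000 estimated infections = 1% crude IFR

# The preprint then cuts estimates by 10-20% based on the antibody
# tests used -- the blanket adjustment criticised above:
adjusted = ifr * 0.8  # a 20% downward cut
```

Note how sensitive the result is to the seroprevalence denominator: any bias in who got sampled flows straight through to the IFR.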
10/n Ok, so, on to the results
This table is basically the crux of the review. 12 included studies, with "corrected" IFR ranging from 0.02-0.4%
MUCH lower than most published estimates
11/n A colleague and I did a systematic review and meta-analysis of published estimates of IFR and came to an aggregated estimate of 0.74% (0.51-0.97%) so this is a bit of a surprise to me medrxiv.org/content/10.110…
What's happening here?
12/n Looking at this table, there are some things that immediately spring out
Firstly, three of these studies are of blood donors
13/n It is pretty easy to see why these studies aren't actually estimates of IFR - blood donors are by definition healthy, young etc, and so any IFR calculated from these populations is going to be MUCH lower than the true figure
14/n But if we look at the other included studies, this problem is repeated. The French and Japanese studies both used highly-selected patient populations, both of which likely would lead to a biased (low) estimate of IFR
15/n (The same concern has been raised about the Santa Clara study at the bottom, but for now let's ignore that and move on)
16/n Remember when I said that the calculation of individual IFRs was reasonable?
Well, there's a problem here. When Ioannidis calculated IFRs, he did a decent job. However, some of the INCLUDED STUDIES didn't
17/n For example, the Iran, Kobe, and Brazilian studies made no attempt to account for right-censoring
18/n In addition, the Iranian study uses the official figure for deaths, and as has been pointed out, this number may be a significant underestimate
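Right-censoring matters here because deaths lag infections by roughly 2-3 weeks: deaths counted on the day of a seroprevalence survey undercount the deaths among people already infected. A minimal sketch, with invented numbers:

```python
# Why ignoring right-censoring biases IFR downwards.
# All figures are invented for illustration.

deaths_at_survey = 300   # deaths reported when blood samples were drawn
deaths_after_lag = 500   # deaths ~3 weeks later, from the same infections
infected = 100_000       # infections estimated from seroprevalence

naive_ifr = deaths_at_survey / infected   # 0.3% -- biased low
lagged_ifr = deaths_after_lag / infected  # 0.5% -- accounts for the lag
```

Studies that divide same-day deaths by same-day infections, as the ones above appear to, will systematically land on the lower figure.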
19/n So, a problem
The red-outlined studies are clearly not estimates of population IFR - they look at specific, selected individuals and can't be extrapolated
The orange-outlined studies are likely underestimates due to methodology
20/n If we exclude these potentially misleading numbers, the lowest IFR estimate immediately jumps from 0.02% to 0.18%
Coincidentally, that 0.18% is Ioannidis' own research
21/n To me, a low estimate of 0.18% makes MUCH more sense than a minimum of 0.02% for IFR
Why? Well, take New York. ~16,000 deaths in a city of 8.4 million means that even if every single person had been infected, the IFR would be 0.19%
22/n Now, everyone calls NYC an outlier, and perhaps it is, but if you repeat this calculation for other places in the States, the same chilling thing happens:
Massachusetts: 0.09%
New Jersey: 0.12%
Connecticut: 0.1%
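The logic behind these figures is a hard floor: dividing deaths by the *entire* population assumes everyone was infected, so the true IFR cannot possibly be lower. Using the NYC numbers from above:

```python
# Deaths as a share of the whole population give a lower bound on IFR:
# even if literally everyone had been infected, IFR can't fall below this.

def ifr_floor(deaths, population):
    return deaths / population

nyc_floor = ifr_floor(16_000, 8_400_000)
# ~0.0019, i.e. ~0.19% -- already above the review's 0.02% minimum
```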
23/n The same is true of other places overseas - Lombardy has a total death toll of 0.16%, Madrid is around the same, even London is above 0.1% dead due to COVID-19
It seems INCREDIBLY unlikely, at this point, for the IFR to be below 0.1%
24/n Now, this is noted in the preprint, but basically dismissed as the deaths of old and poor people
That's...not a great perspective imo
25/n In particular, Ioannidis argues that places with lots of elderly and disadvantaged individuals are "very uncommon in the global landscape"
This is trivially incorrect. Most of the world is far worse off than people in NEW YORK CITY
26/n There's also some discussion of the obviously underestimated studies, which raises the question of why they were included in the first place. They are clearly not realistic numbers
27/n ...and then a paragraph about Iran that contradicts the earlier points raised about why NYC has seen so many deaths
28/n Some discussion about press release science (we are agreed that it isn't good) but no mention of government reports
This is a HUGE gap in the study
29/n For example, why wasn't this Spanish seroprevalence study included?
It is the biggest in the world, and estimates IFR to be ~1-1.3% - triple the highest estimate in this review!
30/n On the other hand, why were clearly biased estimates included? Why was 500 arbitrarily chosen as the minimum sample size for included research? (If you choose 1,000 instead, the IFRs are suddenly much higher)
31/n Which brings us to this conclusion, which is, frankly, a bit astonishing
Is it a fact? That's certainly not shown in this review, and most evidence seems to contradict this statement
32/n The final thoughts here may make this a bit more understandable
It seems the author is not a fan of lockdowns. Perhaps this has driven his decisions for his review?
33/n Ultimately, it's hard to know the why, but what we can say is that this review appears to have very significantly underestimated the infection-fatality rate of COVID-19
34/n Moreover, the methodology is quite clearly inadequate to estimate the IFR of COVID-19, and thus the study fails to achieve its own primary objective
35/n Something that people are pointing out - another weakness of this study is that the author appears to have taken the LOWEST POSSIBLE IFR estimate from each study
For example, the Gangelt authors posited an IFR of 0.37-0.46%, but this paper cites 0.28%
36/n I should note - this paper is currently a PREPRINT
This gives us a great opportunity. We can correct the record in real time, and put up a study that actually achieves its aims
Let's hope it happens
37/n I think it's also worth pointing out that I personally WISH that the IFR of COVID-19 was 0.02%. It would solve so many of our problems - unfortunately, it seems extremely unlikely
The final large published trial on ivermectin for COVID-19, PRINCIPLE, is now out. Main findings:
1. A clinically unimportant reduction (~1-2 days) in time to resolution of symptoms. 2. No benefit for hospitalization/death.
Now, you may be asking "why does anyone care at all any more about ivermectin for COVID?" to which I would respond "yes"
We already knew pretty much everything this study shows. That being said, always good to have more data!
The study is here:
For me, the main finding is pretty simple - ivermectin didn't impact the likelihood of people going to hospital or dying from COVID-19. This has now been shown in every high-quality study out there. pubmed.ncbi.nlm.nih.gov/38431155/
What's particularly interesting is a finding that the authors don't really discuss in their conclusion. These results appear to show that gender-affirming care is associated with a reduction in suicide risk 1/n
2/n The paper is a retrospective cohort study that compares young adults and some teens who were referred for gender related services in Finland with a cohort that was matched using age and sex. The median age in the study was 19, so the majority of the population are adults.
3/n The study is very limited. The authors had access to the Finnish registries which include a wide range of data, but chose to only correct their cohorts for age, sex, and number of psychiatric appointments prior to their inclusion in the cohort.
These headlines have to be some of the most ridiculous I've seen in a while
The study tested 18 different PFAS in a tiny sample of 176 people. Of those, one had a barely significant association with thyroid cancer
This is genuinely just not news at all
Here's the study. I'm somewhat surprised it even got published if I'm honest. A tiny case-control study, they looked at 88 people with thyroid cancer and 88 controls thelancet.com/journals/ebiom…
Here are the main results. There was a single measured PFAS which had a 'significant' association with the cancer, the others just look a bit like noise to me
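One lone 'significant' result out of 18 tests is exactly what multiple comparisons predict. A quick sketch (assuming independent tests, which won't hold exactly for correlated PFAS measures, but the point stands):

```python
# With 18 tests at alpha = 0.05, seeing at least one spurious
# 'significant' association is more likely than not.
# Assumes independence for simplicity.

alpha, n_tests = 0.05, 18
p_at_least_one_false_positive = 1 - (1 - alpha) ** n_tests
# ~0.60 -- a single barely-significant hit is entirely unremarkable
```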
A new study has gone viral for purportedly showing that running therapy had similar efficacy to medication for depression
Which is weird, because a) it's not a very good study and b) seems not to show that at all 1/n
2/n The study is here. The authors describe it as a "partially randomized patient preference design", which is a wildly misleading term. In practice, this is simply a cohort study, where ~90% of the patients self-selected into their preferred treatment sciencedirect.com/science/articl…
3/n This is a big problem, because it means that there are likely confounding factors between the two groups (i.e. who is likely to choose running therapy over meds?). Instead of a useful, randomized trial, this is a very small (n=141) non-randomized paper
The study showed that COVID-19 had, if anything, very few long-term issues for children! As a new father, I find this data very reassuring regarding #LongCovid in kids 1/n
2/n The study is here, it's a retrospective cohort comparing children aged 0-14 who had COVID-19 to a matched control using a database of primary care visits in Italy onlinelibrary.wiley.com/doi/10.1111/ap…
3/n The authors found that there was an increased risk of a range of diagnoses for the kids with COVID-19 after their acute disease, including things like runny noses, anxiety/depression, diarrhoea, etc
This study has recently gone viral, with people saying that it shows that nearly 20% of highly vaccinated people get Long COVID
I don't think it's reasonable to draw these conclusions based on this research. Let's talk about bias 1/n
2/n The study is here. It is a survey of people who tested positive to COVID-19 in Western Australia from July-Aug 2022 medrxiv.org/content/10.110…
3/n This immediately gives us our first source of bias
We KNOW that most cases of COVID-19 were missed at this point in the pandemic, so we're only getting the sample of those people who were sick enough to go and get tested
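A rough sketch of how this ascertainment bias inflates the headline rate. The numbers below are invented purely for illustration, not taken from the survey:

```python
# If testing mainly captures sicker cases, the surveyed Long COVID rate
# overstates the rate among ALL infections. Figures are hypothetical.

tested_cases = 1_000
long_covid_in_tested = 180   # 18%, the kind of headline rate reported

missed_cases = 3_000         # milder infections that were never tested
long_covid_in_missed = 150   # a lower rate (5%) among mild cases

true_rate = (long_covid_in_tested + long_covid_in_missed) / (
    tested_cases + missed_cases
)
# ~8% across all infections, versus 18% in the tested-only sample
```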