Another is the fact that no consideration has been given to the certainty of the evidence, e.g. risk of systematic bias (2 of the studies are pre-prints) or random error (amount of information, precision). Given one of the authors has literally written the book on causal inference = ironic.
If we include only the two studies that have been peer-reviewed, then the MA looks like this.
That's before we again look at the overall certainty of evidence.
Also, I did that meta-analysis in 1 min. This shows the absolute pointlessness of MAs outside the context of a properly conducted and reported systematic review, one that includes an assessment of the certainty of evidence (i.e. GRADE or similar).
So poor.
If the risk of bias of the trials is high, then we downgrade the evidence from high to mod.
We have to decide what a clinically meaningful effect is here. A 25% RRR is considered by some as 'meaningful' for estimates of non-life-threatening outcomes, e.g. risk of testing COVID +ve.
In the pre-print MA, the CI crosses the meaningful threshold (as low as 0.61) but also includes trivial/no benefit. I would downgrade by one level again here, meaning we have low quality/certainty of evidence.
For my MA, I would consider downgrading two levels, as it shows potential meaningful harm (so very low quality).
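The imprecision judgements in the last few tweets can be sketched in code. This is an illustration, not GRADE's official algorithm: I'm assuming RR ≤ 0.75 marks the 'meaningful' 25% benefit threshold, RR ≥ 1.25 the mirror-image harm threshold, and counting how many decision thresholds the 95% CI crosses.

```python
def imprecision_downgrades(ci_low, ci_high, benefit=0.75, harm=1.25):
    """Illustrative GRADE-style imprecision check for a risk ratio.

    Thresholds are assumptions for this sketch: RR <= 0.75 = meaningful
    benefit (the 25% RRR above), RR = 1.0 = no effect, RR >= 1.25 = harm.
    """
    crossed = sum(ci_low < t < ci_high for t in (benefit, 1.0, harm))
    # A CI spanning two regions (e.g. meaningful benefit AND no benefit)
    # -> downgrade one level; spanning benefit AND harm -> two levels.
    return max(0, crossed - 1)

# Pre-print MA: CI runs from 0.61 up past no-effect -> one downgrade
print(imprecision_downgrades(0.61, 1.05))
# My MA: CI also covers meaningful harm -> two downgrades
print(imprecision_downgrades(0.70, 1.40))
```

Note the upper bounds (1.05, 1.40) are invented for illustration; the thread only gives the 0.61 lower bound.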
I haven't the time to look at whether there is indirectness (the studies don't include relevant/similar populations, intervention, comparator, outcomes) for the research question.
Probably also fails on the optimal information size (enough info) as here we have <300 events
Only thing it (the totality of evidence) doesn't get downgraded on is 'consistency' (results go in same direction, overlapping CIs etc).
Publication bias also needs looking at. No idea what the authors of the pre-print did on this.
How many trials on hydroxychloroquine have been registered/not registered?
How many of these unpublished? Though the fact that the existing trials are all null trials does indicate that studies with p>0.05 are being published.
A good SR+MA would do all these checks.
It's a pre-print so we can give benefit of the doubt.
But a press-release tweet of a single point estimate that addresses almost none of these CRITICAL issues, from an MA that wasn't pre-registered + poor reporting = almost ignorable.
Lots of takes on this. Many are what people have anyway regardless of this paper, such is the general diet/nutrition discourse, and particularly around UPF.
For those of you interested in what an actual evidence-based approach to this paper looks like, buckle up🧵
In my view it is a good example of a pervasive issue in the way most medical/health research is interpreted, and that even includes the authors themselves.
What’s the issue?
IGNORING UNCERTAINTY!
2/
A key function of a systematic review is to tell us how certain we can be that the available data reflect the truth.
There are different methods a review team can take. One of the most commonly used in med/health research is GRADE.
"It's going to protect YOU..."
"It will reduce YOUR risk of a heart attack, cancer, diabetes etc".
We see this all the time when it comes to medical treatments and health interventions.
I'm going to show why 99.9% of the time this type of phrasing/framing ("YOU/R") is wrong🧵
When people hear these phrases with "YOU" / "YOUR" in them, it's most likely they will perceive the benefits & risks of treatment/intervention in relation to their own personal benefit/risk. The thing is, we do not, & cannot, know your own personal benefit/risk.
The vast majority of the time we only have information on groups of people (samples). Benefits/risks in this context relate to the numbers of people (with a similar health profile) among the group who either do or do not experience a given health outcome.
out of frustration of seeing an organisation promote evidence from a commentary piece as if it were proof a question had been answered. No uncertainty. Proof.
/1
The same organisation didn't mention a systematic review attempting to answer the same question (which happened to show uncertainty). Yes, the organisation was aware of the review's existence.
This signals, to me, an agenda. It's also a good example of the state of things.
/2
The state of things in relation to evidence and "evidence-based". It's a state I worry about, hence the original tweet.
This isn't about the topic/question. It's about the principle. Of promoting selected information as if it is proven fact, ignoring info that might contest.
/3
Our latest publication revisits a well-known problem: reporting of relative effect estimates without absolute effects in journal publications of clinical trials:
First an intro to the problem. A practical example is probably best here. Take a look at the image. How many more people are at increased risk of bowel cancer?
Lots of folk already commented on “1 or 2 dose”. One thing that comes up often is “efficacy”, followed by numbers like “80%”, “95%”. In many cases, it reads as if folk think this is how much YOUR chance of getting the virus is reduced by. That’s incorrect.
These figures are actually the relative risk reduction (RRR) of infection with the vaccine. E.g. 2000 people without Covid-19: 1000 vaccinated (group 1), 1000 not vaccinated (group 2).
200 people (20%) in group 2 get Covid-19
10 people (1%) in group 1 get Covid-19
= 95% RRR
The absolute effect is 19% (the difference between 20% and 1%).
Another way of putting it = in 1000 people who don’t have Covid and are not vaccinated, 200 will catch it.
If the same 1000 people had the vaccine, 10 would get it, meaning 190 will be spared.
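For anyone who wants the arithmetic explicit, here's the example above as a minimal Python sketch (the numbers are the thread's hypothetical illustration, not real trial data):

```python
def risk_effects(events_treated, n_treated, events_control, n_control):
    """Return (relative risk reduction, absolute risk reduction)."""
    risk_t = events_treated / n_treated   # e.g. 10/1000  = 1%
    risk_c = events_control / n_control   # e.g. 200/1000 = 20%
    arr = risk_c - risk_t                 # absolute risk reduction
    rrr = arr / risk_c                    # relative risk reduction
    return rrr, arr

rrr, arr = risk_effects(10, 1000, 200, 1000)
print(f"RRR = {rrr:.0%}, ARR = {arr:.0%}")  # RRR = 95%, ARR = 19%
```

The 190 people "spared" per 1000 is just the ARR (19%) scaled up, which is exactly why reporting the 95% RRR alone overstates what an individual should expect.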