Another is the fact that no consideration has been given to the certainty of the evidence, e.g. risk of systematic bias (2 of the studies are pre-prints) and random error (amount of information, precision). Given one of the authors has literally written the book on causal inference = ironic.
If we include the only two studies that have been peer-reviewed then the MA looks like this.
That's before we again look at the overall certainty of evidence.
Also, I did that meta-analysis in 1 min. This shows the absolute pointlessness of MAs outside the context of a properly conducted and reported systematic review that includes an assessment of the certainty of evidence (i.e. GRADE or similar).
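For anyone curious what the "1 min" pooling step actually is: it's just inverse-variance weighting of log risk ratios. A minimal Python sketch, using made-up log RRs and standard errors rather than the actual trial data:

```python
import math

# Hypothetical (log RR, SE) pairs for two peer-reviewed trials --
# illustrative numbers only, NOT the actual study results.
studies = [
    (math.log(0.85), 0.20),
    (math.log(1.10), 0.25),
]

# Fixed-effect, inverse-variance pooling of the log risk ratios.
weights = [1 / se ** 2 for _, se in studies]
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled_log_rr)
ci = (math.exp(pooled_log_rr - 1.96 * pooled_se),
      math.exp(pooled_log_rr + 1.96 * pooled_se))
print(f"Pooled RR {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

The arithmetic is trivial; the point of the thread is that everything around it (risk of bias, imprecision, indirectness, publication bias) is what makes an MA worth anything.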
So poor.
If the risk of bias of the trials is high, then we downgrade the evidence from high to moderate.
We have to make a decision as to what a clinically meaningful effect is here. A 25% relative risk reduction is considered by some as 'meaningful' for estimates of non-life-threatening outcomes, e.g. risk of testing COVID +ve.
In the pre-print MA, the CI crosses the meaningful threshold (as low as 0.61) but also includes trivial/no benefit. I would downgrade by one again here, meaning we have low quality/certainty of evidence.
For my MA, I would consider downgrading two levels as it shows potential meaningful harm (so very low quality).
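The imprecision judgement above can be made almost mechanical. A sketch, where the 0.75 threshold (= a 25% relative risk reduction) and the example CI bounds are assumptions for illustration, not the papers' exact numbers:

```python
def ci_regions(ci_low, ci_high, meaningful_rr=0.75):
    """Which conclusions does a 95% CI for a risk ratio span?
    meaningful_rr = 0.75 corresponds to a 25% relative risk
    reduction (an assumed threshold, not a universal one)."""
    regions = []
    if ci_low <= meaningful_rr:
        regions.append("meaningful benefit")
    if ci_low < 1.0 and ci_high > meaningful_rr:
        regions.append("trivial/no benefit")
    if ci_high > 1.0:
        regions.append("potential harm")
    return regions

# A CI spanning more than one region is a candidate for downgrading
# the certainty of evidence for imprecision.
print(ci_regions(0.61, 1.03))
```

A CI compatible with meaningful benefit, no benefit, and harm all at once is exactly the situation where a single point estimate in a press release misleads.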
I haven't the time to look at whether there is indirectness (the studies don't include relevant/similar populations, intervention, comparator, outcomes) for the research question.
Probably also fails on the optimal information size (enough info) as here we have <300 events
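For reference, the optimal information size is usually approximated by the sample size a single adequately powered trial would need. A sketch using the standard two-proportion formula, where the 10% baseline risk is an assumption for illustration and the 25% RRR is the target mentioned above:

```python
import math

def optimal_information_size(baseline_risk, rrr, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size for a two-arm trial powered
    to detect a given relative risk reduction (standard two-proportion
    formula; two-sided alpha = 0.05, power = 0.80)."""
    p1 = baseline_risk
    p2 = baseline_risk * (1 - rrr)  # risk in the intervention group
    p_bar = (p1 + p2) / 2
    n = (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2
    return math.ceil(n)

# Assumed 10% baseline risk, 25% RRR target (illustrative numbers).
n_per_group = optimal_information_size(0.10, 0.25)
print(n_per_group)
```

At these assumed numbers you would need roughly 2,000 per group, implying several hundred events; a body of evidence with <300 events falls well short, hence the downgrade for imprecision.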
Only thing it (the totality of evidence) doesn't get downgraded on is 'consistency' (results go in same direction, overlapping CIs etc).
Publication bias also needs looking at. No idea what the authors of the pre-print did on this.
How many trials on hydroxychloroquine have been registered/non-registered?
How many of these unpublished? Though the fact that the existing trials are all null trials does indicate that studies with p>0.05 are being published.
A good SR+MA would do all these checks.
It's a pre-print, so we can give it the benefit of the doubt.
But a press-release tweet of a single point estimate that considers little of these CRITICAL issues from a MA that wasn't pre-registered + poor reporting = almost ignorable.
@BoussageonR @LGHemkens @RecoveryDoctor @GuyattGH @AnilMakam Correct - for THIS meta-analysis.
I think you are conflating different issues & placing all of them under a "current 'E'/GRADE isn't fit for purpose" argument
I'll explain: in the CPAP meta-analysis example, you're right that GRADE could (the word could is important) be done.
@BoussageonR @LGHemkens @RecoveryDoctor @GuyattGH @AnilMakam That is because GRADE does not stipulate a. what question(s) you ask, b. how you seek to answer them.
What it does do is provide a framework on how to determine the certainty of a body of evidence that you have decided (based on SR methods) addresses the question you have set.
@BoussageonR @LGHemkens @RecoveryDoctor @GuyattGH @AnilMakam If you choose to ask the question "Is CPAP effective in reducing both all-cause and cardiovascular mortality in patients with OSA?", and then YOU CHOOSE that evidence from RCTs AND Non-RCTs can inform this decision, that is not a function/requirement of EBM/GRADE.
Lots of takes on this. Many are takes people would have anyway regardless of this paper, such is the general diet/nutrition discourse, particularly around UPF.
For those of you interested in what an actual evidence-based approach to this paper looks like, buckle up🧵
In my view it is a good example of a pervasive issue in the way most medical/health research is interpreted, and that even includes the authors themselves.
What’s the issue?
IGNORING UNCERTAINTY!
2/
A key function of a systematic review is to inform us how certain we can be that the available data reflect the truth.
There are different methods a review team can take. One of the most commonly used in med/health research is GRADE.
"It's going to protect YOU..."
"It will reduce YOUR risk of a heart attack, cancer, diabetes etc".
We see this all the time when it comes to medical treatments and health interventions.
I'm going to show why 99.9% of the time this type of phrasing/framing ("YOU/R") is wrong🧵
When people hear these phrases with "YOU" / "YOUR" in them, it's most likely they will perceive the benefits & risks of treatment/intervention in relation to their own personal benefit/risk. The thing is, we do not, & cannot, know your own personal benefit/risk.
The vast majority of the time we only have information on groups of people (samples). Benefits/risks in this context relate to the numbers of people (with a similar health profile) among the group who either do or do not experience a given health outcome.
This started out of frustration at seeing an organisation promote evidence from a commentary piece as if it were proof a question had been answered. No uncertainty. Proof.
/1
The same organisation didn't mention a systematic review attempting to answer the same question (which happened to show uncertainty). Yes, the organisation was aware of the review's existence.
This signals, to me, an agenda. It's also a good example of the state of things.
/2
The state of things in relation to evidence and "evidence-based". It's a state I worry about, hence the original tweet.
This isn't about the topic/question. It's about the principle: promoting selected information as if it were proven fact, while ignoring information that might contest it.
/3
Our latest publication revisits a well-known problem: reporting of relative effect estimates without absolute effects in journal publications of clinical trials:
First, an intro to the problem. A practical example is probably best here. Take a look at the image. How many more people are at increased risk of bowel cancer?
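The arithmetic at the heart of the problem: the same relative risk translates into very different absolute effects depending on the baseline risk. Numbers below are illustrative, not from the paper:

```python
# Same relative risk, very different absolute effects depending on
# baseline risk -- illustrative numbers only.
def absolute_effects(baseline_risk, rr):
    exposed_risk = baseline_risk * rr
    ard = exposed_risk - baseline_risk  # absolute risk difference
    extra_per_10000 = ard * 10_000      # extra affected people per 10,000
    return exposed_risk, ard, extra_per_10000

# RR = 2.0 ("doubles your risk") for a common vs a rare outcome:
for baseline in (0.05, 0.0005):
    _, _, extra = absolute_effects(baseline, 2.0)
    print(f"baseline {baseline:.2%}: ~{extra:.0f} extra per 10,000")
```

"Doubles your risk" can mean hundreds of extra cases per 10,000 people, or a handful, which is exactly why relative estimates should not be reported without the absolute ones.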