Great work by Connor Wells & Shubham Sharma @QueensUHealth asking two important #GlobalHealth questions 1. Is there a #publicationbias against papers from #LMICs? 2. Do oncology RCTs match the global disease burden?
Confirms something we always knew
What we did was this...
We identified 3 problems and 2 facts
We looked at all phase 3 oncology RCTs published from 2014 to 2017 and classified their origin using the #WorldBank economic classification of countries. We then compared RCT designs and results between HICs and LMICs. The findings were striking…
Of 694 RCTs, 636 (92%) were led by HICs and 58 (8%) by LMICs. This is the first problem – a huge imbalance in where research is done. Cancer incidence is strikingly different in HICs & LMICs, with a considerable burden falling on LMICs. How can we accept such a skewed distribution of research?
601 (87%) evaluated systemic therapies & just 88 (13%) evaluated surgery or radiotherapy. This is the second problem – a disproportionate emphasis on systemic-therapy research, with not nearly enough on curative options like surgery and radiotherapy.
The distribution of RCTs is disproportionate to cancer burden – this is the third problem. Breast cancer accounts for 7% of cancer deaths but 17% of cancer research; gastroesophageal cancer accounts for 14% of cancer deaths but just 6% of research. Similar gaps hold for liver, pancreatic & cervical cancers.
LMIC RCTs had smaller sample sizes than HIC RCTs (median 219 vs 474; p<.001) and were more likely to meet their primary end point (39/58 [67%] vs 286/636 [45%]; p=.001).
Median effect size was larger in LMICs than in HICs (HR 0.62 vs 0.84; p<.001).
Fact 1: Margins of benefit were higher in LMIC research.
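The hazard ratios above can be read as relative reductions in the event hazard: a minimal arithmetic sketch (interpretation only, numbers taken from the thread):

```python
# A hazard ratio (HR) below 1 means a lower event rate in the experimental
# arm; 1 - HR is the relative reduction in the hazard. Illustrative
# arithmetic only, using the median HRs reported in the thread.
for label, hr in [("LMIC median", 0.62), ("HIC median", 0.84)]:
    print(f"{label}: HR {hr} -> {1 - hr:.0%} relative reduction in hazard")
```

So the median LMIC trial reported roughly a 38% relative hazard reduction versus roughly 16% for the median HIC trial.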
Despite superior margins of benefit, LMIC RCTs are published in journals with a lower median Impact Factor than trials from HICs (7 vs 21; p<.001).
Publication bias persisted regardless of whether a trial was positive or negative (median IF for negative trials: LMIC 5 vs HIC 18; for positive trials: LMIC 9 vs HIC 25; p<.001).
Fact 2: Systematic publication bias exists
Conclusions:
1. Research is done disproportionately in HICs and does not match the global burden of cancer.
2. RCTs from LMICs are more likely to identify effective therapies and report larger effect sizes.
3. There is a funding and publication bias against RCTs led by LMICs.
We need to change this. The end.
The WHO’s chief scientist on a year of loss and learning nature.com/articles/d4158…
For anyone remotely involved in healthcare, these are life lessons from @doctorsoumya. A must read.
For those of you who want a quick analysis, thread.
Disclaimer: I’m just breaking this up & annotating it with my own comments. Text in quotes is her exact words (with some poetic license).
Planning ahead & prioritizing first steps – an important aspect of taking up a new job
“My original plan for 2020 included rolling out new processes to ensure the quality of technical documents, such as guidelines on water quality, tobacco advertising and immunization programmes”
The preprints of the #SOLIDARITY trial are out on MedRxiv. While many may lament that none of the four drugs tested showed benefit, this is a remarkable trial for many reasons. Thread
What the #MAMS design does is enable testing multiple drugs simultaneously, with the flexibility to drop unpromising ones & add promising new ones midway through the trial. This was crucial in the #COVID_19 pandemic, where the situation has been constantly evolving.
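The arm-dropping half of that design can be sketched in a toy simulation. Everything below is made up for illustration – the arm names, true response rates, futility margin, and stage size are hypothetical and have nothing to do with the actual SOLIDARITY protocol:

```python
import random

random.seed(0)

# Toy multi-arm multi-stage (MAMS) sketch: hypothetical arms with
# hypothetical true response rates -- NOT the SOLIDARITY design.
TRUE_RESPONSE = {"drug_A": 0.20, "drug_B": 0.35, "control": 0.20}
FUTILITY_MARGIN = 0.05    # drop an arm if it beats control by less than this
PATIENTS_PER_STAGE = 200

def run_stage(arms):
    """Randomize PATIENTS_PER_STAGE patients per arm; return observed response rates."""
    rates = {}
    for arm in arms:
        responses = sum(random.random() < TRUE_RESPONSE[arm]
                        for _ in range(PATIENTS_PER_STAGE))
        rates[arm] = responses / PATIENTS_PER_STAGE
    return rates

active = ["drug_A", "drug_B", "control"]
for stage in (1, 2):
    rates = run_stage(active)
    # Interim look: keep control, drop experimental arms that look futile.
    active = ["control"] + [a for a in active
                            if a != "control"
                            and rates[a] - rates["control"] >= FUTILITY_MARGIN]
    print(f"stage {stage}: rates={rates}, continuing with {active}")
```

A real MAMS trial also allows new arms to join mid-trial (as SOLIDARITY did), and uses pre-specified statistical stopping boundaries rather than a crude margin like this one.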
I’ve been watching with increasing concern the trend of new daily diagnoses of COVID-19 in India over the past two weeks. To me, this reflects a general public mood that seems to have begun to ignore the threat this virus poses. Thread
The good news is that our death rates haven’t been as bad as some other countries’ (we might debate the accuracy of death reporting), but with a population of 1.35 billion people, the absolute numbers are still sobering. And rural India is just beginning to get hit.
What I’d like to see is reliable “excess mortality” data and a P-score, which would quantify the true impact of the pandemic on deaths in India. We know that Mumbai had an excess mortality of 13,000 deaths between Apr and Jul 2020. We don’t yet have data for India as a whole.
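The P-score is simple arithmetic: excess deaths expressed as a percentage of the expected baseline. A minimal sketch – the 13,000 excess deaths for Mumbai come from the thread, but the expected-deaths baseline below is a made-up illustrative figure, not a real statistic:

```python
# P-score: percentage excess mortality over the expected (baseline) deaths.
def p_score(observed_deaths: int, expected_deaths: int) -> float:
    """Excess mortality as a percentage of the expected baseline."""
    excess = observed_deaths - expected_deaths
    return 100 * excess / expected_deaths

expected = 40_000                 # hypothetical baseline deaths for the period
observed = expected + 13_000      # baseline plus the reported excess
print(f"P-score: {p_score(observed, expected):.1f}%")
```

The point of the P-score is that it is comparable across cities and countries of very different sizes, unlike raw excess-death counts.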
I can't believe the @US_FDA Commissioner @SteveFDA announced that 35 out of 100 patients treated with #ConvalescentPlasma will benefit from it. This demonstrates either a lack of understanding of basic statistics (relative risk vs absolute risk) or external pressures. (1/n)
There are several problems with this - first, this is not based on randomized evidence. This is based on "data obtained from the ongoing National Expanded Access Treatment Protocol (EAP) sponsored by the Mayo Clinic". The preprint is available on medrxiv.org/content/10.110… (2/n)
In an observational study of 35,322 patients transfused with CP, 7-day mortality was 8.7% in those transfused early (3 days or less) vs 11.9% in those transfused later (>3 days). 30-day mortality was 21.6% vs 26.7%. (3/n)
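The relative-vs-absolute distinction is worth making concrete. A sketch of the arithmetic using the mortality figures above (these are non-randomized, early-vs-late comparisons, so none of this implies causal benefit):

```python
# Absolute risk reduction (ARR) vs relative risk reduction (RRR) from the
# observational convalescent-plasma figures quoted in the thread.
def risk_reduction(risk_treated: float, risk_control: float):
    """Return (absolute, relative) risk reduction, both as fractions."""
    arr = risk_control - risk_treated
    rrr = arr / risk_control
    return arr, rrr

# 7-day mortality: 8.7% (early transfusion) vs 11.9% (late)
arr7, rrr7 = risk_reduction(0.087, 0.119)
print(f"7-day:  ARR = {arr7:.1%}, RRR = {rrr7:.1%}")

# 30-day mortality: 21.6% vs 26.7%
arr30, rrr30 = risk_reduction(0.216, 0.267)
print(f"30-day: ARR = {arr30:.1%}, RRR = {rrr30:.1%}")
```

The relative reduction (~27% at 7 days, ~19% at 30 days) always sounds far more impressive than the absolute reduction (~3–5 percentage points), which is exactly how a headline like "35 out of 100 will benefit" can mislead.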
The power of large, pragmatic randomized trials: run by academic researchers using an adaptive trial design (#MAMS), and you’ve got a winner. Strong reasons why publicly funded research is so very important. Thread
The #RECOVERY trial has answered 3 important, clinically relevant questions about #COVID_19. It has shown that #Dexamethasone is beneficial in patients requiring oxygen/ ventilation & that HCQ and Lopinavir-Ritonavir are not useful. #HCQ results are now out in preprint!
#RECOVERY randomized 1561 pts to #HCQ and 3155 to standard of care. The primary endpoint of 28-day mortality was 26.8% in the HCQ arm vs 25% in the SOC arm. The lack of benefit was consistent across all subgroups. And remember, 28-day mortality in the Dexamethasone arm of the same trial was 21.6%.
The retractions of papers in @TheLancet and the @NEJM have been met with outrage, anger and calls for resignation of the editors. Let's take a moment to think about this. What is a journal's and an editor's responsibility? To ensure that they publish the best science. Thread
They also need to publish the best science reliably and in a timely manner. How do they do this? They send submissions to peer reviewers, get comments, and make a decision. They also rely heavily on authors for the veracity and integrity of the paper and their potential or real COI.
Authors sign a declaration that they vouch for the data they present. Given the present circumstances in science, is it realistic to expect that editors and reviewers get the raw data of every single study that's submitted to their journal to guide their decision?