This is my open peer review of the systematic review of non-pharma interventions for preventing #COVID19 by @stella_talic et al. that was recently published in @bmj_latest: bmj.com/content/375/bm… Prepare for a marathon of a thread 1/ Image
@stella_talic @bmj_latest I’m using AMSTAR-2 to conduct this assessment. Also, where relevant, I will drop in additional observations based on my 15 years of experience working as a Managing Editor in @cochranecollab producing, editing and publishing #systematicreviews. 2/
@stella_talic @bmj_latest @cochranecollab Q1. Did the research Qs and inclusion criteria for the review include the components of PICO? The title nicely lists three outcomes: Covid-19 (getting the disease), SARS-CoV-2 transmission (contracting the virus or passing it on to others) and dying as a result of Covid-19. 3/
@stella_talic @bmj_latest @cochranecollab But the intervention component is a bit fluffy. Public health measures? I’d use the term non-pharmaceutical interventions. All the same, the review covers a huge range of interventions from personal (e.g. masks) to population level (e.g. mandating masks). 4/
@stella_talic @bmj_latest @cochranecollab Anyway, are all relevant PICO components mentioned and explained? Not in the article per se (probably because of word limits), but the supplementary file goes the distance (p. 18). Not bad so far. I answer Q1 ‘Yes’. 5/
@stella_talic @bmj_latest @cochranecollab Q2. Did the report of the review contain an explicit statement that the review methods were established prior to the conduct of the review and did the report justify any significant deviations from the protocol? 6/
@stella_talic @bmj_latest @cochranecollab This is a bit awkward. The authors state that they conducted their review in accordance with PROSPERO (and PRISMA). One must assume that they mean having published a protocol in PROSPERO and that the review follows the promises made therein. 7/ Image
@stella_talic @bmj_latest @cochranecollab To establish if the review really does follow the protocol we have to compare the two. To enable this, the authors kindly included the protocol in their massive supplement. But this is different from what we want to establish with our AMSTAR-2 assessment. 8/
@stella_talic @bmj_latest @cochranecollab The fact is that the review itself says nothing about reporting and justifying significant deviations from the protocol. Because of this my answer to Q2 is ‘Partial yes’. Let’s return to possible deviations between protocol and review after finishing with AMSTAR-2. 9/
@stella_talic @bmj_latest @cochranecollab Q3. Did the review authors explain their selection of the study designs for inclusion in the review? This is also something to dig up from the massive supplement. This should have been on p.8 in the protocol but it is not. No explanation => I answer Q3 ‘No’ 10/ Image
@stella_talic @bmj_latest @cochranecollab Q4. Did the review authors use a comprehensive literature search strategy? The authors certainly searched more than two databases and included a search strategy (again, in the supplement). The PRISMA study flow diagram (Fig 1) also testifies to herculean efforts. 11/
@stella_talic @bmj_latest @cochranecollab However, the first problem comes with AMSTAR-2 insisting that authors have: “justified publication restrictions (e.g. language)”. The authors say they excluded articles in a language other than English but they do not justify this. In @cochranecollab this wouldn’t fly. 12/
@stella_talic @bmj_latest @cochranecollab Leaving out articles published in foreign languages of course saves the authors a lot of effort. But the flipside is the possibility of leaving out something relevant. It’s handy to have friends who are fluent in languages other than English. 13/
@stella_talic @bmj_latest @cochranecollab What about the rest of the requirements for Q4? The only requirement the authors fulfil for answering ‘Yes’ is that their search is less than 24 months old. The gap between April 23 (earliest run of search) and November (date of publication) is 209 days = 7 months. 14/ Image
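(A quick sanity check of that arithmetic, as a minimal Python sketch. The 18 November publication date is my assumption; the review appeared in the BMJ in November 2021 and the tweet only says “November”.)

```python
from datetime import date

# Earliest search run stated in the review vs. the publication date
# (November 2021; the exact day of 18 November is my assumption).
search_run = date(2021, 4, 23)
published = date(2021, 11, 18)

gap = published - search_run
print(gap.days, "days ≈", round(gap.days / 30.4), "months")
# -> 209 days ≈ 7 months, comfortably inside AMSTAR-2's 24-month window
```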
@stella_talic @bmj_latest @cochranecollab So, because we can only tick two items from the left column and one item from the right column, the only correct answer is ‘No’. This is a great example of how a checklist like AMSTAR-2 can operationalize a very complex issue like comprehensiveness of search. 15/
@stella_talic @bmj_latest @cochranecollab Q5. Did the review authors perform study selection in duplicate? I’m assuming the authors did the duplication correctly such that two people independently evaluated every study at both the title & abstract stage and at full text. So, my answer is ‘Yes’. 16/ Image
@stella_talic @bmj_latest @cochranecollab Q6. Did the review authors perform data extraction in duplicate? Looking good in this regard. Extra brownie points for pilot testing the data extraction form to ensure people on the author team have a similar understanding of how to apply the thing. My answer is ‘Yes’. 17/ Image
@stella_talic @bmj_latest @cochranecollab Q7. Did the review authors provide a list of excluded studies and justify the exclusions? The study flow diagram (Fig 1) provides the reasons for exclusion but where is the list? Nowhere. This is a woefully common occurrence. In @cochranecollab no list = no publication. 18/
@stella_talic @bmj_latest @cochranecollab With a whopping 104 pages of supplemental material, a few more pages wouldn’t have been a big deal for the authors. For the reader it makes all the difference because it enables checking why this particular study was chucked out and that one wasn’t. My answer to Q7 is ‘No’. 19/
@stella_talic @bmj_latest @cochranecollab Q8. Did the review authors describe the included studies in adequate detail? This is a tricky one. The key here is the word ‘adequate’. Tables 1-3 in the article provide some idea of the studies. And tables 1-3 in the supplement add a bit more. 20/
@stella_talic @bmj_latest @cochranecollab However, I think we can agree that to adequately describe the intervention, it is hardly sufficient to say: “mask wearing” and nothing else. Who wore what, where, and for how long, etc.? I already complained at Q1 about the lack of detail on the I component of PICO. 21/
@stella_talic @bmj_latest @cochranecollab Understanding the intervention(s) is key in conducting a #systematicreview evaluating the effectiveness of an intervention or a group thereof. How do we know these studies are similar enough that putting their results together is sensible? We don’t. Hence, my answer is ‘No’. 22/
@stella_talic @bmj_latest @cochranecollab Also, if you want to see evidence of someone understanding the intervention better (masks in particular), I recommend reading the review by @jeremyphoward: pnas.org/content/118/4/… and my critique thereof: . 23/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Q9. Did the review authors use a satisfactory technique for assessing the risk of bias (RoB) in individual studies that were included in the review? Absolutely no qualms on this point whatsoever as they used the best available tools: ROB2 and ROBINS-I. My answer is ‘Yes’. 24/ Image
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Q10. Did the review authors report on the sources of funding for the studies included in the review? The short answer is ‘No’. This too is a veeeeery common occurrence in #systematicreview. But it’s not that much more work. Just one more item on the data extraction form. 25/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward What I find ironic is that the authors used the very same checklist I’m using here (AMSTAR-2) to assess the quality of the #systematicreviews they found with their search, but they didn’t stop to think that the same items would apply to their own. Ouch! 26/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Q11. If meta-analysis was performed did the review authors use appropriate methods for statistical combination of results? With this item it’s vital to proceed point-by-point. Did the authors justify combining the data in a meta-analysis? No, they bloody well did not. 27/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward In Data synthesis on p.3 the authors explain the particular kinds of black magic they used to force results from enormously different (or heterogeneous, statistically speaking) studies into meta-analyses. 28/ Image
@stella_talic @bmj_latest @cochranecollab @jeremyphoward In Statistical analysis they state that some things were too different to put together. Only three things were deemed similar enough for statistical pooling: hand washing, face mask wearing, and physical distancing. But do we get an explanation of HOW they are similar? No, we don’t. 29/ ImageImage
@stella_talic @bmj_latest @cochranecollab @jeremyphoward The protocol is the best place to show everyone you understand the complexities of the intervention and its assessment. In theirs, the authors just list things as if it were self-evident that e.g. any kinds of face masks (respirators, surgical, or self-made) are similar. 30/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward This was a long-winded way of saying we don’t need to assess if the authors used an appropriate weighted technique to combine study results and if they adjusted for heterogeneity if present. Without an explanation of what is similar and why the answer to Q11 is ‘No’. 31/
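(To make concrete what an “appropriate weighted technique adjusted for heterogeneity” usually looks like, here is a minimal DerSimonian–Laird random-effects sketch in Python. The effect sizes and variances are invented for illustration only; they are not data from Talic et al.)

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study effects (e.g. log relative risks) with DerSimonian-Laird random effects."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                                   # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)                # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # between-study variance
    w_star = 1.0 / (variances + tau2)                     # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2, q

# Invented log(RR)s and variances, purely to show the mechanics
log_rr = [-0.35, -0.10, -0.60, 0.05]
var = [0.04, 0.02, 0.09, 0.03]
pooled, se, tau2, q = dersimonian_laird(log_rr, var)
print(f"pooled RR ≈ {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f})")
```

The point stands, though: the formula will happily spit out a pooled number for any set of studies you feed it. The weights “adjust for” heterogeneity; they do not justify the pooling.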
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Q12. If meta-analysis was performed, did the review authors assess the potential impact of RoB in individual studies on the results of the meta-analysis? It is awfully common that #systematicreview authors assess included studies’ RoB and then forget all about it. 32/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Here, the authors list their RoB assessment results in study characteristics tables. And that’s all good and proper. In the supplement there’s even a colour-coded table, with darker red meaning higher RoB. But do they bring RoB to the meta-analysis table? No, they do not. 33/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward This is a critical flaw. We were already iffy about whether the things the authors were putting together were sufficiently similar. We also don’t know how reliable those things are. What does that make the results of meta-analyses? Mega-iffy. Feel free to quote me on that. 34/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Quoting directly from AMSTAR-2: “For [answering] YES, included only low risk of bias RCTs OR if the pooled estimate was based on RCTs and/or NRSI at variable RoB, the authors performed analyses to investigate possible impact of RoB on summary estimates of effect”. 35/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward The latter option applies here. But the authors did NOT perform analyses to see if rubbish studies were leading the results astray and off a cliff. I’m not saying that’s necessarily happening but we don’t know that it’s not happening. Hence my answer to Q12 is ‘No’. 36/ Image
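(For the curious, one common way to satisfy item 12 is a sensitivity analysis: re-pool using only the low-RoB studies and see whether the answer moves. A minimal sketch, reusing the illustrative dersimonian_laird helper from the earlier block, with invented RoB labels.)

```python
# Sensitivity analysis sketch: pool everything, then only the low-RoB studies.
# RoB labels and numbers are invented for illustration.
rob = ["low", "high", "some concerns", "low"]
keep = [r == "low" for r in rob]

all_pooled, *_ = dersimonian_laird(log_rr, var)
low_rob_pooled, *_ = dersimonian_laird(
    [e for e, k in zip(log_rr, keep) if k],
    [v for v, k in zip(var, keep) if k],
)
print("all studies:", round(all_pooled, 3), "| low RoB only:", round(low_rob_pooled, 3))
# If the two differ materially, the high-RoB studies are driving the result.
```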
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Q13. Did the review authors account for RoB in individual studies when interpreting/ discussing the results of the review? To merit answering ‘Yes’ the review should have discussed the likely impact of RoB on the results. They merely state how they rated the RoB of studies. 37/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Doing this right means saying something like: “Because of high risk of bias in the studies included in analysis X the summary estimate of effect isn’t very reliable”. Because the authors do not do anything of the sort, I am compelled to answer Q13 ‘No’. 38/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Actually, there’s more than just risk of bias that ought to be considered when interpreting results. RoB is just one of the five dimensions of GRADE. Let’s come back to that after we finish this AMSTAR-2 assessment. 39/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Q14. Did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review? Here, the key words are “satisfactory explanation”. 40/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward According to the Cochrane Handbook (bit.ly/3cyFBxH), the amount of heterogeneity observed in two of the four meta-analyses by Talic et al (Fig 5 and Fig 6) falls into the highest category, ‘considerable’. And this is assuming only RCTs are involved, which is not the case here. 41/ ImageImage
@stella_talic @bmj_latest @cochranecollab @jeremyphoward The Cochrane Handbook lists seven strategies for addressing heterogeneity. It is my professional opinion that here #2 is the best choice. Meta-analysis is not compulsory. With ridiculously large differences between studies the summary is meaningless. 42/ Image
@stella_talic @bmj_latest @cochranecollab @jeremyphoward But we need to answer Q14. Did the authors perform an investigation of sources of any heterogeneity in the results and discuss the impact of this on the results of the review? In my opinion the consideration of heterogeneity is deplorably inadequate. My answer is ‘No’. 43/ Image
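(For reference: ‘considerable’ refers to the I² statistic, and the Cochrane Handbook’s rough bands are 0–40% might not be important, 30–60% moderate, 50–90% substantial, 75–100% considerable. A minimal sketch of how I² falls out of Cochran’s Q, with invented numbers:)

```python
def i_squared(q, df):
    """I² heterogeneity statistic (Higgins & Thompson): share of variability in effect
    estimates that is due to between-study heterogeneity rather than chance."""
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Invented Q and degrees of freedom, purely to show the calculation
print(f"I² = {i_squared(q=40.0, df=9):.0f}%")   # -> 78%, i.e. 'considerable' by the Handbook's bands
```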
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Q15. If they performed quantitative synthesis, did the review authors carry out an adequate investigation of publication bias (small-study bias) and discuss its likely impact on the results of the review? There’s no sign of a funnel plot or Egger’s test so my answer is ‘No’. 44/
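(For readers who haven’t met it: Egger’s test regresses the standardized effect on precision; an intercept far from zero suggests funnel-plot asymmetry, i.e. possible small-study effects. A minimal sketch with invented numbers; note the test has little power with only a handful of studies.)

```python
import numpy as np

def eggers_test(effects, ses):
    """Egger's regression test: regress effect/SE on 1/SE and inspect the intercept."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    z, precision = effects / ses, 1.0 / ses
    X = np.column_stack([np.ones_like(precision), precision])   # intercept + slope
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ coef
    sigma2 = np.sum(resid ** 2) / (len(z) - 2)
    se_intercept = np.sqrt((sigma2 * np.linalg.inv(X.T @ X))[0, 0])
    return coef[0], se_intercept

# Invented effects and standard errors, purely to show the mechanics
intercept, se_int = eggers_test([-0.35, -0.10, -0.60, 0.05, -0.20], [0.20, 0.14, 0.30, 0.17, 0.25])
print(f"Egger intercept = {intercept:.2f} (SE {se_int:.2f})")
```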
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Q16. Did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review? This is all good and proper at the end of the article so my answer is ‘Yes’. 45/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Finally, if we put everything together into a colour-coded matrix where ‘Ish?’ stands for ‘Partial Yes’, we get this result. On the whole it’s not looking all that rosy. Basically, I wouldn’t have accepted this review for publication due to its critical flaws and the lack of GRADE. 46/ Image
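(In case anyone wonders how the item-level answers roll up: my reading of the AMSTAR-2 guidance (Shea et al., BMJ 2017) is that items 2, 4, 7, 9, 11, 13 and 15 are the critical domains, one critical flaw drops overall confidence to ‘Low’, and more than one drops it to ‘Critically low’. A rough sketch of that rule with my answers from this thread plugged in; treat it as a paraphrase, not an official scoring algorithm.)

```python
# Critical AMSTAR-2 domains, per my reading of Shea et al. (BMJ 2017)
CRITICAL = {2, 4, 7, 9, 11, 13, 15}

def overall_confidence(answers):
    """answers: dict of item number -> 'yes' / 'partial yes' / 'no'."""
    critical_flaws = sum(1 for i in CRITICAL if answers.get(i) == "no")
    weaknesses = sum(1 for i, a in answers.items() if i not in CRITICAL and a != "yes")
    if critical_flaws > 1:
        return "critically low"
    if critical_flaws == 1:
        return "low"
    return "high" if weaknesses <= 1 else "moderate"

# My answers from this thread ('Ish?' = 'partial yes')
answers = {1: "yes", 2: "partial yes", 3: "no", 4: "no", 5: "yes", 6: "yes",
           7: "no", 8: "no", 9: "yes", 10: "no", 11: "no", 12: "no",
           13: "no", 14: "no", 15: "no", 16: "yes"}
print(overall_confidence(answers))   # -> 'critically low'
```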
@stella_talic @bmj_latest @cochranecollab @jeremyphoward Oh yeah. I promised to say something about GRADE by gradeworkinggroup.org. Well, it’s a great system for translating review results into conclusions that say how confident we can be in the results and how eagerly we ought to be implementing them into practice. 47/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward GRADE is the review authors’ overall assessment of the evidence as a whole. This means looking at risk of bias, (in)directness, (im)precision, (in)consistency, and publication bias. Read more about it e.g. here: bestpractice.bmj.com/info/toolkit/l… 48/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward I could have also touched upon the authors combining results obtained with different study designs. This is something widely frowned upon in the #systematicreview community, especially in @cochranecollab. And this thread is ridiculously long already. 49/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward However, I might as well also provide links to my earlier threads (in Finnish) about evaluating the review/report by Mäkelä et al. and the as yet unpublished Ollila et al. review: 50/
@stella_talic @bmj_latest @cochranecollab @jeremyphoward I also pulled together my AMSTAR-2 assessments of all three reviews. Looks like a bigger picture is emerging here. My interpretation is that there is more eagerness or hurry to do reviews than there is time and expertise. And that's a bit sad really. 51/ Image
@stella_talic @bmj_latest @cochranecollab @jeremyphoward @schunemann_mac @TheLancet I still think the @CochraneAirways review by Jefferson et al. is the most reliable source of evidence in this area: cochranelibrary.com/cdsr/doi/10.10… even though it doesn’t (yet) incorporate the latest studies. END OF THREAD (so far) 53/ Image
@stella_talic @bmj_latest @cochranecollab The whole point of a protocol is to enable comparing what you promise to what you end up doing. In this case the only function it serves is to enable ticking a box on the PRISMA checklist. Hot tip: If you want to get away with murder at the review stage, write a vague protocol.

