Dan Quintana (@dsquintana), 20 tweets
Meta-analyses are often used as a gold standard measure of evidence. But how much trust should you place in a meta-analysis outcome? Here are a few things you should look out for next time you read one [THREAD]
1. Don’t just check whether the authors state they followed PRISMA/MARS reporting guidelines; check whether they ACTUALLY did.

prisma-statement.org

SPOILER ALERT: Very few do. The ones that do *typically* include a checklist in the supplement
2. Was the analysis protocol pre-registered? There is SO much analytical flexibility in meta-analysis, so this is a crucial point. Sometimes all it takes is a small tweak of the study exclusion criteria to tip a summary effect size over the line to p = .048 (or at least nudge it closer to p = .05)
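To make the flexibility concern concrete, here is a minimal sketch (the effect sizes, variances, and the simple fixed-effect inverse-variance model are all hypothetical, not any specific published analysis) of how dropping one study via a "tweaked" exclusion criterion can move a summary effect across the significance line:

```python
import numpy as np
from scipy import stats

def fixed_effect_p(effects, variances):
    """Inverse-variance fixed-effect summary and its two-sided z-test p-value."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                             # inverse-variance weights
    beta = np.sum(w * y) / np.sum(w)        # weighted summary effect
    se = np.sqrt(1.0 / np.sum(w))
    p = 2 * stats.norm.sf(abs(beta / se))
    return beta, p

# Hypothetical effects: the third study is large, precise, and near-null
g = [0.35, 0.30, 0.02]
v = [0.04, 0.04, 0.01]

_, p_all = fixed_effect_p(g, v)
_, p_excl = fixed_effect_p(g[:2], v[:2])    # "tweaked" criteria drop study 3
print(f"all studies: p = {p_all:.3f}; study 3 excluded: p = {p_excl:.3f}")
```

With these numbers the full analysis is non-significant but the trimmed one crosses p < .05, which is exactly why the exclusion criteria need to be locked in before the data are seen.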
The quality of meta-analysis pre-registrations can also vary (like clinical trial registrations), due to the level of detail provided AND whether the analysis plan was peer-reviewed
Good: PROSPERO or OSF registration (fast + simple but minimal detail required)

Better: Peer-reviewed protocol (more detail required but few journals offer this)

Best: Registered report MA (even greater detail needed but even fewer journal options, for now)
Peer-reviewed protocols help you catch any blind spots, like an important search keyword you may have missed or not having a plan for dealing with effect size dependencies or outliers—speaking from experience here
3. Check if an academic librarian was consulted for the search strategy. They know databases and search term operators back-to-front, which helps ensure a better search. Most academics aren’t trained for this, so let the experts help you!
4. Was there a measure of bias? Remember, Egger’s test is only a measure of “small study” bias, it’s NOT an exclusive test of publication bias. Be wary of papers that ONLY rely on ‘inspection’ of funnel plots to form conclusions on risk of bias
Without statistical inference, meta-analysis funnel plot inspection is like a Rorschach test—these plots will reveal whatever you want them to reveal
Contour-enhanced funnel plots are a slightly better tool for visualizing the risk of publication bias than conventional funnel plots, as you can see how many studies fall between p = 0.05 and p = 0.01, but they’re no silver bullet: how many is too many?
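For readers who want to go beyond eyeballing funnel plots, Egger's test is straightforward to run by hand. A minimal sketch (the effect sizes and standard errors below are hypothetical): regress the standardized effect on precision, and test whether the intercept differs from zero, which signals small-study asymmetry.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test for small-study effects.

    Regresses the standardized effect (effect / SE) on precision (1 / SE).
    The intercept estimates funnel-plot asymmetry; the slope estimates the
    precision-weighted summary effect.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    res = stats.linregress(1.0 / ses, effects / ses)
    t = res.intercept / res.intercept_stderr
    df = len(effects) - 2
    p = 2 * stats.t.sf(abs(t), df)          # two-sided p for the intercept
    return res.intercept, p

# Hypothetical effect sizes (Hedges' g) and standard errors:
g  = [0.51, 0.40, 0.62, 0.33, 0.45, 0.70, 0.28, 0.55]
se = [0.10, 0.15, 0.20, 0.12, 0.18, 0.25, 0.11, 0.22]
intercept, p = eggers_test(g, se)
print(f"Egger intercept = {intercept:.3f}, p = {p:.3f}")
```

Remember the thread's caveat: a significant intercept flags small-study effects, which may or may not be publication bias.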
5. A related issue is effect size inflation - published studies tend to have effects that are larger than the ‘true’ effect. So how can this be assessed?
Check if there’s an overall assessment of the quality of evidence. If all included studies were pre-registered there’s *less* of a chance of publication bias or effect size inflation. Even better is a meta-analysis of registered reports osf.io/yq59d/
You can also score the quality of each study, and perform moderator analysis (i.e., do poorer quality studies have larger effect sizes?), but this approach has its critics...
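A quality-score moderator analysis is, at its simplest, a weighted regression of effect sizes on the score. Here is a minimal fixed-effect sketch with hypothetical scores (a real meta-regression would also model between-study heterogeneity):

```python
import numpy as np

def moderator_regression(effects, ses, moderator):
    """Weighted least squares meta-regression: does a moderator
    (e.g., a study quality score) predict effect size?
    Weights are inverse sampling variances (fixed-effect weighting)."""
    y = np.asarray(effects, float)
    x = np.asarray(moderator, float)
    w = 1.0 / np.asarray(ses, float) ** 2
    X = np.column_stack([np.ones_like(x), x])        # intercept + moderator
    W = np.diag(w)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ y)        # [intercept, slope]
    se_beta = np.sqrt(np.diag(np.linalg.inv(XtWX)))  # model-based SEs
    return beta, se_beta

# Hypothetical data: lower quality scores paired with larger effects
g       = [0.80, 0.72, 0.35, 0.30, 0.55, 0.25]
se      = [0.20, 0.18, 0.10, 0.09, 0.15, 0.08]
quality = [2, 3, 8, 9, 5, 10]                        # 0-10 quality score
beta, se_b = moderator_regression(g, se, quality)
print(f"quality slope = {beta[1]:.3f} (SE {se_b[1]:.3f})")
```

A negative slope, as in this made-up example, would suggest that poorer-quality studies report inflated effects, which is one of the patterns the critics argue quality scores capture only crudely.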
6. Have the authors shared their data and analysis scripts? There’s nothing quite like the *potential* for others to pore over your script to motivate you to triple-check your own scripts and data.

All things equal, trust the meta-analysis with open data & scripts
With open data, you can also figure out which numbers were extracted from a specific study. There are often several options, especially if the primary study outcome isn’t the same as the primary meta-analysis outcome
7. Is the analysis robust? That is, does the main conclusion depend on one study? This can be easy to spot in forest plots. Most MAs have forest plots, so be wary when they’re missing. Leave-one-out analyses are a handy way to check the impact of individual effect sizes
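A leave-one-out check is easy to script. Here is a minimal sketch with hypothetical data, using a DerSimonian-Laird random-effects summary (one common estimator, not the only option):

```python
import numpy as np

def random_effects_summary(effects, variances):
    """DerSimonian-Laird random-effects summary effect."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re)

def leave_one_out(effects, variances):
    """Recompute the summary effect k times, dropping one study each time."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    keep = np.ones(len(y), dtype=bool)
    out = []
    for i in range(len(y)):
        keep[i] = False
        out.append(random_effects_summary(y[keep], v[keep]))
        keep[i] = True
    return out

g = [0.45, 0.38, 1.20, 0.30, 0.50]   # hypothetical; study 3 is an outlier
v = [0.02, 0.03, 0.04, 0.02, 0.05]   # sampling variances
loo = leave_one_out(g, v)
for i, est in enumerate(loo, start=1):
    print(f"without study {i}: summary g = {est:.3f}")
```

If one of these k estimates sits far from the rest (here, the run without the outlying study), the headline conclusion hinges on a single study.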
8. Have the authors dealt with effect size dependencies? Effect sizes from the same population (e.g., pre-post designs or multiple effects from one study) are statistically dependent, so this needs to be accounted for in order to generate accurate effect sizes
There are various approaches, but robust variance estimation is a good option as this doesn’t require you to guess the true correlation between effect sizes (because these are almost never reported) or to drop any effect sizes onlinelibrary.wiley.com/doi/full/10.10…
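As a rough illustration of the idea (simplified from the Hedges, Tipton & Johnson approach; the data and the intercept-only model are hypothetical), an RVE-style cluster-robust standard error can be computed like this:

```python
import numpy as np

def rve_mean(effects, variances, study_ids):
    """Intercept-only robust variance estimation (RVE) sketch.

    Effects from the same study form one cluster. Approximate weights give
    each study's effects 1 / (k_j * mean variance), so no within-study
    correlation needs to be guessed. A sandwich estimator over cluster-level
    score sums then gives a correlation-robust SE."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    ids = np.asarray(study_ids)
    w = np.empty_like(y)
    for s in np.unique(ids):
        m = ids == s
        w[m] = 1.0 / (m.sum() * v[m].mean())   # one shared weight per cluster
    beta = np.sum(w * y) / np.sum(w)
    resid = y - beta
    cluster_sums = np.array([np.sum(w[ids == s] * resid[ids == s])
                             for s in np.unique(ids)])
    var_r = np.sum(cluster_sums ** 2) / np.sum(w) ** 2
    return beta, np.sqrt(var_r)

# Hypothetical data: studies B and D each contribute two dependent effects
g   = [0.40, 0.55, 0.50, 0.30, 0.35, 0.60]
var = [0.02, 0.03, 0.04, 0.02, 0.03, 0.05]
ids = ["A", "B", "B", "C", "D", "D"]
beta, se = rve_mean(g, var, ids)
print(f"RVE summary g = {beta:.3f}, robust SE = {se:.3f}")
```

In practice you would reach for an established implementation (e.g., the robumeta or clubSandwich R packages) rather than rolling your own, but the sketch shows why no assumed correlation is needed and no effect sizes are dropped.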
9. Check conflicts of interest - both financial AND intellectual. Do the authors have a horse in the race? Financial interests are reported in disclosure statements, but intellectual interests are only revealed in the study inclusion list—are the authors’ own papers included?
It would be great if there were concrete rules for conducting meta-analyses. However, meta-analysis isn’t monotheistic—there are several paths that can lead to a quality summary effect size. What’s key is transparency and a strong a priori justification for the chosen path