Dan Quintana
Aug 7, 2020 · 8 tweets · 3 min read
Including a power contour plot in methods sections of papers would drastically improve the interpretation of results.

Here's why...
Let's say you designed your study, which uses a paired-samples t-test, to reliably detect an effect size of δ = 0.3.

Maybe that's the minimally interesting effect size? Maybe that's all you can afford? That's beside the point for now, but check out @lakens on this daniellakens.blogspot.com/2020/08/feasib…
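To make this concrete, here's a minimal sketch of that sample-size calculation using base R's power.t.test (the specifics are my assumptions, not from the thread: a two-sided test at α = .05, 80% power, and sd = 1 so that delta is on the Cohen's d scale):

    # Pairs needed for a paired-samples t-test to reliably
    # detect d = 0.3 (two-sided alpha = .05, 80% power)
    power.t.test(delta = 0.3, sd = 1, sig.level = 0.05,
                 power = 0.80, type = "paired")
    # n comes out at roughly 90 pairs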
Here's the power contour plot for this scenario ⬇️

If the reader thinks that interesting effect sizes are LOWER than δ = 0.3, then it's easy to see that the chance of reliably detecting such effects drops off pretty quickly.

[Image: power contour plot for a design powered to detect δ = 0.3]
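If you want to draw a plot like this yourself, here's a minimal base R sketch (my own illustration, not necessarily the code behind the figure): compute power for the paired t-test over a grid of sample sizes and effect sizes, then trace the contours.

    # Power surface for a paired t-test (two-sided alpha = .05)
    n <- seq(10, 200, by = 5)      # sample sizes (pairs)
    d <- seq(0.1, 1.0, by = 0.05)  # effect sizes (Cohen's d)
    pow <- outer(n, d, Vectorize(function(n, d)
      power.t.test(n = n, delta = d, sd = 1, type = "paired")$power))
    # Contour lines at conventional power thresholds
    contour(n, d, pow, levels = c(0.5, 0.8, 0.9, 0.95),
            xlab = "Sample size (pairs)", ylab = "Effect size (d)")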
But let's say a study was designed to reliably detect effects δ ≥ 0.8. In some areas, like psychophysics, true effect sizes of δ ≥ 0.8 are plausible. But in most other areas of psych this is quite large and unrealistic.

Let's have a look at this power contour plot ⬇️

[Image: power contour plot for a design powered to detect δ = 0.8]
Imagine this plot was presented in a paper from a field where true effect sizes of δ ≥ 0.8 are unlikely. It's easy to see that more realistic effect sizes can't be reliably detected, and that a larger sample size would have decreased the effect size that could be reliably detected.
In this scenario, if the true effect size is δ ≤ 0.54, then this design has less than a 50% chance of detecting it.

In other words, it's more likely to miss the effect than detect it
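You can check those numbers with base R; a quick sketch, assuming the δ ≥ 0.8 design used the roughly 15 pairs that an 80%-power calculation returns:

    # Pairs needed for 80% power at d = 0.8
    power.t.test(delta = 0.8, sd = 1, power = 0.80, type = "paired")$n
    # ~14.3, so 15 pairs in practice

    # Smallest effect detectable with at least 50% power at n = 15
    power.t.test(n = 15, power = 0.50, sd = 1, type = "paired")$delta
    # ~0.55: below this, the design is more likely to miss than detect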
With a power contour plot in the methods section, readers (and reviewers) can quickly see what sort of effect sizes can be reliably detected, and then make up their own minds about whether these sorts of effects are realistic and whether important effect sizes cannot be reliably detected.
Sure, people can just write down the effect size that can be reliably detected with the study design. This is better than nothing, but people have different conceptions of what's considered "important".

Power contour plots show the full range of effects at a glance.


More from @dsquintana

Feb 14
“Underpowered to detect what?” should be the first question to anyone who says a “study” is underpowered journals.sagepub.com/doi/10.1177/10…
And here's your annual reminder that a "study" cannot be underpowered, but rather, a *design and test combination* can be underpowered for detecting hypothetical effect sizes of interest towardsdatascience.com/why-you-should…
If you want to determine the range of hypothetical effect sizes that a given field can reliably detect (and reject), here's a demo with #Rstats code sciencedirect.com/science/articl…
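As a rough illustration of the idea (my sketch, not the paper's code), you could take the sample sizes of published studies in a field and ask what effect size each could detect with 80% power; the sample sizes below are made up:

    # Smallest d detectable with 80% power for each study's n (pairs)
    ns <- c(20, 35, 50, 80)  # hypothetical sample sizes from a field
    sapply(ns, function(n)
      power.t.test(n = n, power = 0.80, type = "paired")$delta)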
Mar 30, 2023
My guide to calculating study-level statistical power for meta-analyses using the 'metameta' #Rstats package and web app is out now in AMPPS 🔓 doi.org/10.1177/251524…

Here's how this tool can be used for your next meta-analysis OR for re-analysing published meta-analyses 🧵
There's been a lot of talk recently about the quality of studies that are included in meta-analyses—how useful is a meta-analysis if it's just made up of studies with low evidential value?
But determining the evidential value of studies can be hard. Common approaches for looking at study quality or risk of bias tend to be quite subjective; different authors are unlikely to reach the same conclusions. These tasks can also be quite time-consuming.
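The calculation behind study-level power is simple enough to sketch by hand under a normal approximation (an illustration of the general idea, not the metameta API; the benchmark effect theta and the standard errors below are made up):

    # Power of each study to detect a benchmark effect theta,
    # given each study's standard error (normal approximation)
    theta <- 0.2                  # hypothetical benchmark effect size
    se    <- c(0.10, 0.15, 0.25)  # hypothetical study standard errors
    z     <- qnorm(0.975)         # two-sided alpha = .05
    1 - pnorm(z - theta / se) + pnorm(-z - theta / se)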
Jun 28, 2022
Lots of replies and quote retweets to this paper along the lines of, “LoL PSyChoLoGy”, so here are two thoughts on this…

1. I’m glad the field is generally mature enough to actually recognize there’s a problem. Not all fields (or psychology subfields) can say this
When I was an undergraduate psychology student there was no discussion of this—all the studies we were taught just worked 🪄
Reproducibility concerns only started becoming a more mainstream idea during my PhD, but this still wasn’t discussed much.
Jun 24, 2022
“Nearly 100% of published studies… confirm the initial hypothesis. This is an amazing accomplishment given the complexity of the human mind and human behaviour. Somehow, as psychological scientists, we always find the expected result; we always win!” royalsocietypublishing.org/doi/10.1098/rs…
One thing I want to add: the psychological sciences need to broadly adopt practices that help us determine when we're wrong. The standard NHST p-value approach cannot be used to provide support for absence of an effect. This paper provides two solutions academic.oup.com/psychsocgeront…
Strong auxiliary assumptions are also required for falsifying hypotheses
Oct 24, 2021
Here are the answers for yesterday’s mini-quiz and a little info on each of these psych studies 🧵
1. The bottomless soup bowl study 🍜🍜🍜🍜

Participants who ate soup from bowls that were refillable, unbeknownst to them, ate 73% more soup than those eating from normal bowls pubmed.ncbi.nlm.nih.gov/15761167/

Here is Brian Wansink with the refillable bowl ⬇️
This study has been cited over 700 times and won an Ig Nobel prize, but the numbers reported in the paper are… suspicious jamesheathers.medium.com/sprite-case-st… and there are doubts this study ever actually happened statmodeling.stat.columbia.edu/2019/08/20/did…
Apr 23, 2021
New preprint 🎉

Oxytocin receptor (OXTR) expression patterns in the brain across development osf.io/j3b5d/

Here we identify OXTR gene expression patterns across the lifespan along with gene-gene co-expression patterns and potential health implications

[THREAD]
So, let's begin with some background.

As well as being an oxytocin researcher I'm also a meta-scientist, which means that a lot of my work on improving research methods is focused on improving oxytocin research (that's what got me into meta-science in the first place)
Earlier this year, we published a paper, led by @fuyu00, in which we proposed that three things are required to improve the precision of intranasal oxytocin research: improved methods, reproducibility, and theory.

Read the article here: rdcu.be/cjeok
