Sloppy data science from > 10 years ago and a viral thread filled with mental health treatment misinformation this week: A horror story

🧵
More than 10 years ago, a landmark new theory about how human memory works dropped in a major scientific journal

The oversimplified gist: Having someone recall a scary memory opens a limited window of time during which you can more easily modify, or even erase, that memory
This theory had HUGE implications for treating all kinds of anxiety, and especially post-traumatic stress disorder

Imagine the promise of being able to erase, or at least make way less scary, a memory that's haunted you for decades
A bunch of people got to work on applying this theory to treatments, and the original paper was cited over 1,300 times (a huge number for any single paper in psychology)

nature.com/articles/natur…
One of the treatments based on this theory that got people's attention was... playing Tetris!

Just playing the game for a short time seemed to reduce the intrusion of scary memories in a lab study of undergrads

ncbi.nlm.nih.gov/pmc/articles/P…
But it didn't stop there! Folks tested playing Tetris in the hospital within 6 hours of a car crash and found it reduced the intrusion of traumatic memories in the week following the crash compared to a control group

nature.com/articles/mp201…
This study was also covered favorably in national outlets like NPR

npr.org/sections/healt…
More formal reviews of the broader science underlying this theory of memory and Tetris acknowledged there might be limitations

But still, the general attitude of those reviews is something like: seems more promising than not!

sciencedirect.com/science/articl…
All of the above articles were cited in a viral thread this week (over 40,000 likes and RTs) urging people to have Tetris on their phone so they could play it following a traumatic event

This person did their homework and looked at the science. The problem? The science itself was full of misinformation
I'm not linking the viral thread because I don't want this to be a dunk on a person who wanted to help people and looked up the science

If it's a dunk on anything, it's on the structures of science that make it possible for peer-reviewed articles to contain this much misinformation
Because here's the big problem: that new theory of human memory never should have existed in the first place

Not only have follow-up studies failed to confirm it, re-running the same analyses on the same data from the original study >10 years ago doesn't confirm it either
Put another way, the original authors messed up their analyses (it could happen to anyone!) and the original "evidence" for this widely influential theory of memory never actually existed

See this thread (scroll up and down) for more info
But wait, shouldn't we care whether those Tetris interventions work even if the theory behind them is wrong?

Unfortunately, those studies can't possibly tell us whether Tetris actually helps people
Imagine someone told you they had a poll of the 2024 Presidential election, but they only asked 71 people. You wouldn't care what that poll said because that's way too few people to learn what you want to know

71 is the maximum number of people in any of the studies above
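
To put a rough number on that intuition: a poll's margin of error shrinks with the square root of its sample size, and at 71 people it's enormous. Here's a minimal back-of-the-envelope sketch in Python (mine, not from the original thread):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 71:    +/-{margin_of_error(71):.1%}")    # roughly +/-12 points
print(f"n = 1,000: +/-{margin_of_error(1000):.1%}")  # roughly +/-3 points
```

A 12-point margin of error can't tell a landslide from a dead heat, which is the same reason a 71-person study can't tell "works" from "does nothing".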
Technical aside: Yes yes, stats folks, I know statistical power & selection bias are different problems. I'm just trying to illustrate that people should trust their instincts that most studies are too small. Here's something I wrote that's more technical medium.com/@mullarkey.mik…
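
And here's the treatment-study version of the same arithmetic, as a minimal power-analysis sketch using statsmodels. The effect size (Cohen's d = 0.3, small-to-medium) is my illustrative assumption, not a number taken from any of the papers above:

```python
from statsmodels.stats.power import TTestIndPower

# Participants per group needed for a two-arm study to have an 80%
# chance of detecting a small-to-medium effect (d = 0.3) at alpha = 0.05
n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(round(n_per_group))  # ~175 per group, i.e. ~350 participants total
```

Run it the other way and a 71-person study has well under a 50% chance of detecting an effect that size, even when the effect is real.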
Bottom line: No matter how much we want them to, studies that small can't tell us whether treatments for mental health problems work

"Promising results in a small study" is a fantasy scientists in a broken system sell to get grants, not evidence a new treatment will help
Oh, and none of this is hypothetical

Tetris doesn't help undergrads in a lab if you test it with more people

ncbi.nlm.nih.gov/pmc/articles/P…

And there are a bunch of other problems with the car crash Tetris study outlined by @IoanaA_Cristea in this commentary
nature.com/articles/mp201…
Where does this leave us? The original poster of the viral thread isn't wrong when they cite lack of access to mental health treatment as a big reason to look for alternatives

And isn't something better than nothing? Well, unfortunately that's not always true
For post-traumatic stress disorder in particular, there's evidence a particular treatment delivered very shortly after a traumatic event actually harms patients rather than helping them (Shout out to @williamspsych for leading that effort!)

pennstate.pure.elsevier.com/en/publication…
So we can't just assume that playing Tetris is a neutral act at worst following a trauma

If you made me bet money, I'd probably bet on it being neither harmful nor helpful. But we really don't know
We need more tests of accessible mental health treatments in large enough samples to know whether they actually work

We should also design our treatments with accessibility in mind from the start

We tried to do that here in a study of over 2,400 people
psyarxiv.com/ved4p/
More systems, incentives, and mandates that encourage these kinds of large-scale tests, please!

Ditto for systems that actually check whether the stats in papers were done correctly (I'd suggest compensating experts for their labor with money as a start)
And if you're looking for mental health resources right now you can check out @therapy4theppl, which has links to therapy & legit self-help
therapy4thepeople.org

If you're having difficulty with depression in particular I have resources in this thread
