Sloppy data science from > 10 years ago and a viral thread filled with mental health treatment misinformation this week: A horror story
🧵
More than 10 years ago, a landmark new theory about how human memory works dropped in a major scientific journal
The oversimplified gist: having someone recall a scary memory opens a limited window of time during which that memory can be more easily modified or even erased
This theory had HUGE implications for treating all kinds of anxiety, and especially post-traumatic stress disorder
Imagine the promise of being able to erase, or at least make way less scary, a memory that's haunted you for decades
A bunch of people got to work on applying this theory to treatments, and the original paper was cited over 1,300 times (a huge number for any single paper in psychology)
But it didn't stop there! Folks tested playing Tetris in the hospital within 6 hours of a car crash and found it reduced intrusions of traumatic memories in the week following the crash compared to a control group
All of the above articles were cited in a viral thread this week (over 40,000 likes and RTs) urging people to have Tetris on their phone so they could play it following a traumatic event
This person did their homework and looked at the science, and the science itself handed them a bunch of misinformation
I'm not linking the viral thread because I don't want this to be a dunk on a person who wanted to help people and looked up the science
If it's a dunk on anything, it's on the structures of science making it possible for peer-reviewed articles to contain this much misinformation
Because here's the big problem: that new theory of human memory never should have existed in the first place
Not only have follow-up studies failed to confirm it, but re-running the same analyses on the same data from the original study >10 years ago doesn't confirm it either
Put another way, the original authors messed up their analyses (it could happen to anyone!) and the original "evidence" for this widely influential theory of memory never actually existed
See this thread (scroll up and down) for more info
But wait, shouldn't we care whether those Tetris interventions worked even if the theory behind them is wrong?
Unfortunately, those studies can't possibly tell us whether Tetris actually helps people
Imagine someone told you they had a poll of the 2024 Presidential election, but they only asked 71 people. You wouldn't care what that poll said because that's way too few people to learn what you want to know
71 is the maximum number of people in any of the studies above
Technical aside: Yes yes, stats folks, I know statistical power & selection bias are different problems. I'm just trying to illustrate that people should trust their instincts that most studies are too small. Here's something I wrote that's more technical medium.com/@mullarkey.mik…
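If you want to see the rough arithmetic yourself, here's a minimal sketch using base R's power.t.test. It assumes a simple two-group comparison with ~35 people per arm (roughly the size of the studies above), not the exact designs or analyses those papers used:

```r
# Rough illustration (not the original studies' analysis), assuming a
# two-sample t-test with alpha = .05, two-sided.

# Power to detect a "medium" standardized effect (d = 0.5) with 35 per group:
power.t.test(n = 35, delta = 0.5, sd = 1, sig.level = 0.05)
# power comes out to roughly 0.54 -- basically a coin flip

# Per-group sample sizes needed for 80% power:
power.t.test(power = 0.80, delta = 0.5, sd = 1)  # ~64 per group (~128 total)
power.t.test(power = 0.80, delta = 0.2, sd = 1)  # ~394 per group (~787 total)
```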
Bottom line: No matter how much we want them to, studies that small can't tell us whether treatments for mental health problems work
"Promising results in a small study" is a fantasy scientists in a broken system sell to get grants, not evidence a new treatment will help
Oh, and none of this is hypothetical
Tetris doesn't help undergrads in a lab if you test it with more people
Where does this leave us? The original poster of the viral thread isn't wrong when they cite lack of access to mental health treatment as a big reason to look for alternatives
And isn't something better than nothing? Well, unfortunately that's not always true
For post-traumatic stress disorder in particular, there's evidence a particular treatment delivered very shortly after a traumatic event actually harms patients rather than helping them (Shout out to @williamspsych for leading that effort!)
So we can't just assume that playing Tetris is a neutral act at worst following a trauma
If you made me bet money, I'd probably bet on it being neither harmful nor helpful. But we really don't know
We need more tests of accessible mental health treatments in large enough samples to know whether they actually work
We should also design our treatments with accessibility in mind from the start
We tried to do that here in a study of over 2,400 people psyarxiv.com/ved4p/
More systems, incentives, and mandates that encourage these kinds of large scale tests please!
Ditto for systems that actually check whether the stats in papers were done correctly (I'd suggest compensating experts for their labor with money as a start)
And if you're looking for mental health resources right now you can check out @therapy4theppl which has links to therapy & legit self-help therapy4thepeople.org
If you're having difficulty with depression in particular I have resources in this thread
If you ever want to sound like an expert without paying attention, you only need two words in response to any question
"It depends"
A thread on why we should retire that two word answer 🧵
When people say "it depends" they often mean the effect of one variable depends on the level of at least one other variable
For example:
You: Does this program improve depression?
Me, Fancy Expert: Well, it depends, probably on how depressed people were before the program
Understandably you'll want some evidence for my "it depends"
Luckily my underpaid RA has already fired up an ANOVA or regression, and *I* found that how depressed folks were before the program moderated the effect of the program
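For the curious, here's a minimal sketch of what that kind of moderation test usually looks like in R. The data frame and column names (dat, program, dep_pre, dep_post) are made up for illustration:

```r
# Hypothetical data frame `dat` with columns:
#   program  - 0/1 indicator for whether someone got the program
#   dep_pre  - depression severity before the program
#   dep_post - depression severity after the program

# The interaction term (program:dep_pre) is the "it depends" claim:
# it tests whether the program's effect differs by baseline depression
fit <- lm(dep_post ~ program * dep_pre, data = dat)
summary(fit)
```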
And especially if you have a psych background, you might think we *need* an experiment to understand causes
While I love experiments, here's a thread of resources on why they're neither necessary nor sufficient to determine causes 🧵
This paper led by @MP_Grosz is a great start! It persuaded me that merely adjusting our language (eg saying "age is positively associated with happiness" instead of "happiness increases with age") isn't enough
If we prioritized improving patients' and trainees' lives, clinical psych's structures would look entirely different
A part touched on but (understandably!) not emphasized in this piece: There's vanishingly little evidence our training improves clinical outcomes for patients
🧵
Multiple studies with thousands of patients (though only 23-39 supervisors each!) show that supervisors account for less than 1% of the variance in patient outcomes
And that's just correlational; the causal estimate could be much smaller
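In case "share of the variance" feels abstract, here's roughly how that kind of number gets estimated: a random-intercept model with patients nested under supervisors, then the supervisor-level share of total variance (an ICC). This is a sketch with made-up column names, not the analysis from those specific papers:

```r
library(lme4)

# Hypothetical data frame `dat`: one row per patient, with their outcome
# and the supervisor their therapist worked under
fit <- lmer(outcome ~ 1 + (1 | supervisor), data = dat)

# Proportion of outcome variance sitting at the supervisor level
vc <- as.data.frame(VarCorr(fit))
vc$vcov[vc$grp == "supervisor"] / sum(vc$vcov)
```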
Still responding to folks re: my transition to data science post! I'll get to everyone, promise!
Given the interest I thought people might want to know the (almost all free/low cost!) resources I used to train myself for a data science role
A (hopefully helpful) 🧵
R, Part I
My first real #rstats learning experience was using swirl. I loved that I could use it inside of R (rather than having to go back and forth between the resource and the RStudio console)
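In case it saves anyone a search, getting started with swirl is just a few lines in the R console:

```r
# swirl runs interactive R lessons right inside the console
install.packages("swirl")
library(swirl)
swirl()   # then follow the prompts to pick a course
```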
A cliche rec, but it's cliche for a reason. R for Data Science by @hadleywickham & @StatGarrett transitioned me from "kind of messing around" to "wow, I did that cool thing" in R. It's absolutely a steal that it's available for free