If we prioritized improving patients' and trainees' lives, clinical psychology's structures would look entirely different
A part touched on but (understandably!) not emphasized in this piece: There's vanishingly little evidence our training improves clinical outcomes for patients
🧵
Multiple studies with thousands of patients (though only 23-39 supervisors each!) show that supervisors account for less than 1% of the variance in patient outcomes
And that's just correlation; the causal estimate could be much smaller
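For the curious, here's a minimal sketch of how that kind of estimate is typically computed: an intercept-only multilevel model plus an intraclass correlation. The data frame and column names (`patients`, `outcome`, `supervisor`) are hypothetical, not the actual studies' code

```r
# Minimal sketch: estimate the share of outcome variance at the supervisor
# level via an intercept-only multilevel model and the ICC.
# `patients`, `outcome`, and `supervisor` are hypothetical names.
library(lme4)

fit <- lmer(outcome ~ 1 + (1 | supervisor), data = patients)

vc <- as.data.frame(VarCorr(fit))
icc <- vc$vcov[vc$grp == "supervisor"] / sum(vc$vcov)
icc  # "less than 1% of the variance" corresponds to icc < 0.01
```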
But even if supervisors don't account for much variance in patient outcomes directly, maybe the experience gained in working with patients over time matters?
Unfortunately, the evidence we have for that is thin to nonexistent as well
Across 29 studies, a recent meta-analysis failed to find evidence that more experienced therapists improve symptoms or functioning more than less experienced therapists (though patients with more experienced therapists did report more treatment satisfaction)
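If you haven't seen how results like that get pooled, here's a toy random-effects meta-analysis in metafor; the effect sizes below are invented for illustration, not the 29 studies' data

```r
# Toy random-effects meta-analysis with metafor. The yi/vi values are
# invented to show the mechanics; they are NOT the 29 studies' data.
library(metafor)

dat <- data.frame(
  yi = c(0.10, -0.05, 0.02, 0.08),  # standardized mean differences (made up)
  vi = c(0.04, 0.03, 0.05, 0.02)    # sampling variances (made up)
)

res <- rma(yi, vi, data = dat)
summary(res)  # a pooled CI spanning 0 = no detectable experience effect
```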
These findings align with decades of converging evidence that folks with less experience can deliver mental health interventions as well as experienced professionals (i.e., help patients get at least as much better)
They also align with analyses suggesting that, over the course of training, therapists become at worst slightly less helpful to patients and at best minimally (d = 0.04) more helpful (small therapist Ns, though)
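For scale, d = 0.04 means the groups differ by 4% of a pooled standard deviation; toy numbers below, purely illustrative

```r
# What d = 0.04 looks like in raw units (toy numbers, purely illustrative)
m_later   <- 10.2  # mean outcome later in training (hypothetical)
m_earlier <- 10.0  # mean outcome earlier in training (hypothetical)
sd_pooled <- 5     # pooled standard deviation (hypothetical)
(m_later - m_earlier) / sd_pooled  # = 0.04, i.e., 4% of an SD
```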
Before you @ me, I know absence of evidence ≠ evidence of absence, and I wish the quality of these studies were MUCH higher
However, an institution that prioritized improving patients' and trainees' lives would have invested in rigorously testing our training model already
If less training is required to help people than our current model of "accumulate a ludicrous number of clinical hours while also doing 16 other jobs" demands, we could improve trainees' lives by reducing hour requirements
There's solid evidence we could prepare trainees in a much shorter amount of time
Trainings that take *much* less time than PhD programs (80 hours or less vs. 500 or more) can lead to medium effect sizes on patient outcomes vs. control conditions in RCTs
Even if our training improves patient outcomes more than the current evidence suggests, we could still do better
How long are we going to keep enshrining powerful people's personal preferences instead of investing in rigorously evaluating how we could do better as a field?
I'm not optimistic, and I hope I'm wrong
A clinical psychology that prioritizes improving patients' and trainees' lives would invest in evaluating whether its current training models accomplish those goals
And if those goals are not being met, we would prioritize solutions that center patients' and trainees' well-being
Or, if you would prefer the thrust of this thread in a succinct tweet:
If you ever want to sound like an expert without paying attention, you only need two words in response to any question
"It depends"
A thread on why we should retire that two-word answer 🧵
When people say "it depends," they often mean the effect of one variable depends on the level of at least one other variable
For example:
You: Does this program improve depression?
Me, Fancy Expert: Well, it depends, probably on how depressed people were before the program
Understandably, you'll want some evidence for my "it depends"
Luckily, my underpaid RA has already fired up an ANOVA or regression, and *I* found that how depressed folks were before the program moderated the effect of the program
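In model terms, that "it depends" is just an interaction term; a minimal sketch, assuming a hypothetical `trial` data frame with made-up column names

```r
# "It depends" as a model: let the program effect vary with baseline
# depression via an interaction. `trial` and its columns are hypothetical.
fit <- lm(depression_post ~ program * depression_pre, data = trial)
summary(fit)  # the program:depression_pre coefficient is the "it depends"
```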
And especially if you have a psych background, you might think we *need* an experiment to understand causes
While I love experiments, here's a thread of resources on why they're neither necessary nor sufficient to determine causes 🧵
This paper led by @MP_Grosz is a great start! It persuaded me that merely adjusting our language (e.g., saying "age is positively associated with happiness" instead of "happiness increases with age") isn't enough
Still responding to folks re: my transition to data science post! I'll get to everyone, promise!
Given the interest, I thought people might want to know the (almost all free or low-cost!) resources I used to train myself for a data science role
A (hopefully helpful) 🧵
R, Part I
My first real #rstats learning experience was using swirl. I loved that I could use it inside R (rather than having to go back and forth between the resource and the RStudio console)
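Getting started is genuinely three lines; swirl and its lesson menu live right in the console

```r
# swirl runs interactive lessons inside the R console itself
install.packages("swirl")
library(swirl)
swirl()  # then pick a course from the interactive menu
```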
A cliché rec, but it's a cliché for a reason. R for Data Science by @hadleywickham & @StatGarrett transitioned me from "kind of messing around" to "wow, I did that cool thing" in R. It's absolutely a steal that it's available for free
Trying to balance:
- Having genuine empathy for people who are staring down the barrel of their life's work not replicating
- Not reinforcing power structures and practices that led to a world where those barrels are all too common
Hearing @minzlicht talk about this on the "Replication Crisis Gets Personal" @fourbeerspod episode brought home to me how lucky I am to be early in my career now as opposed to 20 or even 10 years ago
But his example* reminds me people in power have a choice when confronted with a much messier literature than initially described
They can double down, or they can engage meaningfully with a more complicated world
*And many others, my mentions aren't ever comprehensive!