One trope coming out of the Big Tech & mental health discussions is the presumed weakness of self-reported well-being

Folks don't have perfect insight into what impacts their mental health

And yet how folks perceive their own well-being is more important than any "objective" metric
I've found this thought exercise helpful:

A close friend tells you they're having a hard time getting out of bed every day and feeling really down

They get a new Not-Theranos blood test that "detects depression," and the test comes back negative

Do you believe your friend or the blood test?
This dynamic is what makes most mental health diagnoses simultaneously much trickier and much easier than most physical health diagnoses

Even if we don't expect perfect insight, how people feel about their lives often matters more than any "objective" test
There are real critiques of self-report measures, and I've written some of them!

But the outcome data evaluated by Big Tech in the original WSJ article wasn't weak because it was self-reported. It was weak because of mismatches between the research design and the conclusions drawn from it
I will also note that some predictors of interest in this space, like screen time, aren't well assessed by self-report *at all*

I'm talking about not underrating validated outcome metrics of well-being, depression symptoms, etc.
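To make "validated outcome metric" concrete, here's a minimal sketch of scoring one widely used, validated self-report depression measure, the PHQ-9 (the thread doesn't name a specific instrument; the PHQ-9 is just a common example I'm assuming for illustration). Each of its 9 items is rated 0-3, and the total score (0-27) maps onto standard severity bands.

```python
# Minimal sketch (illustrative only, not from the thread): scoring the PHQ-9,
# a validated self-report measure of depression symptoms.
# Each of the 9 items is rated 0-3; the total (0-27) maps to standard severity bands.

from typing import List

SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(item_responses: List[int]) -> dict:
    """Sum 9 item responses (each 0-3) and map the total to a severity band."""
    if len(item_responses) != 9 or any(r not in (0, 1, 2, 3) for r in item_responses):
        raise ValueError("PHQ-9 requires exactly 9 responses, each scored 0-3")
    total = sum(item_responses)
    severity = next(label for low, high, label in SEVERITY_BANDS if low <= total <= high)
    return {"total": total, "severity": severity}

# Example respondent endorsing most symptoms "more than half the days"
print(score_phq9([2, 2, 2, 1, 2, 2, 1, 2, 2]))  # {'total': 16, 'severity': 'moderately severe'}
```

Scale totals like this, with published reliability and validity evidence, are the kind of self-report outcome metrics the thread is arguing we shouldn't underrate.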
