New paper on skip-out questions in #depression diagnosis.
In many clinical interviews for depression, we query the two core symptoms, sad mood & anhedonia. If neither is present, we "skip" the other 7 symptoms (e.g. sleep, appetite) bc 1 of the core symptoms is required for diagnosis 🧵
This practice saves time, but raises the question: are there people with many so-called "secondary" depression symptoms who do not have the core symptoms? That is what our new paper in JPCS, led by the wonderful @orla_mcbride, is about.
We also look into common data-analytic procedures to deal with skip-out data, such as imputing missing data with 0, meaning that if you don't have sad mood/anhedonia and we don't ask you about insomnia, we decide you don't have insomnia. Yep, bizarre, but it's really common.
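For the data-minded, here's a minimal sketch of what that zero-imputation looks like in practice (toy data & hypothetical column names, not the paper's actual code):

```python
import numpy as np
import pandas as pd

# Toy interview data: 1 = symptom endorsed, 0 = not endorsed,
# NaN = never asked because the skip-out rule was triggered
df = pd.DataFrame({
    "sad_mood":  [1, 0, 0, 1],
    "anhedonia": [0, 0, 1, 1],
    "insomnia":  [1, np.nan, 0, 1],   # person 2 was skipped
    "appetite":  [0, np.nan, 1, 0],
})

# The common (and problematic) fix: treat "not asked" as "absent"
secondary = ["insomnia", "appetite"]
df_imputed = df.copy()
df_imputed[secondary] = df_imputed[secondary].fillna(0)

# Person 2 now counts as having no insomnia & no appetite problems,
# even though nobody ever asked them about either
print(df_imputed)
```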
Broadly, we find that secondary symptoms *are* endorsed at much higher frequency in the presence of sad mood and/or anhedonia, seemingly justifying the DSM's core-symptom distinction. There are two "howevers" here, however:
However 1: given that all symptoms are positively intercorrelated, you could make the same argument for most symptoms. E.g., all symptoms are endorsed at much higher rates when you have insomnia vs when you don't.
However 2: while secondary symptoms were much less frequent in people not endorsing the core symptoms of depression, they were far from 0. Therefore, substituting them with e.g. 0 (conceptually or statistically) is flawed.
Keep in mind that secondary symptoms present in the absence of core symptoms are *meaningful* in these data.
When you take a self-report depression questionnaire & are asked about insomnia, you may indicate insomnia in the absence of depression (e.g. due to ongoing construction on your street).
This was not the case here: secondary symptoms were only recorded by clinicians after ruling out alternative explanations such as symptoms due to medical illness, symptoms due to medication, insomnia due to construction, etc.
Hope you enjoy the piece—feedback very welcome. Big thanks to Ken Kendler & Steve Aggen for providing such an amazing dataset, Orla for leading this work over many years (we did it Orla!!! 👊), & my student Jelle for contributing to his first paper!
For those who don't have full text access, the PDF is (in line with the Taverne Amendment, thank you Dutch government) available on my website.
1/4 If you read one #depression #biomarker paper this year, read this one by Nils & the gang. They looked at a large sample of depressed and healthy participants, investigating numerous features (neuro, genetics, etc.) in 2.4 million #MachineLearning models.
2/4 I'm not surprised by the results: depression is not a unitary, biological disease entity. It is a label that was historically developed for clinical utility: it is a heuristic superimposed on a complex landscape of mental health problems people experience.
3/4 The label has strengths (and I believe labels can be helpful), but of course biological investigations into labels such as depression will have limited success.
1/5 This review by Gonthier (2022) tackles a crucial topic: are non-verbal intelligence tests culture fair? This is important because you often see the reasoning "ethnicity/race 1 has lower IQ than ethnicity/race 2, & it must be genetic because non-verbal tests are culture fair".
2/5 This reasoning takes ugly extremes, such as the claim that there is "some genetic component in Black–White differences in mean IQ" (Rushton & Jensen 2005). So the review here really matters: it addresses such conclusions and can set the record straight.
3/5 Gonthier investigates numerous sources of evidence, from controlled lab experiments in the US to n=1 qualitative reports from ethnologists dating many decades back, concluding that there is substantial evidence that non-verbal tests are *not* culture fair.
Highly recommended if you always wanted to know what colliders are or do—and if you ever "added x and y as covariates" to e.g. a linear regression because that is "what your field does".
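If you want a quick intuition before reading: here's a minimal simulated sketch of collider bias (hypothetical variables, Python, not from the review itself). Conditioning on a common *effect* z of two independent variables x and y induces a spurious association between them:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000

# x and y are independent by construction
x = rng.normal(size=n)
y = rng.normal(size=n)

# z is a collider: a common effect of both x and y
z = x + y + rng.normal(scale=0.5, size=n)

# Regressing y on x alone: coefficient on x is ~0, as it should be
print(sm.OLS(y, sm.add_constant(x)).fit().params)

# "Adding z as a covariate" conditions on the collider and
# makes the coefficient on x spuriously negative
X = sm.add_constant(np.column_stack([x, z]))
print(sm.OLS(y, X).fit().params)
```

So "just add it as a covariate" can manufacture an association that isn't there, which is exactly why knowing what colliders do matters.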
2/ This is in part my 'fault' bc I kept submitting it to applied journals, but I really didn't want this paper in a "journal specialized on measurement" (quote from 6 rejection letters): I wanted to reach clinicians & applied researchers.
3/ But I had submitted several papers that year, so didn't feel too bad abt waiting. Today, the paper has ~400 citations & has spawned a mini-literature of folks doing similar analyses w/ other scales. You can find a bit of a summary in this tweet here:
1/ So @UniLeiden has now "lost" a second Prof in a short period of time. In my reading of the news, he is no longer allowed on uni premises due to ‘extremely undesirable behaviour’, but keeps salary & title.
What did he do? See screenshot below from uni executive board.
2/ The other Prof we "lost" 3 yrs ago had committed fraud, tampered w/ data & grant applications, taken blood samples w/o ethics approval, fabricated experiments, removed participants, dropped & added authors (and was then hired by TU Dresden for… I don't know exactly what).
3/ Both cases reveal highly problematic practices that went on for years without the university doing anything. And there is just no way *some* people in power weren't aware.
When the practices did come to light, it was bc (often female & junior) folks spoke up, at their own peril.
Lots of new followers in the last few weeks, so here's a short thread introducing you to some of the work we conducted since 2020.
Broadly speaking, our work tackles how to best 1⃣ understand, 2⃣ measure, and 3⃣ model mental health problems.
🧵
1/
Pillar 1: 𝗧𝗵𝗲𝗼𝗿𝘆 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗮𝗻𝗱 𝘁𝗲𝘀𝘁𝗶𝗻𝗴.
Don Robinaugh has led fantastic work on this topic. The first recent paper I recommend is our conceptual work on the importance of having clear theories.
Another project led by Don is our work on a formalized theory for panic disorder, sort of walking the theory walk instead of just talking the theory talk ;).
This is still in preprint stage, but there'll be some updates and news on this soon.