So in 2007, physicists wrote a paper that made headlines: according to their calculations, human coin flips aren’t 50/50, but closer to 51/49.
Why is that, and did students in Amsterdam really flip 350,000 coins to find out?
🧵
Diaconis et al. (2007) showed that coins tend to land on the same side they started on ().
They were also able to tune coin-flipping machines to flip 100/0, “proving coin flip physics isn’t random”. info.phys.unm.edu/~caves/courses…
Now a group of students in Amsterdam flipped 350,000 coins in a preregistered study & painstakingly recorded the results. In fact, they videotaped all coin flips and uploaded them with the paper, so their study is fully reproducible ;).
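Why does it take hundreds of thousands of flips to pin down a 1-point bias? A rough back-of-the-envelope sketch (my own illustration, not the study's analysis — the numbers 51% and 350,000 come from the thread):

```python
from math import sqrt

# Same-side proportion claimed by the thread, vs. a fair 50/50 null,
# with roughly the Amsterdam sample size.
n = 350_000
p_null, p_alt = 0.50, 0.51

# Standard error of a sample proportion under the 50/50 null.
se = sqrt(p_null * (1 - p_null) / n)

# How many standard errors away a true 51% rate would sit on average.
z = (p_alt - p_null) / se
print(round(z, 1))  # -> 11.8: easily detectable at n = 350,000
```

With only a few thousand flips, the same 1-point bias would sit just 1-2 standard errors from 50/50 and be hard to distinguish from noise, which is presumably why so many flips were needed.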
After careful consideration, the FDA advisory commission voted today 9:2 that MDMA has *not* been shown to be effective for treating PTSD, given massive concerns around validity threats in this literature. They also voted 10:1 that MDMA has *not* been shown to be safe.
1/8 New tutorial preprint led by @b_siepe in which we present different descriptive statistics & data visualization techniques with the goal of better understanding EMA item functioning.
2/8 EMA data collection is increasing exponentially, but there are many challenges:
🔎 data are complex
🔎 psychometric properties of EMA items often not investigated
🔎 most scales are neither standardized nor validated beyond face validity
So… how *valid* are our data?
3/8 Validity is a very thorny issue, so instead we decided to write a tutorial on better understanding item *functioning* as a necessary precursor to discussing validity.
In other words: "Look at your data carefully" (a much-repeated call over the last century).
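In the spirit of "look at your data carefully", here is a minimal sketch of one basic item-functioning check — within-person means and SDs on a single EMA item. The data and scale are made up for illustration; this is not code from the preprint:

```python
from statistics import mean, stdev

# Toy EMA responses (hypothetical): person -> momentary ratings on one
# item, 1-5 scale. Real EMA data would have timestamps and many items.
ema = {
    "p1": [1, 1, 1, 2, 1, 1],   # hugging the floor of the scale
    "p2": [3, 4, 2, 5, 3, 4],   # using the full range
}

# Within-person mean and SD: a floor effect (p1) shows up as a low mean
# and near-zero variability, i.e. the item barely "functions" for them.
for person, ratings in ema.items():
    print(f"{person}: mean={mean(ratings):.2f} sd={stdev(ratings):.2f}")
```

Even this crude summary flags items that some participants never move on, which is exactly the kind of descriptive groundwork the tutorial argues should precede any validity claims.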
1/22 Our new paper led by @ashleylwatts (w @ashlgreene & @wesbonifay) is now published; I view it as the first critical evaluation of the statistical and theoretical p-factor & resulting literature. Here a brief overview of the core arguments in the paper.
We start by clearly differentiating the theoretical p-factor (from here on: P, thought to describe and perhaps cause variation in all forms of psychopathology) from the statistical 'general factor of psychopathology' (from here on: GFP, usually derived via latent variable models).
P has been taken to mean a variety of things, including a (usually unspecified) causal mechanism, intellectual functioning, disordered thought, negative emotionality, emotion dysregulation, and others.
Authors who use GFPs to derive P often don't specify what they mean by P.
1/8 We published several big picture papers on how to best conceptualize, classify and understand mental health problems / psychopathology recently, so here's a very brief overview in case you missed some of them.
I will link to all full text PDFs at the end.
2/8 A discussion paper providing numerous perspectives on mental health classification — based on a super interesting panel discussion with lots of different perspectives at a conference!
3/8 Together with @MiriForbes & @uma_phd, we organized a special issue in JOPACS on 'Studying Fine-Grained Elements of Psychopathology to Advance Mental Health Science'.
Our editorial provides a summary, and here is a list of all papers + PDFs ().
1/ The special issue @MiriForbes, @uma_phd & I put together on future of #MentalHealth research is out!
We feature papers on the importance of studying fine-grained clinical elements: 1 editorial, 6 empirical papers, and 2 commentaries.
Brief 🧵 of all papers with URLs.
2/ Our editorial introduces the topic of fine-grained clinical elements and proposes pluralistic, multimethod, & multisystem approaches as a way forward. It also features results of a survey in which we asked the author teams about their perspectives.
3/ Empirical paper by Rowe et al. who examine which specific dimensions of internalizing psychopathology are associated w decreases in hippocampal volume over a 6-month period in 80 community-recruited adults.