#CausalInferenceQuestions I observe a certain effect in schizophrenia and I want to test whether it's *specifically* related to some symptoms (e.g. Delusions) and not just to the severity of the condition (e.g. as measured by positive symptoms, SAPS). 1/n
Would it make sense to model it as effect ~ Delusions + SAPS (NB: the model is simplified)? Or better as SAPS(minus Delusions), i.e. a SAPS score with the Delusions items removed? I'm a bit worried about the psychometric validity of the latter measure (a sketch of both formulations below). 2/n
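Not the actual analysis, but a minimal sketch of the two formulations in Python (statsmodels formula syntax). Column names and the simulated data are purely illustrative assumptions so the snippet runs:

```python
# Hypothetical columns: effect, Delusions, SAPS_total (full SAPS),
# SAPS_minus_del (SAPS with the Delusions items removed). Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
delusions = rng.poisson(3, n)                # Delusions subscale
saps_minus_del = rng.poisson(10, n)          # SAPS without the Delusions items
saps_total = delusions + saps_minus_del      # full SAPS contains the Delusions items
effect = 0.5 * delusions + 0.1 * saps_minus_del + rng.normal(0, 1, n)
df = pd.DataFrame(dict(effect=effect, Delusions=delusions,
                       SAPS_total=saps_total, SAPS_minus_del=saps_minus_del))

# Option 1: adjust for the full SAPS (shares items with Delusions -> built-in collinearity).
m1 = smf.ols("effect ~ Delusions + SAPS_total", data=df).fit()
# Option 2: adjust for SAPS minus the Delusions items (no shared items,
# but a less standard, psychometrically untested score).
m2 = smf.ols("effect ~ Delusions + SAPS_minus_del", data=df).fit()
print(m1.params, m2.params, sep="\n")
```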
Also, I observe that very low scores on Delusions and SAPS show effects similar to high scores (but not medium ones), plausibly because a patient with low positive symptoms needs high negative symptoms (here SANS) to be diagnosed at all (some sort of collider bias?).
Should I simply add SANS to the model? Trying to think this through with DAGs, but I'm not there yet! (Maybe @rlmcelreath has suggestions?) A toy simulation of the selection story is sketched below.
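A toy collider-bias simulation under made-up numbers (my sketch, not the actual DAG): diagnosis is treated as a collider of positive (SAPS) and negative (SANS) symptoms, so sampling only diagnosed patients induces a spurious negative SAPS-SANS association, which can make low-SAPS patients look extreme on other dimensions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 100_000
saps = rng.normal(0, 1, n)      # positive symptoms (standardised, hypothetical)
sans = rng.normal(0, 1, n)      # negative symptoms, independent of SAPS by construction
# Diagnosis requires high SAPS and/or high SANS -> a collider of the two.
diagnosed = (saps + sans + rng.normal(0, 0.5, n)) > 1.5

pop = pd.DataFrame(dict(saps=saps, sans=sans, diagnosed=diagnosed))
print("SAPS-SANS correlation, full population:",
      pop[["saps", "sans"]].corr().iloc[0, 1].round(3))      # ~0
print("SAPS-SANS correlation, diagnosed only: ",
      pop.loc[pop.diagnosed, ["saps", "sans"]].corr().iloc[0, 1].round(3))  # negative

# Adding SANS may help if SANS drives selection, but the cleaner route is to draw
# the DAG including the selection node and check which adjustment (if any)
# closes the spurious path.
```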
DAG question for #CausalInference and #epitwitter tweeps: TL;DR: How do we use DAGs in typical pharmacosurveillance scenarios, when the entities of interest are unobserved? A thread 1/
We are interested in whether the administration of a drug causes an increase in the probability of an adverse event (thus, an adverse drug reaction), versus there being no causal relation at all. 2/
However, the data we have access to are spontaneous reports from practitioners and patients about the co-occurrence of drug & event. So drug & event are unobserved variables; only the report of their co-occurrence is observed (toy illustration below). 3/
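To make the latent-variable issue concrete, a toy simulation with assumed reporting probabilities (my own numbers, not real pharmacovigilance data): the drug has no effect on the event, yet the reports alone suggest an association because co-occurrences are reported more often:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
drug = rng.random(n) < 0.10     # latent exposure
event = rng.random(n) < 0.02    # latent adverse event, independent of drug (no true ADR)

# Reporting depends on both latents: drug-event co-occurrences get reported far
# more often than events without the drug; non-events are never reported.
p_report = np.where(drug & event, 0.30, np.where(event, 0.05, 0.0))
report = rng.random(n) < p_report          # the only observed variable

true_rr = event[drug].mean() / event[~drug].mean()
print("True risk ratio of event (drug vs no drug):", round(true_rr, 2))     # ~1.0
print("Share of reported events involving the drug:", round(drug[report].mean(), 2))
print("Baseline exposure rate in the population:   ", round(drug.mean(), 2))
```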
Should we use findings from previous studies and meta-analyses to shape our statistical inferences (aka informed priors)? What are the advantages and issues? Strap in for a loooong thread (link to a video of the talk at the end) 1/
TL;DR - Systematic use of informed priors leads to more precise, but more biased, estimates (due to non-linear information flow in the literature). Critically, comparing informed and skeptical priors can provide a more nuanced and solid understanding of our findings (toy sketch below). 2/
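A minimal sketch of what that comparison can look like, using made-up numbers and a simple conjugate normal-normal update rather than the models from the talk:

```python
import numpy as np

def posterior(prior_mean, prior_sd, like_mean, like_sd):
    """Posterior mean and sd for a normal likelihood combined with a normal prior."""
    w_prior, w_like = 1 / prior_sd**2, 1 / like_sd**2
    post_var = 1 / (w_prior + w_like)
    post_mean = post_var * (w_prior * prior_mean + w_like * like_mean)
    return post_mean, np.sqrt(post_var)

study_est, study_se = 0.4, 0.25   # hypothetical new-study estimate (e.g. a Hedges' g)

informed = posterior(0.6, 0.15, study_est, study_se)   # meta-analytic prior (possibly inflated)
skeptical = posterior(0.0, 0.30, study_est, study_se)  # skeptical prior centred on no effect

print("Informed prior  -> posterior mean %.2f, sd %.2f" % informed)
print("Skeptical prior -> posterior mean %.2f, sd %.2f" % skeptical)
# The informed posterior is more precise but inherits any bias in the literature;
# a strong divergence between the two posteriors is itself a useful diagnostic.
```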
How do we understand each other in conversation? A thread based on my recent IACS4 plenary, covering a critical perspective on interactive linguistic alignment - the tendency to re-use each other's linguistic forms. 1/
TL;DR: by building cumulative scientific approaches & standardised automated tools, we can show that even basic mechanisms like priming and alignment are shaped by the short- & long-term communicative context. Plus, there's no escaping the need for both qualitative and quantitative approaches. 2/
Problem: social interactions are complex: listening to what your interlocutor is saying & how (prosody, gesture), anticipating where they are going so you can plan your own turn (its content, timing & delivery), shaping it according to expected reactions, etc. Easy to get overwhelmed. 3/
How do we build a more explicitly cumulative and yet self-critical scientific approach? In a just-published paper (onlinelibrary.wiley.com/doi/10.1002/au…), we provide one of many possible paths.
TL;DR and a thread below 1/
TL;DR: design the study following a systematic review, analyse with meta-analytically informed priors, critically assess the results against skeptical priors, and build and promote open science practices. (Freely accessible preprint here: biorxiv.org/content/10.110…) 2/
A few years ago I got interested in how autistic individuals sound "different" (noted already in Asperger's and Kanner's early descriptions), how this is used in current assessment processes (e.g., ADOS), and how it has been scientifically investigated. 3/
Conversation is a dance: how do we learn it? In this systematic review & meta-analysis we thoroughly explore models & evidence for how turn-taking develops and which factors are involved. Comments & suggested pub venues are very welcome. Long thread 1/ psyarxiv.com/3bak6
This was a brilliant student-led project by Vivian Nguyen & Otto Versyp from Ghent University, who spent their Fall 20 on an internship (aka regularly zooming) with me and @ChrisMMCox 2/
This thread is making me think critically about ongoing work with @AlbertoParola2 and separately with @ethanweed. After looking meta-analytically at vocal markers of psychiatric conditions, we launched projects to systematically replicate and extend them cross-linguistically 1/n
Is there distrust? Possibly some, looking at the studies and at effect sizes of "1.89". Should there be? I'm not sure. I mean, I'd really want to be able to build on these findings to better understand the underlying mechanisms. 2/n
And that's when it struck me: this work shouldn't stand on its own, but alongside much-needed complementary work on the mechanisms underlying the phenomenologically clear atypicalities (and on how they can help us understand the conditions). Without that, 3/n