Análise Real
Causal Inference and Statistics in the Social/Health Sciences and Artificial Intelligence. Assistant Professor @uwstat @uw #causaltwitter #causalinference
Apr 6, 2020 8 tweets 2 min read
(1/n) Examining the answers on this thread, it seems some of the confusion stems from the dashed bidirected arrows. So here's a real crash course on *semi-Markovian* DAGs.

Whenever you see a graph with *dashed* bidirected arrows, that graph is still a valid DAG (still acyclic).

(2/n) Those dashed arrows are used to indicate *dependent* error terms, or, more substantively, latent (unobserved) common causes.

So first let's start with Markovian models. We say a DAG is Markovian if all error terms are assumed to be independent.
However...
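
As a rough illustration of what a dashed bidirected arrow encodes, here is a minimal simulation sketch (all variable names and numbers are hypothetical): an unobserved common cause U enters the error terms of both X and Y, so those errors are dependent even though the graph over the observed variables remains acyclic.

```python
# Minimal sketch (hypothetical variables): a dashed bidirected arrow X <--> Y
# stands for an unobserved common cause U, which makes the error terms of X
# and Y dependent, even though the graph over {X, Y} stays acyclic.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

U = rng.normal(size=n)               # latent (unobserved) common cause
e_x = 0.8 * U + rng.normal(size=n)   # "error" of X absorbs U
e_y = 0.8 * U + rng.normal(size=n)   # "error" of Y absorbs U

X = e_x                              # X <- U (plus independent noise)
Y = 0.5 * X + e_y                    # Y <- X, Y <- U

# The error terms are correlated because they share the latent U:
print(np.corrcoef(e_x, e_y)[0, 1])   # noticeably > 0 (around 0.4 here)
```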
Jan 22, 2020 10 tweets 3 min read
(1/10) Happy to announce that my paper with @chadhazlett is officially out in JRSS-B! (tinyurl.com/vo7o6wz)

In this paper, we develop new sensitivity analysis tools to precisely quantify how strong confounders would need to be to overturn your research conclusions.

(2/10) Among other things, we introduce two new sensitivity statistics for *routine reporting*: (i) the robustness value and (ii) the partial R2 of the treatment with the outcome. These simple statistics reveal how robust your estimates are to potentially unobserved confounders.
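
For readers who want to see the two statistics in numbers, here is a small sketch assuming the usual formulas from the paper: the partial R2 of the treatment with the outcome can be recovered from its t-statistic and residual degrees of freedom, and the robustness value RV_q follows from the (scaled) partial Cohen's f. The example t-statistic and degrees of freedom below are hypothetical.

```python
# Sketch of the two reported statistics, assuming the standard formulas:
# partial R2 of the treatment with the outcome, and the robustness value RV_q
# (the minimal strength of association an unobserved confounder would need
# with both treatment and outcome to reduce the estimate by 100*q percent).
import numpy as np

def partial_r2(t_stat: float, dof: int) -> float:
    """Partial R2 of the treatment with the outcome, from its t-statistic."""
    return t_stat**2 / (t_stat**2 + dof)

def robustness_value(t_stat: float, dof: int, q: float = 1.0) -> float:
    """RV_q = 1/2 * (sqrt(f_q^4 + 4*f_q^2) - f_q^2),
    where f_q = q * |t| / sqrt(dof) is the partial Cohen's f, scaled by q."""
    f_q = q * abs(t_stat) / np.sqrt(dof)
    return 0.5 * (np.sqrt(f_q**4 + 4 * f_q**2) - f_q**2)

# Hypothetical example: t = 4.18 on 783 residual degrees of freedom
print(partial_r2(4.18, 783))        # ~0.022
print(robustness_value(4.18, 783))  # ~0.14: confounders explaining ~14% of the
                                    # residual variance of both treatment and
                                    # outcome could bring the estimate to zero
```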
Oct 11, 2019 5 tweets 2 min read
(1/2) I've seen this paper getting a lot of attention but this is not proper causal reasoning. Imagine you are studying a population in which everyone has a very serious disease, except one person. Then, of course, you find that the disease explains little variation in happiness.

(2/4) Would you then conclude that the "effects are too small to warrant policy change"? Surely not. The low "variance explained" is due to low variation in exposure, but the effect of an intervention could be huge. Thus, if screen time affects one's happiness substantially,
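
A toy simulation (all numbers hypothetical) makes the point concrete: with almost no variation in exposure, the exposure explains essentially none of the variance in the outcome, even though its causal effect on each person is large.

```python
# Toy simulation: when almost everyone is exposed, the exposure explains
# almost no variance in the outcome, even though its individual-level
# causal effect is large.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

exposed = (rng.random(n) < 0.999).astype(float)   # nearly everyone exposed
effect = 2.0                                       # large causal effect
happiness = -effect * exposed + rng.normal(size=n)

r2 = np.corrcoef(exposed, happiness)[0, 1] ** 2
print(f"variance explained: {r2:.4f}")             # tiny (~0.004), despite
                                                   # an effect size of 2
```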
Apr 2, 2019 5 tweets 2 min read
Just want to re-emphasize that this point is not just a matter of taste. If you express your causal model (only) as a list of counterfactual statements, there's currently no systematic procedure to find its testable implications. If, however, you write the same model as a DAG...

...not only can you detect those testable implications immediately with the naked eye, but there are also efficient algorithms that can find these testable implications automatically for you---both conditional independences and "Verma"-type constraints.
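
As a sketch of the "efficient algorithms" point, the conditional-independence implications of a DAG can be enumerated mechanically via d-separation. The snippet below assumes a networkx version that provides d_separated (newer releases rename it is_d_separator) and uses a hypothetical example graph; Verma-type constraints need more specialized machinery and are not covered here.

```python
# Enumerate the conditional independences implied by a DAG via d-separation.
# Assumes networkx exposes d_separated (renamed is_d_separator in newer
# releases). The DAG below is a hypothetical example: Z -> X -> Y, X -> W.
import itertools
import networkx as nx

G = nx.DiGraph([("Z", "X"), ("X", "Y"), ("X", "W")])

nodes = list(G.nodes)
for a, b in itertools.combinations(nodes, 2):
    rest = [v for v in nodes if v not in (a, b)]
    # check marginal and all conditional independences between a and b
    for k in range(len(rest) + 1):
        for cond in itertools.combinations(rest, k):
            if nx.d_separated(G, {a}, {b}, set(cond)):
                print(f"{a} _||_ {b} | given {sorted(cond)}")
```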