You’re no dummy. You already know diverging trends in the pre-period can bias your results.
But I’m here to tell you about a TOTALLY DIFFERENT, SUPER SNEAKY kind of bias.
Friends, let’s talk regression to the mean. (1/N)
dx.doi.org/10.1111/1475-6… (2/N)
Diff-in-diff nets out baseline differences…right? (4/N)
In her subtly different simulation, Jamie Daw generates treatment and control data from DIFFERENT populations.
Suppose they’re exactly as far apart as the Ryan et al. case. (10/N)
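Roughly what that setup looks like (a minimal sketch in Python, NOT Jamie's actual code; the population gap, variances, and sample size are hypothetical, chosen only so that trends are parallel and the true treatment effect is zero):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000                       # units per arm (hypothetical)
mu_treat, mu_ctrl = 0.0, 1.0    # DIFFERENT population means (hypothetical gap)
sd_person, sd_noise = 1.0, 1.0  # person-level SD and per-period noise SD (hypothetical)

def simulate(mu, n):
    """Pre/post outcomes: stable person effect + independent noise each period."""
    person = mu + sd_person * rng.standard_normal(n)    # each unit's true mean
    pre = person + sd_noise * rng.standard_normal(n)    # noisy baseline outcome
    post = person + sd_noise * rng.standard_normal(n)   # noisy follow-up; NO treatment effect
    return pre, post

pre_t, post_t = simulate(mu_treat, n)  # treatment arm
pre_c, post_c = simulate(mu_ctrl, n)   # control arm
```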
Matching FIXES bias in the Ryan et al. scenario.
Matching CAUSES bias in the Daw & Hatfield scenario.
And in NEITHER case are there any violations of parallel pre-trends. (13/N)
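To see why matching backfires here, continuing the sketch above: match each treated unit to the control with the nearest PRE-period outcome, then compare diff-in-diff estimates (again a hypothetical illustration, not the paper's code):

```python
# Unmatched diff-in-diff: parallel trends hold by construction, so this ≈ 0.
did_unmatched = (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())

# Nearest-neighbour match on the noisy baseline outcome (with replacement).
order = np.argsort(pre_c)
sorted_pre = pre_c[order]
pos = np.clip(np.searchsorted(sorted_pre, pre_t), 1, n - 1)
take_left = (pre_t - sorted_pre[pos - 1]) < (sorted_pre[pos] - pre_t)
matched = order[np.where(take_left, pos - 1, pos)]

# Matched controls were picked for atypically LOW baselines relative to their own
# population mean, so their post-period outcomes drift back up toward mu_ctrl:
# regression to the mean shows up as a spurious negative "effect".
did_matched = (post_t.mean() - pre_t.mean()) - (
    post_c[matched].mean() - pre_c[matched].mean()
)

print(f"unmatched DiD: {did_unmatched:+.3f}  (≈ 0)")
print(f"matched DiD:   {did_matched:+.3f}  (biased away from 0)")
```

With these made-up parameters the matched estimate lands around −0.5 despite a true effect of zero, while the unmatched estimate stays near zero.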
@ryan_dydx wrote a commentary for @hsr_hret dx.doi.org/10.1111/1475-6…
@jamie_daw and I responded dx.doi.org/10.1111/1475-6… (16/N)
@Lizstuartdc @Michael_Chernew @colleenlbarry et al. developed symmetric PS weighting for diff-in-diff that avoids the problem dx.doi.org/10.1007/s10742… (17/N)
(THE END)