Fortunately, the speculations I made in my linear DiD paper about the extension to the nonlinear case turn out to be true -- with a small caveat: one should use the canonical link function for the chosen quasi-log-likelihood (QLL) function.
So, exponential mean/Poisson QLL if y >= 0.
Logistic mean/Bernoulli QLL if 0 <= y <= 1 (binary or fractional). (We call this logit and fractional logit.)
Linear mean/normal QLL (OLS, of course).
These choices ensure that pooled estimation and imputation are numerically identical.
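Here's a minimal sketch of what that identity looks like in the simplest 2x2 case with the exponential mean/Poisson QLL combo (simulated data; the setup and variable names are mine, not from the paper). Pooled Poisson QMLE with the treatment dummy included and imputation from a fit on untreated observations return the same ATT:

```python
# Simulated 2x2 DiD with y >= 0. With the canonical (log) link, pooled
# Poisson QMLE and the imputation estimator give the same ATT.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
d = np.repeat(rng.integers(0, 2, n), 2)   # eventually-treated indicator
p = np.tile([0, 1], n)                    # post-period indicator
df = pd.DataFrame({"d": d, "p": p, "w": d * p})
df["y"] = rng.poisson(np.exp(0.5 + 0.3 * df.d + 0.2 * df.p + 0.4 * df.w))

# Pooled: Poisson QMLE on all observations, treatment dummy included
pooled = smf.poisson("y ~ d + p + w", data=df).fit(disp=0)
tp = df[(df.d == 1) & (df.p == 1)]        # treated, post-period cell
att_pooled = (tp.y - pooled.predict(tp.assign(w=0))).mean()

# Imputation: fit on untreated obs only, impute Y(0) for treated/post
imp = smf.poisson("y ~ d + p", data=df[df.w == 0]).fit(disp=0)
att_imp = (tp.y - imp.predict(tp)).mean()

print(att_pooled, att_imp)                # numerically identical
```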
It's not a coincidence that these same combos show up in my work on doubly robust estimation of treatment effects and improving efficiency without sacrificing consistency in RCTs. Latest on the latter is here:
Now I just have to finish writing the nonlinear DiD paper and update the Stata files. On the plus side, I'll be posting the recent revision of the linear paper later; it has, I think, additional useful results on imputation and parallel trends. And corrects some mistakes ....
On my shared Dropbox folder, pinned at the top, I posted the latest version of my TWFE/TWMundlak paper. It's essentially complete (and too long ...). I've included the "truly marvelous" proof of equivalence between pooled OLS and imputation.
I also fixed some of the material on testing/correcting for heterogeneous trends. A nice result is that the POLS approach with cohort-specific trends is the same as the obvious imputation approach.
This means that using the full regression to correct for non-parallel trends suffers no contamination when testing. It's identical to using only untreated observations to test for pre-trends. But one must allow full heterogeneity in cohort/time ATTs for the equivalence to hold.
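For the linear case, here's a small sketch of that pooled OLS/imputation equivalence in a staggered design (my own simulated example and variable names; cohort g is the first treated period, 0 for never treated). The key is the full set of cohort-by-time treatment dummies:

```python
# Simulated staggered panel: pooled OLS with fully heterogeneous
# cohort-by-time treatment dummies reproduces the imputation estimator
# that fits cohort + time effects on untreated observations only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, T = 300, 6
t = np.tile(np.arange(1, T + 1), n)
g = np.repeat(rng.choice([0, 4, 5], n), T)    # first treated period; 0 = never
w = ((g > 0) & (t >= g)).astype(int)          # treatment indicator
y = 0.5 * (g > 0) + 0.1 * t + 0.7 * w + rng.normal(size=n * T)
df = pd.DataFrame({"y": y, "t": t, "g": g, "w": w})

# Imputation: fit on untreated obs, impute Y(0) for treated obs
base = smf.ols("y ~ C(g) + C(t)", data=df[df.w == 0]).fit()
treated = df[df.w == 1]
att_imp = (treated.y - base.predict(treated)).mean()

# Pooled OLS: one dummy per treated cohort-time cell (full heterogeneity)
df["cell"] = np.where(df.w == 1,
                      df.g.astype(str) + "." + df.t.astype(str), "none")
pooled = smf.ols("y ~ C(g) + C(t) + C(cell, Treatment('none'))", df).fit()
att_pols = (treated.y - pooled.predict(treated.assign(cell="none"))).mean()

print(att_imp, att_pols)                      # numerically identical
```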
I finally got my TWFE/Mundlak/DID paper in good enough shape to make it an official working paper. I'll put it in other places but it's currently here:
I changed the title a bit to better reflect its contents. I'm really happy with the results, less happy that the paper got a bit unwieldy. It's intended to be a "low-hanging fruit" DID paper.
Now I've more formally shown that the estimator I was proposing -- pooled OLS, TWFE, or RE (they're all the same, properly done) -- identifies every dynamic treatment effect one is interested in (on means) in a staggered design.
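On the "they're all the same" point, here's a minimal sketch of the basic FE = Mundlak identity in a balanced panel (my own simulated example; the staggered version in the paper adds the cohort/time interactions):

```python
# Balanced panel: TWFE (unit dummies) and Mundlak-style pooled OLS
# (adding the unit time-average of w) give the same coefficient on w.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, T = 200, 5
unit = np.repeat(np.arange(n), T)
t = np.tile(np.arange(T), n)
w = rng.integers(0, 2, n * T)
y = 1 + 2 * w + rng.normal(size=n)[unit] + rng.normal(size=n * T)
df = pd.DataFrame({"y": y, "unit": unit, "t": t, "w": w})
df["w_bar"] = df.groupby("unit")["w"].transform("mean")  # Mundlak device

twfe = smf.ols("y ~ w + C(unit) + C(t)", df).fit()
mundlak = smf.ols("y ~ w + w_bar + C(t)", df).fit()
print(twfe.params["w"], mundlak.params["w"])  # identical coefficients
```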
For my German friends: What is the German equivalent of "Ms." when addressing a woman (not yet a Dr.)? I noticed on a course application form in English -- I assume translated from German -- only two choices, "Mr." and "Mrs." Is "Frau" used for both Mrs. and Ms.?
As a follow-up: If I use English, I assume "Ms." is acceptable. I never address anyone as "Mrs." in English. It's interesting that "Frau" was translated as "Mrs." rather than "Ms." I would've expected the latter, especially in an academic setting.
My formal German courses were in the 1970s, and I learned that "Frau" is for married women only. I think I can make the adjustment, though. 🤓
I'm still intrigued that there is no "Ms." equivalent in German ....
Here's a panel DID question. Common intervention at t = T0. Multiple pre-treatment and post-treatment periods. Dummy d(i) is one if a unit is eventually treated. p(t) is one for t >= T0. The treatment indicator is w(i,t) = d(i)*p(t). Time-constant controls are x(i).
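A quick sketch of that setup, just to fix notation (simulated; T0 and the variable names are from the tweet, everything else is mine):

```python
# Common-timing DiD design: w(i,t) = d(i) * p(t), time-constant control x(i).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n, T, T0 = 100, 8, 5
unit = np.repeat(np.arange(n), T)
t = np.tile(np.arange(1, T + 1), n)
d = rng.integers(0, 2, n)             # one if unit is eventually treated
x = rng.normal(size=n)                # time-constant control
df = pd.DataFrame({
    "unit": unit, "t": t,
    "d": d[unit], "x": x[unit],
    "p": (t >= T0).astype(int),       # one for t >= T0
})
df["w"] = df["d"] * df["p"]           # treatment indicator
```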
I should admit that my tweets and poll about missing data were partly self-serving, as I'm interested in what people do. But it was a mistake to leave the poll initially vague. I haven't said much useful on Twitter in some time, so I'll try here.
I want to start with the very simple case where there is one x and I'm interested in E(y|x); assume it's linear (for now). Data are missing on x but not on y. Here are some observations.
1. If the data are missing as a function of x -- formally, E(y|x,m) = E(y|x) -- the complete-case (CC) estimator is consistent (even conditionally unbiased).
2. Imputing on the basis of y is not, and can be badly biased.
3. Inverse probability weighting using 1/P(m=0|y) is also inconsistent.
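A minimal simulation of points 1 and 2 (my own setup: y = 1 + 2x + u, with the probability that x is missing increasing in x):

```python
# Missingness in x depends on x only: complete-case OLS is fine,
# imputing x from y is not.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(size=n)
miss = rng.random(n) < 1 / (1 + np.exp(-x))   # P(missing) depends on x only
obs = ~miss                                    # complete cases

# 1. Complete-case OLS slope: consistent (true value 2)
b_cc = np.polyfit(x[obs], y[obs], 1)[0]

# 2. Impute missing x from y (fit x on y among complete cases), then OLS
g, a = np.polyfit(y[obs], x[obs], 1)
x_fill = np.where(obs, x, a + g * y)
b_imp = np.polyfit(x_fill, y, 1)[0]

print(b_cc, b_imp)   # b_cc is close to 2; b_imp is visibly biased
```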
Several comments on this paper. First, it's nice to see someone taking the units of measurement issue seriously. But I still see many issues, especially when y >= 0 and we have better alternatives.
1. A search is required over units of measurement.
How do I compute a legitimate standard error of, say, an elasticity? I've estimated theta but then I ignore the fact that I estimated it? That's not allowed.
2. As with many transformation models, the premise is that there exists a transformation g(.) such that g(y) = xb + u.
u is assumed to be independent of x, at a minimum. Often its distribution is restricted. In a 1989 IER paper I argued this was a real problem with Box-Cox approaches because u >= -xb. If I model E(y|x) directly I need none of that. It's what Poisson regression does.
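Here's what the Poisson-regression alternative looks like in practice (my own sketch; the DGP is deliberately not Poisson, only the conditional mean is exponential):

```python
# Model E(y|x) = exp(a + b*x) directly via Poisson QMLE with robust SEs:
# no transformation of y, no distributional assumption beyond the mean.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)
# y is not Poisson (scaled by an independent 1-or-2 multiplier), but the
# conditional mean is still exponential in x with slope 0.3
y = rng.poisson(np.exp(0.5 + 0.3 * x)) * rng.integers(1, 3, n)
df = pd.DataFrame({"y": y, "x": x})

# Coefficient on x is a semi-elasticity; heteroskedasticity-robust SEs
# are valid under correct mean specification alone
res = smf.poisson("y ~ x", data=df).fit(cov_type="HC0", disp=0)
print(res.params["x"], res.bse["x"])
```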