I should admit that my tweets and poll about missing data were partly self-serving, as I'm interested in what people do. But it was a mistake to leave the poll initially vague. I haven't said much useful on Twitter in some time, so I'll try here.
I want to start with the very simple case where there is one x and I'm interested in E(y|x); assume it's linear (for now). Data are missing on x but not on y. Here are some observations.
1. If the data are missing as a function of x -- formally, E(y|x,m) = E(y|x) -- the complete-case (CC) estimator is consistent (even conditionally unbiased). 2. Imputing on the basis of y is not, and can be badly biased. 3. Inverse probability weighting using 1/P(m=0|y) is also inconsistent.
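To make points 1 and 2 concrete, here is a small simulation sketch (mine, not part of the thread): when missingness depends only on x, complete-case OLS recovers the slope; when it depends on y, it does not.

```python
import random
import statistics

def ols_slope(x, y):
    """Simple-regression OLS slope: sample cov(x, y) / var(x)."""
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sxy / sxx

rng = random.Random(0)
n = 100_000
x = [rng.gauss(0, 1) for _ in range(n)]
y = [1 + 2 * xi + rng.gauss(0, 1) for xi in x]  # true slope = 2

# Missingness depends on x only: drop observations with x <= 0.
cc_x = [(xi, yi) for xi, yi in zip(x, y) if xi > 0]
slope_miss_on_x = ols_slope([p[0] for p in cc_x], [p[1] for p in cc_x])

# Missingness depends on y: drop observations with y >= 1.
cc_y = [(xi, yi) for xi, yi in zip(x, y) if yi < 1]
slope_miss_on_y = ols_slope([p[0] for p in cc_y], [p[1] for p in cc_y])

print(round(slope_miss_on_x, 3))  # close to the true slope of 2
print(round(slope_miss_on_y, 3))  # attenuated, well below 2
```

Dropping on x>0 is an extreme form of selection on x, yet E(y|x) is unchanged in the selected sample, so CC OLS is fine; truncating on y changes E(y|x, selected) and attenuates the slope.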
4. Missingness based on x is "exogenous selection" to economists and "not missing at random" to statisticians. To me, this language is difficult to reconcile, but it is "just" language. "Selection on observables" is neutral: the observables could be y or x.
5. In economics, it seems odd to assume x is missing based on y rather than based on x. At a minimum, if I use CC and multiple imputation (MI) and get different answers, I can't prefer MI. They use different assumptions, and I prefer selection based on x.
6. If one assumes selection based on y -- MAR to statisticians -- then between IPW and MI, I prefer IPW. It requires fewer assumptions for consistency because only P(m=0|y) needs to be specified in addition to E(y|x). And IPW applies directly to any model without full distributional assumptions.
7. IPW identifies the params in the linear projection L(y|1,x) if P(m=0|y) is correct. Imputation has no known robustness. This robustness of IPW is the source of certain doubly robust treatment effect estimators based on IPW and regression.
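A quick illustration of points 6 and 7 (my sketch, with hypothetical "oracle" weights, assuming P(m=0|y) is known exactly): when missingness depends on y, unweighted CC is biased, while weighting complete cases by 1/P(m=0|y) restores consistency.

```python
import math
import random

def wls_slope(x, y, w):
    """Weighted least-squares slope for a simple regression of y on x."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

rng = random.Random(1)
n = 100_000
data = []
for _ in range(n):
    xi = rng.gauss(0, 1)
    yi = 1 + 2 * xi + rng.gauss(0, 1)  # true slope = 2
    # Missingness depends on y (statisticians' MAR): x is observed with
    # probability p(y), bounded away from 0 so the weights stay stable.
    p = 0.1 + 0.8 / (1 + math.exp(-(yi - 1)))
    if rng.random() < p:
        data.append((xi, yi, p))

xs = [d[0] for d in data]
ys = [d[1] for d in data]
ones = [1.0] * len(data)
ipw = [1 / d[2] for d in data]  # weights 1/P(m=0|y)

cc_slope = wls_slope(xs, ys, ones)
ipw_slope = wls_slope(xs, ys, ipw)
print(round(cc_slope, 3))   # unweighted CC: attenuated
print(round(ipw_slope, 3))  # IPW: close to the true 2
```

In practice P(m=0|y) would be estimated (e.g., by logit of the missingness indicator on y), which is the extra specification IPW requires; misspecifying it breaks the weighting, which is the "P(m=0|y) is correct" caveat in point 7.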
Several comments on this paper. First, it's nice to see someone taking the units of measurement issue seriously. But I still see many issues, especially when y >= 0 and we have better alternatives.
1. A search is required over units of measurement.
How do I compute a legitimate standard error of, say, an elasticity? I've estimated theta, but then I ignore the fact that I estimated it? That's not allowed.
2. As with many transformation models, the premise is there exists a transformation g(.) such that g(y) = xb + u.
u is assumed to be independent of x, at a minimum. Often the distribution is restricted. In 1989, in an IER paper, I argued this was a real problem with Box-Cox approaches because u >= -xb. If I model E(y|x) directly, I need none of that. It's what Poisson regression does.
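A sketch of that last point (mine, not the paper's): Poisson regression, used as a quasi-MLE, consistently estimates E(y|x) = exp(b0 + b1*x) for any nonnegative y with that conditional mean -- no distributional assumption on an additive u, and b1 is directly a semi-elasticity. Here y is deliberately not Poisson (it's continuous and heteroskedastic), only its mean is right.

```python
import math
import random

def poisson_fit(x, y, iters=15):
    """Poisson regression of y on (1, x) by Newton-Raphson.
    Consistent for E(y|x) = exp(b0 + b1*x) under a correct mean only (QMLE)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)
            g0 += yi - mu            # score components
            g1 += (yi - mu) * xi
            h00 += mu                # (negative) Hessian components
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01  # solve the 2x2 Newton step
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

rng = random.Random(2)
n = 50_000
x = [rng.gauss(0, 1) for _ in range(n)]
# y >= 0 with exponential mean but NOT Poisson: mean * Exp(1) noise.
y = [math.exp(0.5 + 0.3 * xi) * rng.expovariate(1.0) for xi in x]

b0, b1 = poisson_fit(x, y)
print(round(b0, 2), round(b1, 2))  # close to (0.5, 0.3)
```

Standard errors should then be made robust to the non-Poisson variance, but the point here is the mean parameters themselves.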
A year ago on Facebook, at the request of a former MSU student, I made this post. I used to say in class that econometrics is not so hard if you just master about 10 tools and apply them again and again. I decided I should put up or shut up.
I cheated by combining tools that are connected, so there are actually more than 10 .... 1. Law of Iterated Expectations, Law of Total Variance 2. Linearity of Expectations, Variance of a Sum 3. Jensen's Inequality, Chebyshev’s Inequality 4. Linear Projection and Its Properties
5. Weak Law of Large Numbers, Central Limit Theorem 6. Slutsky's Theorem, Continuous Convergence Theorem, Asymptotic Equivalence Lemma 7. Big Op, little op, and the algebra of them.
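A tiny illustration of item 5 (my sketch): the sample mean of a skewed population concentrates on the population mean (WLLN), and its standardized version is approximately normal with the population variance (CLT), even though the population is far from normal.

```python
import math
import random
import statistics

rng = random.Random(3)

def sample_mean(n):
    # Mean of n draws from a skewed (exponential) population with mean 1, var 1.
    return statistics.fmean(rng.expovariate(1.0) for _ in range(n))

# WLLN: the sample mean concentrates around the population mean as n grows.
mbar = sample_mean(100_000)
print(round(mbar, 3))  # close to 1

# CLT: sqrt(n) * (xbar - mu) is approximately N(0, sigma^2) at moderate n.
n, reps = 200, 5_000
z = [math.sqrt(n) * (sample_mean(n) - 1.0) for _ in range(reps)]
sd_z = statistics.stdev(z)
print(round(sd_z, 2))  # close to the population sd of 1
```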
Have we yet figured out when we should include a lagged dependent variable in either time series or panel data models when the goal is to infer causality? (For forecasting the issue is clear.) Recent work in macroeconomics on causal effects is a positive sign.
And the answer cannot be, "Include y(t-1) if it is statistically significant." Being clear about potential outcomes and the nature of the causal effects we hope to estimate is crucial. I need to catch up on this literature and I need to think more.
In panel data settings, if our main goal is to distinguish state dependence from heterogeneity, clearly y(t-1) gets included. But what if our interest is in a policy variable? Should we hold fixed y(t-1) and the heterogeneity when measuring the policy effect?
When I teach the senior seminar to economics students I sometimes take 20 minutes to discuss common grammar mistakes. I was worried it would come off as patronizing (even obnoxious), but I actually got positive comments about it on my teaching evaluations.
1. James will present his report to Kayla and I. 2. James will present his report to Kayla and me. 3. James will present his report to Kayla and myself.
1. He should not have went home early today. 2. He should not have gone home early today.
1. I should of taken less cookies. 2. I should’ve taken fewer cookies. 3. I should’ve taken less cookies.
1. She and I are going to visit my parents. 2. Her and I are going to visit my parents. 3. Her and me are going to visit my parents.
Durbin-Watson statistic.
Jarque-Bera test for normality.
Breusch-Pagan test for heteroskedasticity.
B-P test for random effects.
Nonrobust Hausman tests.
D-W test only gives bounds. More importantly, it maintains the classical linear model assumptions.
J-B is an asymptotic test. If we can use asymptotics then normality isn't necessary.
B-P test for heteroskedasticity: maintains normality and constant conditional 4th moment.
B-P test for RE: maintains normality and homoskedasticity but, more importantly, detects any kind of positive serial correlation.
Nonrobust Hausman: maintains unnecessary assumptions under the null that conflict with using robust inference. Has no power to test those assumptions.
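To illustrate the fourth-moment complaint about the B-P heteroskedasticity test (my simulation sketch, not from the thread): Koenker's studentized version replaces the denominator 2*sigma^4 with the sample variance of the squared residuals, so it does not require normal kurtosis. With homoskedastic but heavy-tailed t(5) errors, the original statistic over-rejects badly while the studentized version stays near the nominal 5%.

```python
import random

def ols_resid_sq(x, y):
    """Squared OLS residuals from a simple regression of y on x."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    b = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
         / sum((xi - xb) ** 2 for xi in x))
    a = yb - b * xb
    return [(yi - a - b * xi) ** 2 for xi, yi in zip(x, y)]

def bp_statistics(x, y):
    """Original Breusch-Pagan LM and Koenker's studentized n*R^2 version,
    both from the auxiliary regression of squared residuals on x."""
    n = len(x)
    u2 = ols_resid_sq(x, y)
    xb = sum(x) / n
    u2b = sum(u2) / n          # = sigma-hat^2
    sxx = sum((xi - xb) ** 2 for xi in x)
    sxu = sum((xi - xb) * (ui - u2b) for xi, ui in zip(x, u2))
    ess = sxu ** 2 / sxx       # explained SS of the auxiliary regression
    tss = sum((ui - u2b) ** 2 for ui in u2)
    bp = ess / (2 * u2b ** 2)  # assumes E(u^4) = 3*sigma^4 (normality)
    koenker = n * ess / tss    # studentized: uses the sample var of u^2
    return bp, koenker

rng = random.Random(4)
reps, n, crit = 2000, 200, 3.84  # chi-square(1) 5% critical value
rej_bp = rej_k = 0
for _ in range(reps):
    x = [rng.gauss(0, 1) for _ in range(n)]
    # Homoskedastic but heavy-tailed: t(5) = N(0,1) / sqrt(chi2(5)/5).
    u = [rng.gauss(0, 1) / (rng.gammavariate(2.5, 2.0) / 5) ** 0.5
         for _ in range(n)]
    y = [1 + xi + ui for xi, ui in zip(x, u)]
    bp, k = bp_statistics(x, y)
    rej_bp += bp > crit
    rej_k += k > crit

rate_bp, rate_k = rej_bp / reps, rej_k / reps
print(rate_bp)  # far above the nominal 0.05
print(rate_k)   # near 0.05
```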
Historically, economics has fallen into the bad habit of thinking the fancier estimation method is closer to being "right" -- based on one sample of data. We used to think this of GLS vs OLS until we paid careful attention to exogeneity assumptions.