Have we yet figured out when we should include a lagged dependent variable in either time series or panel data models when the goal is to infer causality? (For forecasting, the issue is clear.) Recent work in macroeconomics on causal effects is a positive sign.
And the answer cannot be, "Include y(t-1) if it is statistically significant." Being clear about potential outcomes and the nature of the causal effects we hope to estimate is crucial. I need to catch up on this literature and I need to think more.
In panel data settings, if our main goal is to distinguish state dependence from heterogeneity, clearly y(t-1) gets included. But what if our interest is in a policy variable? Should we hold fixed y(t-1) and the heterogeneity when measuring the policy effect?
For a one-period causal effect, we often include the pre-treatment outcome to account for confounded assignment. But when we do that over many time periods, it isn't clear what we recover.
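Here's a minimal sketch of the one-period case (my own illustration, on simulated data): assignment to the policy depends on the pre-treatment outcome, and regression adjustment on y(0) recovers the effect. Variable names are illustrative.

```python
# One-period effect of a policy d on y1, controlling for the
# pre-treatment outcome y0. Simulated data; names illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
y0 = rng.normal(size=n)                           # pre-treatment outcome
d = (y0 + rng.normal(size=n) > 0).astype(float)   # assignment depends on y0
y1 = 1.0 * d + 0.8 * y0 + rng.normal(size=n)      # true effect of d is 1.0

X = sm.add_constant(np.column_stack([d, y0]))
print(sm.OLS(y1, X).fit().summary())              # coef on d is near 1.0
```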
In special cases, putting in y(t-1) is equivalent to allowing an infinite distributed lag in the policy variables. Unfortunately, this requires strong assumptions -- as I discuss in Ch. 18 of my intro econometrics book.
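For the record, the algebra behind that equivalence is the standard geometric (Koyck) lag substitution; a sketch with a single policy variable x:

```latex
% Start from the lagged-dependent-variable model, |\rho| < 1:
%   y_t = \rho y_{t-1} + \beta x_t + u_t.
% Substituting repeatedly for the lagged y gives
\[
  y_t \;=\; \beta \sum_{j=0}^{\infty} \rho^{\,j} x_{t-j}
        \;+\; \sum_{j=0}^{\infty} \rho^{\,j} u_{t-j},
\]
% a geometric distributed lag in x, with impact effect \beta and
% long-run effect \beta/(1-\rho). The equivalence leans on strong
% assumptions, e.g. strict exogeneity of x and serially uncorrelated u_t.
```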
I start sweating every time someone asks, "Should I include y(t-1) in my panel data model?"
A year ago on Facebook, at the request of a former MSU student, I made this post. I used to say in class that econometrics is not so hard if you just master about 10 tools and apply them again and again. I decided I should put up or shut up.
I cheated by combining tools that are connected, so there are actually more than 10....
1. Law of Iterated Expectations, Law of Total Variance (a quick simulation check follows this list)
2. Linearity of Expectations, Variance of a Sum
3. Jensen's Inequality, Chebyshev's Inequality
4. Linear Projection and Its Properties
5. Weak Law of Large Numbers, Central Limit Theorem
6. Slutsky's Theorem, Continuous Convergence Theorem, Asymptotic Equivalence Lemma
7. Big Op, little op, and the algebra of them.
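A quick numerical check of tool 1 (my own illustration, on simulated data): the Law of Iterated Expectations and the Law of Total Variance, verified cell by cell.

```python
# Check E[y] = E[E[y|x]] and Var[y] = E[Var[y|x]] + Var[E[y|x]]
# on simulated (x, y) data with a discrete conditioning variable.
import numpy as np

rng = np.random.default_rng(42)
n = 2_000_000
x = rng.integers(0, 4, size=n)            # discrete conditioning variable
y = 2.0 * x + rng.normal(size=n)          # y depends on x plus noise

groups = [y[x == k] for k in range(4)]
p = np.array([g.size / n for g in groups])    # P(x = k)
m = np.array([g.mean() for g in groups])      # E[y | x = k]
v = np.array([g.var() for g in groups])       # Var[y | x = k]

mu = (p * m).sum()
print(y.mean(), mu)                            # Law of Iterated Expectations
print(y.var(), (p * v).sum() + (p * (m - mu) ** 2).sum())  # Law of Total Variance
```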
When I teach the senior seminar to economics students I sometimes take 20 minutes to discuss common grammar mistakes. I was worried it would come off as patronizing (even obnoxious), but I actually got positive comments about it on my teaching evaluations.
1. James will present his report to Kayla and I. 2. James will present his report to Kayla and me. 3. James will present his report to Kayla and myself.
1. He should not have went home early today. 2. He should not have gone home early today.
1. I should of taken less cookies. 2. I should’ve taken fewer cookies. 3. I should’ve taken less cookies.
1. She and I are going to visit my parents. 2. Her and I are going to visit my parents. 3. Her and me are going to visit my parents.
Durbin-Watson statistic.
Jarque-Bera test for normality.
Breusch-Pagan test for heteroskedasticity.
B-P test for random effects.
Nonrobust Hausman tests.
D-W test only gives bounds. More importantly, it maintains the classical linear model assumptions.
J-B is an asymptotic test. If we can use asymptotics then normality isn't necessary.
B-P test for heteroskedasticity: maintains normality and constant conditional 4th moment.
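A side note from me: the studentized (Koenker) version of the B-P test drops the normality and fourth-moment assumptions, and it is the default in statsmodels' het_breuschpagan. A sketch on simulated data:

```python
# Heteroskedasticity testing without the classical assumptions.
# statsmodels' het_breuschpagan defaults to the studentized
# (Koenker) LM version. Simulated data; names illustrative.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 1 + 2 * x + np.abs(x) * rng.normal(size=n)   # error variance depends on x

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()
lm, lm_pval, f, f_pval = het_breuschpagan(res.resid, X)
print(f"LM stat = {lm:.2f}, p-value = {lm_pval:.4f}")
```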
B-P test for RE: maintains normality and homoskedasticity but, more importantly, detects any kind of positive serial correlation.
Nonrobust Hausman: maintains unnecessary assumptions under the null that conflict with using robust inference. Has no power to test those assumptions.
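One robust alternative (a sketch of my own, via the Mundlak device): add the group means of the time-varying regressors and test them jointly with cluster-robust inference. A rejection signals correlation between the regressors and the heterogeneity, which is what the Hausman test is after.

```python
# Regression-based, robust Hausman test via the Mundlak device:
# regress y on x and the unit means xbar, then test xbar = 0 using
# cluster-robust standard errors. Simulated panel; names illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
N, T = 200, 5
ids = np.repeat(np.arange(N), T)
a = np.repeat(rng.normal(size=N), T)       # unit heterogeneity
x = 0.5 * a + rng.normal(size=N * T)       # x correlated with a -> RE fails
y = 1 + 2 * x + a + rng.normal(size=N * T)

df = pd.DataFrame({"id": ids, "x": x, "y": y})
df["xbar"] = df.groupby("id")["x"].transform("mean")

fit = smf.ols("y ~ x + xbar", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["id"]})
print(fit.f_test("xbar = 0"))              # cluster-robust test of H0: xbar = 0
```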
Historically, economists have fallen into the bad habit of thinking the fancier estimation method is closer to being "right" -- based on one sample of data. We used to think this of GLS vs. OLS until we paid careful attention to exogeneity assumptions.
Yesterday I was feeling a bit guilty about not teaching lasso, etc. to the first-year PhD students. I'm feeling less guilty today. How much trouble does one want to go through to control for squares and interactions for a handful of control variables?
And then it gets worse if I want my key variable to interact with controls. You can't select the variables in the interactions using lasso. I just looked at an application in an influential paper, and a handful of controls, some continuous, were discretized.
Discretizing eliminates the centering problem I mentioned, but in a crude way. So I throw out information by arbitrarily using five age and income categories so I can use pdslasso? No thanks.
I was reminded of the issue of centering IVs when creating an example using PDS LASSO to estimate a causal effect. The issue of centering controls before including them in a dictionary of nonlinear terms seems like it can be important.
The example I did included age and income as controls. Initially I included age, inc, age^2, inc^2, age*inc. PDS LASSO works using a kind of Frisch-Waugh partialling out, imposing sparsity on the controls.
But as we know from basic OLS, not centering before creating squares and interactions can make main effects weird -- with the "wrong" sign and insignificant. This means in LASSO they might be dropped.
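A sketch of my own illustrating the point, using sklearn's Lasso rather than pdslasso, on simulated data (names and coefficients illustrative): with raw age and income, the squares and interaction are highly correlated with the levels, and lasso can zero out the main effects; centering first largely removes that collinearity.

```python
# Compare which dictionary terms lasso keeps with raw vs. centered
# controls. Simulated data; this is illustrative, not pdslasso itself.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 5000
age = rng.uniform(25, 65, size=n)
inc = rng.uniform(20, 150, size=n)           # income in $1000s
y = 0.5 * age + 0.2 * inc + 0.01 * age * inc + rng.normal(scale=5, size=n)

def dictionary(a, b):
    # levels, squares, and interaction
    return np.column_stack([a, b, a**2, b**2, a * b])

for label, (a, b) in [("raw", (age, inc)),
                      ("centered", (age - age.mean(), inc - inc.mean()))]:
    X = StandardScaler().fit_transform(dictionary(a, b))
    coefs = Lasso(alpha=0.5).fit(X, y - y.mean()).coef_
    print(label, np.round(coefs, 2))         # which terms survive?
```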