A somewhat common device in panel data models is to lag explanatory variables when they're suspected of being "endogenous." It often seems to be done without much thought, as if lagging solves the problem and we can move on. I have some thoughts about it.

#metricstotheface
First, using lags changes the model -- and it doesn't always make sense. For example, I wouldn't lag inputs in a production function. I wouldn't lag price in a demand or supply function. In other cases, it may make sense to use a lag rather than the contemporaneous variable.
Under reasonable assumptions, the lag x(i,t-1) is sequentially exogenous (predetermined). You are modeling a certain conditional expectation. But, logically, it cannot be strictly exogenous. Therefore, fixed effects estimation is inconsistent with T fixed and N getting large.
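
In symbols (the standard definitions, with c(i) the unobserved effect and u(i,t) the idiosyncratic error), sequential exogeneity only restricts the relationship between the error and current and past regressors, while strict exogeneity also rules out feedback from the error to future regressors:

```latex
% Sequential exogeneity (predetermined regressors):
E[u_{it} \mid x_{it}, x_{i,t-1}, \ldots, x_{i1}, c_i] = 0
% Strict exogeneity (what fixed effects with fixed T leans on):
E[u_{it} \mid x_{i1}, \ldots, x_{iT}, c_i] = 0
```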
However, if the idiosyncratic errors are I(0), the "bias" in FE is on the order of 1/T; it may be acceptable if T is somewhat large. First-difference OLS is a disaster, as the bias never goes away no matter how large T is.
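
To see the orders of magnitude, here is a minimal simulation sketch (my illustration, not from the thread): x(i,t) responds to last period's error, so it is predetermined but not strictly exogenous. The within (FE) bias shrinks roughly like 1/T; the first-difference OLS bias does not. All parameter values are made up for the example.

```python
# Sketch: FE vs FD with a predetermined (but not strictly exogenous) regressor.
import numpy as np

rng = np.random.default_rng(0)

def simulate_bias(N=500, T=5, beta=1.0, rho=0.5, delta=0.5, reps=200):
    fe_est, fd_est = [], []
    for _ in range(reps):
        c = rng.standard_normal((N, 1))      # unobserved effects
        u = rng.standard_normal((N, T))      # idiosyncratic errors, I(0)
        x = np.zeros((N, T))
        x[:, 0] = rng.standard_normal(N)
        for t in range(1, T):
            # feedback from u(i,t-1): x is predetermined, not strictly exogenous
            x[:, t] = rho * x[:, t - 1] + delta * u[:, t - 1] + rng.standard_normal(N)
        y = c + beta * x + u

        # fixed effects (within) estimator
        xd = x - x.mean(axis=1, keepdims=True)
        yd = y - y.mean(axis=1, keepdims=True)
        fe_est.append((xd * yd).sum() / (xd ** 2).sum())

        # first-difference OLS
        dx, dy = np.diff(x, axis=1), np.diff(y, axis=1)
        fd_est.append((dx * dy).sum() / (dx ** 2).sum())
    return np.mean(fe_est) - beta, np.mean(fd_est) - beta

for T in (5, 10, 20, 40):
    fe_bias, fd_bias = simulate_bias(T=T)
    print(f"T={T:3d}   FE bias: {fe_bias:+.3f}   FD bias: {fd_bias:+.3f}")
```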

The alternative is to first difference and look into the past for IVs.
But if one takes this Arellano-Bond approach, one might as well use x(i,t), first difference, and look into the past for IVs.
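
A minimal sketch of that idea (mine, not code from the thread): first difference to remove c(i), then instrument the differenced regressor with a level from two periods back, which is uncorrelated with the differenced error under sequential exogeneity. This is the just-identified, Anderson-Hsiao flavor of the approach; the DGP and parameter values are invented for the example.

```python
# Sketch: FD to remove the fixed effect, then IV with a lagged level.
import numpy as np

rng = np.random.default_rng(1)
N, T, beta, rho, delta = 2000, 8, 1.0, 0.5, 0.5

c = rng.standard_normal((N, 1))
u = rng.standard_normal((N, T))
x = np.zeros((N, T))
x[:, 0] = rng.standard_normal(N)
for t in range(1, T):
    x[:, t] = rho * x[:, t - 1] + delta * u[:, t - 1] + rng.standard_normal(N)
y = c + beta * x + u

# differenced equation for t = 2, ..., T-1:  dy(i,t) = beta*dx(i,t) + du(i,t)
dy = (y[:, 2:] - y[:, 1:-1]).ravel()
dx = (x[:, 2:] - x[:, 1:-1]).ravel()
z = x[:, :-2].ravel()                  # x(i,t-2): valid IV under sequential exogeneity

beta_fd_ols = (dx @ dy) / (dx @ dx)    # inconsistent: dx is correlated with du
beta_fd_iv = (z @ dy) / (z @ dx)       # just-identified IV (Anderson-Hsiao)
print(f"FD-OLS: {beta_fd_ols:.3f}   FD-IV: {beta_fd_iv:.3f}   true beta: {beta}")
```

The full Arellano-Bond estimator stacks all available lagged levels as instruments and weights them by GMM (xtabond in Stata is one implementation); the single instrument above is just the simplest version of the idea.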

Better is to find convincing external instruments for the endogenous elements of x(i,t) -- easier said than done.

More from @jmwooldridge

13 Mar
More on LPM versus logit and probit. In my teaching, I revisited a couple of examples: one using data from the Boston Fed mortgage approval study; the other using a balanced subset of the "nonexperimental" data from LaLonde's classic paper on job training.

#metricstotheface
In both cases, the key explanatory variable is binary: an indicator for being "white" in the Fed study (outcome: mortgage approved?), and a job training participation indicator in the LaLonde study (outcome: employed after the program?).
When just the binary indicator is included on its own, probit, logit, and the linear model give similar stories, but the estimates of the average treatment effects do differ -- in the LaLonde case by 4 percentage points (roughly 19 vs. 22 vs. 23).

So, I decided to practice what I (and many others) preach ....
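
A sketch of that exercise on simulated data (not the actual Boston Fed or LaLonde files; variable names and parameter values are invented): fit the LPM, logit, and probit with a binary regressor plus a control, and compare average effects computed as the average difference in predicted probabilities.

```python
# Sketch: average effect of a binary regressor from LPM, logit, and probit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)                       # a control
d = rng.binomial(1, 1 / (1 + np.exp(-x)))        # binary "treatment", not random
y = (0.2 + 0.7 * d + 0.9 * x + rng.logistic(size=n) > 0).astype(float)
X = sm.add_constant(np.column_stack([d, x]))     # columns: const, d, x

lpm = sm.OLS(y, X).fit(cov_type="HC1")
logit = sm.Logit(y, X).fit(disp=0)
probit = sm.Probit(y, X).fit(disp=0)

def ape(res):
    # average difference in predicted P(y=1) switching d from 0 to 1
    X1, X0 = X.copy(), X.copy()
    X1[:, 1], X0[:, 1] = 1.0, 0.0
    return (res.predict(X1) - res.predict(X0)).mean()

print(f"LPM: {lpm.params[1]:.3f}   logit APE: {ape(logit):.3f}   probit APE: {ape(probit):.3f}")
```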
10 Mar
In 2018 I was invited to give a talk at SOCHER in Chile about my opinions on using spatial methods for policy analysis. I like the idea of putting in spatial lags of policy variables to measure spillovers. Use fixed effects with panel data and compute fully robust standard errors.
For the life of me, I couldn't figure out how putting in spatial lags of Y had any value. After preparing a course in July 2020, I was even more negative about this practice. It seems an unnecessary complication developed by theorists.
As far as I can tell, when spatial lags in Y are used, one always computes the effects of own policy changes and neighbor policy changes anyway, by solving out. This is done much more robustly and much more easily by modeling spillovers directly, without spatial lags in Y.
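
A sketch of the direct approach (my construction, not the talk's code; it assumes the linearmodels package and made-up data): include the policy and a spatial lag of the policy (a neighbor average), use entity and time fixed effects, and cluster the standard errors.

```python
# Sketch: spillovers via a spatial lag of the policy variable, not of Y.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
R, T = 50, 10                                    # regions on a line, years
W = np.zeros((R, R))                             # row-normalized adjacency matrix
for r in range(R):
    for s in (r - 1, r + 1):
        if 0 <= s < R:
            W[r, s] = 1.0
W /= W.sum(axis=1, keepdims=True)

policy = rng.binomial(1, 0.3, (R, T)).astype(float)
w_policy = W @ policy                            # neighbors' average policy
alpha = rng.standard_normal((R, 1))              # region effects
y = 1.0 * policy + 0.4 * w_policy + alpha + rng.standard_normal((R, T))

df = pd.DataFrame({
    "region": np.repeat(np.arange(R), T),
    "year": np.tile(np.arange(T), R),
    "y": y.ravel(),
    "policy": policy.ravel(),
    "w_policy": w_policy.ravel(),
}).set_index(["region", "year"])

res = PanelOLS.from_formula(
    "y ~ policy + w_policy + EntityEffects + TimeEffects", data=df
).fit(cov_type="clustered", cluster_entity=True)
print(res.params, res.std_errors, sep="\n")
```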
8 Mar
I taught a bit of GMM for cross-sectional data the other day. In the example I used, there was no efficiency gain in using GMM with a heteroskedasticity-robust weighting matrix over 2SLS. I was reminded of the presentation on GMM I gave 20 years ago at ASSA.

#metricstotheface
The session was organized by the AEA, and papers were published in the 2001 JEL issue "Symposium on Econometric Tools." Many top econometricians gave talks, and I remember hundreds attended. (It was a beautiful audience. The largest ever at ASSA. But ASSA underreported the size.)
In my talk I commented on how, for standard problems -- single equation models estimated with cross-sectional data, and even time series data -- I often found GMM didn't do much, and using 2SLS with appropriately robust standard errors was just as good.
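
A sketch of that comparison on made-up data (not the classroom example; it assumes the linearmodels package, whose IVGMM defaults to a heteroskedasticity-robust weighting matrix): one endogenous regressor, two instruments, heteroskedastic errors.

```python
# Sketch: 2SLS with robust standard errors vs. GMM with a robust weighting matrix.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS, IVGMM

rng = np.random.default_rng(0)
n = 2000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
v = rng.standard_normal(n)
x = 0.5 * z1 + 0.5 * z2 + v                                      # endogenous regressor
u = 0.6 * v + rng.standard_normal(n) * (1 + 0.5 * np.abs(z1))    # heteroskedastic error
y = 1.0 + 1.0 * x + u
df = pd.DataFrame({"y": y, "x": x, "z1": z1, "z2": z2})

tsls = IV2SLS.from_formula("y ~ 1 + [x ~ z1 + z2]", data=df).fit(cov_type="robust")
gmm = IVGMM.from_formula("y ~ 1 + [x ~ z1 + z2]", data=df).fit(cov_type="robust")
print("2SLS:", round(tsls.params["x"], 4), round(tsls.std_errors["x"], 4))
print("GMM :", round(gmm.params["x"], 4), round(gmm.std_errors["x"], 4))
```

In examples like this the two point estimates and standard errors come out very close, which is the point of the thread: the theoretical efficiency gain from GMM often does not amount to much in practice.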
7 Mar
I think frequentists and Bayesians are not yet on the same page, and it has little to do with philosophy. It seems some Bayesians think a proper response to clustering standard errors is to specify an HLM. But in the linear case, HLM leads to GLS, not OLS.

#metricstotheface
Moreover, a Bayesian would take the HLM structure seriously in all respects: variance and correlation structure and distribution. I'm happy to use an HLM to improve efficiency over pooled estimation, but I would cluster my standard errors, anyway. A Bayesian would not.
There still seems to be a general confusion that fully specifying everything and using GLS or joint MLE is a costless alternative to pooled methods that use few assumptions. And the Bayesian approach is particularly unfair to pooled methods.
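
To make the contrast concrete, here is a sketch on simulated data (mine; it assumes statsmodels, and the group structure and parameter values are invented): a random-intercept model fit by MLE is a GLS-type estimator whose reported standard errors lean on the assumed variance structure, while pooled OLS with clustered standard errors does not.

```python
# Sketch: random-intercept "HLM" (GLS-like) vs. pooled OLS with clustered SEs,
# when the within-group dependence is richer than a random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
G, m = 200, 10                                   # groups, members per group
g = np.repeat(np.arange(G), m)
x = rng.standard_normal(G * m)
a = np.repeat(rng.standard_normal(G), m)         # group effect
e = rng.standard_normal((G, m)).cumsum(axis=1).ravel()   # serially correlated within group
y = 1.0 + 0.5 * x + a + e
df = pd.DataFrame({"y": y, "x": x, "g": g})

hlm = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()                 # GLS-type
pooled = smf.ols("y ~ x", df).fit(cov_type="cluster",
                                  cov_kwds={"groups": df["g"]})      # OLS + clustering
print("HLM   :", round(hlm.params["x"], 4), round(hlm.bse["x"], 4))
print("Pooled:", round(pooled.params["x"], 4), round(pooled.bse["x"], 4))
```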
6 Mar
What about the control function approach to estimation? It's a powerful approach for both cross section and panel applications. I'm a fan for sure.

However, the CF approach can impose more assumptions than approaches that use generated IVs.

#metricstotheface
In such cases, we have a clear tradeoff between consistency and efficiency.

In models additive in endogenous explanatory variables with constant coefficients, CF reduces to 2SLS or FE2SLS -- which is neat. Of course, the proof uses Frisch-Waugh.
The equivalence between CF and 2SLS implies a simple, robust specification test of the null that the EEVs are actually exogenous. One can use "robust" or Newey-West or "cluster robust" very easily. The usual Hausman test is not robust, and suffers from degeneracies.
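
A sketch of that test on simulated data (my construction; statsmodels, made-up parameter values): include the first-stage residual in the outcome regression. The coefficient on the endogenous regressor reproduces 2SLS, and the robust t-statistic on the residual is the variable-addition test of exogeneity.

```python
# Sketch: control function = 2SLS in the linear, constant-coefficient case,
# plus a robust variable-addition test of exogeneity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3000
z = rng.standard_normal((n, 2))                  # instruments
v = rng.standard_normal(n)
x = z @ np.array([0.6, 0.4]) + v                 # endogenous regressor
y = 1.0 + 1.0 * x + 0.5 * v + rng.standard_normal(n)

# first stage: regress x on the instruments, keep the residual
fs = sm.OLS(x, sm.add_constant(z)).fit()
vhat = fs.resid

# control function regression: y on x and the first-stage residual
cf = sm.OLS(y, sm.add_constant(np.column_stack([x, vhat]))).fit(cov_type="HC1")
print("CF coefficient on x:", round(cf.params[1], 4))   # equals 2SLS numerically
print("robust t on vhat   :", round(cf.tvalues[2], 2))  # test of the null that x is exogenous
```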
6 Mar
If you teach prob/stats to first-year PhD students, and you want to prepare them to really understand regression, go light on measure theory, counting, combinatorics, distributions. Emphasize conditional expectations, linear projections, convergence results.

#metricstotheface
This means, of course, law of iterated expectations, law of total variance, best MSE properties of CEs and LPs. How to manipulate Op(1) and op(1). Slutsky's theorem. Convergence in distribution. Asymptotic equivalence lemma. And as much matrix algebra as I know.
If you're like me -- and barely understand basic combinatorics -- you'll also be happier. I get the birthday problem and examples of the law of very large numbers -- and that's about it.
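
For reference, a compact statement of the results the list points to (standard identities, written out here for convenience):

```latex
% Law of iterated expectations and law of total variance:
E[Y] = E\big[E[Y \mid X]\big]
\operatorname{Var}(Y) = E[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(E[Y \mid X])
% The conditional expectation is the best predictor in mean squared error,
% and the linear projection is the best *linear* predictor:
E[Y \mid X] = \arg\min_{g(\cdot)} E\big[(Y - g(X))^2\big]
L(Y \mid 1, X) = \arg\min_{a, b} E\big[(Y - a - bX)^2\big]
```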
