Jeffrey Wooldridge
Feb 27, 2021
Based on questions I get, it seems there's confusion about choosing between RE and FE in panel data applications. I'm afraid I've contributed. The impression seems to be that if RE "passes" a suitable Hausman test then it should be used. This is false.
I'm trying to emphasize in my teaching that using RE (unless CRE = FE) is an act of desperation. If the FE estimates and the clustered standard errors are "good" (intentionally vague), there's no need to consider RE.
RE is considered when the FE estimates are too imprecise to do much with. With good controls -- say, industry dummies in a firm-level equation -- one might get by with RE. And then choosing between RE and FE makes some sense.
Unfortunately, it is still somewhat common to see a nonrobust Hausman test used. And this makes no logical sense when every other statistic has been made robust to serial correlation and heteroskedasticity. So either the traditional Hausman test should be adjusted, or use CRE.
In Stata, the following is common, and correct:

xtreg y i.year x1 ... xK, fe vce(cluster id)
xtreg y i.year x1 ... xK z1 ... zJ, re vce(cluster id)

But often it is followed by this:
xtreg y i.year x1 ... xK, fe
estimates store b_fe
xtreg y i.year x1 ... xK z1 ... zJ, re
estimates store b_re
hausman b_fe b_re

In addition to being nonrobust, the degrees of freedom in the test will be wrong: the df should be K, not (T - 1) + K. The df problem is easy to fix; the nonrobustness is tricky ....
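One concrete way to see the CRE = FE possibility, and the CRE route to a robust test, is Mundlak's device: add the unit-specific time averages of the time-varying covariates to a pooled (or RE) regression. In a balanced panel, the coefficient on x then equals the FE estimate exactly, and a cluster-robust Wald test on the time averages can replace the traditional Hausman test. Below is a minimal numerical sketch in Python (standard library only; the simulated panel and helper names like `ols` are my own illustration, not code from the thread):

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """OLS coefficients via the normal equations (X'X) b = X'y."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

random.seed(42)
N, T, beta = 300, 4, 2.0
units = []
for _ in range(N):
    c = random.gauss(0.0, 1.0)                           # unit effect c_i
    xs = [c + random.gauss(0.0, 1.0) for _ in range(T)]  # x correlated with c_i
    ys = [1.0 + beta * x + c + random.gauss(0.0, 1.0) for x in xs]
    units.append((xs, ys))

# FE (within) estimator
num = den = 0.0
for xs, ys in units:
    xb, yb = sum(xs) / T, sum(ys) / T
    for x, y in zip(xs, ys):
        num += (x - xb) * (y - yb)
        den += (x - xb) ** 2
b_fe = num / den

# CRE (Mundlak device): pooled OLS of y on (1, x, xbar_i)
X, Y = [], []
for xs, ys in units:
    xb = sum(xs) / T
    X += [(1.0, x, xb) for x in xs]
    Y += ys
b_cre = ols(X, Y)[1]
print(b_fe, b_cre)   # identical up to floating-point error
```

In Stata terms (with hypothetical variable names), this corresponds to generating the time averages by id, running `xtreg y i.year x1 ... xK x1bar ... xKbar, re vce(cluster id)`, and doing a cluster-robust joint test on the time averages.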

More from @jmwooldridge

Jan 24
I wish as a profession we would be more careful about tossing around terms like "endogeneity" -- especially with panel data. For many years, I've been emphasizing that the error consists of two components; I call them c(i) and u(i,t). I always include time dummies, say, f(t).
Endogeneity with respect to c(i) and f(t) is handled by TWFE. But that leaves u(i,t), the idiosyncratic, time-varying shocks. For those, we generally need IV along with TWFE.

In terms of DiD, treatment assignment can be correlated with the level of the untreated outcome, y_it(0) -- so assignment can be endogenous. But it cannot be correlated with the difference, or trend, y_is(0) - y_it(0). This is the parallel trends assumption, and adding controls can help it hold. The unobserved effect, c(i), is part of y_it(0), but it is removed by differencing or the within transformation.
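The two error components behave differently, and a tiny simulation makes the point. Here is a minimal sketch in Python (standard library only; the data-generating process and names are my own illustration): pooled OLS is biased because x is correlated with c(i), while first-differencing removes c(i) and recovers the coefficient. Differencing does nothing about correlation between x and u(i,t) -- that is where IV would come in.

```python
import random

def slope(xs, ys):
    """Simple OLS slope of y on x (with intercept)."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    return (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
            / sum((x - xb) ** 2 for x in xs))

random.seed(1)
N, beta = 4000, 2.0
x1s, x2s, y1s, y2s = [], [], [], []
for _ in range(N):
    c = random.gauss(0.0, 1.0)        # unit effect c_i, correlated with x below
    x1 = c + random.gauss(0.0, 1.0)
    x2 = c + random.gauss(0.0, 1.0)
    y1s.append(1.0 + beta * x1 + c + random.gauss(0.0, 1.0))
    y2s.append(1.0 + beta * x2 + c + random.gauss(0.0, 1.0))
    x1s.append(x1)
    x2s.append(x2)

b_pols = slope(x1s + x2s, y1s + y2s)             # biased: ignores c_i
b_fd = slope([b - a for a, b in zip(x1s, x2s)],  # differencing removes c_i
             [b - a for a, b in zip(y1s, y2s)])
print(b_pols, b_fd)   # b_pols is pushed well above 2; b_fd is close to 2
```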
Jan 1
Nice stuff! Pedro knows I'm competitive, and now he's thrown down the gauntlet. I'll have to clean up my shared Dropbox (see pinned tweet). For starters, I finally have a new version of my extended TWFE paper -- posted there. It's shorter and hopefully more to the point.
Includes a bunch of equivalences that I've discovered over the past few years -- some recent. And I show that the regression-based "event study" approaches of Sun-Abraham and Callaway-Sant'Anna are the same when S-A includes covariates fully flexibly, as with my ETWFE method.
Plus, even the event study ("leads and lags") with full flexibility can be computed by imputation estimation. In previous versions, I only showed this for ETWFE and for estimation with heterogeneous trends.
Nov 3, 2024
There's a good reason the Frisch-Waugh-Lovell Theorem is taught in intro econometrics, at least at the graduate level. It's used to characterize omitted variable bias as well as the plim of OLS estimators under treatment heterogeneity and also diff-in-diffs. And more.
I also teach the 2SLS version of FWL, where the exogenous variables, X, are partialled out of the IVs, Z, with endogenous explanatory variables W. It's important to emphasize that the IVs need to be residualized with respect to X. Let Z" be those residuals. This is the key partialling out.
Then apply 2SLS to any of the equations
Y = W*b + U1
Y" = W*b + U2
Y" = W"*b + U3
Y = W"*b + U4
using IVs Z".

All four deliver the 2SLS estimate of b from the full equation Y = X*a + W*b + U with IVs (X,Z). All " variables have X partialled out of them.
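These equivalences are easy to check numerically. Here is a minimal sketch in Python (standard library only; a just-identified design with one exogenous X, one endogenous W, and one instrument Z -- the data-generating process and helper names are my own illustration):

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """OLS coefficients via the normal equations (X'X) b = X'y."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

random.seed(7)
n = 2000
Xs, Zs, Ws, Ys = [], [], [], []
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    z = random.gauss(0.0, 1.0)
    v = random.gauss(0.0, 1.0)
    w = 0.5 * x + 0.8 * z + v               # first stage; v makes W endogenous
    u = 0.7 * v + random.gauss(0.0, 1.0)    # structural error correlated with v
    y = 1.0 + 0.5 * x + 2.0 * w + u
    Xs.append(x); Zs.append(z); Ws.append(w); Ys.append(y)

# Full just-identified 2SLS of Y on (1, X, W) with instruments (1, X, Z):
# beta solves (Z'X) beta = Z'Y, Z being the instrument matrix
inst = [(1.0, x, z) for x, z in zip(Xs, Zs)]
regs = [(1.0, x, w) for x, w in zip(Xs, Ws)]
A = [[sum(zr[i] * xr[j] for zr, xr in zip(inst, regs)) for j in range(3)]
     for i in range(3)]
c = [sum(zr[i] * y for zr, y in zip(inst, Ys)) for i in range(3)]
b_full = solve(A, c)[2]                     # coefficient on W

def resid(v):
    """Residuals from OLS of v on (1, X) -- the " operation."""
    g = ols([(1.0, x) for x in Xs], v)
    return [vi - g[0] - g[1] * x for vi, x in zip(v, Xs)]

Zr, Wr, Yr = resid(Zs), resid(Ws), resid(Ys)

# The four short IV regressions with instrument Z": b = (Z"'Y-form)/(Z"'W-form)
bs = [sum(zi * yi for zi, yi in zip(Zr, yf))
      / sum(zi * wi for zi, wi in zip(Zr, wf))
      for yf, wf in [(Ys, Ws), (Yr, Ws), (Yr, Wr), (Ys, Wr)]]
print(b_full, bs)   # all five numbers agree up to floating point
```

The forms with " on Y or W coincide because Z" is orthogonal to (1, X), so partialling out of Y and W changes nothing once the instrument is residualized.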
Sep 28, 2024
I think the most commonly used treatment effect estimators when treatment, D, is unconfounded conditional on X, are the following:
1. Regression adjustment (RA).
2. Inverse probability (propensity score) weighting (IPW).
3. Augmented IPW (AIPW).
4. IPW combined with regression adjustment (IPWRA).
5. Covariate matching.
6. Propensity score (PS) matching.
RA, AIPW, and IPWRA all use conditional mean functions; usually linear but can be logit, multinomial logit, exponential, and others.

I like RA because it is straightforward -- even if using logit or Poisson -- and it is easy to obtain moderating effects.
But, technically, RA requires correct specification of the conditional means E[Y(d)|X] for consistency.

IPW uses only specification of the PS. We now know we should use normalized weights. IPW can be sensitive to overlap problems because p^(X) can be close to one or zero.
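The normalization point is easy to see in code. Here is a minimal sketch in Python (standard library only; the propensity score is treated as known and the toy design is my own illustration, not an estimator you would use as-is) comparing unnormalized and normalized IPW for the ATE:

```python
import random

random.seed(3)
n, tau = 20000, 2.0
D, Y, P = [], [], []
for _ in range(n):
    x = random.random()
    p = 0.2 + 0.6 * x                  # known PS, bounded away from 0 and 1
    d = 1 if random.random() < p else 0
    y = 1.0 + tau * d + x + random.gauss(0.0, 1.0)
    D.append(d); Y.append(y); P.append(p)

# Unnormalized (Horvitz-Thompson) IPW: weights need not average to one
ate_ht = sum(d * y / p - (1 - d) * y / (1 - p)
             for d, y, p in zip(D, Y, P)) / n

# Normalized (Hajek) IPW: rescale so weights sum to one within each arm
sw1 = sum(d / p for d, p in zip(D, P))
sw0 = sum((1 - d) / (1 - p) for d, p in zip(D, P))
ate_hajek = (sum(d * y / p for d, y, p in zip(D, Y, P)) / sw1
             - sum((1 - d) * y / (1 - p) for d, y, p in zip(D, Y, P)) / sw0)
print(ate_ht, ate_hajek)   # both near tau = 2
```

With the true p(X) both versions are consistent; the practical difference shows up when some estimated p^(X) are near 0 or 1, where the unnormalized version can be dominated by a few huge weights.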
Sep 28, 2024
It's been too long since I've made a substantive tweet, so here goes. At the following Dropbox link you can access the slides and Stata files for my recent talk at the Stata UK meeting:

dropbox.com/scl/fo/50imn36…

It's taken me a while to see connections among various estimators.
Perhaps even longer to figure out some tricks to make standard error calculation for aggregated, weighted effects easy. I think I've figured out several useful relationships and shortcuts. Ex post, most are not surprising. I didn't have them all in my WP or my nonlinear DiD.
The talk is only about regression-based methods, but includes logit and Poisson regression (and even other nonlinear models). In the linear case, slide 28 shows a "very long regression." I was tempted to call it something like the "grand unified regression."
May 25, 2024
Okay, here goes. T = 2 balanced panel data. D defines the treated group, f2_t is the second-period dummy, W_t = D*f2_t is the treatment. Y_1 and Y_2 are outcomes in the first and second period. ΔY = Y_2 - Y_1. X are time-constant controls. X_dm = X - Xbar_1, where Xbar_1 is the mean of X over the treated units.
Eight equivalent methods:

1. OLS ΔY on 1, D, X, D*X_dm (cross sec)

2. Pooled OLS of Y_t on 1, W_t, W_t*X_dm, D, X, D*X, f2_t, f2_t*X; ATT is coef on W_t (t = 1,2)

3. Random effects estimation with same variables in (2).

4. FE estimation of (2), where D, X, D*X drop out.
Imputation versions of each:

5. OLS of ΔY on 1, X using the D = 0 (control) observations. Compute residuals TE^_FD for the treated units. ATT is the average of TE^_FD over the treated units.

6. POLS of Y_t on 1, D, X, D*X, f2_t, f2_t*X using W_t = 0 (control obs). TE_t^_POLS resids. ATT is average of TE_t^_POLS over W_t = 1 (treated observations)
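Pairs like methods 1 and 5 can be verified to coincide numerically. Here is a minimal sketch in Python (standard library only; one control X, and the simulated design and helper `ols` are my own illustration, not code from the thread):

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """OLS coefficients via the normal equations (X'X) b = X'y."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

random.seed(5)
N, att = 500, 1.5
D, X, DY = [], [], []
for _ in range(N):
    d = 1 if random.random() < 0.4 else 0
    x = random.gauss(0.5 * d, 1.0)       # treated units draw X from a shifted mean
    dy = 0.5 + 0.8 * x + att * d + random.gauss(0.0, 1.0)   # Delta-Y
    D.append(d); X.append(x); DY.append(dy)

n1 = sum(D)
xbar1 = sum(x for x, d in zip(X, D) if d == 1) / n1

# Method 1: OLS of Delta-Y on 1, D, X, D*(X - Xbar_1); ATT is the coef on D
att_reg = ols([(1.0, d, x, d * (x - xbar1)) for d, x in zip(D, X)], DY)[1]

# Method 5 (imputation): fit control units only, then average treated residuals
g = ols([(1.0, x) for x, d in zip(X, D) if d == 0],
        [dy for dy, d in zip(DY, D) if d == 0])
att_imp = sum(dy - g[0] - g[1] * x
              for dy, x, d in zip(DY, X, D) if d == 1) / n1
print(att_reg, att_imp)   # identical up to floating-point error
```

The interaction regression in method 1 is fully saturated in D, so it fits the control group exactly as the control-only regression does; centering the interaction at Xbar_1 makes the coefficient on D equal the average treated residual.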
