Jeffrey Wooldridge
Feb 27, 2021
Based on questions I get, it seems there's confusion about choosing between RE and FE in panel data applications. I'm afraid I've contributed. The impression seems to be that if RE "passes" a suitable Hausman test then it should be used. This is false.
I'm trying to emphasize in my teaching that using RE (unless it is CRE, correlated random effects, which reproduces FE) is an act of desperation. If the FE estimates and the clustered standard errors are "good" (intentionally vague), there's no need to consider RE.
RE is considered when the FE estimates are too imprecise to do much with. With good controls -- say, industry dummies in a firm-level equation -- one might get by with RE. And then choosing between RE and FE makes some sense.
Unfortunately, it is still somewhat common to see a nonrobust Hausman test used. That makes no logical sense when every other statistic has been made robust to serial correlation and heteroskedasticity. So either the traditional Hausman test should be made robust, or one should use CRE.
In Stata, the following is common, and correct:

xtreg y i.year x1 ... xK, fe vce(cluster id)
xtreg y i.year x1 ... xK z1 ... zJ, re vce(cluster id)

But often it is followed by this:
xtreg y i.year x1 ... xK, fe
estimates store b_fe
xtreg y i.year x1 ... xK z1 ... zJ, re
estimates store b_re
hausman b_fe b_re

In addition to being nonrobust, the df in the test will be wrong: it should be K (the number of time-varying covariates), not (T - 1) + K (which also counts the year dummies). The latter is easy to fix; the former is tricky ....
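For the CRE route, here is a minimal sketch of the Mundlak version (hypothetical variable names: x1 x2 x3 standing in for the time-varying covariates, z1 z2 for the time-constant ones, panel xtset as id and year). Add the unit-specific time averages of the time-varying covariates to the RE equation and test them jointly with a cluster-robust Wald test:

* CRE/Mundlak: unit-specific time averages of the time-varying covariates
foreach v of varlist x1 x2 x3 {
    egen double `v'_bar = mean(`v'), by(id)
}
* RE with the averages added
xtreg y i.year x1 x2 x3 x1_bar x2_bar x3_bar z1 z2, re vce(cluster id)
* cluster-robust, regression-based Hausman test with df = K
test x1_bar x2_bar x3_bar

The coefficients on the time-varying covariates reproduce the FE estimates, and the joint test on the time averages is a robust version of the Hausman test with the correct degrees of freedom.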

More from @jmwooldridge

Jan 23
Thanks for doing this, Jon. I've been thinking about this quite a bit, and teaching my perspective. I should spend less time teaching, more time revising a certain paper. Here's my take, which I think overlaps a lot with yours.
I never thought of BJS (Borusyak, Jaravel, and Spiess) as trying to do a typical event study. As I showed in my TWFE-TWMundlak paper, without covariates, BJS is the same as what I called extended TWFE. ETWFE puts in only treatment dummies of the form Dg*fs, s >= g, where Dg is a cohort dummy and fs is a calendar-time dummy.
ETWFE is derivable from POLS (pooled OLS) using cohort dummies, which derives directly from imposing and using all implications of parallel trends. That's why it's relatively efficient under the traditional assumptions. To me, this is the starting point.
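Here is a minimal sketch of that POLS version in Stata, under hypothetical variable names (gvar = first year treated, 0 if never treated; year = calendar time; say two treated cohorts, 2004 and 2005, observed 2003-2005):

* Dg*fs treatment dummies, only for post-treatment cells (s >= g)
gen byte d0404 = (gvar == 2004) & (year == 2004)
gen byte d0405 = (gvar == 2004) & (year == 2005)
gen byte d0505 = (gvar == 2005) & (year == 2005)
* cohort dummies + calendar-time dummies + the Dg*fs dummies, clustered by unit
regress y i.gvar i.year d0404 d0405 d0505, vce(cluster id)

Each d coefficient estimates a cohort-by-calendar-time average treatment effect on the treated; with no covariates this reproduces ETWFE, and hence BJS.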
Dec 19, 2023
I sometimes get asked whether, in the context of interventions using DiD methods, an "always treated" (AT) group can be, or should be, included. Typically, there are also many units not treated until t = 2 or later. But some are treated at entry and remain treated.
The short answer is that these units don't help identify true treatment effects except under strong assumptions. Suppose we have only an AT and never treated (NT) group. Units have a string of zeros or string of ones for the treatment indicator.
Any estimated policy effect is comparing averages between these groups. But there's no way to control for pre-treatment differences between them. I might as well have one time period and use a difference-in-means estimator across the two groups.
Nov 22, 2023
Here's a simple result from probability that I'm not sure is widely known. It has important practical implications, particularly for incorporating heterogeneity into models.

Suppose one starts with a "structural" conditional expectation, E(Y|X,U) = g(X,U), where U is unobserved.
Usually g(.,.) is parameterized, but, unless the model is additive in U, the parameters may not mean much. We tend these days to focus on average partial effects. So, for example, E[dg(X,U)/dx] when X is continuous. The expectation is over (X,U).
Here's the result: if U and X are independent, then the APEs from g(X,U) are identical to the APEs from E(Y|X) = f(X). In other words, if the focus is on APEs, introducing U that is independent of X is largely a waste of time. And it can confuse the issue.
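A quick sketch of why, writing f(x) = E(Y|X = x): independence gives f(x) = E[g(x,U)|X = x] = E[g(x,U)], so df(x)/dx = E[dg(x,U)/dx] (differentiating under the expectation). Averaging over the distribution of X, and using independence once more, E[df(X)/dx] = E[dg(X,U)/dx]. So the APE computed from f is the APE computed from g.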
Oct 28, 2023
How come Stata doesn't report an R-squared with the "newey" command?
In my opinion, the correct answer is: no good reason. Supposed "problems" with the R-squared under heteroskedasticity or serial correlation seem to be holdovers from old textbooks. There's no unbiased estimator of the population R^2, so discussing bias really is off base.
The "bias" discussions are in terms of sigma^2_hat anyway, and the bias in that is of order 1/T. But the R-squared is consistent for the population R^2 very generally with heteroskedasticity and/or serial correlation. Its exclusion from "newey" can confuse the beginner.
Jun 2, 2023
Unfortunately, indiscriminate use of the term "fixed effects" to describe any set of mutually exclusive and exhaustive dummy variables seems to be generating confusion about nonlinear models and the incidental parameters problem.

#metricstotheface
With panel data, the IPP arises when we try to include unit-specific dummies in a nonlinear model with a small number of time periods: we have few observations per "fixed effect." In other cases, IPP arises if we put in group-specific dummies with small group sizes.
But if we include, say, occupation dummies when we have lots of people in each occupation, this clearly causes no problem. Or, including interviewer "fixed effects" when we have lots of subjects per interviewer.
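A rough sketch of the contrast (hypothetical names: small-T panel with unit id and year, outcome y, covariate x, and an occupation variable occ with many workers per occupation):

* unit dummies with small T: few observations per dummy, so the incidental
* parameters problem biases the coefficients
logit y x i.id, vce(cluster id)
* group dummies with many observations per group: no IPP concern
logit y x i.occ, vce(cluster occ)
* conditional ("fixed effects") logit sidesteps estimating the unit dummies
xtset id year
xtlogit y x, fe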
Mar 26, 2023
If Y, D (treatment), and Z (IV) are all binary with controls X, to obtain LATE you can use a linear model and estimate by IV:
Y = a + b*D + X*c + Z*(X - Xbar)*d + U
First stage:
D = f + g*Z + X*h + Z*(X - Xbar)*m + V
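Here is a minimal sketch of that IV estimation in Stata, exactly as written above (hypothetical names: outcome y, binary treatment d, binary instrument z, controls x1 x2):

* demean the controls
summarize x1
gen double x1dm = x1 - r(mean)
summarize x2
gen double x2dm = x2 - r(mean)
* Z*(X - Xbar) interactions
gen double zx1 = z*x1dm
gen double zx2 = z*x2dm
* 2SLS with d instrumented by z; the first stage automatically includes the
* exogenous regressors, giving the first-stage equation above
ivregress 2sls y x1 x2 zx1 zx2 (d = z), vce(robust)

The coefficient on d is the LATE estimate.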
Or look at this recent WP by @TymonSloczynski, @sderyauysal, and me to use separate doubly robust estimates of the numerator and denominator. Can use logit outcome models for Y and D.

scholar.google.com/citations?view…
I also like trying separate probits that account for endogeneity of D using Heckman selection. This assumes two-sided noncompliance if Z is binary, but it can be used for general Z. Seeing how LATE estimates with covariates differ from ATE with covariates can be informative, I think.
