Since it is Monday, it's time for the 'metrics paper of the week! #MetricsMonday
Today I want to highlight "Bounds on Distributional Treatment Effect Parameters using Panel Data with an Application on Job Displacement", by Brantly Callaway.
Brant's paper provides a set of tools that allow us to better understand heterogeneous effects in diff-in-diff setups.
Brant is particularly interested in the distribution *of* the treatment effects for the treated units, and the quantiles *of* the treatment effects for the treated.
These parameters are much harder to identify than the popular ATT.
In fact, even if you have data from a *perfect* RCT, you will not be able to point identify these distributional parameters without relying on additional restrictions on the DGP.
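To see why, note that an RCT identifies the *marginal* distributions of Y(1) and Y(0), but the distribution of Y(1) - Y(0) depends on how the two are coupled, which the data never reveal. A minimal simulation (mine, not from Brant's paper, with made-up normal potential outcomes) makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Potential outcomes with marginals Y(0) ~ N(0,1) and Y(1) ~ N(1,1).
y0 = rng.normal(0.0, 1.0, n)

# Coupling A (rank invariance): everyone gains exactly 1.
y1_a = y0 + 1.0

# Coupling B (independence): same marginal for Y(1), different joint.
y1_b = 1.0 + rng.normal(0.0, 1.0, n)

# Both couplings imply the SAME marginals -- which is all a perfect
# RCT can reveal...
print(np.mean(y1_a), np.std(y1_a))  # both approx (1, 1)
print(np.mean(y1_b), np.std(y1_b))  # both approx (1, 1)

# ...yet the distributions of the treatment effects differ sharply:
te_a = y1_a - y0   # degenerate at 1
te_b = y1_b - y0   # spread out, sd approx sqrt(2)
print(np.std(te_a), np.std(te_b))
```

Without restrictions on (or bounds over) that unobserved coupling, any quantile of the treatment effect is simply not pinned down by the data.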
Brant notes that these extra conditions needed for point identification can be too strong in many diff-in-diff applications.
To avoid these drawbacks, he then proposes alternative assumptions that you can use to *bound* these distributional parameters.
Brant also makes life a bit easier for us by providing an easy-to-use #R package that implements his proposed tools: bcallaway11.github.io/csabounds/
I find the paper very nice and well motivated.
But that should not be too much of a surprise as I always learn a lot from Brant, either by co-authoring with him or just reading his papers!
🚨Hello #EconTwitter! I am very happy that my paper with Brantly Callaway, "Difference-in-Differences with multiple time periods", is now forthcoming at the Journal of Econometrics. sciencedirect.com/science/articl…
What are the main takeaways? I will ask my daughter to help me out.
1/n
Our main goal here is to explain how one can transparently use DiD procedures in setups with (a) multiple time periods, (b) variation in treatment timing (staggered adoption), and (c) settings where the parallel trends assumption is plausible only after conditioning on covariates.
2/n
But why should one care? Don't we all know all these things already? Why can't we just use TWFE regressions and move on?
Today I want to talk about my paper with Jun Zhao (an absolutely great PhD candidate from Vanderbilt), "Doubly robust difference-in-differences estimators", which is now forthcoming at the Journal of Econometrics!
Before I go on, let me make it clear that everything I say here, and everything we propose in the paper, can be easily implemented in #R via the DRDID package: pedrohcgs.github.io/DRDID/
I hope you find this easy to use!
2/n
Now to the paper. First, why should you pay attention to *another* Difference-in-Differences paper?
I think we propose a cool set of new tools that can be very handy. We talk about robustness, efficiency, and inference.
I'll cover the main points here, one-at-a-time!
3/n
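To fix ideas on what "doubly robust" means here, below is a stylized numpy sketch of a DR DiD estimand of the standard form (propensity-score weighting combined with an outcome regression for the untreated trend). This is my own illustrative simulation, not the paper's improved estimator, and the real implementation is the DRDID #R package:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Simulated two-period panel where parallel trends holds conditional on X.
x = rng.uniform(0, 1, n)
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + x)))          # true propensity score
d = (rng.uniform(0, 1, n) < p_true).astype(float)    # treatment indicator
tau = 1.0                                            # true ATT
dy = 1.0 + 2.0 * x + rng.normal(0, 1, n) + tau * d   # outcome change Y_post - Y_pre

# Step 1: estimate the propensity score with a logit (Newton-Raphson).
Z = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-Z @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (d - p))
ps = 1.0 / (1.0 + np.exp(-Z @ beta))

# Step 2: outcome regression of dy on X among the untreated units.
g = np.linalg.lstsq(Z[d == 0], dy[d == 0], rcond=None)[0]
mu0 = Z @ g  # predicted untreated trend, evaluated for everyone

# Step 3: DR ATT -- reweight untreated units by ps/(1-ps) and
# subtract the outcome-regression adjustment.
w1 = d / d.mean()
w0 = (ps * (1.0 - d) / (1.0 - ps)) / np.mean(ps * (1.0 - d) / (1.0 - ps))
att_hat = np.mean((w1 - w0) * (dy - mu0))
print(att_hat)  # close to the true ATT of 1
```

The "doubly robust" part: this estimand remains consistent if *either* the propensity-score model *or* the outcome-regression model is correctly specified (here both are, so the sketch only shows the happy path).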
Recently, Correia, Luck and Verner (2020) (CLV) put forward a very interesting paper that, among other things, analyzes whether non-pharmaceutical interventions (NPIs) helped mitigate the adverse effects of the 1918 Spanish Flu pandemic on economic growth.
2/n
CLV find suggestive evidence that NPIs mitigated the adverse economic consequences of the pandemic.
Although today's society has a different structure from 100 years ago, these findings can help shape the current debate about covid policies.
3/n
Today I want to give a shout-out to @TymonSloczynski's paper, "Interpreting OLS Estimands When Treatment Effects Are Heterogeneous: Smaller Groups Get Larger Weights", which is currently available here:
This is a very interesting paper that highlights some potential pitfalls of not separating the identification and estimation/inference steps when doing causal inference.
In other words, OLS may be messing up your regression interpretations.
So good to see that I am not alone!
Let's go straight to the main message of Tymon's paper.
We have all seen, and probably run, linear regressions like this, attaching a causal interpretation to \tau after invoking selection-on-observables-type assumptions.
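A quick simulation (my own toy DGP, not Tymon's decomposition) shows how badly \tau from that regression can miss the ATT once effects are heterogeneous: with two covariate strata, OLS on (D, X) recovers a variance-weighted average of the stratum effects, not the treated-weighted average.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Two strata with very different treatment shares and effects.
x = rng.integers(0, 2, n).astype(float)       # X in {0, 1}, 50/50
p = np.where(x == 0, 0.1, 0.9)                # P(D=1 | X)
d = (rng.uniform(0, 1, n) < p).astype(float)
tau_x = np.where(x == 0, 1.0, 3.0)            # heterogeneous effect
y = 2.0 * x + tau_x * d + rng.normal(0, 1, n)

# Oracle ATT: treated units are mostly X=1, so ATT is close to 3.
att = tau_x[d == 1].mean()                    # approx 2.8

# The usual "causal" OLS of Y on (1, D, X).
Z = np.column_stack([np.ones(n), d, x])
tau_ols = np.linalg.lstsq(Z, y, rcond=None)[0][1]
print(att, tau_ols)  # approx 2.8 vs approx 2.0
```

Here OLS weights each stratum by P(X=x)p(x)(1-p(x)), which is equal across strata, so \tau lands at 2.0 even though the ATT is 2.8. Tymon's paper gives the general weight formulas and diagnostics.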
Well, although I skipped last week, here I am with another interesting econometrics paper that I really enjoyed reading --- Chen and Santos (2018, ECMA), "Overidentification in Regular Models".
The main idea of the paper is simple and very powerful.
In (unconditional) GMM models, we know that some estimators are more efficient than others when the number of moment restrictions exceeds the number of parameters of interest. In this case, we can also test the validity of the moment conditions (the J-test).
2/n
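For concreteness, here is a bare-bones numpy sketch of the familiar unconditional-GMM case: a linear IV model with one parameter and two instruments, estimated by two-step GMM, with the J-test flagging an invalid moment. (My own toy example, just to anchor the baseline that Chen and Santos generalize.)

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

def j_test(y, x, Z):
    """Two-step GMM for y = x*beta + u with instrument matrix Z;
    returns (beta_hat, J statistic)."""
    # First step: 2SLS-type weighting W = (Z'Z/n)^{-1}.
    W = np.linalg.inv(Z.T @ Z / n)
    Zx, Zy = Z.T @ x / n, Z.T @ y / n
    b1 = (Zx @ W @ Zy) / (Zx @ W @ Zx)
    # Second step: efficient weighting from first-step residuals.
    u = y - x * b1
    Omega = (Z * u[:, None]).T @ (Z * u[:, None]) / n
    Wo = np.linalg.inv(Omega)
    b2 = (Zx @ Wo @ Zy) / (Zx @ Wo @ Zx)
    g = Z.T @ (y - x * b2) / n        # average moment at b2
    return b2, n * g @ Wo @ g         # J ~ chi2(#moments - #params)

# Valid instruments: z1, z2 shift x but are unrelated to u.
z1, z2, v, u = rng.normal(0, 1, (4, n))
x = z1 + z2 + v
y = 2.0 * x + u
_, J_valid = j_test(y, x, np.column_stack([z1, z2]))

# Invalid: z2 also enters the outcome directly, so E[z2*u] != 0.
y_bad = 2.0 * x + u + z2
_, J_invalid = j_test(y_bad, x, np.column_stack([z1, z2]))
print(J_valid, J_invalid)  # small vs very large
```

With one overidentifying restriction, J is asymptotically chi-squared with 1 degree of freedom under validity, so the first statistic stays small while the second one explodes.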
In many applications, however, our model is not based on unconditional moment restrictions, but on conditional moment restrictions. Furthermore, many times we are also interested in estimating functions (i.e., infinite-dimensional parameters).
3/n
Their paper provides a guided way to think about what to do when you are worried about violations of the parallel trends assumption. The main idea is to use information on pre-treatment trends to bound potential violations of parallel trends in post-treatment periods.
Plus, they provide a companion R package, making the adoption of these tools much simpler.
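Ignoring inference entirely, the core idea can be caricatured in a few lines: posit that the post-treatment violation of parallel trends is at most M times the worst pre-treatment deviation, and report the implied interval. This is only my back-of-the-envelope sketch with hypothetical event-study numbers; the actual method delivers valid confidence sets, which is where the hard work is.

```python
import numpy as np

def naive_bounds(beta_pre, beta_post, M):
    """Stylized bound: if the post-period parallel-trends violation is
    at most M times the largest pre-period deviation from zero, the
    causal effect lies in [beta_post - M*v, beta_post + M*v].
    (Ignores sampling uncertainty entirely.)"""
    v = np.max(np.abs(beta_pre))
    return beta_post - M * v, beta_post + M * v

# Hypothetical event-study coefficients: small pre-trends, effect 1.0.
lo, hi = naive_bounds(np.array([0.10, -0.05]), 1.0, M=1.0)
print(lo, hi)  # interval around 1.0 with half-width 0.1
```

The tighter your pre-trends (and the smaller the M you are willing to defend), the more informative the bounds.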