🚨 WP alert! 🚨 I design equivalence tests for running variable (RV) manipulation in regression discontinuity (RDD), show that serious RV manipulation can't be ruled out in lots of published RDD research, and offer the lddtest command in Stata/R. 1/x
Credible RDD estimation relies on the assumption that agents can’t endogenously sort their RVs to opt themselves into or out of treatment. If they can, then RDD estimates are confounded: agents who manipulate RVs are likely different in important ways from agents who don't. 2/x
Such manipulation often causes jumps in RV density at the cutoff, which can either come from genuine distributional distortions or from strategic reporting. E.g., consider the French examples below. 3/x
Good news: You can test for RV manipulation by assessing the discontinuity in the RV’s density as it crosses the cutoff. Many do so using McCrary’s (2008) DCdensity procedure. Recently, Cattaneo, Jansson, & Ma's (2018; 2020) rddensity procedure has also become popular. 4/x
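For reference, both tests have R implementations; here's a minimal sketch on simulated data (the package and function names are the published ones, but the running variable below is made up purely for illustration):

```r
# install.packages(c("rdd", "rddensity"))
library(rdd)        # McCrary (2008) DCdensity test
library(rddensity)  # Cattaneo, Jansson & Ma (2018; 2020)

set.seed(1)
x <- rnorm(5000)              # simulated running variable, cutoff at 0
DCdensity(x, cutpoint = 0)    # returns the McCrary test's p-value
summary(rddensity(x, c = 0))  # local-polynomial density test
```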
Bad news: Most interpret statistically insignificant tests as evidence of negligible manipulation. This is not good practice. These tests place no burden of proof on researchers to evidence their identification assumptions; absence of evidence is not evidence of absence. 5/x
My 3-step procedure can provide stat. sig. evidence that RV manipulation @ the cutoff is practically equal to zero. (1) Set ε > 1: the largest ratio between the RV density estimates on either side of the cutoff that you'd still deem economically insignificant. Credible ε judgments can be aggregated from survey data. 6/x
(2) Run McCrary's procedure to get the log density discontinuity estimate θ. (3) Run two one-sided tests: one with Ha: θ > -ln(ε), one with Ha: θ < ln(ε). If both are stat. sig., then θ is significantly bounded inside (-ln(ε), ln(ε)): stat. sig. evidence that RV manipulation @ the cutoff practically equals zero. 7/x
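For concreteness, here's a minimal R sketch of step (3), assuming you already have McCrary's log discontinuity estimate and its standard error (e.g., from DCdensity's extended output) and that the estimate is approximately normal. The function name and interface are illustrative, not lddtest's actual API:

```r
# Two one-sided tests (TOST) for equivalence of the RV density at the cutoff.
# thetahat: McCrary log density discontinuity estimate; se: its std. error;
# eps: largest 'economically insignificant' density ratio (> 1).
lddtest_sketch <- function(thetahat, se, eps, alpha = 0.05) {
  stopifnot(eps > 1, se > 0)
  # Test 1 -- H0: theta <= -ln(eps)  vs  Ha: theta > -ln(eps)
  p_lower <- 1 - pnorm((thetahat + log(eps)) / se)
  # Test 2 -- H0: theta >=  ln(eps)  vs  Ha: theta <  ln(eps)
  p_upper <- pnorm((thetahat - log(eps)) / se)
  p <- max(p_lower, p_upper)  # reject both nulls iff p < alpha
  list(p_lower = p_lower, p_upper = p_upper, p = p, equivalent = p < alpha)
}
```

Rejecting both one-sided nulls bounds θ inside (-ln(ε), ln(ε)) at the chosen level, which is exactly the practical-equivalence claim.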
This procedure restores the burden of proof to show that RV manipulation around the cutoff is practically insignificant before ruling out meaningful RV manipulation @ the cutoff. I use it to show that RV manipulation @ the cutoff is still a serious problem for RDD research. 8/x
I leverage replication data on 36 RDD publications in top political science journals from @StommesDrew, Aronow, & Sävje (doi.org/10.1177/205316…) to conduct 45 RV manipulation tests. Many RVs exploited for RDD in these papers fail even lenient versions of my test. 9/x
In this sample, > 44% of RV density discontinuities @ the cutoff can’t be significantly bounded beneath a 50% upward jump (or equivalently, a 33.3% downward jump). 10/x
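To see why those two numbers describe the same test: the equivalence bounds are symmetric on the log scale, so (a small worked check)

```latex
\varepsilon = 1.5 \;\Rightarrow\; -\ln(1.5) < \theta < \ln(1.5),
\qquad e^{-\ln(1.5)} = \tfrac{1}{1.5} \approx 0.667,
```

i.e., the lower bound lets the density fall to 66.7% of its left-limit value: a 33.3% drop.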
50% is not a ‘special threshold’. To bring the ‘failure rate’ for my test beneath 5%, you’d have to be willing to argue that a 350% upward density jump at the cutoff is practically equal to zero. 11/x
In fact, precise meta-analytic estimates suggest that for the average RV, manipulation causes an absolute density discontinuity equivalent to a 26% upward jump at the cutoff. This is likely a practically significant degree of manipulation in many relevant RDD settings. 12/x
I recommend that researchers use my equivalence testing procedure to reassure against such meaningful RV manipulation around the cutoff in RDD research. In Stata, this can be done using my lddtest command. 13/x
A new working paper for holiday reading! @peder_isager and I provide an introduction to three-sided testing, a framework for testing an estimate's practical significance. We offer a tutorial, Shiny app, + commands/code in #Rstats, Jamovi, and #Stata (🔗 below!) 1/9
#EconTwitter
Equivalence testing lets us test whether estimates are stat. sig. bounded beneath a practically negligible effect size Δ (e.g., pink estimate). But estimates can be both stat. sig. diff. from zero and stat. sig. bounded beneath Δ. 2/9
Estimates can also be stat. sig. bounded outside of Δ (e.g., blue estimate). What should we conclude about estimates like the blue and orange ones? Standard equivalence testing frameworks don't give us clear answers. We introduce researchers to a framework that does. 3/9
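As a rough illustration of the decision rule (an assumption-laden sketch, not the paper's actual commands: it presumes an approximately normal estimate, and all names here are mine):

```r
# Classify an estimate against the practically negligible region (-delta, delta)
# using a (1 - 2*alpha) confidence interval, three-sided-testing style.
three_sided_sketch <- function(est, se, delta, alpha = 0.05) {
  z  <- qnorm(1 - alpha)
  lo <- est - z * se
  hi <- est + z * se
  if (hi < -delta)                    "practically significant (negative)"
  else if (lo > delta)                "practically significant (positive)"
  else if (lo > -delta && hi < delta) "practically negligible"
  else                                "inconclusive"
}
```

The point is that the three substantive verdicts plus 'inconclusive' partition all possible estimates, so blue/orange-style cases get an explicit answer.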
Do real stakes/incentives matter in experiments? Recent studies say they don’t. My new paper shows that these studies’ results — and those of most hypothetical bias experiments — are uninformative when we care about experimental treatment effects. 1/x
🔗: papers.tinbergen.nl/24070.pdf
Historically, experimental economists virtually always tied experimental choices to real stakes/payoffs to improve generalizability. That’s changing: many economists now use hypothetical stakes in online experiments + large-scale survey experiments. 2/x
There’s also recently been a wave of new studies showing that certain outcomes don’t stat. sig. differ between real-stakes and hypothetical-stakes experiments. These results are affecting thinking at the highest levels of experimental economics. 3/x
🧵 on my replication of Moscona & Sastry (2023, QJE).
TL;DR: MS23 proxy 'innovation exposure' with a measure of heat. Using direct innovation measures from the paper’s own data decreases headline estimates of innovation’s mitigatory impact on climate change damage by >99.8%. 1/x
Moscona & Sastry (2023) reach two findings. First, climate change spurs agricultural innovation. Crops with croplands more exposed to extreme heat see increases in variety development and climate change-related patenting. 2/x academic.oup.com/qje/article/13…
Second, MS23 find that innovation mitigates damage from climate change. They develop a county-level measure of 'innovation exposure' and find that agricultural land in counties with higher levels of 'innovation exposure' is significantly less devalued by extreme heat. 3/x
My paper is out in @PNASNews! I replicate a paper on the impact of COVID vaccine mandates on vaccine uptake. Removing a single bad control variable sign-flips several of the paper’s headline results. The reply’s findings are also not robust. 1/x pnas.org/doi/10.1073/pn…
Rains & Richards (2024) — henceforth RR24 — reach two findings. First, RR24 claim that difference-in-differences estimates show that US state COVID vaccine mandates had imprecise impacts on COVID vaccine uptake. 2/x pnas.org/doi/10.1073/pn…
Second, RR24 find that states that mandated COVID vaccination statewide now see lower uptake of COVID boosters and both adult + child flu vaccines than states that banned local COVID vaccine mandates. 3/x