3/ There are natural reasons to believe in strong differences between Republicans and Democrats: survey data suggest big differences by party ID in Covid-19 vaccination: kff.org/coronavirus-co…
4/ The challenge, of course, is whether it’s really about Republicans vs. Democrats living in these areas, or whether the areas individuals sort into are simply different.
5/ This statistical analysis runs into a serious challenge: publicly available data on Covid deaths and measures of political party are typically only available at the county level.
6/ The focus on Covid deaths and counties has led researchers to try to account for these locational differences (by controlling for county-level features), but they are still limited by the aggregated nature of the data: healthaffairs.org/doi/full/10.13…
7/ The other issue with this approach is that it focuses on reported Covid deaths as an aggregate measure. This measure may not fully capture the “counterfactual” deaths in the absence of the pandemic. Our World in Data has an excellent discussion: ourworldindata.org/excess-mortali…
8/ Intuitively, calculating excess death rates requires a prediction of death rates in 2020 and 2021, based on previous years, for the groups of interest: here, Democrats and Republicans. Fortunately, we have mortality data with party affiliation, age, and location in this paper!
9/ We build our data from *individual-level* voter registration in 2017, linked to death records from 2018 to 2021, for Ohio and Florida. We then construct excess death rates that control for pre-Covid differences in mortality rates at the age-by-party-by-county-by-month level.
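For intuition, here is a minimal sketch (in Python/pandas) of how excess death rates of this kind could be computed. The file name and column names are hypothetical placeholders, not the paper's actual data or variables, and the baseline here is just a pre-2020 cell average rather than the paper's full specification.

import pandas as pd

# Hypothetical input: one row per age-group x party x county x month cell,
# with columns: year, month, age_group, party, county, deaths, registrants.
panel = pd.read_csv("deaths_by_cell.csv")
panel["death_rate"] = panel["deaths"] / panel["registrants"]

cell = ["age_group", "party", "county", "month"]

# Baseline: average pre-pandemic (2018-2019) death rate within each cell.
baseline = (
    panel[panel["year"] < 2020]
    .groupby(cell, as_index=False)["death_rate"]
    .mean()
    .rename(columns={"death_rate": "baseline_rate"})
)

# Excess death rate in 2020-2021: actual minus the cell-specific baseline,
# so pre-Covid differences across age, party, and place are netted out.
pandemic = panel[panel["year"] >= 2020].merge(baseline, on=cell)
pandemic["excess_rate"] = pandemic["death_rate"] - pandemic["baseline_rate"]

# Compare average excess death rates by party.
print(pandemic.groupby("party")["excess_rate"].mean())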
10/ This lets us ask and answer three questions:
11/
Q1: Do excess deaths in 2020 and 2021 differ by political party? By how much, and when?
A1: Yes, the excess death rate for Republicans was 5.4 p.p., or 76%, higher than for Democrats. The gap arose exclusively in the post-vaccine period (10.4 p.p., or 153%).
12/
Q2: Is this difference explained by geographic or age differences in political party affiliation?
A2: Only a tiny share of the difference is explained by differential impacts of age-by-county *during Covid* (recall that excess death rates already control for pre-Covid differences):
13/
Q3: How much can we point to vaccines?
A3: This is harder, since we don't have individual-level data on vaccines. However, two facts emerge:
A. The association between the Rep.-Dem. gap and county-level vaccination rates grows significantly after vaccines become available:
14/
B. Moreover, *pre-vaccine*, the relationship across counties between realized vax rates and excess deaths was identical for both groups.
Post-vaccine, the Democratic rate fell while the Republican rate climbed, and the gap between the two was near zero in high-vax counties.
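To make the shape of this comparison concrete, here is an illustrative county-level regression with a party interaction, run separately pre- and post-vaccine. The data file, column names, and specification are placeholders for exposition, not the paper's exact analysis.

import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: excess_rate, republican (0/1), vax_rate, post_vaccine (0/1).
df = pd.read_csv("county_party_excess.csv")

for period, label in [(0, "pre-vaccine"), (1, "post-vaccine")]:
    sub = df[df["post_vaccine"] == period]
    # Slope of excess deaths in county vax rates, allowed to differ by party.
    fit = smf.ols("excess_rate ~ vax_rate * republican", data=sub).fit()
    print(label)
    print(fit.params[["vax_rate", "republican", "vax_rate:republican"]])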
16/ If this is really a story about vaccines, continued low take-up of vaccines + boosters among Republicans may perpetuate some of these differences: kff.org/coronavirus-co…
17/ We’re now working on expanding this to contrast our results with the existing literature and highlight a few more points, but we would welcome any comments or suggestions.
fin/ It is important to reiterate that our results hold fixed differences in mortality by age, location, and party pre-Covid, and can account for location-by-age differences post-Covid. Hence these are within-age-and-location differences in mortality outcomes by political party.
I try to understand how the credibility revolution (as termed by @metrics52 and @PinkseJoris) has permeated economics, especially across different fields.
In particular, are different empirical methods being adopted evenly?
In general: no -- there's significant heterogeneity
So how do I do this? I follow Currie, Kleven and Zwiers (2020) -- in fact, I follow much of their code, as they have a great repository thanks to @AeaData that I was able to use.
I pull all NBER working papers (from #1000 to #32,436) over time, and convert them to text.
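As a rough sketch of the kind of text search involved (the actual term dictionary and matching rules follow Currie, Kleven and Zwiers' repository and will differ), assuming each working paper has already been converted to a .txt file:

import re
from collections import Counter
from pathlib import Path

# Illustrative method-related terms, not the exact dictionary used in the paper.
METHOD_TERMS = {
    "difference-in-differences": r"difference[- ]in[- ]differences|diff[- ]in[- ]diff",
    "regression discontinuity": r"regression discontinuity",
    "instrumental variables": r"instrumental variable",
    "randomized experiment": r"randomized (controlled )?(trial|experiment)",
    "event study": r"event[- ]stud(y|ies)",
}

def count_methods(text_dir):
    """Count how many working papers mention each empirical method at least once."""
    counts = Counter()
    for path in Path(text_dir).glob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        for method, pattern in METHOD_TERMS.items():
            if re.search(pattern, text):
                counts[method] += 1
    return counts

print(count_methods("nber_text/"))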
Interesting paper that has sparked some discussion here! I think a lot of folks have focused on the headline abstract, so I wanted to give my constructive feedback on the interpretation of the results.
First, it’s quite clear that these QTs give substantial visibility! Thousands of views and a non-trivial increase in likes (the control mean is 3 likes, so this is an enormous increase!)
Second, it’s not totally clear to me that the results on interviews and flyouts line up with the headline discussion. For reference, this is the main graph in the paper. You can then contrast this with the regression tables.
Finally posting a new paper on how diffusion estimates on networks (e.g. for epidemics, information spread, tech adoption) can be highly non-robust to even tiny (vanishing!) measurement errors. 🧵
What is a diffusion model? It's a way to study how things spread through a network. For example, how a disease spreads through a population, or how information spreads through a social network.
These models are used to do research and make policy decisions!
2/
In practice, when we operationalize these models, we work with an estimated network rather than the true network.
Concerns about measurement error in networks are not new, but it turns out that with diffusions, measurement error is especially damaging.
3/
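To see the concern concretely, here is a toy simulation in Python (not the paper's model or estimator): run a simple SI diffusion on a true random network and on a "measured" network that misses a small share of edges, and compare the implied spread. All parameters here are arbitrary illustration values.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def si_spread(G, seed=0, p=0.05, steps=20):
    """Fraction of nodes eventually infected under a simple SI process."""
    infected = {seed}
    for _ in range(steps):
        new = set()
        for i in infected:
            for j in G.neighbors(i):
                if j not in infected and rng.random() < p:
                    new.add(j)
        infected |= new
    return len(infected) / G.number_of_nodes()

true_net = nx.erdos_renyi_graph(500, 0.02, seed=1)

# "Measured" network: each true edge is missed with small probability eps.
eps = 0.05
measured = true_net.copy()
measured.remove_edges_from([e for e in true_net.edges if rng.random() < eps])

print("true network spread:    ", np.mean([si_spread(true_net) for _ in range(50)]))
print("measured network spread:", np.mean([si_spread(measured) for _ in range(50)]))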
2023 was a crazy year -- remember how we had a bank run at Silicon Valley Bank that caused a banking sector collapse (over 20% decline in bank equities in a week!) and prompted a Federal Reserve facility intervention?
No? Well then I have the thread and working paper for you! 🧵
Following the collapse of SVB, there was an immediate response in the stock market. Many banks other than SVB plunged (most notably First Republic Bank, which finally failed many weeks later). Overall, the banking sector corrected sharply downward.
There was, however, significant heterogeneity in this downward correction. In the first week, the decline was quite skewed, with smaller banks experiencing less of a decline. By May, more banks had followed, leading to a dispersed, symmetric (and negative) distribution.
Bank failures are a common phenomenon in the United States.
In the 30s and 40s, the FDIC had 20 bank failures a year.
In the 50s and 60s, a lull of 3 per year.
In the 70s, this picked up to 9 per year.
In the 1980s and 90s, an average of 150 banks failed per year.
🧵
What is distinctive is how much the size of the banks that fail has changed, even when adjusting for inflation. Total assets are swamped by the size of the 2008 crisis, but even just looking at the average size of failing banks, it is striking:
It's not even possible to discern much of the bank failures in the 1930s and 40s on this scale when compared to the 2008 crisis, but we *can* see them if we use a log scale.
The market definitely thinks there are more banks that will be run on.
Here are the 7 banks with the largest declines in the market over the last two weeks.
1/🧵
We have:
SVB Financial Group (SIVB) (-60%)
PacWest Bancorp (PACW) (-54%)
Signature Bank (SBNY) (-36%)
Western Alliance Bancorp (WAL) (-32.4%)
First Republic Bank (FRC) (-31.3%)
Customers Bancorp Inc (CUBI) (-23.5%)
First Foundation Inc (FFWM) (-20.3%)
2/
But they're not alone! Several other banks are experiencing 15-20% declines: