FYI there is some evidence that polarization reduces the size of convention bounces, and polls in recent fall campaigns have been much stabler than in, say, the 1980s
General election polling in 2020 has been very stable (as we predicted). That _could_ change before November, yes, but there are at least two good reasons to think it won't: the low variance from Jan to July, & the fact that election polls now are about half as volatile as they used to be.
Yes, candidates can get convention bounces, but #ackshually polls in the first half of the campaign have historically been more volatile than polls in the second half. This is the trend for the variance over the whole election year:
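(The chart itself isn't reproduced here, but as a rough sketch, this is one way a volatility series like that could be computed, assuming a date-indexed national polling average. The series below is synthetic noise purely so the snippet runs; it's not necessarily how 538 builds its chart.)

```python
import numpy as np
import pandas as pd

# Stand-in polling average (Dem-minus-Rep margin); swap in real data.
rng = np.random.default_rng(0)
dates = pd.date_range("2020-01-01", "2020-11-03", freq="D")
polling_avg = pd.Series(5 + rng.normal(0, 0.3, len(dates)).cumsum(), index=dates)

# Rolling 30-day variance of the average: one way to chart volatility over the year.
rolling_var = polling_avg.rolling("30D").var()

# First-half vs. second-half volatility, measured as the std dev of daily moves.
first_half = polling_avg.loc[:"2020-06-30"].diff().std()
second_half = polling_avg.loc["2020-07-01":].diff().std()
print(f"Daily volatility: Jan-Jun {first_half:.2f} pts, Jul-Nov {second_half:.2f} pts")
```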
Anyway, there seems to be some disagreement among prognosticators about how volatile (or not) modern contests are. I find these charts and the research into political polarization pretty compelling, but reasonable people can disagree & there is uncertainty around our assumptions.
(PS that uncertainty about variance is in our model; if the election suddenly gets more volatile, the variance in the forecasting component is allowed to increase a bit. But the mean estimate we provide constrains outcomes; & as we've discussed it might be something like 0.1 pct pts too low)
Oh, I should also say that our average expectation IS for the election to get tighter before November (a change in the mean of the distribution of outcomes) but that's not the same as expecting more volatility later (a change in the variance of the distribution). /end
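(To make that mean-vs-variance distinction concrete, here's a toy calculation with made-up numbers, not the model's: a tighter expected race moves the center of the outcome distribution, while a more volatile race widens it, and the two have different effects on a win probability.)

```python
from scipy.stats import norm

# Hypothetical numbers, not the forecast's: a current lead and two scenarios.
lead_now = 8.0        # today's margin, pct pts
sd_stable = 3.0       # spread of plausible Election Day margins
sd_volatile = 6.0     # same mean, but a much noisier campaign

# (a) Tightening: the mean drops by 3 pts, the spread stays the same.
print(norm.sf(0, loc=lead_now - 3.0, scale=sd_stable))   # P(lead holds) ~ 0.95
# (b) Volatility: the mean stays put, the spread doubles.
print(norm.sf(0, loc=lead_now, scale=sd_volatile))       # P(lead holds) ~ 0.91
```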
Some early vote Qs: What % of 2020 early voters have voted so far in 2024? Does that differ by party? What about E-day voters?
Now that we have a substantial number of votes — above 10m in the swing states, or around 37% of likely voters in those states — we can start tracking:
This is the % of 2020 ABEV (absentee/early) voters who have voted in 2024, as of yesterday:
AZ: 39% D, 39% R
GA: 58% D, 66% R
MI: 43% D, 45% R
NC: 44% D, 47% R
PA: 40% D, 35% R
WI: 35% D, 36% R
(No data in NV because our voter file vendor, L2, has been lagging there, and Clark County returns have been weird)
The other big caveat is that in MI, WI and GA, party is modeled rather than taken from registration records, so it comes with a lot of potential measurement error. Partisan splits there may be less indicative of an advantage than in, say, AZ, NC or PA.
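(For anyone curious about the mechanics, here's a minimal sketch of the tracking calculation. It assumes a voter-file extract with one row per registrant and hypothetical columns, state / modeled_party / voted_early_2020 / voted_2024, which are illustrative and not the vendor's actual schema.)

```python
import pandas as pd

def return_rate(vf: pd.DataFrame) -> pd.DataFrame:
    """% of 2020 absentee/early voters who have already voted in 2024,
    broken out by state and party."""
    early_2020 = vf[vf["voted_early_2020"]]
    rate = early_2020.groupby(["state", "modeled_party"])["voted_2024"].mean()
    return (100 * rate).round(0).unstack("modeled_party")

# Usage, assuming `voter_file` has been loaded from the vendor extract:
# print(return_rate(voter_file))
```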
📊Today 538 is releasing an updated set of our popular pollster ratings for the 2024 general election! Our new interactive presents grades for 540 polling organizations based on their (1) empirical record of accuracy + (2) methodological transparency. 1/n abcnews.go.com/538/best-polls…
There’s tons to say but I’ll hit a few main points. First, a methodological note. For these new ratings, we updated the way 538 measures both *empirical accuracy* and *methodological transparency.* Let me touch on each. (Methodology here: abcnews.go.com/538/538s-polls…)
(1) *Accuracy.* We now penalize pollsters who show a routine bias toward one party, regardless of whether they do better in terms of absolute error. We find that bias predicts future error even if it’s helpful over a short time scale.
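(A toy example of the distinction, with made-up numbers and not 538's actual scoring formula, which lives in the linked methodology post: signed error captures a routine lean toward one party, absolute error captures plain accuracy.)

```python
import pandas as pd

# Signed error = poll margin minus actual margin (Dem minus Rep),
# so a consistent sign means a consistent partisan lean.
polls = pd.DataFrame({
    "pollster":      ["A", "A", "A", "B", "B", "B"],
    "poll_margin":   [4.0, -2.0, 0.0, 5.0, 4.0, 6.0],
    "actual_margin": [1.0,  1.0, 0.0, 4.0, 3.0, 5.0],
})
polls["signed_error"] = polls["poll_margin"] - polls["actual_margin"]

summary = polls.groupby("pollster")["signed_error"].agg(
    avg_abs_error=lambda e: e.abs().mean(),  # plain accuracy
    avg_bias="mean",                         # routine lean toward one party
)
print(summary)
# Pollster A is noisier (abs error 2.0) but has zero net bias; B beats A on
# absolute error (1.0) yet leans toward one party by +1 every time, and it's
# that routine lean a bias penalty targets, since it tends to persist.
```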
if you want to understand polling today, you have to consider *both* the results and the data-generating process behind them. this is not a controversial statement (or shouldn't be). factors like nonresponse and measurement error are very real concerns stat.columbia.edu/~gelman/resear…
given the research on all the various ways error/bias can enter the DGP, if your defense against "polls show disproportionate shifts among X group" is "meh, well X group voted this way 20 years ago," i am going to weight that pretty low vs concerns about non-sampling error
at the same time, if a critical mass of surveys is showing you something, you should give it a chance to be true. interrogate the data and see if there's something there. i see tendencies both to over-interpret crosstabs and to throw all polls out when they misfire. both are bad
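(As a small illustration of how a non-sampling factor can move a topline even when true opinion hasn't budged, here's a simulation of party-differential nonresponse; the response rates are invented and this isn't drawn from the linked paper.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
is_dem = rng.random(n) < 0.5      # true electorate: 50/50
votes_dem = is_dem                # everyone votes their party in this toy world

# Suppose Dems answer the poll at 6% and Reps at 4% this week.
responds = np.where(is_dem, rng.random(n) < 0.06, rng.random(n) < 0.04)
poll_dem_share = votes_dem[responds].mean()
print(f"True Dem share: 50.0% | Polled Dem share: {poll_dem_share:.1%}")
# Prints roughly 60%: a ten-point "shift" generated purely by who picked up.
```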
There is good stuff in this thread, and I’ve been making the first point too for some time. But remember a lot can change in a year, and some of the factors that look big now may not actually matter. Uncertainty is impossibly high this far out.
I took a look yesterday at how much Dem state-lvl POTUS margins tend to change from one election to the next. It’s about 7pp in our current high-polarization era. That’s a lot! With 2020 as our starting point and simulating correlated changes across states, you get p(Biden >= 270) around 60%.
that is obviously not a good place to start if you are team Biden. But the range of outcomes is laughably large: a landslide for either party is more than plausible. So there is a choose-your-own-adventure element to analyses like these: Dobbs and Jan 6 help Ds; the economy and Biden's age hurt them
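(For readers who want to see the shape of a calculation like that, here's a hedged Monte Carlo sketch. It covers only the seven core battlegrounds, uses rounded 2020 margins and 2024 electoral votes, and splits the roughly 7pp cycle-to-cycle swing into an assumed national component and an assumed state component. It is not the thread's actual model, so the probability it prints won't match the ~60% exactly.)

```python
import numpy as np

rng = np.random.default_rng(538)

# Rounded 2020 Dem margins (pct pts) and 2024 electoral votes, battlegrounds only.
states = {
    "AZ": (0.3, 11), "GA": (0.2, 16), "WI": (0.6, 10),
    "PA": (1.2, 19), "MI": (2.8, 15), "NV": (2.4, 6), "NC": (-1.3, 16),
}
DEM_BASE_EV = 226  # EVs from the other 2020 Biden states, treated as fixed here
margins = np.array([m for m, _ in states.values()])
evs = np.array([ev for _, ev in states.values()])

# Split each state's ~7pp swing into a shared national shock and an
# independent state shock (assumed decomposition): sqrt(6.0^2 + 3.6^2) ~ 7.0.
SIGMA_NATIONAL, SIGMA_STATE = 6.0, 3.6
n_sims = 100_000
national = rng.normal(0, SIGMA_NATIONAL, size=(n_sims, 1))
state = rng.normal(0, SIGMA_STATE, size=(n_sims, len(states)))
sim_margins = margins + national + state

dem_ev = DEM_BASE_EV + (sim_margins > 0).astype(int) @ evs
print(f"P(Dem >= 270 EV) ~ {(dem_ev >= 270).mean():.2f}")
```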
Lots to share, but for now I'll just say FiveThirtyEight was one of the outlets that inspired me to be a data journalist. Nate Silver did great work & the team he led changed political journalism for the better. We will be iterating on that, but we start with a strong foundation.
2/3 ABC and I have been in talks for 6 months to ensure there will be as little disruption as possible in transitioning from the aggregation + forecasting models Silver is taking with him when his contract expires to our new in-house methods, developed w/ input from across ABC & 538.
pretty bleak picture for the GOP 10-20 years from now, unless the party changes its policy endorsements and messaging to shrink the gap in Gen Z/Millennial voting behavior catalist.us/whathappened20…
yes, however, rolling back convenience voting reforms for students is not going to be an effective voter suppression strategy when the average Gen Z voter is out of school (my back-of-envelope math says this should happen around 2028)
bad tweet!
the point is that crossing your fingers and pretending that young people just get more right-leaning as they age is not an effective electoral strategy,
not that there is a 100% probability of Democratic electoral success for the next 30 years