In all seriousness, this tweet from @SeanTrende exemplifies a very important difference in how various poll aggregators view actual polling microdata — one that has the potential to reshape how news consumers view political polls writ large (for the better, IMO). Short thread:
Some polling aggregators (eg RCP) take the raw topline data and assume that enough pollsters follow best practices in weighting, sampling, etc. to give you the best average prediction possible. This serves us well sometimes but is pretty naive when you drill down.
Others (eg 538) think that there are consistent differences between survey modes, populations and polling firms that allow you to make good predictions of which methods are best. They weight polls by accuracy and do other math to debias data to squeeze out all the extra juice.
And then there are those of us (myself included) who view polls with an eye toward their internal design and modeling. We think that critiquing crosstabs and methods is a good way of figuring out *why* some polls are better than others. In applied modeling, this means stuff
like adding corrections for which variables pollsters weight on. Instead of saying there are poll/topline-level characteristics that we can adjust for, we are saying there are crosstab-level differences — & incorporating them adds value to our averages that other approaches miss.
One ex of this is the Georgetown Battleground Poll, which earlier this year had a college-educated sample near 60% IIRC. Some aggregators just threw it in the average (bad). Some adjusted for historical accuracy (better). And some of us took the raw data and reweighted it based
on our own predictions of the racial and educational makeup of the electorate.
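To make that kind of crosstab-level reweighting concrete, here is a minimal raking (iterative proportional fitting) sketch in Python. The microdata, target proportions, and column names are all hypothetical illustrations, not the actual Georgetown Battleground data or anyone's real electorate estimates:

```python
import numpy as np
import pandas as pd

def rake(df, targets, weight_col="weight", max_iter=50, tol=1e-6):
    """Iterative proportional fitting: adjust weights until the weighted
    margin of each variable matches the supplied target proportions."""
    w = df[weight_col].to_numpy(dtype=float)
    for _ in range(max_iter):
        max_shift = 0.0
        for var, target in targets.items():
            for level, share in target.items():
                mask = (df[var] == level).to_numpy()
                current = w[mask].sum() / w.sum()
                if current > 0:
                    factor = share / current
                    w[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1))
        if max_shift < tol:
            break
    return w / w.mean()  # normalize to a mean weight of 1

# Hypothetical microdata with the college share far above the electorate's.
poll = pd.DataFrame({
    "educ": ["college"] * 60 + ["non_college"] * 40,   # 60% college sample
    "race": (["white"] * 45 + ["nonwhite"] * 15
             + ["white"] * 28 + ["nonwhite"] * 12),
    "weight": 1.0,
})

# Illustrative targets for the electorate's makeup (not real estimates).
targets = {
    "educ": {"college": 0.40, "non_college": 0.60},
    "race": {"white": 0.70, "nonwhite": 0.30},
}

poll["weight"] = rake(poll, targets)
# Weighted education shares now sit near the targets.
print(poll.groupby("educ")["weight"].sum() / poll["weight"].sum())
```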
That is all indicative of, IMO, an important development in poll aggregation. It's not enough to just look at toplines and historical accuracy. We can use design-level info to improve our models.
NEW Strength In Numbers/Verasight poll out this AM. We find cuts to Medicaid deeply unpopular (58% oppose vs. 14% support), Trump underwater on most issues (-16 net approval), Democrats ahead on the generic ballot by 6 points, and Harris leading a 2024 rematch.
Full results in this thread
Full poll write-up is here, including a link to the topline and methodology.
Subscribe to Strength In Numbers to get the next poll results in your inbox early, and to submit a potential question for our next poll in June.
Trump approval is 40% vs 56% disapprove among all adults, with 42% saying they "strongly disapprove" of the job he has done as POTUS. Notable that the strong disapprove response is higher than the cumulative approve response.
Some early vote Qs: What % of 2020 early voters have voted so far in 2024? Does that differ by party? What about E-day voters?
Now that we have a substantial number of votes — above 10m in the swing states, or around 37% of likely voters in those states — we can start tracking:
This is the % of 2020 ABEV voters who have voted in 2024, as of yesterday
AZ 39%D 39%R
GA 58%D 66%R
MI 43%D 45%R
NC 44%D 47%R
PA 40%D 35%R
WI 35%D 36%R
(No data in NV because our voter file vendor, L2, has been lagging there, and Clark County returns have been weird)
Other big caveat is that in MI, WI and GA, party registration is based on a model, so comes with a lot of potential measurement error. Partisan splits here may be less indicative of an advantage than in, say, AZ, NC or PA.
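A minimal sketch of how this kind of return-rate tracking can be computed from a voter file, assuming a hypothetical extract with illustrative column names (not the actual L2 schema or real records):

```python
import pandas as pd

# Hypothetical voter-file extract; columns and rows are illustrative.
voters = pd.DataFrame({
    "state":            ["GA", "GA", "GA", "PA", "PA", "PA"],
    "party":            ["D",  "R",  "D",  "D",  "R",  "R"],
    "voted_early_2020": [True, True, True, True, True, False],
    "voted_2024":       [True, True, False, False, True, False],
})

# Restrict to 2020 absentee/early (ABEV) voters, then compute the share
# who have already returned a ballot in 2024, by state and party.
abev_2020 = voters[voters["voted_early_2020"]]
returned = (abev_2020.groupby(["state", "party"])["voted_2024"]
            .mean()
            .mul(100)
            .round(0))
print(returned)
```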
📊Today 538 is releasing an updated set of our popular pollster ratings for the 2024 general election! Our new interactive presents grades for 540 polling organizations based on their (1) empirical record of accuracy + (2) methodological transparency. 1/n abcnews.go.com/538/best-polls…
There’s tons to say but I’ll hit a few main points. First, a methodological note. For these new ratings, we updated the way 538 measures both *empirical accuracy* and *methodological transparency.* Let me touch on each. (Methodology here: abcnews.go.com/538/538s-polls…)
(1) *Accuracy.* We now punish pollsters who show routine bias toward one party, regardless of whether they perform better in terms of absolute error. We find that bias predicts future error even if it’s helpful over a short time scale.
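A toy illustration of the idea (not 538's actual formula): score pollsters on absolute error plus a separate penalty for consistent directional bias, so a firm that is accurate but always leans one way can still rate worse than a noisier unbiased one. The data and the 0.5 penalty weight are made up:

```python
import pandas as pd

# Hypothetical poll-level errors (poll margin minus actual margin, in points,
# positive = overstated the Democrat). Not real data.
polls = pd.DataFrame({
    "pollster": ["A", "A", "A", "B", "B", "B"],
    "error":    [3.0, -3.5, 2.5, 2.4, 2.6, 2.5],
})

stats = polls.groupby("pollster")["error"].agg(
    mean_abs_error=lambda e: e.abs().mean(),  # raw accuracy
    bias="mean",                              # signed lean toward one party
)

# Penalize consistent directional bias on top of absolute error.
# The 0.5 penalty weight is an arbitrary illustration, not 538's parameter.
stats["score"] = stats["mean_abs_error"] + 0.5 * stats["bias"].abs()
print(stats.sort_values("score"))  # lower = better; here B rates worse
```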
if you want to understand polling today, you have to consider *both* the results and the data-generating process behind them. this is not a controversial statement (or shouldn't be). factors like nonresponse and measurement error are very real concerns stat.columbia.edu/~gelman/resear…
given the research on all the various ways error/bias can enter the DGP, if your defense against "polls show disproportionate shifts among X group" is "meh, X group voted this way 20 years ago," i am going to weight that pretty low vs concerns about non-sampling error
at the same time, if a critical mass of surveys is showing you something, you should give it a chance to be true. interrogate the data and see if there's something there. i see tendencies both to over-interpret crosstabs and to throw all polls out when they misfire. both are bad
There is good stuff in this thread, and I’ve been making the first point too for some time. But remember a lot can change in a year, and some of the factors that look big now may not actually matter. Uncertainty is impossibly high this far out.
I took a look yesterday at how much Dem state-level POTUS margins tend to change from cycle to cycle. It’s about 7pp in our current high-polarization era. That’s a lot! With 2020 as the starting point and simulating correlated changes across states, you get p(Biden >= 270) of around 60%.
that is obviously not a good place to start if you are team Biden. But the range of outcomes is laughably large—a landslide for either party is more than plausible. So there is a pick-your-own-adventure element to analyses like these: Dobbs and Jan 6 help Ds; the economy and Biden's age hurt
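A minimal Monte Carlo sketch of the simulation described above, splitting the ~7pp cycle-to-cycle swing into a shared national component and state-specific noise so swings are correlated across states. The 2020 margins and electoral votes are real, but the fixed safe-state baseline, the 5pp/5pp variance split, and the six-state list are simplifying assumptions, so the output will not exactly reproduce the ~60% figure:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2020 Dem margins (pp) and 2024 electoral votes for six swing states.
states = {"AZ": (0.3, 11), "GA": (0.2, 16), "MI": (2.8, 15),
          "NC": (-1.3, 16), "PA": (1.2, 19), "WI": (0.6, 10)}
dem_safe_ev = 232   # illustrative fixed baseline for all other states

# Split the ~7pp swing into a shared national shock and state noise
# (5pp each, so per-state SD ~= sqrt(5^2 + 5^2) ~= 7pp).
n_sims = 100_000
national = rng.normal(0, 5, n_sims)
state_noise = rng.normal(0, 5, (n_sims, len(states)))

margins = np.array([m for m, _ in states.values()])
evs = np.array([ev for _, ev in states.values()])

sim_margins = margins + national[:, None] + state_noise
dem_ev = dem_safe_ev + (sim_margins > 0).astype(int) @ evs
print("p(Dem >= 270):", (dem_ev >= 270).mean())
```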
Lots to share, but for now I'll just say FiveThirtyEight was one of the outlets that inspired me to be a data journalist. Nate Silver did great work & the team he led changed political journalism for the better. We will be iterating on that, but we start with a strong foundation.
2/3 ABC and I have been in talks for 6 months to ensure there will be as little disruption as possible in transitioning from the aggregation + forecasting models Silver is taking with him when his contract expires to our new in-house methods, developed w input across ABC & 538.