G Elliott Morris
Sep 22, 2020
I think there is still a fairly sizable chance of a systematic error in the polls, and how big it is depends on which forecasts/averages you look at. (This is an important thread, please read it carefully.) 1/8
2/8 While more battleground-state pollsters are weighting by education than at this point in 2016, about half still aren't. And there are other issues with pollsters who have clearly politically biased samples because of other errors (like weighting to the 2016 exit polls).
3/8 We adjust for some of this bias by including a term in our model that accounts for systematic differences between correctly and poorly weighted polls. In 2016, it shaved about 2pts off Clinton's margin in the Midwest, so it IS helpful. BUT the model was still surprised on Nov 8.
4/8 Right now, this adjustment is shaving about 0.5 points off of Biden's average (more in some states). We should expect, on balance, that other forecasters/averages/etc who don't adjust for these biases will overstate Biden's margin by roughly the same amount.
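For intuition, here is a minimal sketch of what a weighting-quality adjustment like this can look like. The poll margins and the `weights_by_education` flags are made up, and this is not the model's actual implementation; it only shows the mechanism of shifting poorly weighted polls toward the better-weighted ones before averaging.

```python
# Toy sketch of a weighting-quality adjustment. Poll values and flags are hypothetical.
import numpy as np

# Each poll: (Biden margin in points, does the pollster weight by education?)
polls = [
    (7.0, True), (9.5, False), (6.5, True), (8.0, False), (10.0, False),
]

margins = np.array([m for m, _ in polls])
weighted = np.array([w for _, w in polls])

# Estimated gap between properly weighted and unweighted pollsters.
gap = margins[~weighted].mean() - margins[weighted].mean()

# Shift the poorly weighted polls toward the better-weighted ones before averaging.
adjusted = np.where(weighted, margins, margins - gap)

print(f"raw average:      {margins.mean():+.1f}")
print(f"adjusted average: {adjusted.mean():+.1f}")
```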
5/8 I do wish that the other leading forecasters would take weights and polling design into account in their models. We know that not all pollsters act responsibly, and we shouldn't assume that simply averaging data will remove the biases this injects into our models (see: 2016).
6/8 At the same time, we might be able to make improvements over how our model currently handles these factors. Our bias correction is formally defined to capture the difference between pollsters who weight by party reg/past vote/etc and those that don't.
7/8 This indirectly captures SOME of the bias from the education-weighting problem, but not all of it. In future versions of our model we might be able to further refine the 2016 and 2020 bias corrections by adding another adjustment explicitly for education-weighting.
8/8 I should clarify that all these adjustments are only factored into our _average_ projection of election-day vote shares. Our margin of error is separately calibrated to capture historical polling errors, & the final 9pt MOE on state averages would be the same no matter what.
Let me help clarify: Our model still simulates a wide range of universes of polling error, including many (about half!) where polls actually underestimate Biden. But that's after we adjust for (some of) the errors introduced by not weighting by edu. 1/2

2/2 If a model ISN'T adjusting for bias from some polls not weighting by edu or party, then _on average_ we should expect them to overrate Biden's chances. But there is still a distinct chance of some new variable or phenomenon causing bias in a diff way, hence the uncertainty.
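To illustrate the "universes of polling error" idea, here is a toy Monte Carlo sketch. The 7-point margin and 4-point error SD are invented for the example, not the model's actual parameters; the point is that, after the adjustment, the systematic error is centered at zero, so polls underestimate Biden in roughly half of the simulated universes.

```python
# Toy Monte Carlo sketch of simulating universes of systematic polling error.
import numpy as np

rng = np.random.default_rng(0)

adjusted_avg = 7.0   # hypothetical adjusted polling margin for Biden, in points
error_sd = 4.0       # hypothetical systematic-error SD calibrated to past elections

n_sims = 100_000
systematic_error = rng.normal(0.0, error_sd, n_sims)  # centered at 0 AFTER adjustment
outcomes = adjusted_avg + systematic_error             # simulated election-day margins

print("share of sims where polls UNDERestimate Biden:",
      (outcomes > adjusted_avg).mean())                # ~0.5 by construction
print("share of sims where Trump beats this margin:",
      (outcomes < 0).mean())
```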
So, what I'm DEFINITELY NOT saying is that all polls and averages are rigged toward Biden and you should ignore them. What I AM saying is that there are reasons to believe that Trump beating the polls is likelier than Biden beating the polls, even though both are still possible.

More from @gelliottmorris

May 14, 2025
NEW Strength In Numbers/Verasight poll out this AM. We find cuts to Medicaid deeply unpopular (58% oppose vs. 14% support), Trump underwater on most issues (-16 net approval), Democrats ahead on the generic ballot by 6 points, and Harris leading a 2024 rematch.

Full results in this thread
Full poll write up is here, including link to topline and methodology.

Subscribe to Strength In Numbers to get the next poll results in your inbox early, and submit a potential question for our next poll in June.

gelliottmorris.com/p/new-poll-ame…
Trump approval is 40% vs 56% disapprove among all adults, with 42% saying they "strongly disapprove" of the job he has done as POTUS. Notable that the strong disapprove response is higher than the cumulative approve response.
Oct 28, 2024
Some early vote Qs: What % of 2020 early voters have voted so far in 2024? Does that differ by party? What about E-day voters?

Now that we have a substantial number of votes — above 10m in the swing states, or around 37% of likely voters in those states — we can start tracking:
This is the % of 2020 absentee/early (ABEV) voters who have voted in 2024, as of yesterday

AZ 39%D 39%R
GA 58%D 66%R
MI 43%D 45%R
NC 44%D 47%R
PA 40%D 35%R
WI 35%D 36%R

(No data in NV because our voter file vendor, L2, has been lagging there, and Clark County returns have been weird)
Other big caveat is that in MI, WI and GA, party registration is based on a model, so it comes with a lot of potential measurement error. Partisan splits there may be less indicative of an advantage than in, say, AZ, NC or PA.
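For anyone who wants to replicate this kind of tracking, here is a sketch of the calculation on a simplified, hypothetical voter-file layout. The column names and sample rows are invented; a real L2 file is structured differently.

```python
# Among 2020 absentee/early voters, what share has already voted in 2024, by state and party?
import pandas as pd

voters = pd.DataFrame({
    "state":           ["AZ", "AZ", "GA", "GA", "PA", "PA"],
    "party":           ["D",  "R",  "D",  "R",  "D",  "R"],
    "voted_abev_2020": [True, True, True, True, True, False],
    "voted_2024":      [True, False, True, True, False, False],
})

returned = (
    voters[voters["voted_abev_2020"]]          # restrict to 2020 ABEV voters
    .groupby(["state", "party"])["voted_2024"]
    .mean()                                     # share who have a 2024 ballot in
    .mul(100)
    .round(0)
)
print(returned)
```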
Jan 25, 2024
📊Today 538 is releasing an updated set of our popular pollster ratings for the 2024 general election! Our new interactive presents grades for 540 polling organizations based on their (1) empirical record of accuracy + (2) methodological transparency. 1/n abcnews.go.com/538/best-polls…
There’s tons to say but I’ll hit a few main points. First, a methodological note. For these new ratings, we updated the way 538 measures both *empirical accuracy* and *methodological transparency.* Let me touch on each. (Methodology here: abcnews.go.com/538/538s-polls…)
(1) *Accuracy.* We now punish pollsters who show routine bias toward one party, regardless of whether they perform better in terms of absolute error. We find that bias predicts future error even if it’s helpful over a short time scale.
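As a rough illustration of that idea (not 538's actual formula), a score that adds a penalty for consistent partisan lean on top of plain absolute error might look like this; the 0.5 weight is made up.

```python
# Illustrative pollster score: penalize consistent lean in addition to absolute error.
import numpy as np

def pollster_score(signed_errors, bias_weight=0.5):
    """signed_errors: poll margin minus actual margin, in points;
    positive = too favorable to one party."""
    errors = np.asarray(signed_errors, dtype=float)
    abs_error = np.abs(errors).mean()      # plain accuracy
    bias = abs(errors.mean())              # consistent lean toward one party
    return abs_error + bias_weight * bias  # lower is better

# Two hypothetical pollsters with the same absolute error but different bias:
print(pollster_score([3, -3, 3, -3]))   # noisy but unbiased -> 3.0
print(pollster_score([3,  3, 3,  3]))   # same abs error, all one direction -> 4.5
```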
Nov 21, 2023
if you want to understand polling today, you have to consider *both* the results and the data-generating process behind them. this is not a controversial statement (or shouldn't be). factors like nonresponse and measurement error are very real concerns stat.columbia.edu/~gelman/resear…
given the research on all the various ways error/bias can enter the DGP, if your defense against "polls show disproportionate shifts among X group. meh" is "well X group voted this way 20 years ago," i am going to weight that pretty low vs concerns about non-sampling error
at the same time, if a critical mass of surveys is showing you something, you should give it a chance to be true. interrogate the data and see if there's something there. i see tendencies both to over-interpret crosstabs and to throw all polls out when they misfire. both are bad
Oct 14, 2023
There is good stuff in this thread, and I’ve been making the first point too for some time. But remember a lot can change in a year, and some of the factors that look big now may not actually matter. Uncertainty is impossibly high this far out.
I took a look yesterday at how much Dem state-lvl POTUS margins tend to change from year to year. It’s about 7pp in our current high-polarization era. That’s a lot! With 2020 as our starting point and simulating correlated changes across states, you get p(Biden >= 270) around 60%.
that is obviously not a good place to start if you are team Biden. But the range of outcomes is laughably large: a landslide for either party is more than plausible. So there is a pick-your-own-adventure element to analyses like these: Dobbs and Jan 6 help Ds; the economy and Biden's age hurt them.
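Here is a rough sketch of that back-of-the-envelope simulation. The swing SD, the correlation, the set of states, the baseline margins, and the "safe" Dem electoral-vote base are illustrative assumptions, not the actual analysis.

```python
# Toy simulation of correlated state-level swings from a 2020-style baseline.
import numpy as np

rng = np.random.default_rng(1)

states   = ["AZ", "GA", "MI", "NC", "PA", "WI"]
margin20 = np.array([0.3, 0.2, 2.8, -1.3, 1.2, 0.6])  # approx. 2020 Dem margins, pts
evs      = np.array([11, 16, 15, 16, 19, 10])          # electoral votes per state
base_ev  = 232                                          # hypothetical "safe" Dem EVs

n = len(states)
swing_sd, rho = 7.0, 0.75                               # assumed swing size and correlation
cov = swing_sd**2 * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n))

swings = rng.multivariate_normal(np.zeros(n), cov, size=50_000)
sim_margins = margin20 + swings
dem_ev = base_ev + (sim_margins > 0).astype(int) @ evs  # add EVs from states Dems carry

print("p(Dem EV >= 270):", (dem_ev >= 270).mean())
```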
May 19, 2023
so, as they say... some personal news
Lots to share, but for now I'll just say FiveThirtyEight was one of the outlets that inspired me to be a data journalist. Nate Silver did great work & the team he led changed political journalism for the better. We will be iterating on that, but we start with a strong foundation.
2/3 ABC and I have been in talks for 6 months to ensure there will be as little disruption as possible in transitioning from the aggregation + forecasting models Silver is taking with him when his contract expires to our new in-house methods, developed with input from across ABC & 538.
