Deleting my Emerson tweet because we don't have proof that they polled or weighted incorrectly and I really don't want people to jump to a conclusion on the stats alone. To be honest, I have no idea what Emerson did here. I don't know how you can get a difference like that.
It is *extremely* weird for their poll to line up better with the old partisanship of these districts than with the new, post-redistricting ones (#NM02: Trump +10 under the old lines, Biden +5 under the new ones, yet a recalled 2020 vote of Trump +11?). Also, why are Native voters 4% of the electorate in #NM03?
Same thing in #NM03, where the recalled vote is Biden +18 for a seat that was Biden +18 in 2020 and Biden +11 after a redraw.
Like, I really don't know what they'd be doing to get results this similar to the old partisanships. But then you look at their voter registration model and it...largely lines up? For example, they have registered Dems at 44% of the electorate in #NM02 and 46% in #NM03. Close.
My best guess is that someone messed something up badly here in the process of making their electorate screen. I am also somewhat surprised that their "very likely voters" screen is basically identical to the RV statistics, but I will defer to more experienced people on this.
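As a rough illustration of the kind of step where an electorate screen can go wrong: topline margins depend heavily on the weighting targets a pollster adjusts to. Here is a minimal one-variable post-stratification sketch in Python; all names and numbers are hypothetical for illustration, not Emerson's actual method.

```python
# Hypothetical sketch of one-variable post-stratification weighting.
# If the targets describe the wrong electorate (e.g., the OLD district
# lines), the weighted topline snaps toward the old partisanship.
from collections import Counter

def poststratify(respondents, targets):
    """Weight respondents so each group's weighted share matches `targets`.

    respondents: list of (group, vote_choice) tuples
    targets: dict mapping group -> desired share of the electorate
    Returns weighted vote shares by vote choice.
    """
    counts = Counter(group for group, _ in respondents)
    n = len(respondents)
    weighted = Counter()
    for group, choice in respondents:
        # Each respondent's weight is target share / observed share.
        w = targets[group] / (counts[group] / n)
        weighted[choice] += w
    total = sum(weighted.values())
    return {c: v / total for c, v in weighted.items()}

# Illustrative sample: Dems overrepresented at 60% of raw interviews,
# weighted down to a (made-up) 44% registration target.
sample = [("D", "Dem")] * 60 + [("R", "GOP")] * 40
print(poststratify(sample, {"D": 0.44, "R": 0.56}))
```

The point of the sketch: whatever the raw sample looks like, the topline is pulled to the targets, so a mistake in building those targets (wrong district lines, wrong registration file) propagates straight into the published margin.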
Lastly, their recalled 2020 vote for #NM01 is Biden +24. That lines up *perfectly* with the Biden +23 result before the redraw, which took the seat to Biden +14.4.
These are too similar to the very different 2020 seats to *not* think they messed up.
So, tl;dr: I think Emerson messed up badly, and I'm not at all certain they knew what they were doing here, but I can easily see how they'd make a mistake like this! That thread caught fire quickly, though, and I don't want people confidently drawing conclusions from it yet.
Underlying this is a deeper methodological question. I've said repeatedly that this is not a model and that my forecast is an R+1 or R+2, but I'm under no illusions that people will listen. But I can't change the criteria now just because I disagree with the current result.
It would be quite easy for me to change things around a bit. I could go in and put in the Marist "definitely voting" screen, or I could relax the 538 grade criteria to take CNN/SSRS and NewsNation, and both of those things would show Rs leading by a margin that makes more sense.
But would I do that if the polls skewed it the *other* way (i.e. if CNN came out with a D+5 somehow today)? No.
So I can't do the reverse here. I'm not going to bend the criteria post-hoc simply because I don't agree with the results the aggregator shows.
No update to our nonpartisan generic ballot poll tracker this morning for @SplitTicket_, mainly because we have seen no new generic ballot polls (!). But I thought I'd take this thread to go through where our aggregator was at specific days, compared to 538 and RCP.
For the sake of transparency: we have data for our aggregate from June 1 onwards, and the full table is on our site. However, we only display the GCB from September 1 onwards in our graphic, which is around when LV screens begin to hit polls. OK, now...
The polls have never once shown a red wave post-Dobbs, and as Ethan points out, neither do the primary turnout data or the specials, both of which have historically been very predictive of November and were accurate in 2010/14.
In 2010 and 2014, Dem primary turnout was lower than it was in 2022. The Dem share of the primary electorate is up ~2.5 points since 2014, which was an R+5 year...and that shift is slightly bigger when looking only at post-Dobbs data.
So what are we basing a red wave assumption on? Real Clear Politics? The other three aggregators (FiveThirtyEight, The Economist, and Split Ticket) all show a close race, shifted 4-5 points right from 2020. That's...exactly what the primaries suggest too!!
Honestly, I’m not an Elon Musk fan, but I really think people are overreacting: I doubt this site actually implodes and dies the way everyone is saying. That doesn’t mean it’ll all be smooth, of course, but I’m skeptical of a rapid decay based on speculation alone.
Yeah, everyone’s going to be panicked of course, because some of these ideas are completely brain-dead. But this happens all the time in tech. Some new guy comes in, yells about how he’ll make a huge change and revamp, and then realizes things were the way they were for a reason.
See, for example, Elon’s statement to advertisers, which was the clearest example of reality setting in. There’ll probably be more rocky times ahead and more moments and ideas that make us all go “WTF?!”. If anything, that is something I’m confident in.
I'll risk sounding like the no-fun guy here, but folks on Election Twitter, be careful with posts that aren't immediately obvious sarcasm on election nights, because I don't think most people realize how many casual observers rely on this community to shape/inform their perceptions.
It's already kind of problematic with the Brazilian elections, but that would pale in comparison to what would happen if you said Josh Shapiro is losing because Luzerne's e-day ballots were GOP-leaning. To you, that's the most obvious bait. But most normal people don't get that!
Like, a good number of ordinary people on here do actually know Shapiro is running against a lunatic anti-semite in Pennsylvania, but what kind of normal, sane person has any idea how some random county voted, least of all its e-day/EV/mail splits?
So, something interesting here. I collected data going back to August 1. The #NY19 special, which Pat Ryan won by 2.4% in a Biden +2 seat, was on August 22. Here are the generic ballot aggregates for that day.
@FiveThirtyEight @RealClearNews @SplitTicket_ Part of this is because 538's model is "stickier" than ours and takes partisan polls as well, which isn't inherently bad -- it's just not what we do. The delta in our averages now is only 1 point, FWIW, so they have converged.
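To make "stickier" concrete: one common way aggregates differ is how quickly old polls lose weight. A toy sketch, assuming margins are stored as D-minus-R; the decay rate, window size, and poll values are all made up for illustration and aren't either outlet's actual method.

```python
# Toy comparison of a recency-window average vs. an exponentially
# decaying ("sticky") average. Parameters here are illustrative only.

def simple_average(polls, window=5):
    """Average the most recent `window` poll margins."""
    recent = polls[-window:]
    return sum(recent) / len(recent)

def sticky_average(polls, decay=0.9):
    """Exponentially weighted average: a poll that is `age` polls old
    keeps weight decay**age, so older data fades slowly."""
    num = den = 0.0
    for age, margin in enumerate(reversed(polls)):
        w = decay ** age
        num += w * margin
        den += w
    return num / den

# Illustrative series: the race moves from R+4 to D+1.
polls = [-4, -4, -4, -4, 1, 1, 1, 1]
print(simple_average(polls, window=4))  # reflects only the new polls
print(sticky_average(polls, decay=0.9)) # still pulled toward the old R+4s
```

When the race moves, the decaying version holds closer to the older polls, so two reasonable aggregates can temporarily disagree before converging, which is the pattern described above.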
@FiveThirtyEight @RealClearNews @SplitTicket_ But, it's worth considering that perhaps there *was* genuinely a lag that private pollsters didn't pick up on. Impact/Trafalgar/OnMessage/Rasmussen all had leads of ~R+4 around then, which kind of drowned this out.