This is making a big splash. But there's a big problem, here.
@pewresearch has documented that opt-in online panels (the only way to get 100k+ respondents, so that has to be the methodology here) carry a lot of error in measuring young people's opinions - SPECIFICALLY on antisemitism.
It's connected to the fact that opt-in online panels attract "bogus" respondents who aren't taking the surveys seriously and are just chasing the rewards for completing them. They may also be trolling.
The best advice I can give anyone about polling is that if a result seems surprising or weird, you should check it out thoroughly before lighting the world on fire about it.
And, as always, thanks to @pewresearch for doing crucial work on methodology.
First, I don't dispute that low-quality pollsters might herd. But the originally quoted tweet is looking at higher quality pollsters.
Now on to the statistics of it:
The idea that poll results will be spread out along a normal distribution rests on standard statistical assumptions: if we take repeated random samples from the same electorate, the results will fall along a normal distribution and be somewhat spread apart.
Part of it is undecideds/item nonresponse. If your margin is 90-7-3 (+83), then even a small shift between the candidates and the undecided share makes the margin difference look huge.
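A quick simulation makes both points concrete (the 90-7-3 split and the sample size are hypothetical numbers for illustration, not from any real poll): repeated random samples of the same electorate spread out roughly normally, and with a lopsided margin, pure sampling noise alone moves the margin by several points from poll to poll.

```python
import random

random.seed(1)

# Hypothetical "true" electorate: 90% candidate A, 7% candidate B, 3% undecided
population = ["A"] * 90 + ["B"] * 7 + ["U"] * 3

def sample_margin(n=800):
    """Draw one simulated poll of n respondents; return the A-minus-B margin."""
    draw = random.choices(population, k=n)
    pct_a = 100 * draw.count("A") / n
    pct_b = 100 * draw.count("B") / n
    return pct_a - pct_b

# 1,000 simulated polls of the same electorate
margins = [sample_margin() for _ in range(1000)]
mean = sum(margins) / len(margins)
spread = (sum((m - mean) ** 2 for m in margins) / len(margins)) ** 0.5

print(f"mean margin: {mean:.1f}")  # clusters near the true +83
print(f"std dev: {spread:.1f}")    # several points of noise with zero methodology error
```

Every one of those simulated polls uses identical, perfectly random methodology - the spread comes from sampling alone, before any weighting or nonresponse issues enter.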
And if you’re weighting partisanship/past vote across the board rather than by subgroup, you will falsely inflate some groups’ Trump margins.
I find that it’s white voters who need R/Trump upweighting for party ID/recall (although I'm not a fan of recall weighting). Weighting across the board skews the other groups.
CBS/YouGov hasn't shown the same age depolarization that other polls have - I believe this to be due to their sampling and weighting practices that keep samples steadier at the subgroup level. Topline-only weighting that many others do can result in weird subgroups.
To be clear, I do not have inside info on how public pollsters are currently weighting, I just know YouGov's practices, and I know what happens when I weight polls in different ways (topline vs. subgroup). Topline-only weighting produces different results for subgroups.
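A toy sketch of the topline-vs-subgroup distinction (all numbers invented, and this is not any pollster's actual procedure): if Republicans are underrepresented only among white respondents, a single topline party weight upweights every Republican, including nonwhite respondents whose party mix was already correct - which falsely inflates that subgroup's R share.

```python
# Raw sample counts, keyed by (race, party). Republicans are short only
# among white respondents; the nonwhite party mix is already correct.
sample = {
    ("white", "R"): 300, ("white", "D"): 400,       # true mix is 400 / 400
    ("nonwhite", "R"): 100, ("nonwhite", "D"): 200, # true mix is 100 / 200
}

# Population party targets across the whole electorate (out of 1,100)
party_target = {"R": 500, "D": 600}

# Topline weighting: one weight per party, applied to everyone
party_n = {p: sum(n for (_, pp), n in sample.items() if pp == p) for p in ("R", "D")}
total_n = sum(sample.values())
total_target = sum(party_target.values())
w_topline = {
    p: (party_target[p] / total_target) / (party_n[p] / total_n)
    for p in ("R", "D")
}

def r_share(group, weights):
    """Weighted Republican share within one racial subgroup."""
    wr = sample[(group, "R")] * weights["R"]
    wd = sample[(group, "D")] * weights["D"]
    return wr / (wr + wd)

print(f"nonwhite R share, raw:     {r_share('nonwhite', {'R': 1, 'D': 1}):.1%}")  # 33.3%
print(f"nonwhite R share, topline: {r_share('nonwhite', w_topline):.1%}")         # 38.5%
```

The nonwhite subgroup started at its true 1-in-3 Republican share, and topline-only weighting pushed it to roughly 38% anyway, because the party deficit it was correcting lived entirely in the white subgroup. Weighting within subgroups would leave the nonwhite mix untouched.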
Also to be clear - we do not know which is "correct" and won't until November.
I have a strong opinion that we need to deal with subgroups in weighting if you're going to report subgroups, but we won't know until we know.
When polls force respondents to pick one of two partisan jargon options without an effort to find out what *the respondents* really think, we're quite possibly misrepresenting public opinion.
@nationaljournal Now, if it's reported appropriately, great. But more often than not, both pollsters and headlines run away with the results as "this is what people want!!" (provided it aligns with their priors)
Or, "look how divided we are!"
Well, yes, if you ask questions that are literally designed to divide people between two options, you will get divided answers.
I've been saying for years that Dems have an organizing problem in Texas. Beto O'Rourke was never going to single-handedly solve that, nor was all the out-of-state support specific to Beto.
If there is a massive, and I mean MASSIVE blue wave in 2024, Ted Cruz could go down. He's the most vulnerable of the statewides. But the wave would have to be much bigger than in 2018.