I'm old enough to remember West Bengal exit polls and also West Bengal results.
It will be interesting to see how the results pan out versus the exit polls in U.P. If you have followed Indian state elections after LS19, you'll remember that exit polls consistently overestimate BJP prospects.
Maharashtra, they were supposed to storm back to power.
In West Bengal, they predicted a thin TMC win. It was a comprehensive rout instead.
In Bihar too, exit polls were wrong, but in the opposite direction: an MGB win was supposedly assured, yet the NDA eked out a win. Tamil Nadu, similar story. But looking deeper into the data, it's still been a case of overestimating the BJP. A couple of other states too.
My guess is there's a Modi effect in polls, maybe even a bandwagon effect: Modi fans who buy into the invincibility narrative are more likely to participate in exit polls, and more likely to share who they voted for. Classic response bias.
What happens in UP, we will find out soon.
Especially with all the EVM videos being circulated.
Bottom line: remember that Indian polling is notoriously wrong. US polls only really got 2016 wrong. Indian exit polls are a bit like Punxsutawney Phil the groundhog: taken very seriously even though the predictions are wrong more often than right.
Plus there is the fact that almost all polling in India is done by Godi media. Without even a pretense of objectivity or rigor.
In the US, the WSJ is right wing, yet it long partnered with NBC on polling. Fox is right wing, and its polling team is famously sequestered from the opinion side.
Trump would regularly breathe fire at Fox for not having him winning in every single poll. Plus remember the rant when they called Arizona for Biden.
US media is a) majority non-RW, and b) even RW outlets keep their polling objective rather than political.
So in 2 days, BJP might still win. FPTP after all. But the fact that even the most "optimistic" polls for them predict a loss of dozens of seats means I'd wait before taking them too seriously.
Also Exit Polls skew towards elite voters, cos they take extra time & effort.
Oh also, the whole "Poll of Polls" concept works very differently in India vs US.
In US, most "mainstream" polls are remarkably consistent. So when Nate Silver or @LarrySabato or RCP or Princeton do an aggregation, it is statistically way more justifiable to do so.
There's rarely more than a 4-5% difference in US polls. Plus US is a 2 party system. So the underlying distributions assumed when doing a "poll of polls" are easier to aggregate given the binary nature of the results. It's like a coin toss.
Indian elections are dice rolls.
Given that
a) Indian elections aren't binary but between 3-5 parties very often
b) The polls differ WILDLY across media outlets, even within Godi media
c) You can have parties winning super majorities even with just 33% voteshare,
aggregation is not really the same.
Also, US media outlets publish extensive details of their methodologies: weighting, distribution assumptions, margins of error, and exactly how the aggregation works. Nate Silver made a career out of it despite being, all due respect, not exactly Sabato-type scholarly.
Indian pollsters are very stingy with their methodology and other details. So I often suspect their "poll of polls" is just adding up all the numbers and taking an average.
That's so NOT statistically rigorous. You're supposed to aggregate distributions, not point estimates.
Also, Indian pollsters don't give any probability details at all! Just direct prediction.
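Here's a minimal sketch of the difference, with made-up poll numbers (the polls, sizes, and shares below are all hypothetical): simply averaging the headline figures treats a 500-person poll and a 5,000-person poll as equally informative, while even the crudest distribution-aware approach weights by sample size.

```python
# Made-up polls for illustration: one small, one large.
polls = [
    {"n": 500,  "red": 0.55},
    {"n": 5000, "red": 0.48},
]

# Naive "poll of polls": average the headline numbers.
naive = sum(p["red"] for p in polls) / len(polls)

# Pooling instead: weight each poll by its sample size, i.e.
# combine the underlying counts rather than the point estimates.
total_n = sum(p["n"] for p in polls)
pooled = sum(p["red"] * p["n"] for p in polls) / total_n

print(f"naive average: {naive:.3f}")   # 0.515
print(f"pooled       : {pooled:.3f}")  # 0.486
```

Sample-size weighting is only the first step; serious aggregators also adjust for house effects, recency, and correlated errors, and they publish those adjustments.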
People keep saying Nate got 2016 wrong. Just cos he gave HRC a 70% chance of winning.
In statistical terms, that is NOT a very emphatic prediction. Cos remember, baseline is 50% not 0%.
If you understand statistics, a 30% probability of Trump winning was a pretty huge probability. It still favored HRC, but it was not at all an open-and-shut prediction like 2008, 2012, or even 2020.
So he didn't really get it wrong statistically. Media and normies read it wrong.
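To see why, a quick sketch: an outcome forecast at 30% should still happen about three times in ten, so a single upset is entirely consistent with the forecast.

```python
import random
random.seed(0)

# Simulate 100,000 races in which the "underdog" truly has a 30% chance.
trials = 100_000
upsets = sum(random.random() < 0.30 for _ in range(trials))

print(f"upset rate: {upsets / trials:.3f}")  # close to 0.300
```

One upset can't falsify a probabilistic forecast; only a long track record of miscalibration can.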
Indian polling though is almost doggedly wrong and secretive and thin on details. And honestly, there just isn't enough statistical expertise in the Indian system. Just isn't. There is a random condescension towards stats in the Indian education system.
To me, the Indian Statistical Institute is more of a genuine premier "Research 1" institution than the IITs, IIMs, and university stats departments, which are primarily teaching schools.
The statistics taught in institutes other than ISI is shockingly primitive, nowhere near the cutting edge of western universities.
Basically the difference is, in the US, political polling and analytics are done by career academics or people affiliated with them. People who care more about scientific rigor than TRPs for news channels. People who put their reputations ahead of their politics.
In India, most polling and analysis is done by people who probably haven't gone beyond Casella & Berger in terms of serious training on the Theory of Statistics. Very much workshop type feel in Indian pollsters, not serious scientific rigor feel. Cos it's all for TRPs not science
Indian pollsters, and the media outlets that are the driving force behind them, treat their polling as valuable IP, a competitive advantage to sell more ads. There isn't the culture of sharing tons of detail that US pollsters have, because US pollsters care more about their reputations.
Indian pollsters, especially in Godi media, are like Scott Rasmussen, a pollster who isn't really taken very seriously by serious analysts in the US.
Cos he is like Glenn McGrath who always predicts a 5-0 win for Australia anywhere. Scott always has GOP friendly numbers.
Scott doesn't care that serious polling people and academics point out the flaws in his methodology and predictions year after year.
Cos he isn't looking for legitimacy. He is hoping to remain a right wing media darling by giving consistently pro GOP numbers.
We are just culturally unable to grasp that "too close to call" is a reflection of reality, not a failure to predict a close outcome, or an inability to do enough statistical thingamajig. There is no actual crystal ball, remember.
When outcomes are genuinely close, there is no stats magic that can always predict them. It just isn't theoretically possible. If you understand statistics and probability, you'll get what I'm saying.
Let me illustrate with a simple example. Consider a huge bag of M&M's.
There is a huge bag of 150 million M&M's with only two colors in it, red & blue. Properly shaken so they are randomly distributed.
It has x number of red ones, and thus, 150 million minus x of blue ones.
x is unknown.
But x is a REAL number. It is actually existing reality.
Stats comes and says: let me try to guess. Bear in mind, there is no theoretical way I could ever be 100% sure what x is. But if you're okay with, say, 95% sure, I'm your science. Just let me sample a few of those M&M's a bunch of times. A thousand randomly chosen ones.
Stats dips a pail in. It's 700 red 300 blue. Okay! Another random sample, 550 red 450 blue. Another sample, more red than blue.
And so on and so on. Sometimes occasionally there are more blue than red. But if it's more red than blue after enough samples, stats is like, okay...
Stats says, y'all know my mom Math? Well, mom says that all long as I've dipped the pail enough times and the count seems to keep favoring red way more, I'm confident to say, there are more red than blue cos look how often it's red! And here is a range of my guess on x.
But if the reality is that there are 75,000,100 red and 74,999,900 blue M&M's, yikes, stats is fucked. Mommy math says that at those close margins, stats could sample till the candy melts. No way are the data going to be conclusive!
So stats goes and says, sorry, I just can't be sure. It's too close to call.
You say, but if you had to pick one.
Stats says I can't.
You say, I insist. We have an audience waiting on it.
Stats says, how can we know unless we count every single one?
You say, come on!
Stats finally is like, okay, since you're forcing my hand, I'm at best 70% sure that it's more blue than red. But I shouldn't even be telling you this. Cos there is no way I can be...
Yeah yeah, shut up.
Stats says more blue M&M's. Let's count them all.
Counting is done. It's more red than blue. Everyone is like, hmmpppfff, lies, damned lies, and statistics.
See what I mean?
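The M&M story can be put in numbers. Using the hypothetical 75,000,100 vs 74,999,900 bag from above, compare the true margin to the sampling noise of a 1,000-candy sample:

```python
import math

# The thread's hypothetical bag: 75,000,100 red vs 74,999,900 blue.
total = 150_000_000
true_red = 75_000_100 / total          # a lead of just 200 candies

# 95% margin of error for a random sample of n = 1000
# (normal approximation, worst case p = 0.5).
n = 1000
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)

print(f"true lead over 50% : {true_red - 0.5:.9f}")  # 0.000000667
print(f"sampling margin    : +/-{moe:.3f}")          # +/-0.031
# The noise (about 3.1 percentage points) dwarfs the real signal
# (about 0.00007 points), so no sample of this size can say which
# colour leads. "Too close to call" is the honest answer.
```
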
In India, the bag has four colours of M&M's: saffron, red, green, blue. Unlike the two-colour example, stats doesn't have to predict whether one colour crosses 50%. It has to decide which one has the highest count. Could be 51%, could be 26%. It's typically around 32%, cos FPTP.
If it were just about telling whether the Indian bag has one colour at or above 50%, the poll of polls would be easy. And that's how they treat it. But that's the wrong model for FPTP.
You need WAY more complex methodology to aggregate those multinomial polls. And I just don't see that rigor.
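For intuition only, here's a toy Monte Carlo with entirely made-up vote shares and a crude independent-Gaussian error model (real aggregation would need correlated, seat-level modelling, which is exactly the missing rigor):

```python
import random
random.seed(7)

# Hypothetical 4-way race; sigma is an assumed 2-point poll error.
poll = {"saffron": 0.32, "red": 0.30, "green": 0.20, "blue": 0.18}
sigma = 0.02
trials = 10_000

wins = {party: 0 for party in poll}
for _ in range(trials):
    # Perturb each party's share independently and see who leads.
    draw = {party: random.gauss(share, sigma) for party, share in poll.items()}
    wins[max(draw, key=draw.get)] += 1

for party in poll:
    print(f"{party:8s} leads in {wins[party] / trials:.0%} of simulations")
# The nominal leader comes first only ~3 times in 4: a 2-point edge
# in a 4-way race is nowhere near a sure call.
```

And this still only models vote share, not the vote-to-seat conversion, which FPTP makes even noisier.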
Just a reminder that in 2017, the BJP won 75% of the seats in the Uttar Pradesh assembly with less than 40% of the voteshare. And they don't even have electoral college type nonsense in India. It's just the FPTP system's vagaries.
When a 39% voteshare can give you a 75% majority in the legislature, that's a whole other kind of statistical analysis for aggregating polls or even predicting them. Especially when it's a close one like Bihar last time.
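A toy sketch of that vagary, with invented constituency-level numbers (not the actual 2017 data): one party hovers around 39% everywhere while the remaining ~61% splits three ways, and the leader sweeps seat after seat.

```python
# Toy FPTP model over 100 invented constituencies.
# The leader's share wobbles between 34% and 44% across seats.
leader_shares = [0.34 + 0.10 * (i % 11) / 10 for i in range(100)]

won = 0
for lead in leader_shares:
    rest = 1 - lead
    rivals = [rest * 0.40, rest * 0.35, rest * 0.25]  # split opposition
    if lead > max(rivals):
        won += 1

avg = sum(leader_shares) / len(leader_shares)
print(f"statewide vote share ~{avg:.0%}, seats won: {won}/100")
# -> statewide vote share ~39%, seats won: 100/100
```

With the opposition split, even the leader's worst seat (34%) beats the strongest rival (~26%), so 39% of the vote becomes 100% of the seats in this toy world.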
To clarify, I'm not throwing shade at actual statistics scholars in India, of whom there are unfortunately too few. Just compare the number of statistics PhDs with the rest of the world.
I'm throwing shade at the media pollsters. Who seem mostly IIT/IIM bros.
Just a few MBA courses in statistics and knowing Python does not an election analyst make.
In the US, even the blatantly partisan Rasmussen displays more scientific rigor than Indian media's "best" polls. In an already difficult job, Indian pollsters cut corners.
In Bihar 2020, the voteshare was
MGB - 35.75%
NDA - 34.85%
The MGB actually got MORE votes statewide.
But the seats, which decide the "winner"
MGB - 110
NDA - 125
No amount of statistical thingamajig would have predicted it with certainty. Even in this binary race.
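To put rough numbers on it, using the vote shares above and an assumed, hypothetical poll of n = 20,000 respondents (already generous for an exit poll):

```python
import math

# Bihar 2020 statewide shares from the thread.
p_mgb, p_nda = 0.3575, 0.3485
gap = p_mgb - p_nda                    # 0.9 percentage points

# 95% margin of error on the DIFFERENCE of the two shares
# (rough approximation treating the shares as independent;
# accounting for the multinomial covariance would widen it further).
n = 20_000
se_diff = math.sqrt((p_mgb * (1 - p_mgb) + p_nda * (1 - p_nda)) / n)
moe = 1.96 * se_diff

print(f"gap : {gap:.4f}")      # 0.0090
print(f"moe : +/-{moe:.4f}")   # +/-0.0094
# Even at n = 20,000 the margin of error exceeds the gap, so the
# popular-vote leader alone was beyond a poll's resolution,
# let alone the seat counts.
```
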
But the thing is, Indian media pollsters didn't even try. They seemed more interested in "calling" a winner. And of course, the polls were so wildly off.
That's my point. Polling in India, what's in the media, is actually way harder than US but is treated more casually. 🤷🏽♂️🙄
Alright, let's start with Pune then. This is all from memory, so bhool chook dyaavi ghyaavi (please forgive any errors and omissions).
Obviously I'm not going to explain famous names like Mahatma Gandhi Road and Tilak Road. Rather, the stories behind the names of well-known roads and spots.
Nal Stop only recently regained its status as a proper stop. Cos it's a stop on the brand new metro.
It gets its name from the fact that once upon a time, it was the final stop on the then bus lines. So the city had installed a lot of faucets of drinking water there.
नळ (Nal) is the Marathi for faucet. So it was quite literally the last stop, with a lot of faucets, where people could fill up water before heading out into the then wild lands of Erandwane and Kothrud and Pashan and whatnot.
It's that time of the year when I get pissed off at why India has to wait 3 days to count election results even at state level when the rest of the world usually starts counting votes right after voting ends.
It's such an obvious flaw in the system for malfeasance or suspicion.
I've heard the usual "India alag hai yaar" bros giving excuses like how big and diverse and all India is.
They really don't hold water. Abraham Lincoln's win was called by the midnight after polls closed, thanks to a revolutionary new invention called the telegraph.
Maybe I'm being too "NRI" but I feel like India 2022 should be at least on par with USA 1860 when it comes to how democracy functions.
Don't people see the obvious problem with these long delays? It seems like a delay by design, not by compulsion.
I've been working on and off on a travel book about our long long Chile trip. It mostly sits in my drafts as I wonder if anyone would even read such a book. Just me rambling about our trip.
Here's an excerpt. What do you think?
Would you read 250 pages of this?
The book will feature tweets posted from back when we visited, so here's the tweet about Serena and Venus