Why do people have ridiculous beliefs?
Like climate denial, creationism. Or that DJT is a good president.
Many think such beliefs can be reconciled with a “rational,” “Bayesian” account.
Here, I’ll walk through the evidence against this:
1) Accurate beliefs would be quite *counterproductive* in these domains.
2) Beliefs coincide *too well* with what we are incentivized to believe.
3) Disagreement is *known* to be *systematic* and *persistent*.
4) When we *do* have stronger accuracy motives, our beliefs look quite different.
I think it is great that academics raise this possibility. It is important to figure out what Bayes *would* predict and *can* explain (so that we can better know when and how we deviate, and so we conduct research that’s better able to differentiate between the accounts).
But it’s important to keep the big picture in mind: this is NOT where our ridiculous beliefs come from.
The science clearly points to “that’s interesting and sometimes pertinent, but not at all what’s going on with moral, religious, and political beliefs.”
Now, is it a surprise that we have ridiculous beliefs given these strong incentives? Au contraire.
Would you expect our beliefs in these cases to be Bayesian?
Of course not. That would just get us into trouble. And it would serve zero purpose.
Sure, we *say* that’s how we form our beliefs. I want you to believe that my beliefs are driven solely by info and geared toward accuracy.
But since when do economists take people at their word, and ignore their motives?
And in all these cases the beliefs line up all too well with the incentives.
But there’s a better explanation: he made these kinds of proclamations only after his followers were completely dependent on him.
We all just happen to have evidence that indicates that we are more attractive or smarter than others’ evidence indicates?
Possible. But again, that would be quite a coincidence (more like 7 billion coincidences).
Or as hot as we can plausibly argue given the data (“oh sure, I am short, but I am thin. And that’s what *really* matters.”)
Again, this suggests it’s the incentives driving us, not the data.
It’s one thing to discount countervailing evidence.
After all, if you strongly believe Fox News tells the truth, then your belief that the NYT is biased is warranted, and its stories should be trusted less. Fair enough. Perfectly Bayesian.
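Taken in isolation, that narrow update really is Bayesian. A minimal sketch, with purely made-up priors and likelihoods (none of these numbers come from anywhere):

```python
# Illustrative numbers only: one viewer's prior and likelihoods, not real data.
p_nyt_biased = 0.5            # prior probability that the NYT is biased
p_contradict_if_biased = 0.8  # chance a biased outlet contradicts "truthful" Fox
p_contradict_if_fair = 0.2    # chance a fair outlet does

# Total probability of observing the NYT contradict Fox
p_contradict = (p_nyt_biased * p_contradict_if_biased
                + (1 - p_nyt_biased) * p_contradict_if_fair)

# Bayes' rule: posterior belief that the NYT is biased after one contradiction
posterior = p_nyt_biased * p_contradict_if_biased / p_contradict
print(round(posterior, 3))  # 0.8: distrusting the NYT more is, narrowly, "rational"
```

Given a strong enough prior trust in one outlet, downgrading the other falls straight out of the arithmetic.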
So says the Bayesian.
But that’s only a very, very limited application of Bayes’ rule. Only rational if you don’t think too hard.
Bayesian updating implies a LOT more. And a bit more thought makes it clear that all of the above is inconsistent with Bayesian updating.
A Bayesian can’t just respond optimally to the data he receives while ignoring the bias in the data-generating process that yields it (learning about his attractiveness only from friends, and about theology only from coreligionists).
Friends don’t tell you you are ugly. Rabbis don’t tell you what’s great about Islam.
We all know that.
Great. A Bayesian would adjust accordingly.
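The “friends don’t tell you you’re ugly” channel is just a selection effect, and a Bayesian who knows the channel is censored can subtract it out. A toy simulation (all numbers invented for illustration):

```python
import math
import random

random.seed(0)

# Your true quality is 0; each friend privately observes a noisy signal of it.
true_quality = 0.0
signals = [random.gauss(true_quality, 1.0) for _ in range(10_000)]

# The censored channel: friends only pass along the flattering signals.
reported = [s for s in signals if s > 0]

# Naive updater: averages what he hears, ignoring how it was selected.
naive = sum(reported) / len(reported)

# A Bayesian models the censoring: the positive half of a standard normal has
# mean sqrt(2/pi), so he subtracts that known selection effect.
# (Exact here because true_quality is 0; the general case needs the
# truncated-normal mean.)
corrected = naive - math.sqrt(2 / math.pi)

print(naive)      # roughly 0.8: the naive updater thinks he's above average
print(corrected)  # roughly 0: the Bayesian recovers the truth
```

Everyone running the naive calculation walks away flattered; anyone who models the channel does not. That’s the adjustment we all conspicuously fail to make.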
But that can’t happen to everyone, especially given that we all see it happening to others. Not if we’re Bayesians.
It’s inconsistent with the *systematic bias* and *persistent disagreement* that we all see all around us.
As mentioned, some of the biases in updating (like attending less to non-supportive evidence than to supportive evidence, or discounting anything coming from the opposing party) can be explained with Bayesian updating.
But other biases are harder to square.
Like the fact that we p-hack and cherry-pick, but stop short of outright making up data.
And that we, more or less, internalize this cheating.
Which is exactly what you’d expect if our biases are driven by the need to justify and persuade others: we take advantage of less observable information and cheat where we can.
To explain all this, you need to think not about the info that YOU have but about the info OTHERS have (and hence what you can use in your favor when lying and persuading).
Sometimes we do have an accuracy motive.
For instance, academics have a strong(er) incentive to look reasonable.
We are (more) accurate when we have an incentive to be.