85 tweets, 12 min read

Why do people have ridiculous beliefs?

Like climate denial, creationism. Or that DJT is a good president.

Many think such beliefs can be reconciled with a “rational,” “bayesian” account.

I don’t.

Here, I’ll walk through the evidence against that.
I’ll list 5 counterarguments:

1) accurate beliefs would be quite *counterproductive* in these domains

2) beliefs coincide *too well* with what we are incentivized to believe

3) disagreement is *known* to be *systematic* and *persistent*

4) belief updating is biased in ways that *cannot* be reconciled w/ Bayes’ rule

5) when we *do* have stronger accuracy motives, our beliefs look quite different.
Some caveats:

-I think it is great that academics raise this possibility. It is important to figure out what bayes *would* predict and *can* explain. (So that we can better know when and how we deviate. And so we conduct research that’s better able to differentiate).
-it is true that many existing results (eg attitude polarization, ignoring disconfirmatory evidence, echo chambers) aren’t, on their own, evidence for non-rational updating. That’s an important observation.
-our beliefs, and the way we respond to new evidence, do often take a form that is *plausibly* rational. That’s also important to notice and characterize. (Eg we add “auxiliary hypotheses,” as needed, that are unlikely but hard to falsify. That’s interesting.)
-in some cases (like w/ fake news?) accuracy motives *do* play a role (don’t wanna look like an idiot!). Useful to recognize that such beliefs (in what the fake news article says) may respond to info, or reminders to be reflective, etc. Even if that *usually* won’t help.
-if you allow for *arbitrary priors* you can make *almost anything* consistent with bayes rule. (But that’s a FAR cry from saying that’s the best *explanation* for people’s beliefs.)
IMO important to be aware of all of the above points. (And to know well what Bayes can and can’t do, and the instances where it’s more pertinent.)

But important to keep in mind the big picture: this is NOT where our ridiculous beliefs come from.
Don’t get confused between *mathematical possibilities* and *parsimonious scientific explanations.*

The science clearly points to “that’s interesting and sometimes pertinent, but not at all what’s going on with moral, religious, and political beliefs.”
(Beliefs in domains where there are no real accuracy motives but huge incentives to hold inaccurate beliefs.)
And don’t get confused between “this is pertinent to this instance or subdomain” w/ “this is actually sufficient for understanding the broad swath of ridiculous beliefs”
The scientific explanation that I think does a much better job, on the whole, of fitting the evidence is, as I’ve alluded to, “incentives for holding certain beliefs.”
All I mean by that is that New Englanders would get beaten up at the bar if they proclaimed Tom Brady in fact deflated the footballs. They have an incentive to not purport such a belief.
(and avoid any behaviors that might reveal such a belief. And maybe even internalize this belief, to some extent. Obviously, we internalize stuff that’s functional. What else would internalization and our “true beliefs” be about?)
By incentives I mean the incentive to help one’s coalition, including one’s political coalition. Or at least not get caught undermining the coalition. And the interests of the powerful elites that control it (ahem, climate denial).
By incentives I mean the need to justify one’s behavior (or political or moral stance) in a way that seems selfless, reasonable, consistent with facts and desirable values, and well informed.
And by incentives I mean the need to signal commitment to an in-group. And dependency on that group’s norms, dictates, and coercion powers (such as by defying scientific knowledge, believing in an omniscient and omnipotent god, and being steeped in bogus theology.)
Ok, my first argument:

Now, is it a surprise that we have ridiculous beliefs given these strong incentives? Au contraire.

Would you expect our beliefs in these cases to be Bayesian?

Of course not. That would just get us into trouble. And serves zero purpose.
(Does it really matter TO YOU if your assessment of the cause or extent of climate change turns out to be wrong? Or just that you say the thing that’s deemed right by those you wanna impress and befriend?)
Bayesian beliefs in this context, in fact, are the IRRATIONAL thing. Cause that requires beliefs that get one into trouble. What for? So that when god does his truth-functional accuracy tally after you die, you get a high score? Is that the right assessment of “rationality”?
Economists consider such “Bayesian” beliefs to be “rational.” Because they forgot what rational really means. It doesn’t mean applying their normative assessments (Bayes’ rule) when there is no incentive to apply it. It means responding to the appropriate incentives at hand.
Economists are taking too seriously what we mean when we say “beliefs.” Thinking it must mean beliefs in the sense they mean it. As in, the stuff that responds to info and is motivated solely by accuracy.
But that’s accepting people’s propaganda at face value:

Sure we *say* that’s how we form our beliefs. I want you to believe that my beliefs are solely driven by info and geared toward accuracy.

But since when do economists take people at their word, and ignore their motives?
Next argument:

Does it look like people’s priors are just randomly assigned? Or best explained by the information they have had access to?

Or are our “priors” better explained by incentives, like the incentives described above?

(Recall the New Englanders. See image.)
Do New Englanders really have particular information about how footballs are inflated, or how likely Brady is to be a cheater? Or do they just have a stronger incentive to defend their team, and less of a need to diminish the team’s incessant winning streak?
How deeply do New Englanders *really* believe this? I dunno. But less so than African Americans believed OJ was innocent—which also oddly divided all too well along in-group lines? Or than my dad believes in creationism?
(It’s kinda hard to say how much people “really” believe, or even what that would mean, in domains like this, where we don’t have to make decisions based on these beliefs, like choosing gambles, or buying insurance, whose payoffs depend on the revealed true state of the world.)
What I care about is the beliefs these people purport to have, and the related decisions they make, and arguments they have.

And in all these cases the beliefs line up all too well with the incentives.
Does Carson (the head butler from Downton Abbey) really have more information about the advantages of the class system that he so dearly believes in? More so than the people who live upstairs, whom he serves, who like the system but are more willing to adapt as needed?
Of course not. Carson isn’t “better informed.” He’s just got stronger incentives.
(Not incentives in the silly sense that political scientists invoke, whereby poor people are thought to have an “incentive” to vote for more redistributive policies. That’s a dumb notion of incentives, given poor people don’t affect their own welfare receipts through their vote.)
(Likewise Carson’s “incentives” have nothing to do with how much he would like to live in a classist society, given he himself has zero influence over whether his society remains classist.)
Carson’s incentive is to be hired as a butler. Or not fired as a butler. And that requires being trusted to maintain the class system, and its corresponding ridiculous rituals and costly displays. And showing signs he doesn’t wanna burn down the house or steal the silverware.
Likewise, cult followers or kidnap victims often take on the beliefs of their captors, or beliefs that benefit the religious leaders (“everyone is gay but Jim Jones. Jim *has to* sleep with your partner in order to help him un-gay himself.” Yes, Jim argued that.)
Jim Jones’s followers believed this obviously Jim-serving theology. Why? Because they had the best information available on the topic, information that we outsiders lack? Maybe.

But a better explanation: he made these kinda proclamations only after the followers were completely dependent on him
Only at the point where their life savings and retirement accounts had been signed over. Only to those who needed him for social security and community and jobs. And even more so after he moved them to an isolated colony outside the legal protection of the state.
As he ramps up their dependence on him, and ramps up his displays of punishment against those who question his authority (punishing more and more severely), surprise, surprise: people buy into his theology more and more, and take it to more extreme lengths.
The same exact story happened in Münster, as that city went from a safe haven for Anabaptists to a besieged city, where the religious leaders developed more and more ridiculous, and self-serving, theology (check out Dan Carlin’s “Prophets of Doom”).
There too it was dictated, coincidentally, that the leaders get to have multiple wives, and first dibs on the food that was becoming more and more scarce. Oh, and the religious leader was some kind of god-king.
Is that what the evidence suggested? Or just what John of Leiden wanted? And lucky for him, everyone else in the city (and the ideology they had to proclaim, and presumably more or less internalize) was hostage to his will.
Münster and Jim Jones (and likewise Stockholm syndrome) are extremes. But is everyday religion that different in this regard? Can ultra-Orthodox Jews really get away with just proclaiming they believe in evolution, and that the Bible was written by man and is full of fallacies?
They *can,* but they will pay penalties. Not the same penalties John of Leiden handed out (beheadings), but still. It’s nice to be able to get a job at the Jewish day school, and to be considered a “talmid chacham” when looking for a wife. Believing Darwin doesn’t help w/ that.
*Could* it be the case that as soon as Jones’s followers moved to Jonestown, they found all sorts of evidence for Jim’s wisdom (along with his theology of sodomy) that the rest of us are not privy to? Sure. Mathematically possible.
But is that really what causes Jim’s theology to stick to those whose passports he has taken, now surrounded by endless jungle, with neighbors watching their every move?
Sure bayes is a *possible* explanation. But not a *reasonable* explanation. Too many “coincidences” between people’s beliefs and their incentives.
Why are people over-confident?

We all just happen to have evidence that indicates that we are more attractive or smarter than others’ evidence indicates?

Possible. But again, that would be quite a coincidence (more like 7 billion coincidences).
More likely: we all have a strong incentive to persuade others how attractive or smart we are. And believing it might help, or at least not totally undermine our agenda.
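To put a rough number on that “coincidence” intuition, here is a toy simulation, in Python, with made-up population size and noise levels, of what beliefs *would* look like if they were driven only by unbiased private evidence:

```python
import random

random.seed(1)

# Toy population: each person has a true "quality" and observes an
# UNBIASED noisy private signal of it. (All numbers are invented.)
N = 100_000
true_quality = [random.gauss(0, 1) for _ in range(N)]
signal = [q + random.gauss(0, 1) for q in true_quality]

# If beliefs tracked private evidence alone, roughly half of us
# should conclude we are below average.
share_above = sum(s > 0 for s in signal) / N
print(share_above)  # close to 0.5
```

Unbiased private evidence can make *some* people overconfident, but it cannot make nearly everyone overconfident at once; an incentive to persuade can.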
Do we actually believe we are that hot? Not always. Not when we have a strong incentive to be accurate, like when deciding whether or not to settle down with our current partner or keep searching. Not when no one is buying our charade and they all just act like we are full of ourselves.
Not when you are in the presence of a much more powerful competitor who will beat you down if you fluff your feathers in his face.
Are the above exceptions driven by newly acquired information or current real time incentives?
Do we believe we are as hot as the data indicates?

Or as hot as we can plausibly argue given the data (“Oh sure, I am short, but I am thin. And that’s what *really* matters.”)

Again, this suggests it’s the incentives driving us, not the data.
Next argument:

It’s one thing to discount countervailing evidence.

After all, if you strongly believe Fox News tells the truth, then your belief that the NYT is biased is warranted, and its stories should be less trusted. Fair enough. Perfectly Bayesian.
It’s also one thing to be faced with one-sided evidence through no fault of your own. My Facebook feed only shows stories my like-minded liberal friends share. Does that mean I am biased if I only see evidence that Trump and the GOP are evil? Of course not.
Echo chambers are natural. That’s what Facebook’s algorithm gives us, because that maximizes time online and shares. Those are the friends we have, because of shared interests and common backgrounds. And those also might be the only sources we find credible. Perfectly reasonable so far.
And if we only ask our friends how smart or attractive we are, perhaps no surprise that we have a biased perception. And if we learn our religion from our parents, no surprise that we are more likely to learn favorable things about that religion & stick to it.

So says the bayesian.

That’s only a very very very limited application of bayes rule. Only rational if you don’t think too hard.

Bayesian updating implies a LOT more. And a bit more thought makes it clear all of the above is inconsistent with Bayesian updating.
The problem is:

A Bayesian doesn’t just respond optimally to the data he receives, ignoring the bias in the data-generating process that yields it (learning attractiveness only from friends and theology only from coreligionists).
A bayesian would also realize the data generating process is biased.

Friends don’t tell you you are ugly. Rabbis don’t tell you what’s great about Islam.

We all know that.

Great. A bayesian should adjust accordingly.
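A minimal sketch of that adjustment, in Python with invented numbers: once you know friends compliment everyone, the compliment carries zero information, and Bayes’ rule leaves your prior untouched.

```python
from fractions import Fraction

prior_attractive = Fraction(1, 2)  # made-up prior

def posterior(p_compliment_if_attractive, p_compliment_if_not):
    """Bayes' rule: P(attractive | a friend compliments you)."""
    num = p_compliment_if_attractive * prior_attractive
    den = num + p_compliment_if_not * (1 - prior_attractive)
    return num / den

# Naive listener: treats the compliment as honest evidence.
naive = posterior(Fraction(9, 10), Fraction(3, 10))

# Sophisticated Bayesian: knows friends compliment everyone,
# so the likelihood is identical in both states of the world.
sophisticated = posterior(Fraction(1, 1), Fraction(1, 1))

print(naive)          # 3/4 -- updates upward
print(sophisticated)  # 1/2 -- no update: posterior equals prior
```

When the data-generating process yields the same “data” no matter the truth, a real Bayesian learns nothing from it.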
Of course Fox News viewers discount what the NYT says. Cause they have an auxiliary hypothesis that says mainstream media is biased. And strong priors that Trump is saintly. Sure.
But they ALSO know liberals think the opposite. And that TOO is information.

A bayesian would update accordingly.
Sure, an Orthodox Jew only learns about the evidence for Judaism. But he has to know that the same is true for those raised Muslim or Christian. And they all stay in their religions too. A Bayesian would take that into account.
The fact that we all stay the religion we were raised in, well that could happen to a Bayesian who gets lucky enough to be born into the religion w/ especially strong evidence

But that can’t happen to everyone. Especially given we all see it happening to others. Not as Bayesians.
Bayesian updating is consistent with some observed biases, like echo chambers and discounting of disconfirmatory evidence.

But it’s inconsistent with the *systematic bias* and *persistent disagreement* that we all see all around us.
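Why persistent disagreement is the killer: here is a toy simulation, in Python with invented numbers, of two Bayesians who start with wildly different priors but watch the *same* public evidence. Shared evidence forces their posteriors together; it can’t sustain the gap.

```python
import random

random.seed(0)

# Binary question: is the coin biased (p=0.7) or fair (p=0.5)?
P_BIASED, P_FAIR = 0.7, 0.5
posteriors = {"optimist": 0.99, "skeptic": 0.01}  # prior P(biased)

def update(prior, flip):
    """One step of Bayes' rule on a single coin flip."""
    like_b = P_BIASED if flip else 1 - P_BIASED
    like_f = P_FAIR if flip else 1 - P_FAIR
    num = like_b * prior
    return num / (num + like_f * (1 - prior))

# Both agents watch the SAME 500 flips of the (actually biased) coin.
for _ in range(500):
    flip = random.random() < P_BIASED
    for name in posteriors:
        posteriors[name] = update(posteriors[name], flip)

gap = abs(posteriors["optimist"] - posteriors["skeptic"])
print(posteriors, gap)  # both near 1; the huge prior gap collapses
```

Decades of shared news coverage with 50-point disagreements intact is exactly what this dynamic rules out, absent “uncommon priors” about the sources themselves.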
Technical aside: if we happen to be born w/ “uncommon priors” namely ridiculous beliefs, that differ from person to person, and are not based on any form of info, then you *can* sustain the above biases. Economists like to point that out.
But that’s cheating. Such “uncommon priors” are themselves left entirely unexplained (and inexplicable), and don’t really give us anything in terms of explanatory power. Just serves to make the priors story less falsifiable. And harder to see that it’s the wrong story. IMO.
It’s just a technical deus ex machina. Like saying god put the dinosaur bones there to fool us. Or positing whatever behavioral preferences fit the phenomena we observe. And calling that an explanation: Economists like doing that. But it’s not science. It’s mathematical theology.
Two arguments left:

As mentioned, some of the biases in updating—like attending less to non-supportive evidence than supportive, or discounting anything coming from the opposing party— can be explained with Bayesian updating.

But other biases less so.
Take the fact that we are more influenced by which lies we can more easily justify. Or are less likely to get caught on, or punished for.

Like p-hacking or cherry picking vs making up data.
Like doing a very extensive search for supportive evidence but a minimal search for non-supportive evidence, and reporting the results of search while leaving out the extent of search.
We cheat like this all the time.

And, more or less, internalize this cheating.
Why would a Bayesian pay less attention to the extent of search than to the results of search? Or to the plausible deniability of the cheating, compared to more overt cheating? Or to the putative veracity of the evidence, and not just its actual diagnosticity?
They wouldn’t. But these biases are still evident.

Which of course they would be, if our biases are driven by the need to justify or persuade others, where we would take advantage of less observable information and cheat where we can.
The bayesian story can explain *some* of the biases in updating. But *far* from all.

To explain all you need to think not about the info that YOU have but the info OTHERS have (and hence what you can use in your favor when lying and persuading).
Last argument:

Sometimes we do have an accuracy motive.

For instance, academics have a strong(er) incentive to look reasonable.
In some decisions, like health decisions, you have more skin in the game than in political debates.
True, some of us *still* don’t believe in germ theory (or trust the advice of doctors who do). But that’s rather rare. Much rarer than climate denial.
(And more frequently in cases like taking some useless supplement, but not when we get cancer.)
(Or when avoiding vaccines. Which, after all, are a public good, so most of us don’t actually have much of a motive to have accurate beliefs there either, unfortunately.)
Likewise, when logical arguments or evidence are stripped of their political or moral content (where we have an incentive to be *inaccurate*), we get these more right.
(“What are the odds your vote will affect the election?” Vs “what are the odds 200 million coin flips will land exactly 100 million heads, give or take one?”)
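The second question in that parenthetical has a concrete, computable answer. A quick sketch in Python, using log-gamma so the astronomical binomial coefficient never overflows:

```python
from math import exp, lgamma, log

def prob_exact_heads(n_flips, n_heads):
    """P(exactly n_heads heads) in n_flips fair-coin flips."""
    log_comb = (lgamma(n_flips + 1) - lgamma(n_heads + 1)
                - lgamma(n_flips - n_heads + 1))
    return exp(log_comb - n_flips * log(2))

n = 200_000_000
# "exactly 100 million heads, give or take one" = three outcomes
p = sum(prob_exact_heads(n, n // 2 + k) for k in (-1, 0, 1))
print(p)  # roughly 1.7e-4
```

Tiny odds, in the same spirit as one vote swinging an election; stripped of political content, it’s a question people can actually get right.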
And as mentioned before, when your false beliefs are more likely to be caught and penalized, say because the plausible deniability is removed (as with fake news?) all of a sudden we tend to care about getting it right.
We don’t correct when evidence indicates our beliefs are unlikely (as with climate denial) but only when they become implausible (as with Pizzagate.)
Only in those instances do reflection, harder thought, intelligence, and being better informed actually help. (Or so the evidence seems to indicate. There are many smart people who deny climate change. Few who believe in Pizzagate, afaik.)
Because only in those instances do we (most of us) actually get dinged for being inaccurate.

We are (more) accurate when we have an incentive to be.

Thread by Moshe Hoffman.