Moshe Hoffman, 79 tweets, 11 min read
Two claims I often rail against:

1) Our beliefs are driven by evidence.

2) Our good deeds are driven by caring.

Both false.


Let me explain why I believe this. And why it’s fundamental for the social sciences.

By understanding why these claims are false:

-we can gain a deeper understanding of how preferences & beliefs *actually* work

-we can see the problems w/ trusting our intuitions on these topics

-we can see what social science needs to do differently to make *real* progress
To clarify:

I don’t mean *all beliefs.* Or *all good deeds.*

I mean political, moral, and religious beliefs.

And deeds that are targeting strangers and “causes”—like trying to save the planet or feed the poor.
We very clearly genuinely care about the impact our behavior has on our future-selves, our health and welfare, our reputations and legacies, and our family and close friends and partners.

(Which makes a lot of evolutionary sense.)
We also very much care about the truth vis-à-vis its pertinence to real-life decisions.

Like whether smoking causes cancer. Or how frequently planes crash. If exercise is good for you. Or if certain types of people make good or bad life partners or bosses or employees.
Which isn’t to say there aren’t biases or limitations or mistakes in these domains.

But there is a genuine desire to know the truth, and a genuine concern for impact and efficacy.

(And it’s obvious why there evolved to be such.)
But now contrast w/ religion and politics and morals.

Is there any reason it’s helpful to know the truth?

To genuinely be concerned with the truth?

(Aside from the benefits of *appearing* to care about the truth!)
Yes, yes, society is better off if voters are well informed. I know. But I am asking whether the voters themselves benefit from being so.

(Aside from the benefits of *appearing* well informed.)
Clearly not.

If the voter *actually* ascertains the truth, that isn’t going to lead to better political decisions. (Cause her influence on national policy is negligible.)
Likewise knowing that the earth is or isn’t 6k years old doesn’t really lead to any better or worse decisions.

(Other than via appearances. Whether we appear like a good Christian. Or like a reasonable scientist.)

Cause few of us make decisions that depend on the age of the earth.
Morals are the same.

To the extent that there are moral truths (what does that even mean?)

It wouldn’t actually matter if you knew what they were.

(Other than being able to justify your behavior. Persuade others. Etc.)
Thus, there is good reason to think we are motivated by the need to *appear* reasonable etc.

But no reason to think we are motivated, in these domains, by an actual desire for truth.

(Latter motive would offer no benefits. Only costs. No reason for it to evolve. Unlike former.)
Likewise for doing good.

Other than the benefits from *appearing* to do good. And from *appearing* to care about others. Or pro-social goals, like the environment, or contributing to science.

There isn’t actually any benefit, to the self, from *accomplishing* these things.
Yes, saving the planet will benefit our grandchildren.

But nothing a typical person will do will have a meaningful impact on the world their grand children inhabit.
It clearly wouldn’t make evolutionary sense for us to evolve to *actually* care about the planet.

Likewise re feeding the poor. Or spreading democracy. Or ...
Now again, we may have many reasons to care about the appearance of caring about these things.

And we may even benefit from deeply internalizing that desire.

But internalizing a desire that depends on appearances is *quite* different from *actually* caring.
And my argument, thus far, is just that it wouldn’t make any sense, evolutionarily, for us to *actually* care.

Just like it doesn’t make any evolutionary sense for us to *actually* wanna know the truth.

(In these domains.)
Obviously, it’s different for kin, or our future-selves (despite what Parfit would have you think).

Because we evolved to look out for our genetic interest, not for the species or the planet.
Now you might imagine that somehow our psychology of caring can be stretched, via free will or cultural evolution or whatnot, to cover all of humanity, or any other arbitrary cause.

But that’s a claim. One that we can check against evidence. And see if it makes a priori sense.
Likewise, you might imagine our desire for ascertaining the truth in domains like health and mating, might be extended and applied to domains like politics and morality and religion.

Might be. But that’s a claim. Is it true?
Are beliefs in morality and politics and religion better understood as a misapplication of a truth seeking module developed in other domains, or better understood in terms of appearances, the need to justify and persuade, and *seem* reasonable?
Are pro-social deeds, toward non-kin and non-close-friends, better understood as a misapplication of the way we care about the self and our family?
Or better understood as a need to *appear* to care? To *gain others’ trust* that we will act as if we care? Or to avoid sanction for *being caught* not caring?
That’s the kind of question social scientists *should* be asking.

But instead they (mostly) are just presuming.

Presuming despite the fact that the presumption is not a priori obvious. And despite the fact that, as I will argue, the evidence is quite contrary.


Presumably because this presumption is rather intuitive.

It’s what we feel inside our heads. It’s what we as humans think. It’s also the propaganda we spew.

(All of which you would expect if appearances are what matter. And we internalize our own propaganda.)
But that’s not what science is supposed to do. And not what it needs to do if it is to make progress.

Science is not supposed to be a propaganda arm of humanity. Or a codification of our intuitions.

It’s supposed to decipher what’s *actually* going on.
(Yes, not everyone. Many social scientists don’t make this mistake. But many, many do.)
So are we *actually* motivated by the truth? And by doing good?

I already argued that a priori this is a rather dubious assumption. One we should be skeptical of. But nevertheless plausible.

Is it true?
All the evidence we have suggests otherwise. Rather strongly.

And that thinking about it that way just leads to confusion. And precludes all sense-making.
Take the behavioral literature which shows our altruism works in all sorts of funny ways.
-we give when asked, but avoid being asked
-we don’t knowingly do bad, but avoid finding out the bad impact of our deeds
-we avoid doing bad, but feel less bad not doing good.
We also give in rather inefficient ways.

To shitty charities. To people who don’t need it.

And seem entirely unresponsive to the efficacy of our gift or potential gift.

(See, e.g., Paul Bloom’s recent book.)
And our giving is *very* sensitive to whether we are being watched and whether the norm is to give (as is our sense of empathy, and the warm-glow feeling we get from giving).
All of which is well explained by (an internalized) desire to *appear* to be givers, to abide by norms of giving, & avoid sanctions for not giving.

And to ensure others *believe* that we will comply with norms and act pro-socially.

But not at all consistent w/ *actually* caring.
In contrast, toward our family, our allies, and our selves, we are quite attentive to efficacy.

And less attentive to plausible deniability and observability.
(Some people give even when gifts are anonymous. And some people, like the Gateses, are “effective altruists.” These exceptions are worthy of discussion, but I’ll save that for a future thread. Important to note, though: they are exceptions. Not the norm.)
And it’s not just the behavioral literature.

Look at human history.

Or what you and I do in our daily lives.
We eat meat. We buy lattes.

Despite the horrendous animal suffering caused. And the amount of good that can be done in the third world for the price of that latte.

Very hard to argue that’s consistent with our high minded claims to caring. (As Peter Singer rightly argues.)
As many have said: we are sad to hear of a tsunami in East Asia, killing 100,000.

Just not nearly as sad and not nearly as long as when we stub our toe.
And historically:

Every government throughout human history (as a first approximation) that has had the power and ability to exploit its people, or to kill off or enslave other peoples, has done so (and come up with an ideology to justify it).

Does that look like we care?
And when we do extend rights beyond those we have to (abolition? vegetarianism?), we do so reluctantly, slowly, with a fight. And typically when the costs of doing so become rather small (e.g., good meat substitutes appear; maintaining a slave empire becomes more costly and dangerous).
So that’s my argument that what we think of and feel as doing good is, both theoretically and empirically, not actually explicable in terms of the motive to do good.
Now turning to our (moral, religious, political) beliefs:

All the facts suggest we are *not* truth seeking.

Let me summarize some of these facts...
We hold completely and obviously ridiculous beliefs (climate denial, young-earth creationism).

The extent and pervasiveness of such beliefs (long after the evidence that disproves them becomes readily available) <—really hard to jibe with a truth motive.
How hard is it to open a science textbook to learn about the age of the earth? And to learn the number of obviously fabricated assumptions Ken Ham needs to make to disregard this evidence?

Or to realize that science has no reason to conspire to reach consensus on this.
Likewise re climate denial.

Not hard to realize that all the doubt is being peddled by Big Oil (see “Merchants of Doubt”). And that they have thrust this belief on the GOP.

Why else would *no scientists*, other than the few in their pay, deny man-made climate change?
It’s also not hard to realize the NRA talking points are full of shit.

You literally just need to know the fact that mass shootings don’t happen outside of the US. That’s it. That’s all you need to know.

It would also suffice to learn that no one outside of the US buys the NRA story.
Can such outlandish beliefs really result from truth seeking behavior?

Maybe here or there? But for all of our political and religious beliefs?
And then there’s our moral beliefs.

Which we claim as being founded in self-evident truths.

Except they weren’t self-evident to the rest of humanity, who ignored them. Or even to the proponents, when they had incentives to look the other way.
Self-evident truths that have not been (and cannot be) backed by any kind of evidence (which is why they had to be proclaimed as self-evident, despite the fact that they weren’t evident to most people for most of history).
(Proclaiming self-evident truths seems better tailored for justifying desired actions in terms of desirable principles than for discovering or proclaiming any kind of truth.)
Moral arguments that we claim are based on reason, and yet all the most well reasoned people still disagree.

In fact, an exercise in many high schools and colleges is to debate the morals of a dilemma designed to have no right answer.

(Is that how truth usually works?)
(Which seems to be more about training us *to make arguments* (i.e., *appear* well-reasoned) than to actually discover truths.)
We are also (nearly) completely unresponsive to any argument or evidence we are confronted with (again with notable exceptions, but as a first approximation).

This is true in all 3 domains: moral, religious, political.
Our beliefs are also very well predicted by our coalitions, having no obvious relation to availability of facts or information. (New Englanders thinking Tom Brady is innocent, African Americans thinking OJ was.)
And our coalition’s beliefs are all too easily explicable in terms of what benefits the coalition.

(Climate denial benefits Big Oil, the inferiority of non-whites helped justify slavery and colonialism, liberty benefitted American colonists who didn’t wanna pay British taxes.)
Were American colonists the first to discover the “truth” of liberty, which Europeans just forgot about for a while until democracy took hold there a bit later? And then forgot again under Nazi rule?
Did African Americans not see the same news re OJ, and New Englanders not watch the same Super Bowl game?

(Is that the best way to understand these differences in beliefs? Does better than the coalitional justification story?)
And the fallacious beliefs we stick to are all too explicable in terms of whether we *can* produce a viable explanation for the inconsistent facts. Not whether that explanation is likely or reasonable. (See what NRA says to explain mass shootings that happen nowhere else.)
Arguments that would be abandoned by any objective observer not motivated by the mere *appearance* of reasonableness.

(Again, no non-American buys the NRA story. Or the GOP story re climate denial.)
So those are the reasons I think beliefs (in these domains) are not driven by truth seeking behavior.

(And any effect of evidence and arguments is at best second order, and better explained by the need to seem reasonable.)
An aside:

Are there any a priori reasons to suspect, counter to evolutionary pressures, that beliefs and preferences would “spill over” into these domains in such a costly and suboptimal way?

No. And social scientists who presume this should have just pondered that question for a minute.
Why not?

Well, for one, it is uber costly. If we went against our coalitions and instead pursued policies or morals based on truth (not even sure what that would mean, tbh), we would get in *a lot* of trouble.

Would there be *any* benefits? No (already argued.)
What about in the pro-social domain? What if we genuinely cared about the impact on others (outside the benefits to reputation and legacy etc.)?
Well, your reputation would suffer accordingly. Cause you would be donating to less well respected causes. Or giving counter-normatively.
Plus you would be giving a lot of time and money away, time and money that could be spent on yourself, your kin, or your reputation. For no personal gain.
Huge cost. No benefits.

Evolution & learning works *very* hard against such.
Ok but maybe evolution and learning aren’t that good at correcting such errors? Are too rigid and inflexible? Too sticky?

Yeah, does it look like our morals and emotions (like empathy) and beliefs are *that* sticky?
Again, all the evidence in the world suggests otherwise.

E.g., see “Ordinary Men.” Normal Germans, with typical empathy, turned genocidal. In days. Cause they learned to turn off their empathy.
But if you think our morals are that rigid, you also can’t understand the Holocaust, or slavery, or abolition, or... any cross-cultural variation or change over time.

(And no, “becoming enlightened” doesn’t explain this at all. Again, see “Ordinary Men.” And my discussion earlier in the thread.)
And our other types of beliefs, do they seem that inflexible and rigid?

Likewise there, it didn’t take long for Republicans (post the Koch takeover of the party in the mid-2000s) to flip on climate science.

Or on anything Trump, post his takeover.
Beliefs seem quite flexible, when coalition shifts.

That seems like rather strong evidence, imo, that our belief system isn’t some rigid archaic system that’s super constrained by cognitive limitations, a spillover from domains where there is an accuracy motive.
And our pro-social preferences?

Again, see “Ordinary Men.” Or changes in our treatment of minorities. Or animals. Or everything in Pinker’s two recent books. Who we treat well and “care” about is very, very non-rigid.

Not easily explained in terms of genuine caring “spilling over.”
Now why do I think all this is so fundamental for the social sciences?
Try making sense of moral or political or religious behavior with a completely wrong underlying model. With a complete misattribution of the fundamental motives at play.
And try building up a solid scientific understanding of how social behavior works. How our beliefs and preferences work.

While completely misdiagnosing the most fundamental ones under your nose.
Try predicting when people will be persuaded, or when they will do more good, or what will happen when Trump takes over, or what the Nazis might do to the Jews, if you don’t understand what shapes moral and political beliefs and preferences.
If appearances are what drives things, the need to look consistent and reasonable, or well motivated and caring, what kind of predictions will that make, what kind of science and math will be useful?

Compared to if we are genuinely motivated by doing good and discovering truth?
And what could this, what does this, teach us about how humans actually work? How our beliefs and preferences and behavior are actually shaped? And how does that differ from lay intuitions or traditional social science models?
Why not build up a solid scientific foundation on *that*? Consistent with what we *know* to be true. And what makes good *a priori* sense?

Thread by Moshe Hoffman
