(By psychology, here, I mostly mean social psych. And JDM. And def don’t mean cognitive psych. Or evo psych.)
(I also am not criticizing the experimental methods. Which are often quite clever. And sound. And, imo, offer clean and useful verification of fascinating phenomena.)
(Nor do I mean disrespect for the field or people within it. Despite my criticism, I think it’s an integral part of the social sciences. Adding insights and evidence I love to read and teach, experimental methods I have coopted, researchers I enjoy chatting w/ and learning from.)
So what is my criticism?
Psych has a tendency to explain phenomena in ways that *sound like* explanations—explanations that are intuitive and get the “aha that’s right” reaction.
But don’t make much sense, don’t really address the fundamental puzzle, and don’t fit the facts.
These *intuitive explanations* are not actually *scientific theories*.
Scientific theories aren’t just intuitions packaged with memorable names.
Scientific theories have to take something puzzling and not understood and show how it can be understood in terms of something else that is less puzzling, better understood.
Scientific theories fit known facts, and are grounded in a unifying, sound theoretical framework.
Psychology is so good at documenting phenomena, in very scientifically sound ways. Clever designs. Careful consideration of confounds.
Why don’t psychologists apply the same critical inquiry to the explanations they proffer?
A few prominent, well-accepted examples I’ll discuss:
-cognitive dissonance
-motivated reasoning
-ineffective altruism
-minimal groups
Moreover, this lack of scientific theorizing, occasionally, leads to even more egregious errors, like:
-Not just poor explanations for interesting phenomena. But documenting a phenomenon that sounds interesting, but isn’t.
-Yielding advice that is misinformed.
I’ll illustrate this with a few (not so prominent, but typical) examples, including:
“The surprising benefits of ...”
-talking to strangers on trains
-power posing
-apologizing
(It’s not that I think these are particularly worthy of criticism. Just illustrative.)
These cases—*seemingly* cool phenomena & misinformed prescriptions—would be avoided if psychologists had a sound theoretical framing to ask:
-Is this behavior actually puzzling, or does it just seem so given a bad theory?
-Is this actually good advice, or does it just seem so if we don’t think about it carefully?
I’ll start with cognitive dissonance:
When people are paid to give a speech favorable to communism, their internalized beliefs become more favorable to communism.
Important phenomenon. Nice experimental documentation.
That part is great.
But what’s the explanation?
“We don’t like the feeling of dissonance between our beliefs and behaviors.”
Really?
But there isn’t an *actual* inconsistency; we hate communism AND are willing to lie for a buck.
That’s both true AND perfectly consistent.
Logical consistency, per se, ain’t the problem.
Cognitive dissonance fails.
It fails on the first pass, on the most basic test, one that should have taken psychologists a single seminar presentation to spot.
But it’s been half a century. And it’s the most famous theory in social psych.
(Note:
We don’t feel dissonance at logical inconsistencies. We feel dissonance when our behavior is inconsistent with *socially desirable* values, like honesty and integrity.
Big difference.)
But also:
Is it the inconsistency that makes us feel dissonant? Or the *appearance* of such?
What if there is a *plausible justification* for our inconsistency?
Will we still feel dissonance?
No?
OK then the theory is wrong.
It’s not about inconsistency (w/ desirable values).
It’s about *appearance* of inconsistency. It’s about inconsistency *in a way that’s hard to justify.*
That’s different. A big difference.
Festinger missed.
Which should beg for better theorizing. A theory designed to answer:
Why do we care about *looking* consistent, w/ *desirable values*?
A question Festinger prevents us from asking.
Festinger’s dissonance story cuts us off just when things are getting interesting.
Prevents us from getting to the informative answer.
By giving the semblance of an answer, prematurely.
But also, think about it, why would humans have evolved to feel icky at inconsistencies?
You can’t just presume this. You gotta ask why. If you want to have a better sense of what will feel inconsistent and when we will care.
A step Festinger would have benefitted from taking.
And:
Why would this feeling of dissonance—between our old beliefs re communism and our paid speech—go away the moment we tell ourselves we don’t really believe communism is bad?
Don’t we remember that we felt differently a day ago? Is that really easier to forget than remembering that our pro-communist speech was bought and paid for?
(Easier to forget, or easier to convince others of? A distinction Festinger, again, misses.)
Again a “dissonance reduction drive” doesn’t really explain this perverse way of reducing dissonance.
It just sounds like it does.
And prevents us from asking the interesting questions.
Why would we care about plausible justifications, and what we can convince others of, and consistency w/ socially desired values, even when our behavior or beliefs are performed or assessed in private?
A legitimate question. But one that Festinger doesn’t lead us to.
(The answer, of course, is internalization. Learning and evolutionary processes, internalized tastes and beliefs, have the property that they can be designed for one context, where they make sense, and yet measured in another where they are less functional.)
(A premise needed to make sense of *any* psych study. A premise psych would do better to take more seriously. But that’s a different topic.)
Motivated reasoning:
It’s true that everyone believes they are smarter and more attractive than they are.
And when faced with ambiguous evidence, those who are against the death penalty more readily accept the evidence showing it doesn’t deter than the evidence showing it does.
But the question is: what drives this bias?
The intuitive answer is: we believe what we want to be true.
That seems right.
We want to be smart and attractive. We want the evidence to support our political stance.
We definitely feel worse if we don’t believe we are smart and attractive, or when faced with irrefutable evidence against our political stance. We definitely conduct all sorts of mental and verbal gymnastics to try our best to justify our biased views.
But science doesn’t stop with the intuitive answer.
Science asks: is that a reasonable answer? Does that answer fit the facts? Does that answer actually explain the puzzling phenomena?
The problem is:
We don’t actually believe what we want to be true. This doesn’t explain the puzzling features of motivated reasoning. And a mind that could believe what it wants to be true just doesn’t make any sense.
Do any of us believe in hell?
Do we ever ruminate on the bad stuff, like when depressed after a breakup? Or when caught red handed and feeling guilty?
OK. So we don’t always believe what we want to be true. But when do we?
The motivated reasoning story can’t tell us that.
Do Republicans believe immigrants are *less* liable to be criminals? Or is this just, secretly, the universe they hope they live in?
Do liberals really want the gender pay gap to be bigger and Republicans want it to be smaller?
These are instances where motivated reasoning gets the bias in the wrong direction. People’s beliefs are biased in the very direction they wouldn’t want.
Motivated reasoning just gets this wrong.
Why is it that we are able to ignore disconfirmatory evidence, but are influenced by confirmatory evidence? (“Asymmetric updating”)
If we can ignore the bad stuff, why not just act as if we have the good stuff when we don’t?
That is, the fundamental puzzling feature of motivated reasoning is that we don’t just bias our beliefs upwards.
Instead, we bias them upwards in a particularly peculiar way—attending to good evidence and increasing our beliefs as much as that evidence justifies, while ignoring the negative evidence and not updating at all as a function of it.
Why? That’s *the* puzzle.
Why don’t we just attend to both positive and negative evidence, but add a fixed positive boost to our priors, independent of the evidence? Why not just ignore all evidence, but start with artificially inflated priors?
We don’t do this.
Instead we attend to positive evidence and ignore negative evidence.
Why?
That’s the puzzling phenomenon.
And believing what you want to be true doesn’t explain this at all.
It just feels intuitive. Oh right. That’s how we update our beliefs. Seems right.
True. But that’s not sciencing. Science requires explanations for intuitions.
And this intuition, the key feature of motivated reasoning, is left unexplained. Just intuited. And presumed.
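(To make the contrast concrete, here’s a minimal sketch. It’s my own illustration, not anything from the studies; the function names and numbers are invented, chosen only to show the structural difference between “update only on good news” and “start inflated, update on everything.”)

```python
# Toy illustration (numbers invented): two ways a flattering belief could end up inflated.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Standard Bayesian update of P(flattering hypothesis) given one observation."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Assume favorable evidence is more likely if the flattering hypothesis is true.
P_FAV_IF_TRUE, P_FAV_IF_FALSE = 0.8, 0.4

def symmetric_update(prior, favorable):
    """Unbiased updating: treat good and bad news the same way."""
    if favorable:
        return bayes_update(prior, P_FAV_IF_TRUE, P_FAV_IF_FALSE)
    return bayes_update(prior, 1 - P_FAV_IF_TRUE, 1 - P_FAV_IF_FALSE)

def asymmetric_update(prior, favorable):
    """The documented pattern: full update on good news, no update on bad news."""
    return symmetric_update(prior, True) if favorable else prior

def boosted_prior_update(prior, boost, favorable):
    """The simpler bias we *don't* see: inflate the prior once, then update symmetrically."""
    return symmetric_update(min(prior + boost, 1.0), favorable)

print(asymmetric_update(0.5, favorable=True))           # ~0.67: good news moves the belief
print(asymmetric_update(0.5, favorable=False))          # 0.50: bad news is simply ignored
print(boosted_prior_update(0.5, 0.2, favorable=False))  # ~0.44: an inflated prior still falls on bad news
```

“Believing what you want to be true” is equally consistent with either function. Only the first matches what we observe. Why?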
Other odd, and unexplained, features of motivated reasoning:
We search harder for confirmatory evidence than for disconfirmatory. Why?
Intuitively, because we are affected by the outcome of our search, and can easily ignore the extent of our search.
But why?
Why does the outcome of the search affect us more than its extent?
Again, intuitive. We do this.
But believing what you want to be true doesn’t actually explain this key feature.
One more:
Evidence that is putatively supportive of our desired beliefs is readily accepted as diagnostic, even if the evidence is cherry picked or p-hacked, or otherwise not particularly diagnostic.
Why?
Intuitively, we want this evidence to be supportive. So we lower our guard.
OK. But...
Why does wanting it to be true mean you don’t attend to its veracity, but only to what it putatively shows? Why attend to what it putatively shows, but not to its degree of informativeness?
Not explained. Feels right. But not explained.
The actual question, the question not addressed:
Why do our self-deceptive abilities have exactly these peculiar features?
(attending to positive but not negative, outcome but not extent of search, putative findings but not veracity)
That’s what we should be asking.
(As before, my take is that it’s no coincidence self-deception is influenced by exactly the features that matter for convincing others, and not so much by what is easier to hide from others. And is biased not by what we want to be true, but what we want others to think is true.)
(And that these biases still show up when there is no one else to persuade, and you are measuring my deeply held convictions, because of internalization. And that’s the right way to think about motivated reasoning. But that’s, again, a different conversation.)
And then there is the most fatal problem with the motivated reasoning story. The one psychologists should have realized on day one:
It just doesn’t make sense.
Who would design an agent that can choose its own beliefs?
That’s a recipe for an agent that isn’t motivated to improve its state of affairs, but only to improve its mental image of its state of affairs. <—a misinformed AND undermotivated agent.
Can you imagine writing a reinforcement learning algorithm that did this? One that, instead of being told to optimize expected payoffs, gets to dictate its own mapping from states to payoffs?
No.
This isn’t how you would code an optimal algorithm. It’s not a reasonable way to expect evolution to code human minds.
It doesn’t pass the sniff test.
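(Here’s a toy sketch of the point. The states, payoffs, and class names are invented for illustration; it’s not a real training loop, just the payoff-assignment step that goes wrong.)

```python
# Toy illustration (states and payoffs invented): what happens if an agent
# can dictate its own mapping from states to payoffs.

TRUE_PAYOFF = {"bad_state": 0.0, "good_state": 1.0}  # what the designer / evolution cares about

class FixedPayoffAgent:
    """Standard setup: the payoff mapping is fixed, so the only way to
    feel better is to actually reach a better state."""
    def reward(self, state):
        return TRUE_PAYOFF[state]

class ChooseYourOwnBeliefsAgent:
    """The agent the motivated-reasoning story implies: it can relabel
    payoffs, so it maximizes felt reward while its state never improves."""
    def __init__(self):
        self.believed_payoff = dict(TRUE_PAYOFF)

    def reward(self, state):
        self.believed_payoff[state] = 1.0  # "believe what you want to be true"
        return self.believed_payoff[state]

agent = ChooseYourOwnBeliefsAgent()
print(agent.reward("bad_state"))  # 1.0: feels great
print(TRUE_PAYOFF["bad_state"])   # 0.0: the actual payoff is unchanged
```

Trained against its own relabeled rewards, the second agent has no reason to ever leave the bad state. That’s the misinformed AND undermotivated agent from above.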
(Of course, this would be a reasonable way to design an agent, IF the agent is allowed to perturb its beliefs in exactly the way that optimizes its ability to persuade others. Cause that’s actually a goal evolution would want to instill in us.)
(Persuasion, unlike self-esteem maintenance, is actually a goal that evolution would care to instill in us. Actually a goal that’s reasonable to posit our minds would be designed to pursue.)
Continuing now w/ (ineffective) altruism.
The phenomena:
-We give the same to save 2 birds or 2,000.
-It doesn’t matter to us if donation is matched, or tripled.
-We donate hundreds of thousands to help a newsmaking child in a well, while ignoring millions starving in Africa
Why?
The explanation goes something like this:
“Our altruism is driven by empathy. And by a mental simulation of the plight that needs to be addressed.
And such emotions and simulations are constrained and impaired; they ignore numbers and magnitude. And attend to cute, local, and salient.”
Again, that all seems true. And quite intuitive. And easy to verify in clean, controlled lab experiments...
We do mentally simulate others’ plight. We do feel empathy.
This does shape whether or how much we give.
Empathy and mental simulations do work in funky ways, like being insensitive to the scope of the problem or the impact we might have.
But is this *the explanation* for ineffective altruism?
The problem w/ this explanation is, once again, it doesn’t fit the facts, explain the puzzle, or make much sense.
It doesn’t make much sense b/c evolution would never *want* to instill in us preferences that optimize outcomes for others (unless those others share our genes).
No more so than it would instill in us indiscriminate sexual attractions.
That’s just not the kinda tastes that evolve.
(And if evolution *did* wish us to be effective altruists, would we really be *this* bad at it?
Evolution really can’t develop a more effective method than empathy and the representativeness heuristic?)
The other fundamental issue w/ this cognitive/emotional constraint story:
We *are* perfectly capable of ramping up and down our empathy, or using other mental procedures, when we need to be.
(In fact, this is a common non-sequitur in the social psych lit: Just cause our thoughts or feelings *have* a funky feature, doesn’t mean that feature is the *cause* of the funky behavior. Depends how flexible that feature is. And whether it’s there for a reason.)
If we were *constrained* by the representativeness heuristic:
Wouldn’t we be *as* insensitive to impact when giving to family members? (Are we?)
Or when our legacy depends on impact? (Is Gates? Was Rockefeller? Are the Kochs?)
Are we *as* insensitive to numbers when we are not making charitable decisions, but deciding where to go for dinner? Or how much to put in savings?
In all these instances, we are, perhaps, *somewhat* bad with numbers.
But at least we try. We put in the effort. We simulate the consequences.
We don’t just apply the representativeness heuristic and be done with it.
In *these* instances, we aren’t *completely* insensitive to numbers/scope/impact.
As we (typically) are with charitable giving.
The constraint story gets all these wrong.
(Presumably because in *these* instances we *actually* have a reason to care about impact.)
Do we *have to* feel empathy for young, innocent-looking, local, victims?
Does empathy *necessarily* kick in in such instances?
Consider the Einsatzgruppen, tasked with shooting Jewish civilians in Eastern Europe.
Did these Jews not evoke empathy?
They sure did.
Which is why, at first, the would-be killers missed their shots, found excuses to be absent, or threw up on the job.
(See the book “Ordinary Men.”)
But then the killers learned tricks to avoid these thoughts and feelings, like getting drunk, and not looking their victims in the face.
And thereby, these otherwise empathy-inducing victims *stopped* inducing empathy.
The constrained-empathy story misses this adjustment.
Do we feel empathy for the homeless man we pass on the street?
Sometimes.
But we have choices. We can avoid looking them in the eye. We can cross the street. We can imagine their lives growing up, or we can focus on our upcoming meeting.
(Bethany Burum has some nice experiments showing we find a way to feel less empathy, when we are not being observed giving, or when giving would cost us more.)
(To the extent empathy is a mechanism, endogenously determined according to the incentives we face—like what the norm is, or what kinda inference others will draw from our behavior—this all makes sense.
But not if we think of empathy as a fixed constraint on our mind.)
The empathy and representativeness-heuristic stories just don’t tell us when we will get drunk to tone down our emotions, avoid looking the beggar in the eye, or apply a more thorough procedure than the first simulation that comes to mind.
The empathy story doesn’t tell us why empathy was built to have these biases, or when it’s built with these biases. (Is our empathy toward our kids insensitive to scope?)
(It just “pushes back”; it gets us to forget this issue.)
The representativeness story doesn’t tell us why (and when) we simulate a single randomly selected victim versus a population of victims.
(It just pushes back, or prevents us from noticing this question.)
Once again:
Psychologists documented a really cool/important phenomenon.
And offered an explanation that’s intuitive and well documented in clean, controlled lab experiments.
Just wrong.
Wrong because it doesn’t make sense, mispredicts when, and doesn’t explain why.