Moshe Hoffman @Moshe_Hoffman
Now a thread summarizing my criticism of Behavioral and Experimental Economics (and their contributions).
Again, the good stuff to start:

Experimental econ has developed some really cool methods. Like all those games. Great way to measure, and see what pushes around, say, social preferences. Really useful. Nice contribution.
And documented some cool effects. Like how giving (as measured by, eg, behavior in the dictator game) is affected by observability, plausible deniability, and framing.
And behavioral econ has also made an incredibly valuable contribution to science.

Like Thaler’s original AER column.

And all the important field studies, since, showing how much of this stuff matters in real econ/market settings.
It’s def important to know where the canonical model is oversimplified or goes astray. And to have a good sense of which psychological quirks will influence decision making and markets. And how to utilize this info to “nudge.”

Behavioral econ nailed this.

That’s great science.
The problem comes when...

-behavioral economists try to model preferences.

-experimental economists try to test models.

Those are (often) quite silly. And misleading. And confusing. They lead a generation of researchers astray. And prevent science from progressing.
(Not *all* behavioral models. Some, like beta-delta discounting, make sense. That’s just what you get when you mesh a lizard brain that makes immediate, gut-level decisions w/ an ape brain that is built for planning. The problem arises when the models are less grounded in sense.)
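(For concreteness, a standard way to write the beta-delta, aka quasi-hyperbolic, form:

$$U_t = u(c_t) + \beta \sum_{k=1}^{\infty} \delta^{k}\, u(c_{t+k}), \qquad 0 < \beta \le 1,\quad 0 < \delta < 1.$$

The $\delta$ part is ordinary exponential discounting, the planning ape brain; the single extra $\beta$ knocks down everything that isn’t “right now,” the impulsive lizard brain. That two-system reading is why the functional form is grounded in sense.)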
(And not all experimental tests. Again, it depends how ill-conceived the model being “tested” is. And how much thought goes into “is this actually what the model predicts?” Thought that is all too often absent.)
Some examples of the problem w/ behavioral models:

-models of “self-signaling” and “motivated beliefs”

-models of pro-social preferences, like “warm glow”, “reciprocity” preferences, or “inequity aversion”

-models of “identity”
The problem in all these cases is that the economist is fitting an (intuitively appealing) utility function to an (interesting and well documented) behavioral phenomenon.

But w/o having a sense of what’s driving the phenomenon.
It’s as if the economist saw people watching porn, knew nothing about sexual selection (or presumed it was irrelevant because porn is watched solo), and wrote down a utility function that kinda seems right to describe porn preferences.

A confused and highly problematic exercise.
Leading to all sorts of issues like:

-begging the original question

(B&T’s model of motivated reasoning “explains” asymmetric updating by presuming people are Bayesian, but only for confirmatory evidence. Assuming exactly what they are purporting to explain! A toy sketch of this circularity follows at the end of this list.)
-completely missing boundary conditions and moderators

(“Warm glow” preferences for giving tell you *nothing* about when we will feel warm glow, or what it will depend on.)
-Leading to non-sequitur debates

(“Are prosocial preferences better described by ‘warm glow’ or ‘inequity aversion’?” Clearly depends on which preferences are being measured in which contexts. Obviously.)
(Akin to asking: “Are consumer preferences better described by liking cold things or shiny things?” Kinda depends, no?)
-And prevents us from asking the real questions, like where our prosocial preferences come from

(which imo is the only question that can get us to a sense of what shape they are liable to have, and in which contexts.)
-Not to mention adding confusion by positing completely absurd assumptions.
(People of course would never evolve a general “preference” against inequity. We of course wouldn’t evolve to be consistent w/ an identity *for its own sake.* And would never develop minds that pay costs to signal, and misinform, the self.)
(None of that makes *any* evolutionary sense. So *can’t* be the right answer. Can only mislead and confuse.)
-And such models don’t actually yield novel (& true!) predictions.

(I haven’t seen one yet. Have you?)
(Other than the “predictions” used to generate the model, and shared w/ our intuitions. Of course not. B/c any prediction built off the *structural form* used to formalize a mere intuition *has to be* wrong. And that’s all the model adds, beyond the original intuition.)
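Here is the toy sketch of that circularity, promised above. It is an illustration of the critique, not B&T’s actual model: an agent who does textbook Bayesian updating on good news but, by assumption, only sometimes heeds bad news. The asymmetry you put in is exactly the asymmetry you get out.

```python
import random

def bayes_update(prior, likelihood_ratio):
    # Textbook Bayesian update of P(H), given a signal with likelihood ratio
    # L = P(signal | H) / P(signal | not H).
    prior = min(max(prior, 1e-12), 1 - 1e-12)  # guard against saturating at 0 or 1
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def motivated_update(prior, likelihood_ratio, p_heed_bad_news=0.3):
    # "Motivated" updating in the spirit of the critique: full Bayes on
    # confirmatory signals, but disconfirming ones are heeded only with
    # probability p_heed_bad_news. The asymmetry is assumed outright.
    if likelihood_ratio >= 1 or random.random() < p_heed_bad_news:
        return bayes_update(prior, likelihood_ratio)
    return prior  # bad news quietly dropped

random.seed(1)
# Suppose the flattering hypothesis H is in fact false, so good news
# (LR = 2) arrives only a third of the time and bad news (LR = 0.5)
# arrives the other two thirds. Both agents see the same signals.
bayesian, motivated = 0.5, 0.5
for _ in range(300):
    lr = 2.0 if random.random() < 1/3 else 0.5
    bayesian = bayes_update(bayesian, lr)
    motivated = motivated_update(motivated, lr)

print(f"even-handed Bayesian: P(H) = {bayesian:.3f}")   # tracks the (unflattering) truth
print(f"motivated updater:    P(H) = {motivated:.3f}")  # talks itself into H anyway
```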
As for experimental econ:

The biggest issue I see is that researchers attempt to “test” models very very literally.
They might ask:

“Is cheap talk the right model for communication with partially aligned payoffs and costless messages?

I dunno.

Let’s check by having people play a game with partially aligned preferences and costless messages.”
The problem with this way of thinking is that it *only* tests whether people play Nash in this particular game in this particular (novel, abstract, unusual) setting.

Which isn’t *really* the right question.
The *right* question is:

Does cheap talk appropriately capture insights about how we *actually* communicate (irl)? *Why* we are sometimes purposely vague (irl)? *When* we tend to be vague (irl)?
(I.e.: does language evolve, or social instincts emerge, consistent with the cheap talk model?)
The model is *meant* to teach us something about *that.*

Social instincts, language evolution.

Not players literally playing that abstract payoff matrix and information structure.
Experimental economists are (often) confusing the purpose of experiments:

The goal is not to understand the pipet and Bunsen burner.

The goal is to use the pipet and Bunsen burner to understand how chemistry works.
Is that to say lab experiments are useless?

Of course not. No more so than saying pipets and Bunsen burners are useless.

Just don’t confuse them for the things we are interested in.
Can we test social instincts and theories of language evolution in lab experiments?

Most definitely.

Just not by having people play a literal cheap talk game.
(Instead, eg, we could check for comparative statics, i.e. qualitative predictions, wrt our intuitions about language across social settings. *That* would make sense. That might allow us to *legitimately* test the model... Admittedly, doing this well is hard. And not always obvious how.)
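To make “comparative statics” concrete, here is one sketch of the kind of qualitative prediction the model does make, using the textbook uniform-quadratic Crawford-Sobel game (my choice of illustration, nothing more): as the sender’s interests drift away from the receiver’s, the most informative equilibrium supports fewer and fewer distinct messages. More conflict of interest, vaguer talk. *That* is the sort of pattern you could go look for in how people actually communicate.

```python
import math

def max_messages(b):
    # Size of the most informative equilibrium in the uniform-quadratic
    # Crawford-Sobel cheap-talk game with sender bias b > 0: the largest N
    # such that 2*N*(N-1)*b < 1.
    return math.ceil(-0.5 + 0.5 * math.sqrt(1 + 2.0 / b))

def cutoffs(b):
    # Partition boundaries of that equilibrium: a_i = i/N + 2*b*i*(i - N).
    N = max_messages(b)
    return [round(i / N + 2 * b * i * (i - N), 3) for i in range(N + 1)]

# Comparative static: more misaligned interests -> coarser, vaguer messages.
for b in [0.001, 0.01, 0.05, 0.1, 0.25]:
    print(f"bias b = {b:<5}: at most {max_messages(b)} distinct messages, "
          f"cutoffs {cutoffs(b)}")
```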
A final example:

The ultimatum game is great at testing what angers us, what we find unfair, whether we are willing to punish mistreatment.

Eg, we’re more likely to punish when our behavior is observed.

Great. Useful.
Such anti-social preferences are interesting and worth studying.

But we shouldn’t forget that they developed (presumably?) in repeated interactions and (like everything else in life) got internalized, and so “spill over” into behavior in anonymous, one-shot lab experiments.
Ultimatum experiments can teach us *about* anti-social preferences.

And maybe when or how they “spill over.”

Interesting. Useful.
But ultimatum experiments *don’t* teach us that people are “irrational.” Or that Nash/subgame perfection are “wrong.”

Such experiments only show *that* if we are thinking about rationality, Nash, and sgp naively. And literally.
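For reference, here is what the naive, literal reading predicts: a backward-induction sketch of a one-shot ultimatum game with purely self-interested players, on a toy $10 pie in $1 steps (my discretization, just for illustration). The familiar lab pattern, offers near half the pie and frequent rejection of low offers, is what then gets (wrongly) read as refuting subgame perfection itself.

```python
def literal_spe(pie=10, unit=1):
    # Backward induction in a discretized one-shot ultimatum game,
    # assuming both players care only about their own payoff.
    # Responder: accepts any positive offer (rejection pays 0 for both).
    # Proposer: therefore offers the smallest positive unit and keeps the rest.
    acceptable = [offer for offer in range(0, pie + 1, unit) if offer >= unit]
    offer = min(acceptable)
    return offer, pie - offer

offer, kept = literal_spe()
print(f"Literal subgame-perfect prediction: offer {offer}, keep {kept}")
# Prints: offer 1, keep 9. Actual lab behavior departs from this, but that
# only "refutes" subgame perfection if you read the model this naively.
```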
Such a naive “test” of subgame perfection would be no better than “testing” sexual selection by observing porn consumption...
But instead of noticing all the patterns in the porn people watch and how well that fits (or doesn’t) with the sexual tastes that are likely to evolve...

They (wrongly) conclude that sexual selection is wrong, b/c people get aroused by images they can’t reproduce with!
That’s not the right way to interpret porn consumption.

Likewise: the wrong way to interpret rejections in the ultimatum game is as proof that subgame perfection is irrelevant to human behavior.
That is the wrong way to think about subgame perfection. And human behavior.

The wrong way to interpret experiments. The wrong way to test models.

/eom