Why do I think our preferences and ideologies respond to incentives?

A thread.
A few types of evidence I’ll summarize:

-evidence for real time responsiveness
-explains o/w puzzling features
-fits individual/cultural variation
-offers unifying explanation for broad swaths of observed preferences/ideologies
-jibes w/ what we know bout how learning works
But most importantly, I’ll argue:

Can’t really understand what’s going on w/o thinking this way.
By preferences & ideologies I mean:

-what principles we hold
-what we are passionate about
-when and how we care bout others, or bout public goods like the environment
-whether we believe in climate change, or want guns to be regulated
-whether we think everyone is created equal
By incentives I mean
-making $
-being liked
-developing a legacy
-not getting beaten up
-mating opportunities

The stuff we *evolved to like*. The stuff that, in turn, *acts as a reinforcer* in learning processes. Determining what new things we learn to like.
Evidence our preferences/ideologies adjust, in real time, to fit our incentives:

I’ll cover a few examples:
-empathy
-taste for luxury goods
-“tznius” clothes. (How Orthodox Jewish women dress.)
I’ll start with empathy.

Empathy is just something we feel, right? A fixed part of our thoughts and feelings. Not itself susceptible to incentives? Right?
Not so:

-when being observed giving to charity, people not only give more, they feel more empathy.

-when helping is more costly, they not only help less, they feel less empathy.
Also, think about the Einsatzgruppen (Germans tasked with shooting Jews in the forests of Poland in WW2).

Were these people naturally selected b/c less empathetic? Or did they learn ways to be less empathetic, when their job (and financial incentives) demanded such?
The book “Ordinary Men” argues the latter. Here are some tricks the Einsatzgruppen developed to help suppress their empathy:

-drink more.
-don’t walk Jews to the forest one-on-one; take them in groups.
-don’t look them in the eyes when shooting; shoot in the back of the head.
-hire locals to do the rounding up.
The Einsatzgruppen felt empathy, at first. That’s why, at first, they were prone to:
-claim to be too sick to work
-hide behind trucks or wander off
-throw up
-ask for permission not to shoot

It took time till they learned how to handle the task. How to adjust to their newfound incentives for genocide.
Now onto our sense of aesthetics.
Obviously a lot of things are naturally beautiful or tasty.

Like paintings of fertile women. Or calorific donuts.
But I also like wooden antiques with squiggly lines that show the wood was slowly “worked” by worms over centuries.
I wasn’t born liking such antiques. Other wealthy, “classy” people I knew liked them.

And they explained to me why.

I started liking it too.

Of course I did.

I also have an incentive to be seen as tasteful.
Presumably it’s the same with the smelly, but expensive, cheeses I like.

Or the Rothkos I enjoy looking at.

Or the jewelry or handbags I find elegant.
None seemed tasty, or beautiful, or elegant, until I learned that’s what classy people think. Coincidence that I started liking them immediately thereafter? Especially as I started to be able to afford them?
OK one more example. (On “tznius” clothing.)

This one about religious beliefs adjusting, in real time, to the incentives we face.
I have a friend (actually more than one) who was a fairly religious Jew.

The thing is she was in her twenties. And not yet married. And Yahweh doesn’t like premarital sex. And “revealing” clothes. Which makes being single in your twenties a bit harder.
Lucky for her though, Yahweh’s views on “tznius” (Orthodox Jewish regulations of women’s dress and sexual behavior) have become much more lax in the past few years.
It’s not that Yahweh doesn’t allow her to behave the way she wants, and she just ignores Yahweh.

No.

What Yahweh wants (or her beliefs of such) have shifted.
And lucky for her, the Bible, and Jewish theology, are flexible and ambiguous enough to allow for her newfound theological stance.

That’s what I mean when I say our ideology adjusts according to our incentives.
Next let me talk about how this incentive view jibes with what we know bout how learning (and other adaptive processes) work.
Which adaptive processes? Here are 3:

Reinforcement learning. Social imitation. And the not-so-aptly-named: act selfishly, find a way to justify it, and internalize that justification.
The second is what people like Joe Henrich and Robert Boyd talk a lot about (check out each of their most recent books for great summaries).
The key feature of social imitation is that we are *more* likely to imitate those who are successful (especially if they are like us, and their success is closest to the domain we are imitating).
Why? That way, whatever caused them to be successful we are liable to pick up, and it is likely to work for us.
But, crucially, that means that anything that yields success is likely to get imitated. “Yields success” is another name for the list of things I have been calling “incentives”. That is, we are more prone to imitate those people who have high prestige, $, sex, etc.
Obviously this story about (success-biased) social imitation is well supported by empirical evidence (there is a huge lit on this) and grounded in evolutionary biology (we clearly evolved to do this b/c, umm, it’s a good way to adapt and get all these goodies evo wants us to get).
(Think bout all the pop stars whose sense of fashion the youngsters are liable to imitate.

Or the high status rabbis or professors or wise gurus everyone wants advice from.)
Here’s one study from that lit:

Ask a bunch of agricultural peoples who they learned their farming techniques from. And measure everyone’s yield. The farmers with higher yields got imitated more. Especially when the crops were more similar.
Here’s another study from this lit:

Show a bunch of toddlers an old dude turning on a lightbulb with his head. The toddlers copy him. But only if his hands are not occupied and he isn’t otherwise incompetent, like wearing shoes on his hands.
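If it helps to see that imitation logic run, here’s a minimal toy simulation of success-biased imitation. (Entirely my own illustration; the traits and payoffs are made up, not taken from the farming study.)

```python
import random

# Hypothetical payoffs: the "incentives" (yield, $, prestige) each trait earns.
PAYOFF = {"old_technique": 1.0, "new_technique": 3.0}

def imitate(population):
    # Each agent copies a role model chosen with probability
    # proportional to the model's success: success-biased imitation.
    weights = [PAYOFF[trait] for trait in population]
    return [random.choices(population, weights=weights)[0] for _ in population]

pop = ["old_technique"] * 95 + ["new_technique"] * 5
for _ in range(20):
    pop = imitate(pop)

print(pop.count("new_technique"), "of", len(pop))  # the high-payoff trait typically takes over
```

No agent understands why the new technique is better. Whatever pays off just gets copied.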
Reinforcement learning works differently. Instead of learning from others, it’s how we learn from our own experience. Through trial and error.
But it shares the same key feature: any behavior (or the ideologies or tastes that correspond with such actions) that yields successful outcomes (where success, again, is anything on my list at the beginning of the thread) gets repeated more frequently. We develop tastes/ideals that work.
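Here’s a bare-bones sketch of that reinforcement logic. (A standard epsilon-greedy value learner; the actions and reward numbers are invented to echo the empathy examples above, not taken from any study.)

```python
import random

# Invented rewards: what each behavior happens to earn the learner.
REWARD = {"give_when_watched": 1.0, "give_unwatched": -0.2, "keep_money": 0.0}
value = {action: 0.0 for action in REWARD}  # the learned "taste" for each action
alpha, epsilon = 0.1, 0.1                   # learning rate, exploration rate

for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice(list(REWARD))  # occasionally explore
    else:
        action = max(value, key=value.get)    # usually do what you've come to "like"
    value[action] += alpha * (REWARD[action] - value[action])  # standard value update

print(value)  # "give_when_watched" ends up with the highest learned value
```

The learner never reasons about observability. It just ends up “liking” the behavior that got reinforced.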
Of course neither RL nor social learning is a blank-slate, dumb process. They evolved to be quite sophisticated, to utilize information that might be pertinent.
Eg we readily learn to dislike foods that make us sick, and to avoid touching things that shock us. But we are slower to learn to dislike tastes that are followed by shocks, or touches that are followed by nausea.

It’s also really hard to learn to like spiders and snakes.
But we still learn. Quite fast. A lot of stuff. That’s quite functional. Even w/o conscious comprehension. (Which is often where tastes and ideologies come in.)
Henrich talks a lot bout corn, which Native Americans knew needed to be treated with an alkali to release the niacin needed for a complete diet.
When colonists came to America and brought the corn back to Europe, w/o understanding the chemistry, and crucially w/o the culturally evolved cuisines, and corresponding tastes, they didn’t think to add an alkali, like lime or ash. So they got pellagra. A lot of people died.
That’s a case where social learning, and the corresponding cuisines and tastes, do better than our conscious awareness or causal understanding.

Here’s another: try living in the snows of Alaska, as the natives did.
Do you know how to make an igloo? W/ your lofty physics and engineering degrees? It’s not hard if you know the tricks. Like how to cut and stack the blocks at the right angles, or to put the entrance slightly below ground level to create the right pressure differential.
But this is not something you could figure out (w/o internet access). How do the natives figure it out? They don’t know the physics. They just learned the tricks that work. Those tricks culturally evolved. Successive generations of success-biased imitation.
The natives have an incentive to stay warm. They found a way. They have an incentive to have the right balance of amino acids. They learned to like the corresponding way of preparing corn. No conscious choice or uber-rational homo-economicus needed.
Another cool paper:

Why do Indians like spicy food? Spices, especially on meats, especially in the kind of combinations seen in their recipes, added to the meats at the right times, kill bacteria. Which is crucial if you wanna serve meats in warm climates pre-refrigeration.
That is, their preference for spiciness is adaptive. W/o any conscious awareness, it serves a function. They have an incentive to spice their food. So they learn to like spicy foods.
(Naturally, when any learning or evolutionary process is in play, there are time lags, such that Indians who move to the US or own refrigerators still might like spicy food. At least for a while.)
One more cool study: the Fijians have a taboo against eating certain types of fish while pregnant.
A taboo. When asked, they may justify it w/ something like their kids will come out smelly if they violate it. And when asked, they’ll tell you they learned which foods to avoid from other moms. Or someone wise and elderly in the community.
They won’t mention, cause they have no clue, anything about mercury or the other medical reasons we know not to eat these foods while pregnant, when our bodies are immune-suppressed so a fetus can grow.

Their beliefs, and corresponding taboos, serve a function. They don’t need to be aware of it.
(And as with spicy food, learning processes are not perfectly fine-tuned. They sometimes avoid foods that are perfectly fine to eat. W/o a full causal model, it’s easy to overapply or underapply what you learn. Learned tastes are functional. But scattershot.)
Doing what’s sensible, and then justifying, and internalizing the justification:

There’s the classic moral dumbfounding studies. (See Haidt)
Consider a brother and sister who copulate, using two forms of contraception, only once, telling no one about it.

Immoral? Gross? Yup.
But why? The obvious explanation is we evolved to be disgusted by incest (and to enforce norms re incest on others).

But most people don’t know this. They just find it gross.
But we like to have reasons. And be able to justify our morals. So we make up contorted arguments. Like maybe, just maybe, both forms of contraception will fail. Or someone will find out and imitate them, even though we told you no one will ever know and it won’t happen again.
Another case where people do what’s in their interest and then find a way to justify: pretty much everything the GOP has done in the past five years. Watch Fox News. They are an act-selfishly-and-find-a-way-to-justify machine.
Is it internalized? Do they really believe the ideals they use to justify? More or less. Maybe some know it is bs.
But I bet a lot of republican congressmen really believe giving a trillion dollars to the rich will help out the poor. And ignore the fact that the rich they are helping are funding their campaigns. I bet (some) read Ayn Rand and really think that’s the right ethical system.
Likewise, Martin Luther (not King Jr.) happened to have exactly the theology that perfectly fit the incentives he faced (see this thread).

Does that mean he didn’t really believe that theology? No. He seems quite genuine in his beliefs.
What likely happened: he tried out some theological arguments. The ones that seemed cogent, got him followers, and that his protector liked, well, I bet he kept discussing those, looked for more biblical backing for them, and thought less about the others.
Before long he was *sure* that’s the only theology consistent with the Bible. And willing to die for those beliefs.

That’s what we do. We form beliefs that fit our incentives. And then really come to believe them. (Where else would they come from? Facts and logic? Hah.)
Now let me discuss some o/w inexplicable features of beliefs/preferences, and how they are easily explicable w/ incentives.

(Part four of thread?)
The puzzles I’ll cover:

-altruism: plausible deniability, scope neglect.

-motivated reasoning: asymmetric updating, confirmation bias.

-passions: crowding out, flow, grit.

-principles: foot in door technique, cognitive dissonance.

(I’ll also summarize each.)
The basic incentives I’ll use:

-altruism: norm enforcement

-motivated reasoning: optimal persuasion w/ private information.

-passions: optimal human capital investment.

-principles: reputational capital.

(I’ll explain what I mean.)
Altruism. It has all these quirky features that behavioral scientists have done a nice job documenting.
Scope neglect:

For instance, people give the same to charity regardless of whether their donations are matched 1:1, 2:1, or 3:1. Which tells you they don’t *really* care about impact. (As any “effective altruist” will confirm.) Why not?

(See Karlan and List 2006)
Behavioral scientists will tell you, as they are prone to, that this is because of some kind of cognitive limitation; we are bad with numbers and can at best simulate a representative example.
But that doesn’t explain why we are quite sensitive to numbers in other domains, eg private financial decisions, where we are quite responsive to interest rates. Or when giving to family, where we very much think bout impact. (Bethany Burum has some nice studies on this.)
But if you think of our sense of altruism stemming from social rewards (like avoiding peer sanctioning, or gaining peer esteem), then this puzzling feature of altruism is easily explained.
Namely, it’s harder for our peers to know impact. Easier for them to tell if we give. And even when they happen to know the former, it’s not commonly known, which is crucial if their motive for sanctioning/rewarding your good deed is to accrue kudos themselves.
That is, scope insensitivity is easily reconciled with any norm enforcement model, where good behavior is enforced in equilibrium via “third party punishment” (bystanders sanction noncompliance) and “higher order punishment” (bystanders who don’t sanction are themselves susceptible to sanctioning).
Norm enforcement, because of the reliance on “higher order punishment,” is heavily dependent on norm violations being observable and commonly known. Which will explain many of the quirky features of our sense of altruism. Not just scope neglect.
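Here’s a back-of-envelope version of that observability logic. (The numbers are purely illustrative; I picked them, they aren’t estimates from any study.)

```python
# Net gain from shirking a norm = cost saved minus expected sanction.
# Sanctions bite mainly when the violation is observed AND commonly known,
# since bystanders' own (subconscious) motive to punish runs through
# kudos and higher-order punishment.

def net_gain_from_shirking(p_observed, p_common_knowledge, sanction, saved_cost):
    expected_sanction = p_observed * p_common_knowledge * sanction
    return saved_cost - expected_sanction

# Not giving at all: highly visible and commonly known. Negative: comply.
print(round(net_gain_from_shirking(0.9, 0.9, sanction=10, saved_cost=5), 2))  # -3.1
# Ignoring the match ratio (impact): hard for peers to see or verify. Positive: shirk.
print(round(net_gain_from_shirking(0.2, 0.1, sanction=10, saved_cost=5), 2))  # 4.8
```

Same selfishness in both cases. But only the observable, commonly known kind gets priced.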
Plausible deniability:

Why does our sense of altruism depend on whether we have a plausible justification for not being pro-social?
For instance, Daylian Cain et al have a nice study where people can pay to avoid playing a dictator game. Still selfish. But maybe less noticeable.
A real-world analog is a study by Andreoni et al where solicitors stand outside a grocery store asking for charity. People give if confronted, but leave by a side door if they can.
Hard to reconcile w/ the naive way of thinking bout altruism as giving b/c we care. Easy to reconcile w/ giving when it would be hard to come up w/ an excuse (I didn’t see you!) for not giving.
And excuses like that make it harder to sanction, b/c not only is the sanctioner less sure that the selfish behavior was purposeful, but she is less sure others are sure, which is crucial if her (subconscious) motive is avoiding “higher order sanctioning.”
Jason Dana et al have a nice study of “strategic ignorance,” where subjects in a lab experiment can choose between two options: one they know gives them more $, but they don’t know which option gives the other player less.
If they are told, they often choose the one that benefits the other player, even if it harms them. They are pro-social when they are in the know.
But they prefer not to know. If it is up to them to check which option benefits the other, they don’t check, and choose option that’s best for themselves. They are strategically ignorant.

Pro social when in the know. But prefer to be selfish and not know.
Again, this is hard to reconcile with what we claim motivates us to do good. Like any good puzzle, it points out a flaw with the traditional way of thinking, and points to the need for a theory that can explain what we know but also fit the puzzle.
The norm enforcement story again does this with ease.

It depends, recall, on observability and common knowledge.
Strategic ignorance, unlike knowingly behaving selfishly, is harder to identify as intentional, and even when known, it’s not commonly known, reducing the motive to sanction. So if we wanna be selfish, strategic ignorance is a safer way to do so.
A lot more about altruism can be explained with this norm enforcement story. (The omission-commission distinction, means-byproduct distinction, the effect of framing and descriptive norming, conditional cooperation...)
But hopefully you get the point that it helps to think about the incentives at play. A simple incentive or two (avoiding sanctioning, which in turn is motivated recursively by higher order sanctioning) can explain many of the quirky features.
(And of course the incentives used to post-hoc explain, can be checked. Do people in fact penalize non-compliance? Do people penalize non-penalizing of non-compliance? The answer, cross culturally, among toddlers, in lab experiments, in everyday life, is a definitive yes.)
(And of course, once one has a post-hoc incentive explanation, one can check if it gives the right “comparative static” prediction. Do the quirks go away when these incentives are more or less at play? Yes.)
(You are less strategically ignorant, more scope sensitive, when deciding how to treat kin. Because altruism toward kin isn’t, primarily, motivated by norm enforcement. It’s kin selection. That’s what Bethany’s studies show.)
(And when you are making decisions not in the pro-social domain, but about, say, savings, these kinds of quirks also disappear. Which rules out the idea that these quirks are driven by cognitive limitations, as many a psychologist would have you think. Again, as B’s studies verify.)
Moving on to motivated reasoning: the tricks we use to fool ourselves into biased beliefs, like that we are prettier and smarter than we in fact are. Or that our research is better supported by evidence than it in fact is.

Again let’s focus on the puzzling features of m.r.
(But also make sure that the incentives we use to explain those puzzling features can explain the broad swath of biased beliefs we observe. And can be verified as in play in these instances. And can help us predict when m.r. will be weaker or stronger.)
So here’s one puzzling feature of m.r.:

People incorporate supportive evidence, but ignore non-supportive evidence. (“Asymmetric updating”)
As much as this is discussed in the behavioral economics and social psych lit, it is *never* explained.

Just presumed. It’s presumed that if we are gonna fool ourselves, this is the natural way to do it. But that’s false.

It’s intuitive b/c it’s in fact what we do. But puzzling.
Think about it: why don’t we make up supportive evidence as easily as we ignore non-supportive evidence? Why don’t we pay attention to all evidence but just add a fixed boost to the belief we want, irrespective of the positive and negative evidence?
Why would our bias interact with the *type* of evidence observed?

That’s a quirky feature of motivated reasoning. This asymmetry. Noticed. Well documented. But unexplained.
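To see the asymmetry concretely, here’s a toy contrast between an even-handed belief updater and one that quietly drops unfavorable evidence. (My own illustration; the update rule and numbers are made up.)

```python
import random

def update(belief, signal, weight=0.1):
    # Nudge the belief toward the observed signal.
    return belief + weight * (signal - belief)

random.seed(0)
honest, motivated = 0.5, 0.5
for _ in range(200):
    signal = 1.0 if random.random() < 0.4 else 0.0  # true favorable rate: 40%
    honest = update(honest, signal)
    if signal == 1.0:
        motivated = update(motivated, signal)  # incorporate the good news...
    # ...and simply fail to "collect" the bad news: omission, not fabrication

print(round(honest, 2), round(motivated, 2))  # honest lands near 0.4; motivated drifts toward 1.0
```

Note the motivated updater never fabricates a single data point. It just doesn’t go looking.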
So Trivers and von Hippel describe the key incentive at play in m.r.

We are just trying to persuade others. Our false beliefs are just an internalization of what we want others to think we believe.
(Why internalize? There might not be much reason to keep track of the truth. And there is a strong reason to avoid giving up your game, to the extent others get cues as to what you believe.)
This key incentive obviously fits the direction of our bias. (We certainly want others to believe we are more attractive or smarter. We also want them to think the evidence supports our research.)
(In fact, the direction of our bias in some instances looks different according to this story than in the standard social psych story. And this story gets those right. As when Republicans are asked how dangerous illegal immigrants are. But that’s another discussion.)
But can this incentive story explain the puzzling feature? The asymmetry?
Well, which will help your persuasion game more, making up confirmatory evidence, or not collecting or sharing disconfirmatory evidence?

That’s obvious. The latter is a way to cheat that’s harder for others to detect and penalize.
No surprise then that when we are persuading others, we are less liable to lie by commission than by omission. Which is definitely what we observe. (We also observe correspondingly higher punishments for getting caught doing the former.)
But then it’s not a big surprise that our internalized beliefs evince this same asymmetry.
Notice, crucially, that this asymmetry requires that one type of lie is more likely to get caught and punished.

Which makes a lot of sense when we think bout lying to others.
If m.r. were about lying to the self (an odd premise to begin with, but what social psych often posits), you have no reason to presume anything about the chances of getting “caught” or “punished.” So you can’t explain this asymmetry. You are stuck just saying it’s intuitive.
Another quirk that only makes sense if you think about internalization of incentives: in this case, the incentive to persuade, and not get caught lying.
Let me jump forward to passions. And some of the quirks therein.

(Again see this thread for more details: )
Some of the puzzling features of our passions (aside from the fact that they are so varied and seemingly haphazard, and that we have them in the first place):
Why does our passion dampen when an additional incentive, like being paid for our work, is added? (“Crowding out”)
Why are we more passionate when our work is “lasting”? (As shown in cute studies by Dan Ariely et al where subjects are paid to build LEGO sets that are saved or destroyed, then given the chance to build more.)

And also when we are given praise for our work?
Why do we enjoy our work and work harder when it gives us a sense of “meaning”? And what determines whether it does? Can we just tell ourselves it’s meaningful?

Is it free and always advisable for our employers to tell us such?
See Ariely’s “Payoff” and Pink’s “Drive.” And Frankl’s “Man’s Search for Meaning.”

(They don’t answer these questions. As usual in behavioral work, they offer “proximate” explanations that just beg the question. But they do a nice job documenting and describing the phenomena.)
Also, why do some people have higher self-esteem, more perseverance and grit? If grit is the miracle sauce Angela Duckworth suggests, why didn’t we all naturally evolve to always have it? Likewise re self-confidence. (And Amy Cuddy’s corresponding prescriptions.)
Last puzzling feature of passion: why does it feel good, give us a sense of “flow,” as Csikszentmihalyi shows, when we are “in the zone,” challenged but still succeeding? Why would our sense of intrinsic motivation have this odd feature and look for this zone? Csikszentmihalyi doesn’t say.
All these puzzles are easily explained, as is the existence of passion in the first place, by noting that passions are dangerous. They induce many, many hours of focused work on a single-minded task. We evolved to make such hefty investments only when there are cues it’s worth it.
Should everyone be passionate (gritty?)? That’s dangerous. Passions cause people to neglect their kids and their health and social life. (What happened to the paragons of passion like Einstein, Ramanujan, and Fischer?) Passions take time and resources.
Should everyone fake it till they make it? Again, dangerous. You may have to invest a lot trying. And still fail. (Not to mention the risk of being socially sanctioned for deceiving others, or rejected, or seen as a threat, for demanding more than you’re worth.)
Should you work harder when paid for your work? Maybe. But the payment (as Benabou and Tirole 2003 rightly realize) doesn’t just give a direct benefit. It also conveys information: that others think you need a financial incentive to be motivated.
That’s (sometimes) a good cue you shouldn’t invest huge amounts in this task, long term, after the financial motives are removed. I.e. don’t become passionate.
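Here’s that signaling logic in miniature. (A toy Bayes calculation; all the probabilities are assumptions I invented for illustration, not Benabou and Tirole’s actual model.)

```python
# Prior: the task is intrinsically worth a long-term investment with prob 0.5.
p_rewarding = 0.5

# Assumed: bosses who think you'd love the work rarely bother offering pay,
# while bosses who think the task is a dud usually do.
p_pay_if_rewarding, p_pay_if_dud = 0.2, 0.8

# Bayes' rule: observing the payment lowers your belief that the task
# merits a passionate, long-term investment.
posterior = (p_pay_if_rewarding * p_rewarding) / (
    p_pay_if_rewarding * p_rewarding + p_pay_if_dud * (1 - p_rewarding)
)
print(round(posterior, 2))  # 0.2: the pay itself crowds out the passion cue
```

The payment does double duty: a direct benefit, and a discouraging signal.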
Social approval? Lasting impact? Yeah, better to be passionate about things others value (and are liable to reward you for in the long run).

Meaning? Yeah, that’s what we call it when something is socially valued. (What else could it mean?)
Flow? Yeah, it’s good to enjoy investing in valuable human capital: practicing skills you have cues you are relatively good at, that are socially valued, and where your practice is optimized for learning and success.
Now try understanding passions (or offering self-help advice) w/o thinking about the function they evolved to serve. W/o thinking about our “incentive” for being passionate.

(Which in this case, I summarized with the phrase “optimal human capital investment.”)
Onto principled behavior:

(Last application. Then I’ll conclude.)

So people try very hard to justify their behavior based on abstract moral principles.

And sometimes live their lives to abide by such principles.
But where do these principles come from? Which principles do we choose? What determines whether we live up to them, pretend we do, or don’t even bother with the charade?
The simple incentive at play here, summarized in Frank’s “Passions Within Reason,” is that we take on the principles that we (subconsciously) want others to trust us to follow.
So it’s about trust. Which is crucial when contracts are incomplete, rewards/sanctions are constrained, or observability is limited. And there is a moral hazard problem.

Like when we hire someone, or marry them, or give them political power, or join their religious movement.
In such instances, we are more prone to hire, marry, vote for, or follow someone who comes off as principled. Especially if they are principled about the very things over which they will have “residual control rights.” The very things we will have to depend on them for.
People who depend on such trust are expected to become more principled, on the very dimensions where they need to be trusted. That’s the key insight. (I’ll review some evidence for this shortly.)
(Actually maybe I won’t. Maybe just check this thread, starting from this tweet, for many historical examples that seem to fit:

)
(And think about which principles you see in which professions. And which people within those professions are more or less principled. It’s not hard to see. Once you are already thinking about who needs to be trusted for what.)
But what sustains this trust in equilibrium? Why not just violate the principle as soon as others have signed up, as soon as you have residual control rights?
So this is where some game theory comes in. Which can offer additional insights/predictions re how principled behavior works.
In particular, we should expect those trusting you w/ such “residual control rights” to pay very close attention to whether you are closely monitoring your selfish benefits from violating the principle.
If you are, then, well, you might be following the principle now, or in the past, but you won’t as soon as the incentives to deviate are high enough.

Which in turn might motivate you to not consider the selfish benefits of deviating. One key feature we observe in principled behavior.
Eg:
-we penalize people for considering “taboo trade-offs” (see Tetlock)
-we prefer people who “intuitively cooperate” (see Jillian Jordan)
-and friends who “don’t count” (see Joan Silk)
-and consider it immoral to treat people merely as a “means” and not also as an “end” (see Kant)
Notice that w/o this story re principled behavior, it’s unclear why we penalize people for merely considering violating a taboo, or why Kant’s 2nd formulation of his categorical imperative is so intuitive. Why don’t we just care bout how people act? Why also care how they *think*?
This story also suggests principled behavior should be especially prominent and desirable in contexts where there are occasional, but infrequent, large temptations to defect. Which fits my intuition.
(Eg non-strategic, principled behavior would be especially important in romantic relations. Or when electing a president. But not when choosing a tennis partner. Or a lawyer. Where the temptations to defect are, typically, small and consistent.)
What other features do we observe in principled behavior, that might tell us something about how it’s sustained in equilibrium, and when?
People who have been made to act consistently with a principle in the past are more prone to do so in the future.

Eg if you ask someone to put a Hillary Clinton sticker in their window, they are more likely to volunteer to get out the vote (GOTV) for her than if you start by asking them to GOTV.
Cialdini calls this trick the “foot in the door” technique.

And explains it with reference to “consistency motives” and “cognitive dissonance.”
Which are nice names (and well documented phenomena). But not an explanation.

That is, as usual in social psych, just a relabeling of the phenomena. And begging the question.

That question being: why do we feel icky when we are “inconsistent”? And when do we feel this way?
Which only the incentive story can help us with.

How? Well, think about the incentive to adhere to a principle after you have accrued some “reputational capital” as abiding by it. Is it smaller or larger than before?
Typically larger. (And hence you see “complementarity” between past and future behavior consistent with a given principle.)
Why? Suppose you are monitoring me to see if I can be trusted to abide by a principle. And I have done so 50 times in the past. Well then, you have decent reason to trust me in this case.

But moreover, I have a strong reason to be trustworthy in this case as well.
Because I stand to lose more if I violate the principle. I lose all the trust I have built up as being principled, and won’t get the added trust I have accrued when people are deciding whether to trust me tomorrow.
(It takes a bit more work to prove this is an equilibrium. Which technically requires a game theory model known as “repeated games with mutants,” where we assume there is a small chance any given person is actually constrained to abide by the principle.)
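Here’s a stylized sketch of just the “more to lose” part. (Not the full repeated-games-with-mutants model; the trust numbers and discount factor are assumptions I picked for illustration.)

```python
# Present value of the trust-dependent payoffs you forfeit the moment
# you're caught violating the principle.
def value_at_stake(trust_per_period, horizon=40, discount=0.95):
    return sum(trust_per_period * discount**t for t in range(horizon))

# Assumed: the trust others extend you grows with your track record.
for track_record in (0, 10, 50):
    trust = 1 + 0.1 * track_record
    print(track_record, round(value_at_stake(trust), 1))

# 0 -> 17.4, 10 -> 34.9, 50 -> 104.6: the longer the track record, the more
# future value at stake, and the stronger the "consistency" motive.
```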
So I have a “consistency” motive.

But not consistency in some purely logical sense. As if I just feel dissonant whenever my behavior is illogical.

No. Consistency with a *desirable principle.*
(That’s an insight you get from thinking about the incentive at play. That you don’t get from the “proximate” social psych story.)
But also, it’s not just consistency w/ the principle in the sense that a logician would determine consistency. Cause “plausibly” consistent still counts for a lot.

By plausibly I mean: it’s not common knowledge that your behavior is not motivated by the principle.
Why is that important? Cause I can still trust you even if I know you are not genuinely motivated by the principle, so long as I think you still care about *other* people thinking you are motivated by the principle.
Cause then you will still be motivated to keep your “reputational capital,” even if you don’t actually care about the principle.

And that’s good enough for me to be willing to trust you w/ the residual control rights.
So this incentives story, what does it give us?
It tells us what kinda consistency to expect (plausible justifications; not logical consistency, but consistency w/ a desirable principle).

When (when you need to be trusted).

Who (those who need to be trusted, especially if they have accrued reputational capital).
Now to conclude more generally:

I argued that incentives shape our tastes and ideologies. We know this because we can see it if we attend to short-term adjustments to tastes and ideologies. And because this is consistent with how we know people learn.
And that this can explain puzzling features of altruism, motivated reasoning, passions, and principles.

And that it’s really hard to understand those things w/o thinking bout incentives.
And that in each case, there’s a fairly straightforward set of incentives at play. And those incentives can be checked and verified. And when they are not at play, we won’t see the same types of quirky behaviors.
And that’s why I believe it’s crucial to recognize that incentives shape our preferences and ideologies.

And that’s really the only way to understand what’s going on.

/eom