TWO REASONS PEOPLE REFUSE “COIN FLIPS” BETS
and the importance of considering what’s “out of scope”
Thread, 1/N
2/ A classic “surprising phenomenon”: people offered a bet such as “I flip a fair coin; if heads, you win $1000; if tails, you lose $950” tend to refuse to play.
The surprise comes from the fact that, in theory, the bet has a positive expected value of $25:
($1000 × 50%) − ($950 × 50%) = $25
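The arithmetic above can be checked in a couple of lines of Python (a minimal sketch; the stakes and probability are the ones from the tweet):

```python
# Expected value of the coin-flip bet described above.
win, loss = 1000, -950  # payoffs in dollars
p_heads = 0.5           # fair coin

expected_value = p_heads * win + (1 - p_heads) * loss
print(expected_value)  # 25.0
```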
3/ However, there are two reasons why it can be rational to refuse such a bet.
4/ First, if someone comes to you with a “free lunch” bet, it’s rational to be skeptical. Is the person a swindler?
5/ Second, non-ergodicity, explained in the example below
6/ These two reasons explain why we often decide not to engage in “games” with positive expected outcome.
7/ However, we also often engage in activities with negative expected outcome. Why?
8/ Three reasons. The first is a mirror of the swindler argument above: there might be things we don’t know, and if the known downside is low, it might be worth trying.
9/ Second, non-ergodicity again, but this time flipped over.
Let me understand: it took hundreds of thousands of years to realize that cows contribute to greenhouse gases, but a few years of small-scale development of lab-grown meat are enough to say it has no negative side-effects?
Also: the side-effects of something (not just the product, but also the infrastructure needed to produce it, its byproducts, etc.) differ depending on whether it’s “lab-studied” or “industrialized”. Small scale and large scale can’t be equated.
Most examples of non-ergodicity are activities in which the outcome of one person completing them many times is lower than the average outcome of many people completing them once each; for example, Russian Roulette.
But there are cases in which it’s higher.
1/9
2/ First, if you don’t know about ergodicity, I suggest reading this thread:
3/ One classic example of non-ergodicity is Russian Roulette (a six-chamber revolver, one bullet). The expected outcome of 600 people playing it once is ~100 dead and ~500 winners, whereas a single person playing it 100 times is almost surely dead: the chance of surviving every round is (5/6)^100, roughly one in eighty million.
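The ensemble-vs-time contrast can be simulated in a few lines of Python (a sketch only; the 600-player and 100-round numbers are the ones from the tweet):

```python
import random

random.seed(0)  # make the simulation reproducible

def play_once():
    """One trigger pull with a six-chamber revolver: True = survive."""
    return random.randrange(6) != 0  # 5 empty chambers out of 6

# Ensemble perspective: 600 different people each play once.
survivors = sum(play_once() for _ in range(600))
# Expectation: 600 * 5/6 = 500 survivors, ~100 dead.

# Time perspective: one person playing 100 rounds survives only if
# *every* round is survived.
p_survive_100 = (5 / 6) ** 100  # ~1.2e-8: ruin is nearly certain

print(survivors, p_survive_100)
```

The ensemble average looks harmless (5/6 of players walk away), while the time average for a single repeated player is ruin: that gap is the non-ergodicity.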
How come that, even as we get better tools to be productive (software, …), productivity in some jobs hasn’t increased much?
More tasks that don’t add value, of course. But why do we choose to engage in them rather than being productive?
1/N
2/ To explain this phenomenon, called productivity homeostasis (homeostasis roughly means “staying the same”), we must first look at a similar phenomenon: risk homeostasis.
3/ The Fence Paradox (see image below) is an example of risk homeostasis: the idea that, when an activity becomes safer, people often react by increasing their risk taking.
A peek inside my adaptive systems course starting on the 23rd of February.
In this thread, a list of what participants will learn.
1/N
MODULE #1: HARNESSING ANTIFRAGILITY
The organic is both antifragile (we lift weights → our muscles grow) and fragile (we lift too much → we injure ourselves).
What determines antifragility?
What's the relationship between it and fragility?
What to do about it? 2/N
3/ Antifragility can make us stronger (exercise → stronger muscles) or weaker (no exercise → muscles atrophy).
It can make us adapt (famine → we adapt by storing more nutrients) or maladapt (lack of famine → we take risks & store less nutrients, making us more fragile).
Societies are adaptive systems. What a policy does is less important than how people adapt to it.
Our body is an adaptive system. We lift weights not to move them, but for how our muscles adapt to it (they grow).
(thread, 1/N)
Teams are adaptive systems. In the short-term, a manager's decision matters for what it does. In the long-term, it matters for how the team adapts to it. What behaviors does it make more likely?
2/N
Markets are adaptive systems. Many strategies only work until the market adapts to them.
Marketing, sales, and strategy are about adaptive systems. In the long-term, what matters is how customers, competitors, and suppliers adapt to a new product.
3/N
A Nobel prize can tell us one of two things: how good the recipient is, or how bad the committee is.
“Unless you have confidence in the ruler’s reliability, if you use a ruler to measure a table you may also be using the table to measure the ruler.” – @nntaleb
2/ I used to express Wittgenstein’s ruler as follows: the more free parameters there are, the less you know what is being measured.
For example, last spring, COVID mortality could have been informing us about how aggressive the virus is or about how good a country’s testing is.
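A toy calculation illustrates the free-parameters point (all numbers below are invented for illustration, not real epidemiological figures):

```python
# The naive "observed fatality rate" (deaths / detected cases) confounds
# two unknowns: the virus's infection fatality rate and the testing rate.

def observed_fatality_rate(true_infections, ifr, testing_rate):
    deaths = true_infections * ifr
    detected_cases = true_infections * testing_rate
    return deaths / detected_cases  # algebraically: ifr / testing_rate

# Country A: mild virus, poor testing.
a = observed_fatality_rate(true_infections=100_000, ifr=0.005, testing_rate=0.10)
# Country B: five-times-deadlier virus, good testing.
b = observed_fatality_rate(true_infections=100_000, ifr=0.025, testing_rate=0.50)

print(a, b)  # both come out to ~0.05: the measurement can't tell them apart
```

Two very different realities produce the same reading, so the number alone measures neither the virus nor the testing.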
3/ In addition, and this is the point of this new thread, it just dawned on me that Wittgenstein’s ruler is not only about the precision of the ruler but also about the choice of ruler.