TROLLEY PROBLEMS We spend too much time deciding which way to pull the lever, and not nearly enough time slowing trolleys down and asking ourselves, "why are there people bound to the tracks?"
Thread with examples, 1/N
2/ One example: should Trump be banned? Was the election stolen?
These are lever questions.
The trolley question is: how come fraud and/or "changing the rules at the last minute" are plausible?
3/ It's important to focus on trolley questions, because pulling a lever doesn't stop the trolley – it will keep being a problem in the future.
4/ Another example: COVID.
The lever question: save lives or livelihoods?
The trolley question (back in early 2020): how do we prevent the spread? Ground planes, wear face masks immediately, etc.
The trolley question is what prevents the lever question, which is a lose-lose.
5/ Another example: student debt.
The lever question: do we forgive debt or not?
The trolley question: how come some degrees are no longer a profitable investment? Why do they cost so much, or deliver so little value, that people cannot easily repay their debt?
6/ Yes, the lever problem is more urgent.
But addressing the trolley problem is more important.
Otherwise we'll keep facing lose-lose situations.
We can't play by the problem's agenda.
We can work on the important before it becomes urgent.
7/ 🎯 The utilitarian calculus restricts us to thinking only about the current instance of the problem.
When you have a plausibly repeating problem, it doesn’t make sense. The root must be addressed. With urgency.
I’ve personally consulted for a pharma company on biological contamination due to behavioral mistakes (though at lower security levels than the labs involved here) and, based on my experience, I find it very plausible that one day a rushed or tired employee will make a misstep.
Of course, I would like to believe that after SARS, protocols got way stronger.
But I also believe that virological labs will have leaks again, eventually. Not if, but when.
Let me get this straight. It took hundreds of thousands of years to understand that cows can contribute to greenhouse gases, but a few years of small-scale development of lab-grown meat are enough to say it has no negative side effects?
Also: the side effects of something (not just the product, but the infrastructure needed to produce it, its byproducts, etc.) differ depending on whether it’s “lab-studied” or “industrialized”. Small scale and large scale can’t be equated.
TWO REASONS PEOPLE REFUSE “COIN FLIP” BETS
and the importance of considering what’s “out of scope”
Thread, 1/N
2/ A classic “surprising phenomenon” is that people offered a bet such as “I flip a coin; if heads, you win $1000; if tails, you lose $950” tend to refuse to play.
The surprise comes from the fact that, in theory, the bet has a positive expected value: $25.
($1000 × 50% − $950 × 50%)
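To sanity-check that arithmetic, here is a minimal Python sketch that simulates the bet many times and compares the average payoff to the theoretical $25. The $1000/$950 payoffs and the fair coin come from the tweet above; everything else (function names, trial count, seed) is illustrative.

```python
import random

def play_once(rng: random.Random) -> float:
    """One coin flip: heads wins $1000, tails loses $950."""
    return 1000.0 if rng.random() < 0.5 else -950.0

def average_payoff(n_trials: int, seed: int = 0) -> float:
    """Average payoff over many independent plays of the bet."""
    rng = random.Random(seed)
    return sum(play_once(rng) for _ in range(n_trials)) / n_trials

if __name__ == "__main__":
    # Exact expected value: 0.5 * 1000 - 0.5 * 950 = $25
    print("Theoretical EV:", 0.5 * 1000 - 0.5 * 950)
    # The simulated average converges towards ~$25 as n_trials grows
    print("Average over 1,000,000 plays:", round(average_payoff(1_000_000), 2))
```

Note that this average only materializes for someone who can play the bet many times; the tweet’s point is about someone offered it once.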
3/ However, there are two reasons why it’s rational to refuse such a bet.
Most examples of non-ergodicity are activities in which the outcome of one person completing them many times is worse than the average outcome of many people each completing them once. For example, Russian Roulette.
But there are cases in which it’s better.
1/9
2/ First, if you don’t know about ergodicity, I suggest reading this thread:
3/ One classic example of non-ergodicity is Russian Roulette. The expected outcome of 600 people each playing it once is 100 dead and 500 winners, whereas the outcome of a single person playing it 100 times is almost certainly 1 dead and no winners.
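A small Python sketch makes the contrast concrete. The 600 players, the 100 rounds, and the 1-in-6 chance of dying per round come from the tweet; the function names and the random seed are illustrative assumptions.

```python
import random

DEATH_PROB = 1 / 6  # one loaded chamber out of six

def ensemble_deaths(n_players: int, rng: random.Random) -> int:
    """n_players each play one round; return how many die."""
    return sum(rng.random() < DEATH_PROB for _ in range(n_players))

def survives_all_rounds(n_rounds: int, rng: random.Random) -> bool:
    """One player keeps playing; return True only if they survive every round."""
    return all(rng.random() >= DEATH_PROB for _ in range(n_rounds))

if __name__ == "__main__":
    rng = random.Random(42)
    deaths = ensemble_deaths(600, rng)
    print(f"600 players, one round each: ~{deaths} dead, {600 - deaths} survivors")
    # Chance of one player surviving 100 consecutive rounds: (5/6)**100 ≈ 1.2e-8
    print("P(one player survives 100 rounds):", (5 / 6) ** 100)
    print("Did our simulated player survive 100 rounds?",
          survives_all_rounds(100, rng))
```

The ensemble average (about 1 death in 6 per round) tells you nothing useful about the fate of the individual who keeps playing, which is exactly the non-ergodic gap the thread is pointing at.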
How come, as we get better tools to be more productive (software, …), productivity in some jobs didn’t increase much?
More tasks that don’t add value, of course. But why do we choose to engage with them, rather than being productive?
1/N
2/ To explain this phenomenon, called productivity homeostasis (homeostasis roughly means “staying the same”), we must first look at a similar phenomenon: risk homeostasis.
3/ The Fence Paradox (see image below) is an example of risk homeostasis: the idea that, when an activity becomes safer, people often react by increasing their risk taking.