@nntaleb's brilliant lecture series on probability:
Inferences drawn based on observations of a fat-tailed distribution will fail out of sample - which is to say, in the future.
The lessons here are so important that I’m sharing my notes. 🧵👇
youtube.com/playlist?list=…
1. The Law of Large Numbers (LLN) states that sample mean converges to distribution mean for n large. The problem is that we live in the preasymptotic real world - before “n large.” In particular, n is never large enough in Extremistan.
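A quick sketch of the preasymptotic point, using only Python's stdlib (distributions and parameters are my own illustrative choices, not from the lectures): the same sample size that pins down a Gaussian mean can leave a fat-tailed Pareto mean far from its true value.

```python
import random

def sample_mean(draw, n, seed=0):
    """Average of n i.i.d. draws from `draw` (a function of an RNG)."""
    rng = random.Random(seed)
    return sum(draw(rng) for _ in range(n)) / n

n = 10_000
# Mediocristan: standard normal, true mean 0. LLN bites quickly.
gauss = sample_mean(lambda r: r.gauss(0.0, 1.0), n)
# Extremistan: Pareto with tail index alpha = 1.1,
# true mean = alpha / (alpha - 1) ≈ 11. LLN is still "working on it."
pareto = sample_mean(lambda r: r.paretovariate(1.1), n)

print(f"Gaussian sample mean: {gauss:+.3f} (true mean 0)")
print(f"Pareto sample mean:   {pareto:.3f} (true mean ≈ 11)")
```

With the Gaussian, 10,000 draws land within a hair of the true mean; with the Pareto, the estimate typically sits well off target because the mean is dominated by rare huge draws the sample may not yet contain.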
2. Mediocristan vs. Extremistan: In Mediocristan, tail events are the result of many moderate events. If you find two people with a combined height of 13 feet, the most likely combination is 6’6” and 6’6”.
3. In Extremistan, tail events happen alone. If you find two people with a combined wealth of $36M, the most likely combination is not $18M and $18M, but $35.999M and $0.001M.
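The "tail events happen alone" claim can be checked by simulation (a minimal sketch; the height/wealth parameters are my own stand-ins): condition on a large combined total and look at how much of it the larger member accounts for.

```python
import random

def tail_share(draw, threshold, trials=200_000, seed=1):
    """Among pairs whose sum exceeds `threshold`,
    average share of the sum held by the larger member."""
    rng = random.Random(seed)
    total_share, hits = 0.0, 0
    for _ in range(trials):
        a, b = draw(rng), draw(rng)
        if a + b > threshold:
            total_share += max(a, b) / (a + b)
            hits += 1
    return total_share / hits if hits else float("nan")

# Mediocristan: heights ~ N(5.8 ft, 0.3 ft); condition on a 13-foot pair.
# The larger share comes out close to 0.5: an even split, 6'6" and 6'6".
print(tail_share(lambda r: r.gauss(5.8, 0.3), 13.0))
# Extremistan: Pareto(alpha = 1.1) wealth; condition on a large combined sum.
# The larger share comes out close to 1: one member dominates the pair.
print(tail_share(lambda r: r.paretovariate(1.1), 100.0))
```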
4. We call distributions in Extremistan “fat-tailed.”
In a fat-tailed distribution, a small number of observations accounts for the bulk of the statistical properties. Examples include the distribution of wealth and fatalities from pandemics.
5. In a fat-tailed distribution, sample mean is not a reliable indicator of distribution mean. This is because n isn't large enough and LLN doesn’t work. For the same reason, metrics like standard deviation are not usable. They fail out of sample - which is to say, in the future.
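The instability of standard deviation can be seen the same way (again a stdlib sketch with illustrative parameters): re-estimate it on independent samples and watch it jump around when the variance is infinite.

```python
import random

def sample_std(draw, n, seed):
    """Sample standard deviation of n i.i.d. draws from `draw`."""
    rng = random.Random(seed)
    xs = [draw(rng) for _ in range(n)]
    m = sum(xs) / n
    return (sum((x - m) ** 2 for x in xs) / (n - 1)) ** 0.5

n = 5_000
# Gaussian: the estimate is stable across independent samples.
# Pareto with alpha = 1.5 has infinite variance, so the "standard
# deviation" you measure in-sample says little about the next sample.
for name, draw in [("gaussian", lambda r: r.gauss(0.0, 1.0)),
                   ("pareto", lambda r: r.paretovariate(1.5))]:
    stds = [sample_std(draw, n, seed) for seed in range(5)]
    print(name, [round(s, 2) for s in stds])
```

The Gaussian estimates cluster tightly around 1; the Pareto estimates typically scatter from run to run, driven by whichever extreme draw each sample happened to contain.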
6. The empirical distribution is not empirical. Even exhaustive historical data is mere sampling from a broader phenomenon (the past is in sample; inference is what works out of sample).
7. At one point, the risk of dying from a car accident in California was higher than the risk of dying from covid. But car accident risk belongs to Mediocristan - it’s stable. Covid risk is from Extremistan. The risk of 1000 people suddenly dying is much, much higher from covid.
8. This is why naive empiricists were always wrong about covid.
Never use Mediocristan methods to forecast Extremistan problems.
9. Forecasting is overrated.
The key is to be right about expected payoff.
HUGE thank you to @nntaleb for sharing these lessons!
I just finished watching the series for the second time and supplemented what I didn’t immediately understand with chapter 3 of “Statistical Consequences of Fat Tails.”
I presented the ideas in a different order - I also couldn’t cover everything here. Watch the series for yourself! I’m looking forward to future episodes.
At the end of “Fooled by Randomness,” @nntaleb talks about a generator, or axiomatic framework, for the book. I submit the following for the lectures:
Inferences drawn based on observations of a fat-tailed distribution will fail out of sample - which is to say, in the future.
