Some highlights / takeaways / thoughts / comments from #RWRI 14, Day 7:

1. Just because x is normally distributed does not mean that f(x) is normally distributed.
In other words, just because you can predict a specific variable / input for a model does NOT mean that the model itself should be used to forecast.
This is because x itself may be purely a matter of probability, with no second-order consequences. f(x), however, depends on the *effect* of that probability, and the mapping from x to f(x) introduces second-order consequences (i.e. degrees of separation between the probability and the end result) that can reshape the distribution entirely.
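This is easy to see in simulation. A minimal sketch (my own illustration, not from the thread): feed Gaussian draws through the convex transform f(x) = e^x, and the output is heavily right-skewed — nothing like a normal distribution.

```python
import math
import random
import statistics

random.seed(42)
xs = [random.gauss(0, 1) for _ in range(100_000)]
fxs = [math.exp(x) for x in xs]  # f(x) = e^x: a lognormal variable

def skewness(data):
    """Sample skewness (third standardized moment); ~0 for a Gaussian."""
    m = statistics.fmean(data)
    s = statistics.pstdev(data)
    return sum((v - m) ** 3 for v in data) / (len(data) * s ** 3)

print(f"skewness of x   : {skewness(xs):+.2f}")   # near 0: symmetric
print(f"skewness of f(x): {skewness(fxs):+.2f}")  # strongly positive: not normal
```

Predicting x well tells you little about the shape of f(x); the transform, not the input, sets the shape of the output.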
2. A key theme / asymmetry: you can *never* be certain that a distribution is thin-tailed (one single event can disprove it); you can be certain that a distribution is NOT thin-tailed.
Put another way: what you think is a normal distribution might actually be fat-tailed. But once a distribution has revealed itself to be fat-tailed, it will never turn out to be thin-tailed.
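A sketch of that asymmetry (my own toy example, not from the thread): ten thousand Gaussian draws look perfectly thin-tailed until a single extreme observation arrives — one event is enough to falsify thin tails, while no amount of calm data can confirm them.

```python
import random
import statistics

random.seed(1)
sample = [random.gauss(0, 1) for _ in range(10_000)]

def kurtosis(data):
    """Sample kurtosis (fourth standardized moment); ~3 for a Gaussian."""
    m = statistics.fmean(data)
    s = statistics.pstdev(data)
    return sum((v - m) ** 4 for v in data) / (len(data) * s ** 4)

print(f"kurtosis before: {kurtosis(sample):.1f}")  # close to 3: looks thin-tailed
sample.append(20.0)                                # one '20-sigma' event
print(f"kurtosis after : {kurtosis(sample):.1f}")  # far above 3: thin tails falsified
```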
3. Be comfortable using heuristics / rules of thumb, especially if they've survived generations. The statistical significance of a rule of thumb that has survived 100 years is far higher than that of some sophisticated-looking model that hasn't been tested against reality.
4. When you take a known system and (a) change a key variable and/or (b) add a new constraint, you blow up the uncertainty. A small, unforeseen second-order effect can have drastic consequences, as seen with the Boeing 737 MAX.
5. "Quantitative models are largely nonsense."

Real life is too messy and unpredictable to model.
Models can actually be much worse than nonsense: they provide a false sense of security, which can lead to an organization going bust or to lives being lost, depending on the model's use.
6. If you are in the linear world, there is a limit to how badly you can screw things up. If you are in the non-linear world, look out, because you're playing a completely different game.

If you aren't sure which world you're in, assume non-linear.
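A toy comparison (mine, not the thread's): in a linear system, damage scales proportionally with the shock; in a convex (non-linear) one — here a cubic response, chosen purely for illustration — it accelerates.

```python
def linear_damage(shock: float) -> float:
    """Linear response: damage proportional to the shock."""
    return 10.0 * shock

def nonlinear_damage(shock: float) -> float:
    """Convex response: damage accelerates with the shock (illustrative cubic)."""
    return 10.0 * shock ** 3

for shock in (1.0, 2.0, 4.0):
    print(f"shock {shock:>3}: linear {linear_damage(shock):>6.0f}, "
          f"non-linear {nonlinear_damage(shock):>6.0f}")
# doubling the shock doubles linear damage but multiplies convex damage by 8
```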
7. There is an asymmetry between gains and losses in a market: the gain needed to recover from a loss is larger than the original loss.

If you lose 50% of your net worth on Monday, you need a 100% gain on Tuesday to get back to where you were.
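The general rule follows from simple arithmetic: after losing fraction L, you need a gain of L / (1 − L) just to break even.

```python
def required_gain(loss: float) -> float:
    """Fractional gain needed to recover from a fractional loss: L / (1 - L)."""
    return loss / (1.0 - loss)

for loss in (0.10, 0.25, 0.50, 0.90):
    print(f"lose {loss:.0%} -> need a {required_gain(loss):.0%} gain to break even")
# lose 50% -> need a 100% gain; lose 90% -> need a 900% gain
```

Note how the asymmetry explodes as losses deepen — which is why avoiding ruin dominates chasing returns.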
8. Market drawdowns show very fat tails for the past 180 years. Even if you only analyze from the 1830s to right before the Great Depression, the level of fat-tailedness does not change. Same thing if you only analyze from post-World War II until the present.
So while the Great Depression was a rare event, it was NOT an outlier, nor was it unforeseeable based on the data up to that point.

If you find someone who claims the Great Depression is an outlier and removes it from their data, you should find a way to short them.
If you think you are in Mediocristan and find yourself tempted to remove what you consider to be an 'outlier,' the joke is on you: the 'outlier' you are removing is the highest signal piece of data that you have! It is a strong indicator that you are actually in Extremistan.
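A quick illustration (my own, with an assumed Pareto tail index of 1.2 — not the thread's data): in Extremistan, the single largest observation carries a disproportionate share of the total, so dropping it as an 'outlier' discards most of the signal.

```python
import random

random.seed(7)
alpha = 1.2  # assumed Pareto tail index (my choice, purely for illustration)
n = 100_000

# Inverse-transform sampling: if U ~ Uniform(0,1], U**(-1/alpha) is Pareto(alpha)
sample = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

share_of_max = max(sample) / sum(sample)
print(f"the single largest observation carries {share_of_max:.1%} of the total")
# in Mediocristan each observation's share would be negligible (~1/n)
```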
9. Our tendency / bias is to underestimate the probability of a market drawdown. Thus, the need for tail risk hedging and extra dry powder is higher than you think.
10. Rather than using a 'risk model' that tells you the system won't fail, just assume that the system will fail. This forces you to build in redundancies / fail-safes.

If a system's effectiveness is reliant upon a 'risk model,' run away.
Most systems will eventually fail. When it happens, the goal is to fail gracefully rather than catastrophically. Relying on a 'risk model' will lead you toward the latter.
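Back-of-envelope arithmetic (my sketch, not the thread's): if each fail-safe layer fails with probability p, independent redundancy drives the chance of total failure down geometrically.

```python
def total_failure_prob(p_fail: float, layers: int) -> float:
    """Probability that every one of `layers` independent fail-safes fails at once."""
    return p_fail ** layers

for n in range(1, 5):
    print(f"{n} layer(s): P(total failure) = {total_failure_prob(0.1, n):.4f}")
```

The catch: the formula assumes the layers fail independently, which real systems often violate (common-cause failures) — yet another reason to assume failure and design for it, rather than model it away.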
11. Models can be an illusion of sophistication / usefulness with their bells, whistles, and knobs to adjust.

Without objectivity, we will be inclined to adjust the knobs so that the model confirms our preconceived biases and aligns with our incentives.
12. The goal of a model is to represent reality.

Reality is unpredictable, yet its properties can be understood. Thus, models should be used as a tool to understand properties rather than make predictions.
13. "This can't fail" is about as large a red flag as you can get that something will fail.
14. #RWRI (and the Incerto) is great at improving your BS detector. Before #RWRI, you might consider this a convincing statement:
After #RWRI, as soon as you read a paragraph like this, your BS detector goes wild. You know these people should be avoided at all costs, as they are using thin-tailed techniques in a fat-tailed world.
The question to ask is not 'have there been any confirmed negative consequences yet?'

Rather, it is: 'Is it possible that our hypothesis is wrong? And if so, what happens if we are?'

Which method sounds more scientific / rigorous?
Thread by Mitch Morse