Understanding the technical stuff is about 1% of the battle.
1/n
Naïve probabilism doesn't necessarily need to be defined; it's more something that needs to be recognised and understood, and hopefully not practised. But certainly there is plenty of practice going on in this realm.
2/n
It's a concept that's been around for probably thousands of years, but it is becoming especially prevalent.
One reason is that we hear more about data science, machine learning, AI, etc. More people know about statistics: how it's used, or really how to misuse it.
3/n
It's very easy to put something into a program and get something spit out. It's become a kind of modern form of rhetoric (really a branch of rhetoric).
4/n
Naïve probabilism becomes especially dangerous in the more practical domains: policy making, medicine, anything that affects a large society.
Hearing the language of statistics or probability used in those domains should set alarm bells off.
5/n
Naïve probabilism:
All decisions under uncertainty boil down to probability calculations.
All of your beliefs are represented by a probability distribution: there's some distribution in your head, and when you decide whether or not to carry an umbrella, it's because of that distribution.
6/n
Enlightened Probabilism:
Probability applies only in specific circumstances, even then, be sceptical.
7/n
Mantra
When gambling, think probability
When hedging, think plausibility
When preparing, think possibility
In all other cases, stop thinking, just survive.
8/n
Trying to win -> what's probable
Avoiding loss -> what's plausible
Avoiding Ruin -> what's possible
(Logic is bottom up ^)
9/n
If you believe everything boils down to a probability distribution, then making a decision just becomes a question of figuring out what that distribution is.
What makes this a nonstarter is that the essence of decision making *is* figuring out how to make the decision.
10/n
Our default reflex is: what's the model? How do we model this? How do we understand this? In some situations that works, but in a lot of situations, where we have absolutely no idea what's going on, the question isn't 'how can we model this?'
11/n
You're better off figuring out all the different ways you don't understand.
Because those are all the ways you're going to mess it up.
12/n
Models kind of work the other way.
We put assumptions into the models & then the assumptions give us something magical.
It's not a question of what the assumptions give us, it's what they don't give us.
What's not contained in them.
13/n
The key thing to keep in mind isn't just what is likely to happen, but also what are the consequences of that thing happening?
Our first goal is just to survive above all else.
14/n
Pandemic
A year ago, nobody knew what was going on, or knew how long it was going to last, or for that matter how bad it was going to be for that period of time.
15/n
On stocking up:
Not because I thought it was likely but it was certainly possible that I might need it.
I'd certainly not want to be in the reverse situation, where an unlikely thing happens and I'm stuck: I've got no outs, I can't rely on myself, and I'm kind of hopeless.
16/n
That's kind of the base case: trying to survive, trying to avoid ruin.
Think about what's possible
Beyond that we can try to chip away, and not necessarily always be preparing for the worst.
17/n
We have to prepare for the worst within reason.
(But beyond that we can at least hedge)
18/n
Once we have shored things up, once we know we aren't going to lose that much then we can try to win.
19/n
We only want to do things optimally when we have the luxury of doing things optimally.
20/n
In most situations doing things suboptimally is actually optimal in a different sense.
21/n
Naïve probabilist beliefs
The house always wins. Debts are always paid. The model is right. Real world obeys the theory. Good intentions are more important than good results. Sometimes you get unlucky. There's nothing you can do about it.
Ignorance begets knowledge. etc.
22/n
Naïve probabilist Axioms
-Until proven otherwise, assume that the future will resemble the past.
-The more complex the problem, the more complex the solution.
-In the presence of uncertainty derive wisdom from ignorance.
23/n
It's a perfectly sound point of view from the standpoint of theory.
24/n
There are two ways the theory gets applied in the real world.
One: it's applied by practitioners, who distort the theory so that it actually works and doesn't ruin anybody.
Two: it gets applied by other theorists or bureaucrats, taken out of the box and just put into something.
25/n
Like taking a domestic animal and putting it out in the wild: it's going to get eaten alive pretty quickly, because it's not where it belongs.
Which is pretty telling about the theory in the first place.
26/n
"A mask is a thing, and things block other things from going through them...
and that logic has actually been thankfully verified by peer review science."
27/n
A very simple solution or idea doesn't have the gravitas of a much more complicated or sophisticated explanation (such as one that suggests masks are actually bad because the virus gets stuck in there, etc.).
28/n
Until proven otherwise just assume the future will resemble the past,
it does usually,
except when it doesn't.
29/n
Getting lucky is not really a strategy.
30/n
There are many layers to risk/uncertainty/probability
(Easy) Theory: What they taught in school - (doesn't have to be there)
(Hard) Practice: What you learned in the school yard.
(Harder) Psychology: How much you can handle.
(Hardest) Ethics: Who you really are.
31/n
Theory really is the easy part of probability. It's hard academically speaking, but it's the easy part because you have full control over the whole thing: you control what assumptions are made, then work through them, and everything works perfectly.
32/n
There was what you were taught in school and what you learned in the school yard.
Those were never the same.
They never quite agreed with each other.
33/n
There's putting it into practice and there's being forced to put it into practice, and these are actually two different things.
If you try to put something into practice it's a little bit different.
34/n
How much can you handle when in an uncertain or an uncomfortable situation? Do we choke under the pressure? Can we handle it? Can we put the theory into practice?
That's not just as simple as saying here I am, here's the theory & here's the answer.
You gotta execute.
35/n
What's risky for the politician is actually good for the people.
What's risky for the people is good for the politician.
36/n
Does someone, regardless of the outcome, just say they got it right?
37/n
Three basic principles
Fundamental Principle of modelling
-Models should fit the real world, not the other way around
Fundamental principle of probability
-Pay up when you lose
Fundamental principle of gambling
-Don't get free rolled.
38/n
Anytime there's an uncertain decision, there's always an explanation after the fact to justify what you did. The ethical part is owning up to it and saying yes, that was the wrong decision; or maybe it was the right decision but things went wrong. You still have to pay up.
39/n
Take the risk: if you win, great; if you lose, own up to it and don't blame or make excuses. Especially when somebody is making a decision under uncertainty that imposes risk on somebody else.
40/n
Harry explains the rules of baccarat.
The house edge comes from a model assuming the cards are shuffled randomly, the implementation has been done properly, the dealer, the table, etc.
41/n
Your optimal strategy is one of two things: know what cards are going to come out, and bet depending on who you know is going to win,
or
don't play at all, because you're expecting to lose money. (Which is the better choice than playing, if you care about money.)
42/n
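The "expecting to lose money" point can be made concrete with a quick expected-value check. A minimal sketch, assuming the model holds: the probabilities below are the commonly quoted 8-deck baccarat figures (banker wins ≈45.86%, player wins ≈44.62%, tie ≈9.52%), banker wins pay 0.95:1 after the 5% commission, and ties push. The function name is just for illustration.

```python
# Expected profit of a 1-unit banker bet in baccarat, *given* the model
# (random shuffle, honest implementation) holds. Probabilities are the
# standard 8-deck figures, taken as assumptions rather than derived here.

p_banker, p_player, p_tie = 0.4586, 0.4462, 0.0952

def expected_value(stake=1.0):
    """Expected profit per bet: banker pays 0.95:1, player bet loses 1:1, tie pushes."""
    return stake * (p_banker * 0.95 - p_player * 1.0 + p_tie * 0.0)

print(f"Expected profit per unit staked: {expected_value():+.4f}")  # ≈ -0.0105
```

Under these assumptions, every unit staked loses about a cent on average (close to the commonly quoted ~1.06% banker edge), which is why "don't play" beats playing, and why the edge rests entirely on the shuffle/implementation assumptions holding.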
But things aren't always as they seem, we're assuming the model has been implemented properly, there's an assumption there.
43/n
Everything has a limit, infinity doesn't exist, which is something that is important to keep in mind when we go into practice.
44/n
Big picture about risk/probability modelling:
You might think the only situation in which these models actually apply is in a casino.
Even in the casino things aren't as they seem.
Even when the model is supposed to work it doesn't work all the time.
45/n
There just isn't a scenario in which the model applies perfectly, as it does in theory; there's no such thing.
There are many situations where we can't find the defect: things appear to behave according to the model, but that doesn't mean we can conclude they do.
46/n
There is no 'supposed to work'.
It either works or it doesn't.
47/n
Joe - "The real world is an informal system and all of the formal systems/models that we build are always living inside of that, and they're not isolated, they're open to these larger informal systems & they're leaky."
48/n
There's information that naturally leaks out of real world situations, and formalising simply cuts out a lot of detail.
That's what it means to make it formal/abstract.
Those details can indeed carry info which is potentially actionable/vulnerability inducing etc.
49/n
The fact of the matter is that every single game is beatable.
The naïve probabilist thinks the house always wins, and it does most of the time, against people who don't know what they're doing. But there are ways to systematically beat these games; the model does not apply perfectly.
50/n
Tells the story of Phil Ivey and Cheung Yin “Kelly” Sun playing baccarat
Brian gem - 'Never underestimate counterparty risk'
52/n
How do robots make decisions?
They have a probability, they have a model, there are statistics behind it. They don't have the ability to make decisions the way we do in single-case scenarios.
53/n
There is a question of how all this applies to multiscale or group survival.
(What's that bad joke? Maybe I'll sacrifice my life for two siblings or eight cousins?)
Harder to say; in a sense I can only make decisions for myself.
54/n
Joe doesn't check on hand raises
(Except he did)
(It worked)
Common sense is a much more powerful tool for reasoning than any formality can give us.
55/n
The amazing thing about human beings and animals (and actually anything that is alive and surviving) is that we can make decisions based on no data,
Based purely on experience and intuition.
Otherwise, we're actually pretty bad at making decisions.
56/n
Computers are the exact opposite so there's actually a good collaboration there and that's what we've used it for.
We shouldn't let it take over - it's a collaboration.
The computer is useful for something (but not everything).
57/n
What's the pitfall?
The hyper-rationalist/the naïve probabilist argument against common sense is that people make stupid decisions, people make inconsistent decisions.
58/n
In the same situation you might go left and I go right and then I get hit by a bus.
We each had our way of making decisions, but how does that scale up to the higher level of how we survive as a species?
Well your decision means you're still alive.
59/n
If we are both robots and we both followed the same protocol, we would have both gone the same way and if it was the same bad choice we're done.
60/n
We do make decisions; we have common sense and intuition.
We don't always make the right decision, and usually it doesn't kill us. But the fact that we are able to make different decisions, to be right and wrong,
actually preserves us individually.
61/n
Because it allows us to be flexible, but it also preserves us at a higher level: it allows some to get unlucky and some to get lucky, as opposed to being systemically lucky or unlucky.
62/n
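The identical-robots versus different-decisions point above can be sketched as a toy simulation. This is an illustration under made-up assumptions (two routes each round, one of them randomly "fatal", like the bus above), not a claim about any real model:

```python
import random

def survival_rate(n_agents, n_rounds, identical, seed=0):
    """Fraction of simulated runs in which at least one agent survives.
    Each round one of two routes is fatal at random; agents pick a route."""
    rng = random.Random(seed)
    runs, survived_runs = 2000, 0
    for _ in range(runs):
        alive = n_agents
        for _ in range(n_rounds):
            fatal = rng.randint(0, 1)        # which route the "bus" is on
            if identical:
                # every agent runs the same protocol: one shared choice, shared fate
                if rng.randint(0, 1) == fatal:
                    alive = 0
            else:
                # each agent decides independently: idiosyncratic, not systemic, risk
                alive = sum(1 for _ in range(alive) if rng.randint(0, 1) != fatal)
            if alive == 0:
                break
        if alive > 0:
            survived_runs += 1
    return survived_runs / runs

print("identical protocol:", survival_rate(10, 5, identical=True))
print("diverse decisions: ", survival_rate(10, 5, identical=False))
```

With these numbers, the identical population survives only when its single shared choice dodges the bus five times in a row (about 1 in 32 analytically), while the diverse population survives far more often: losses become idiosyncratic instead of systemic.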
The answer is not the same depending on what scale you're looking at.
63/n
(This touches on things like higher order logic - different reasoning/randomness - requisite variety of things like temperaments, traits, and behaviour, prisoner's dilemmas etc.)
In dealing with reality you're dealing with layers of uncertainty, and most can't be modelled explicitly, as you don't even know what to model.
To deal with that level of uncertainty, present the system with enough variety that hopefully some of the responses work out.
64/n
If everything is running on the same algorithm and everything makes the same decision - that's a kind of centralisation - you can certainly blow up the entire system that way.
65/n
Common sense is for dealing with these scenarios where logically there's no endpoint to the thing, and you have to make a decision because the clock ticks. You've got to do it.
66/n
(Joe didn't buy GME for the record)
The first mistake you could make here is thinking you know anything about what is going on.
67/n
Watch 'The Opposite' episode of Seinfeld. Costanza realises every decision he has ever made in his entire life has been wrong.
The suggestion is made that if every decision you've made is wrong, then the opposite must be right.
That's the philosophy of behavioural economics
68/n
The Sunstein/Costanza fallacy
'The thing that we're panicking about usually doesn't happen, so that's probably what's going to happen this time.'
That's probably right, but we don't know, so it doesn't mean we shouldn't be panicking.
69/n
There's also the fallacy of no effect/no interaction effect, when it could very well be that certain things didn't come to pass precisely because we panicked.
70/n
'We've established what the past is like so now we know what the future is going to be like because the future is going to look like the past.'
71/n
Another Naïve probabilist problem:
Extrapolating/imposing a decision process from one context (or even one that works in a large number of contexts) onto another context where it just might not apply; and sometimes it doesn't.
72/n
It's absurd that Sunstein is lauded as some great intellectual, given the stuff he was saying at the start of Covid.
It doesn't take a genius to identify what doesn't make sense; in fact most people, using their common sense, could spot why it isn't good reasoning at all.
73/n
The guy who raises pigs for a living would get it instantly.
74/n
Common sense is worth far more than book smarts.
75/n
Sunstein is the embodiment of the freeroll effect.
Three weeks too late, he came out saying we should be using the precautionary principle (while saying he's usually against it), contradicting his original stance on Covid.
But by then it was over; it was too late.
76/n
He then benefited multiple times from just horrible advice.
77/n
The free roll effect happens a lot in science.
Researchers/policy makers/pundits get full credit when a recommendation works, and face no consequences when it fails,
while society bears the risk.
78/n
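The free-roll asymmetry can be written down in a few lines. The payoff and probability numbers here are invented purely for illustration:

```python
# Sketch of the free-roll effect: the recommender keeps the credit when the
# advice works and pays nothing when it fails, while society bears the full
# outcome. All numbers are made up for illustration.

outcomes = [+1.0, -5.0]   # good / ruinous result of following the advice
probs    = [0.8, 0.2]     # advice is usually fine, occasionally ruinous

society_ev = sum(p * x for p, x in zip(probs, outcomes))
pundit_ev  = sum(p * max(x, 0.0) for p, x in zip(probs, outcomes))  # no downside

print(f"society: {society_ev:+.2f}")  # -0.20
print(f"pundit:  {pundit_ev:+.2f}")   # +0.80
```

Even advice with negative expected value for society is positive expected value for the pundit, because the pundit's payoff is truncated at zero. That truncation is the free roll.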
We got told 'the risk remains low',
& then at some point it was determined the risk was very high.
Of course the risk was always high.
It was always there.
79/n
Those making the judgement weren't playing the same game we were forced to play.
We are forced to contend with reality; their object was probably not stopping the crisis but stopping the panic about the virus.
Politics is all different: it's about perceptions instead.
80/n
Most science is bogus, most published science is bogus.
81/n
Replication crisis in science:
Every single scientist could be doing perfect statistical work, perfect science, and the replication crisis would still exist.
The crisis is a crisis at a higher meta-level, at the system level, which is a result of publication bias.
82/n
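That system-level point can be sketched with a toy simulation of publication bias. All numbers (share of true hypotheses, statistical power, significance level) are invented for illustration, and every simulated scientist runs a perfectly calibrated 5% test:

```python
import random

def published_false_positive_rate(n_studies=20000, p_true=0.1,
                                  power=0.8, alpha=0.05, seed=1):
    """Every study uses a correct 5% significance test; journals publish
    only 'significant' results. Returns the share of published findings
    that are false positives."""
    rng = random.Random(seed)
    true_pos = false_pos = 0
    for _ in range(n_studies):
        effect_is_real = rng.random() < p_true
        if effect_is_real:
            if rng.random() < power:      # real effect, correctly detected
                true_pos += 1
        else:
            if rng.random() < alpha:      # false alarm at the nominal 5% rate
                false_pos += 1
    return false_pos / (true_pos + false_pos)

print(f"share of published findings that are false: {published_false_positive_rate():.2f}")
```

With these assumptions, roughly a third of the published "findings" are false positives, even though no individual study did anything wrong (analytically: 0.9·0.05 / (0.1·0.8 + 0.9·0.05) = 0.36). The bias lives in the filter, not in the scientists.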
The reason it exists is gatekeeping: trying to keep the bad or boring stuff out, anything that's not going to get them an extra notch on the belt/impact factor/headlines.
83/n
Actually, the published research is better off having bad stuff in it.
Trying to keep stuff out means the stuff that gets through is taken for granted as true, when in fact, no matter how good the filter, something always sneaks through.
84/n
Once you learn how to publish a couple of papers, it's actually pretty easy to just keep doing it. It's not a hard thing to do: it's about saying the right things, citing the right things, organising your presentation the right way.
It becomes a game.
85/n
It's easier to give birth than to resurrect the dead.
86/n
It's hard trying to get people to do stuff that is fundamentally against their nature, against their incentives.
For better or worse, it's just the way things are.
88/n
As a heuristic: anytime you see someone proposing a solution which demands that people change, you know it's BS. That's not the way.
89/n
Another Brian gem: 'Academia - creating problems to fit every solution.'
90/n
Maybe don't bother with a model at all.
Everything does not require a model.
Most things can't be explained.
91/n
If you have uncertainty and you're aware of uncertainty then the ethical thing to do is say just that -
I don't know.
92/n
In the cult of expertise there's a tendency to always project things with certainty - a sort of pathological certainty you might call it.
93/n
"Oh yeah this is the way this definitely the one thing that's going to do it for you"
94/n
Again, something coming out of the work around black swans etc. is that you often don't know. If so, the right thing to do is first say you don't know, be forthright about it, then position yourself as if you don't know. Often that leads to pretty clear action.
95/n
"Maybe Harry will have to eat those cans of chicken he bought one day because he feels guilty about wasting them.
Which is pretty bad, but not so bad."
96/n
Beyond the individual ethic
There's a place we need to get to societally where sophistication isn't giving the answer all the time, but also being sensitive to where we just really don't know, and then acting accordingly.
97/n
Pathological certainty:
You see a lot of it. Ethically, the certainty that things are presented with is often unethical, because deep down some of these people know that they don't know what the hell they're talking about.
98/n
You come across people that always have a framework to answer any question.
The structuring of the question always comes back to the same framework, because they have an answer to every question. They never don't know; they always have an answer.
99/n
Nudge theory/behavioural economics catchphrase:
Tell me something, and my answer will always be just 'because people are irrational'.
To account for increasingly large and complex systems, we must take an ensemble perspective.
1/n
Instead of thinking about what the system is going to do, start thinking about what the system can possibly do. What's the state space? What could it possibly do? What configurations could it possibly have? The complement of that is: which possibilities won't it manifest/actualise?
2/n
Determinism can only take us so far. Whether it's a fundamental "randomness" or an epistemological limitation doesn't really matter; probabilistic processes become necessary to start dealing with systems and thinking about their future paths/possible trajectories.
1/n
Epistemological limitations, like chaotic dynamics or computational irreducibility, introduce uncertainty into the future states of a system.
2/n
At a certain point in time there has to be a paradigm shift or it doesn't work anymore:
different materials, different tools, etc. Just different.
45/n
Something has got to give; something has got to be different.
A whale can grow so large compared to an elephant because it is in (and has to be in) the ocean.
46/n
Instead maybe try a smallish system, then run a new smallish system in a similar way, e.g. cells.
Sometimes duplication allows things to evolve differently.
47/n
There are tools that people bring to bear that become selective mechanisms over the possible objects of study, and so we get a hugely biased sample of what we consider normal/regular/typical.
2/n