Richard McElreath 🦔
Sep 11 · 9 tweets · 3 min read
Forgive me, for I am about to Bayes. Lesson: Don't trust intuition, for even simple prior+likelihood scenarios defy it. Four examples below, each producing radically different posteriors. Can you guess what each does? Revealed in next tweet >>
Huzzah! Posterior distributions in red. The shape of the tails, which isn't so obvious to the eye, can do weird but logical things.
Gotta go to a meeting, but I will return to explain each of the four above later!
These are combinations of normal (Gaussian) & student-t (df=2) distributions. Gaussian has very thin tails. Student-t has thicker tails. (A quick numerical check of the tail difference follows the list below.)
Top-left: normal prior, normal likelihood
Top-right: student, student
Bottom-left: student, normal
Bottom-right: normal, student
>>
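Not part of the thread, but the tail claim above is easy to check numerically, e.g. with scipy's standard densities (the evaluation points are arbitrary, for illustration only):

```python
# Quick numerical check (illustration only): Normal vs Student-t(2) densities
# at increasing distance from the mean, both with scale 1.
from scipy import stats

for x in [0, 2, 4, 6]:
    p_norm = stats.norm.pdf(x, loc=0, scale=1)
    p_t2 = stats.t.pdf(x, df=2, loc=0, scale=1)
    print(f"x={x}:  Normal {p_norm:.2e}   Student-t(2) {p_t2:.2e}")

# The normal density falls off much faster: at x=6 it is ~6e-9,
# while the Student-t(2) density is still ~4e-3.
```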
Normal prior, normal likelihood
y ~ Normal(mu,1)
mu ~ Normal(10,1)
The classic flavor of Bayesian updating - the posterior is a compromise between the prior and likelihood.
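As a minimal sketch of this first case (not the author's gist; the observed y is not stated in the thread, so y = 0 is assumed here to put the data in tension with the prior centered at 10), a grid approximation in Python:

```python
# Grid approximation for: y ~ Normal(mu, 1), mu ~ Normal(10, 1).
# The single observation y = 0 is an assumption for illustration.
import numpy as np
from scipy import stats

y = 0.0
mu_grid = np.linspace(-5, 15, 2001)

prior = stats.norm.pdf(mu_grid, loc=10, scale=1)       # mu ~ Normal(10, 1)
likelihood = stats.norm.pdf(y, loc=mu_grid, scale=1)   # y ~ Normal(mu, 1)

posterior = prior * likelihood
posterior /= posterior.sum() * (mu_grid[1] - mu_grid[0])   # normalize on the grid

# With equal standard deviations, the posterior sits halfway between
# the prior mean (10) and the observation (0):
print(round(mu_grid[np.argmax(posterior)], 2))   # ~5.0
```

Swapping Student-t densities into the same grid trick reproduces the other three panels; a combined sketch follows the gist link below.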
Student prior, student likelihood (df=2)
y ~ Student(2,mu,1)
mu ~ Student(2,10,1)
The two modes persist - the extra mass in the tails means each distribution finds the other's mode more plausible, and so the average isn't the best "compromise".
Student prior, normal likelihood
y ~ Normal(mu,1)
mu ~ Student(2,10,1)
Now the likelihood dominates - its thin tails are very skeptical of the prior, but the prior's thick tails are not so surprised by the likelihood.
Normal prior, student likelihood
y ~ Student(2,mu,1)
mu ~ Normal(10,1)
Now the prior dominates, so reason as in the previous example, but in reverse.
Here's the code to reproduce: gist.github.com/rmcelreath/39d…

The tail differences are easier to see on log scale. If I get some time later today, will make a version showing that.
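Since the gist link is truncated here, below is a rough Python stand-in for all four panels (not the author's code; the single observation y = 0 and the grid range are assumptions chosen so the prior and the data disagree, as in the figures):

```python
# Grid-approximation sketch of all four prior/likelihood combinations.
# Priors are centered at 10, the (assumed) single observation is y = 0,
# and all scales are 1, matching the model statements in the thread.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

y = 0.0
mu = np.linspace(-10, 20, 3001)
step = mu[1] - mu[0]

def normal_pdf(x, loc):
    return stats.norm.pdf(x, loc=loc, scale=1)

def student_pdf(x, loc):
    return stats.t.pdf(x, df=2, loc=loc, scale=1)

cases = {
    "normal prior, normal likelihood":   (normal_pdf(mu, 10),  normal_pdf(y, mu)),
    "student prior, student likelihood": (student_pdf(mu, 10), student_pdf(y, mu)),
    "student prior, normal likelihood":  (student_pdf(mu, 10), normal_pdf(y, mu)),
    "normal prior, student likelihood":  (normal_pdf(mu, 10),  student_pdf(y, mu)),
}

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, (title, (prior, lik)) in zip(axes.flat, cases.items()):
    post = prior * lik
    post /= post.sum() * step                     # normalize on the grid
    ax.plot(mu, prior, "k--", lw=1, label="prior")
    ax.plot(mu, lik / (lik.sum() * step), "k:", lw=1, label="likelihood")
    ax.plot(mu, post, "r", lw=1.5, label="posterior")
    # ax.set_yscale("log")                        # log scale shows the tail behavior clearly
    ax.set_title(title, fontsize=9)
axes.flat[0].legend(fontsize=8)
plt.tight_layout()
plt.show()
```

Uncommenting the log-scale line is one way to see the tail differences mentioned in the last tweet.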

More from @rlmcelreath

Jan 6
Week 1 of Statistical Rethinking 2023 is done. Here are the memes from this week's lectures
The Spidermen: Causal inference, descriptive studies, and research design are alike in that they all depend upon some generative/scientific model of how the sample was produced
"Correlation implies causation" is obviously (?) wrong. "Correlation does not imply causation" is true, but not helpful. Causation does not imply correlation either. *sad trombone* "Reality is a simulation" - a joke, BUT when we simulate causation, what are we simulating?
May 9, 2021
Okay so I made a transparent gif of Brandenburger Hasselhoff, in case anyone wants to add him to other historical events. Here he is e.g. at Zeppelinfeld in 1945
Hasselhoff alone for your creative pleasure
This version maybe easier to work with
Apr 22, 2021
Working with a colleague on some household income data, where work is irregular. As usual, I start by writing a synthetic data simulation to talk through with the colleague. Helps to ensure I understand the problem right. Also brings up fun (for me) issues like sources of measurement error.
In this case, income data are reports and almost certainly suffer rounding and heaping. It's the little things like this that make even simple exercises not so simple.
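A hypothetical sketch of what such a synthetic simulation might look like (none of the numbers or rounding rules below are from the actual project; they are made up to illustrate rounding and heaping in reported incomes):

```python
# Made-up synthetic data: log-normal household incomes, reported with
# rounding/heaping to "nice" values.
import numpy as np

rng = np.random.default_rng(1)
n = 500

true_income = rng.lognormal(mean=7.0, sigma=0.8, size=n)   # assumed "true" incomes

# Reported income: most respondents round to the nearest 100, some heap on
# the nearest 500 or 1000 (a crude model of rounding/heaping in reports).
round_to = rng.choice([100, 500, 1000], size=n, p=[0.6, 0.25, 0.15])
reported = np.round(true_income / round_to) * round_to

print("true mean:    ", true_income.mean().round(1))
print("reported mean:", reported.mean().round(1))
print("share of reports ending in 000:", (reported % 1000 == 0).mean().round(2))
```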
Also beginning to worry I am a weird sort of economist now, since half of my recent projects are household income data and I've started using the word "elasticity" in casual conversation.

At least I don't use Stata.
Jul 18, 2020
Many performers of music cannot read it. Okay. There are other, often more intuitive, ways to learn music.

Scientists perform stat models. Most scientists cannot read them. This is less OK, but there are other ways to learn models.

Short thread in which I strain this comparison
If you don't read music, the Rzewski excerpt above (left) is meaningless. If you do, it is perfectly clear. You'd read it not by each individual note, but through higher structure like chords & arpeggio patterns & progression.

It's not the notes so much as their relationships.
If you don't read math stats, the social network model above (right) is mostly meaningless. But again, when you do read these models, you read the model in chunks, through its grammar and phrasing.

It's not the variables so much as their relationships.
Jul 6, 2020
Cats reacting to bad COVID-19 models, a short thread. [This is beneath me but I really need this right now let me have this]

Model 1
Model 2
Model 3
May 16, 2019
In my dept today, I gave a bare minimum proof of why natural selection can favor strategies that do not maximize reproductive rate, provided variance is also reduced. Some papers to start with, if this literature is unfamiliar: >
Good place to start is this 2012 paper by Starrfelt & @kokkonutter : doi.org/10.1111/j.1469…
Then @seb_schreiber has this awesome 2015 paper on unifying within- and between-generation bet hedging : doi.org/10.1086/683657
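A toy numerical version of the variance-reduction point above (invented numbers, not from the seminar or those papers): long-run lineage growth tracks the geometric mean of per-generation fitness, so a strategy with a lower arithmetic mean but lower variance can win.

```python
# Compare two hypothetical strategies over many generations.
import numpy as np

rng = np.random.default_rng(0)
T = 100_000   # generations

# Strategy A: higher arithmetic mean fitness (1.1), but large variance.
w_A = rng.choice([1.6, 0.6], size=T)
# Strategy B: lower arithmetic mean fitness (1.05), zero variance.
w_B = np.full(T, 1.05)

# Long-run per-generation growth rate is the geometric mean of fitness.
geo_A = np.exp(np.log(w_A).mean())
geo_B = np.exp(np.log(w_B).mean())

print("arithmetic means:", w_A.mean().round(3), w_B.mean().round(3))   # ~1.10 vs 1.05
print("geometric means: ", geo_A.round(3), geo_B.round(3))             # ~0.98 vs 1.05
# A's geometric mean is below 1 (sqrt(1.6 * 0.6) ≈ 0.98), so a lineage
# playing A eventually shrinks despite its higher expected reproductive rate.
```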
