Chelsea Parlett-Pelleriti
Jul 6, 2021
SINCE @kierisi has threatened to sarcastically/chaotically say incorrect things about p-values during #sliced tonight 😱 just to annoy people 😉,

I thought I’d do a quick thread on what a 🚨p-value🚨actually is.

🧵
(1/n)
Computationally, a p-value is p(data at least as extreme as ours | null). Imagine a 🌎 where the null hypothesis is true (e.g. there is no difference in cat fur shininess for cats eating food A vs. food B for 2 weeks), and see how extreme your observed data would be in that 🌎 (2/n)
So, you can think of the p-value as representing our data’s compatibility with the null hypothesis. Low p-values mean our data is not very likely in a world where the null is true. High p-values mean it is relatively likely. (3/n)
In our cat example, we could measure the difference in mean fur shininess between groups A and B, and observe a difference of 3.5 shine points ✨. (4/n)
A p-value answers the question “if there were no true difference in fur shininess between the groups, how often do we expect to observe a difference between the groups we sampled that’s greater than or equal to 3.5?” (5/n)
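That question can be answered directly by simulation. Here's a minimal sketch with a permutation test: the shine scores below are made up for illustration (the thread doesn't give raw data), chosen so the observed difference comes out to 3.5. If food truly doesn't matter, the group labels are exchangeable, so shuffling them builds the "null world" by hand:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical shine scores (made-up numbers, not from the thread)
a = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.3])   # food A
b = np.array([13.9, 14.6, 13.2, 15.1, 14.4, 15.6])  # food B
observed = b.mean() - a.mean()  # 3.5 shine points ✨

# Null world: labels don't matter, so shuffle them and recompute
# the group difference many times
pooled = np.concatenate([a, b])
n_sims = 10_000
null_diffs = np.empty(n_sims)
for i in range(n_sims):
    shuffled = rng.permutation(pooled)
    null_diffs[i] = shuffled[len(a):].mean() - shuffled[:len(a)].mean()

# p-value: how often the null world produces a difference
# at least as extreme as ours (two-sided)
p_value = np.mean(np.abs(null_diffs) >= abs(observed))
print(observed, p_value)
```

With groups this well separated, shuffled labels almost never reproduce a difference of 3.5, so the p-value comes out tiny.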
When the data are pretty extreme/rare/unexpected (low p) in a world where the null is true, we start to get suspicious that the null might not be true…(it still could be tho! and might be a better fit than another hypothesis) (6/n)
In Null Hypothesis Significance Testing (NHST), we use the p-value to make decisions about our data. NHST is a decision-making tool. In this case, we choose a cut off (usually 0.05 but that’s COMPLETELY ARBITRARY)… (7/n)
…and decide in advance that if our p-value is smaller than the cutoff, we will act as if the null is FALSE (if the p-value is not smaller, we act as if the null could still be true), and we call that test “significant”. (8/n)
When we adhere to this decision-making rule (and a bunch of assumptions about the data/model…etc) the beauty of NHST is that we can CONTROL our Type I error rate. “We shall not be too often wrong” (9/n).
Our expected Type I error rate (False Positives, aka how likely we are to ACT like the null is FALSE when it’s actually TRUE) will be equal to that cutoff we chose “in the long run” (10/n)
A Type I error can only happen when the null is TRUE. So the Type I error rate (5% if using 0.05 cutoff) is not our OVERALL error rate. We can make another type of error. A Type II (False Negative) error where we act like the null is TRUE, but it’s FALSE. (11/n)
(aka WE FAIL TO DETECT a real effect. 🚨)

Often we want to balance our error rates rather than just choose a Type I error rate/cutoff. But that’s a thread for a different time. (12/n)
IN SUMMARY:

✅ p-values are a measure of how likely data at least as extreme as ours (measured through a test statistic) would be in a world where the null hypothesis is TRUE. (13/n)
✅ we often see p-values being used in NHST, which is a decision-making tool. We decide on a cutoff and if a p-value is LESS than the cutoff, we will act like the null is false. (14/n)
✅ If the p-value is > cutoff, we act like the null could be true. This decision-making tool allows us to know (*if* all assumptions are met) what our expected Type I error rate will be. Controlling error rates over repeated experiments is, IMO, the benefit of NHST + p-vals. (15/15)
BONUS:
I like to tell people that p-values + NHST is a weaker form of reductio ad absurdum. RAA tries to disprove things by assuming the opposite and showing that that leads to something impossible happening. E.g.
Hypothesis: My run today was 100 miles.

Consequence: I run at 6 mph. That means my run would be over 16 hours long.

Impossibility: I only ran for an hour.

Conclusion: My run today was NOT 100 miles.
P-values do something similar.

Hypothesis: there is no difference between fur shininess of cats in groups A and B.

Consequence: then we expect observed differences to be ~ N(0,1) due to sampling variation.
Implausibility: our observed difference is 3.5, p < 0.05, WOW SO UNLIKELY!

Conclusion: We will ACT as if there is a difference between the fur shininess of cats in groups A and B.
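The "implausibility" step above is just tail arithmetic. Under the thread's simplifying assumption that observed differences are ~ N(0,1) when the null is true, the two-sided p-value for a difference of 3.5 is:

```python
import math

observed_diff = 3.5

# P(|Z| >= 3.5) for Z ~ N(0, 1): the two-sided tail probability,
# computed via the complementary error function
p = math.erfc(observed_diff / math.sqrt(2))
print(p)  # roughly 0.0005 — WOW SO UNLIKELY
```

Well below 0.05, so under the NHST rule we act as if the null is false.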
ALRIGHT fellow statisticians, tell me what nuance I got wrong!

