🔥 Matt Dancho (Business Science) 🔥
Jul 10 · 13 tweets · 3 min read
Understanding P-Values is essential for improving regression models.

In 2 minutes, I'll crush your confusion.
1. The p-value:

A p-value in statistics is a measure used to assess the strength of the evidence against a null hypothesis.
2. Null Hypothesis (H₀):

The null hypothesis is the default position that there is no relationship between two measured phenomena or no association among groups. For example, under H₀, the regressor does not affect the outcome.
3. Alternative Hypothesis (H₁):

The alternative hypothesis is what you want to test for and is typically the opposite of the null hypothesis. For example, under H₁, the regressor does affect the outcome.
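In regression terms, the two competing hypotheses for a single coefficient are usually written like this (a standard textbook formulation, not something specific to this thread):

```latex
H_0 : \beta_j = 0     \quad \text{(the regressor has no effect on the outcome)}
H_1 : \beta_j \neq 0  \quad \text{(the regressor has some effect on the outcome)}
```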
4. Calculating the p-value:

In regression analysis, the p-value for each coefficient is typically calculated using a t-test. Several steps are involved in this process, which are outlined below.
5. Coefficient Estimate:

In a regression model, each predictor has an estimated coefficient (β) that represents the change in the dependent variable associated with a one-unit change in the predictor, assuming all other predictors remain constant.
6. Standard Error of the Coefficient:

The standard error (SE) quantifies the precision of the coefficient estimate. A smaller SE indicates that the estimate is more precise, reflecting less variability in the estimate of the coefficient.
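Here's a minimal sketch of where these two numbers come from in practice, assuming statsmodels and a small synthetic dataset (the variable names and data are illustrative, not from the thread):

```python
import numpy as np
import statsmodels.api as sm

# Illustrative synthetic data: one predictor plus noise
rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=100)

X = sm.add_constant(x)      # adds the intercept column
model = sm.OLS(y, X).fit()

print(model.params)         # coefficient estimates (the beta-hats)
print(model.bse)            # standard errors of those coefficients
```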
7. Test Statistic (T):

The test statistic for each coefficient is calculated by dividing the coefficient estimate by its standard error. This ratio yields a t-value that is used in the t-test.
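Quick illustration with made-up numbers (assumptions for the example, not from the thread): if a coefficient estimate is 0.48 and its standard error is 0.11, the t-value is just their ratio:

```python
beta_hat = 0.48   # illustrative coefficient estimate
se_beta = 0.11    # illustrative standard error

t_value = beta_hat / se_beta
print(round(t_value, 2))  # ~4.36
```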
8. Degrees of Freedom:

The degrees of freedom (df) for the t-test are usually calculated as the number of observations minus the number of parameters being estimated (including the intercept).
9. P-Value Calculation:

The p-value is determined by comparing the calculated t-value to a t-distribution with the appropriate degrees of freedom. For a two-tailed test, it represents the probability of observing a t-value as extreme as, or more extreme than, the one calculated, assuming the null hypothesis is true.
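A sketch of that calculation with scipy, using illustrative numbers (the t-value, sample size, and parameter count below are assumptions, not values from this thread):

```python
from scipy import stats

t_value = 4.36         # illustrative t-value (coefficient / standard error)
n_obs = 100            # illustrative number of observations
n_params = 2           # intercept + one predictor

df = n_obs - n_params  # degrees of freedom for the t-test

# Two-tailed p-value: probability of a t at least this extreme under H0
p_value = 2 * stats.t.sf(abs(t_value), df)
print(p_value)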
10. Interpretation:

A small p-value (typically ≤ 0.05) indicates that the observed data pattern would be unlikely if the null hypothesis were true, suggesting that the predictor makes a statistically significant contribution to the model.
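Putting the steps together, here's a sketch (again with synthetic data and illustrative names) showing that the p-values a fitted statsmodels model reports match the manual t-test route described above:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 0.3 * x + rng.normal(scale=1.0, size=200)

X = sm.add_constant(x)
results = sm.OLS(y, X).fit()

# p-values as reported by the fitted model
print(results.pvalues)

# Manual recomputation: t = coefficient / SE, then a two-tailed t-test
t_values = results.params / results.bse
df = results.df_resid               # n minus number of estimated parameters
manual_p = 2 * stats.t.sf(np.abs(t_values), df)
print(manual_p)                     # should match results.pvalues
```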
🚨 NEW WORKSHOP: On Wednesday, July 23rd, I'm sharing one of my best AI Projects for FREE:

How I built an AI Customer Segmentation Agent with Python:

- Scikit Learn
- LangChain
- LangGraph

👉Register here (500 seats): learn.business-science.io/ai-register
That's a wrap! Over the next 24 days, I'm sharing the 24 concepts that helped me become an AI data scientist.

If you enjoyed this thread:

1. Follow me @mdancho84 for more of these
2. RT the tweet below to share this thread with your audience
