🔥 Matt Dancho (Business Science) 🔥
Jun 29 · 9 tweets · 3 min read
Logistic Regression is the most important foundational algorithm in Classification Modeling.

In 2 minutes, I'll crush your confusion.

Let's dive in:
1. Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine a binary outcome (one with only two possible values). This is commonly called a binary classification problem.
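To make that concrete, here's a minimal sketch with scikit-learn; the two predictors and the outcome are synthetic and purely illustrative:

```python
# Minimal sketch: fit a logistic regression on a binary outcome.
# The data below is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))                      # two independent variables
signal = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)
y = (signal > 0).astype(int)                       # binary outcome (0 or 1)

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:5]))                  # predicted probability of each class
```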
2. The Logit (Log-Odds):

The formula estimates the log-odds, or logit. The right-hand side has the same form as linear regression, but the left-hand side is the logit function: the natural log of the odds. The logit function is what distinguishes logistic regression from other types of regression.
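In symbols, with p the probability of the positive class and x₁, …, x_k the independent variables:

```latex
\operatorname{logit}(p) \;=\; \ln\!\left(\frac{p}{1 - p}\right)
\;=\; \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k
```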
3. The S-Curve:

Logistic regression uses a sigmoid (or logistic) function to model the probability of the outcome. This function maps any real-valued number to a value between 0 and 1, which is exactly what a probability estimate needs. This is where the S-curve shape comes from.
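Here's the sigmoid itself in a few lines of plain NumPy (nothing here is specific to any particular library):

```python
# The sigmoid (logistic) function squashes any real number into (0, 1).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
print(sigmoid(z))   # roughly [0.00005, 0.12, 0.5, 0.88, 0.99995]
```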
4. Why not Linear Regression?

The S-curve typically fits a binary outcome better than a straight line. Linear regression assumes a linear relationship between the independent variables and the outcome, which rarely holds for binary outcomes: the relationship between the predictors and the probability of the outcome is typically sigmoidal (S-shaped), not linear.
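To see the problem concretely, here's an illustrative comparison on synthetic data; the point is only that a straight-line fit can produce "probabilities" below 0 or above 1, while the logistic fit cannot:

```python
# Linear regression can predict values outside [0, 1] for a binary outcome;
# logistic regression's sigmoid keeps predicted probabilities inside (0, 1).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
x = np.linspace(-4, 4, 200).reshape(-1, 1)
y = (x.ravel() + rng.normal(scale=1.0, size=200) > 0).astype(int)

lin = LinearRegression().fit(x, y)
log = LogisticRegression().fit(x, y)

x_new = np.array([[-4.0], [4.0]])
print(lin.predict(x_new))              # typically falls outside [0, 1] at the extremes
print(log.predict_proba(x_new)[:, 1])  # always strictly between 0 and 1
```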
5. Coefficient Estimation:

Like linear regression, logistic regression estimates a coefficient for each independent variable. However, these coefficients are on the log-odds scale.
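A minimal sketch with statsmodels (again on synthetic data) shows this; the fitted params come back in log-odds units:

```python
# Fit a logistic regression and inspect the coefficients (log-odds scale).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
p_true = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 1.0 * X[:, 1])))
y = rng.binomial(1, p_true)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.params)   # intercept and slopes, all on the log-odds scale
```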
6. Coefficient Interpretation (Log-Odds to Odds):

Exponentiating a coefficient converts it from log-odds to an odds ratio. For example, if a coefficient is 0.5, the odds ratio is exp(0.5) ≈ 1.65. This means a one-unit increase in the predictor multiplies the odds of the outcome by about 1.65 (a 65% increase), holding the other predictors constant.
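The conversion is a single exponentiation; in Python that's just np.exp, using the illustrative 0.5 from above:

```python
import numpy as np

coef = 0.5                 # a fitted coefficient, on the log-odds scale
odds_ratio = np.exp(coef)  # exp(0.5) ≈ 1.65
print(odds_ratio)          # a one-unit increase multiplies the odds by ~1.65
```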
Want to become a Generative AI Data Scientist in 2025?

On Wednesday, July 9th, I'm sharing how to build one of my best AI Projects: AI Customer Segmentation Agent with Python

Register here (limit 500 seats): learn.business-science.io/ai-register
That's a wrap! Over the next 24 days, I'm sharing the 24 concepts that helped me become a data scientist.

If you enjoyed this thread:

1. Follow me @mdancho84 for more of these
2. RT the tweet below to share this thread with your audience
