🔥 Matt Dancho (Business Science) 🔥
Sep 4 · 13 tweets · 4 min read
K-means is an essential algorithm for Data Science.

But it's confusing for beginners.

Let me demolish your confusion:
1. K-Means

K-means is a popular unsupervised machine learning algorithm used for clustering. It's a core algorithm used for customer segmentation, inventory categorization, market segmentation, and even anomaly detection.
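In practice, most data scientists reach for a library rather than coding K-means by hand. A minimal sketch with scikit-learn (the toy data and parameters below are made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy 2-D data: two well-separated groups of points.
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])

km = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
print(km.labels_)           # which cluster each point was assigned to
print(km.cluster_centers_)  # the learned centroids
```

`labels_` gives the segment for each observation, which is exactly what you'd use for customer segmentation.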
2. Unsupervised:

K-means is an unsupervised algorithm used on data with no labels or predefined outcomes. The goal is not to predict a target output, but to explore the structure of the data by identifying patterns, clusters, or relationships within the dataset.
3. Objective Function:

The objective of K-means is to minimize the within-cluster sum of squares (WCSS). It does this through a series of iterative steps: an assignment step and an update step.
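WCSS is simple to compute directly. A minimal NumPy sketch (the data, labels, and centroids here are hypothetical):

```python
import numpy as np

def wcss(X, labels, centroids):
    # Sum of squared Euclidean distances from each point to its cluster centroid.
    return sum(
        np.sum((X[labels == k] - c) ** 2)
        for k, c in enumerate(centroids)
    )

X = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 0.0], [12.0, 0.0]])
labels = np.array([0, 0, 1, 1])
centroids = np.array([[1.0, 0.0], [11.0, 0.0]])
print(wcss(X, labels, centroids))  # each point is 1 away from its centroid: 4.0
```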
4. Assignment Step:

In this step, each data point is assigned to the nearest cluster centroid. "Nearest" is typically determined using Euclidean distance.
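The assignment step is one vectorized distance computation plus an argmin (a sketch with hypothetical data):

```python
import numpy as np

def assign(X, centroids):
    # Distance from every point to every centroid: shape (n_points, n_centroids).
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    # Index of the nearest centroid for each point.
    return d.argmin(axis=1)

X = np.array([[0.0, 0.0], [1.0, 0.0], [9.0, 0.0]])
centroids = np.array([[0.0, 0.0], [10.0, 0.0]])
print(assign(X, centroids))  # [0 0 1]
```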
5. Update Step:

Recalculate the centroids as the mean of all points in the cluster. Each centroid is the average of the points in its cluster.
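The update step is just a per-cluster mean (again with made-up data):

```python
import numpy as np

def update(X, labels, k):
    # New centroid j = mean of all points currently assigned to cluster j.
    return np.array([X[labels == j].mean(axis=0) for j in range(k)])

X = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 4.0], [12.0, 0.0]])
labels = np.array([0, 0, 1, 1])
print(update(X, labels, 2))  # centroids land at (1, 0) and (11, 2)
```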
6. Iterate Step(s):

The assignment and update steps are repeated until the centroids no longer change significantly, indicating that the clusters are stable. This process minimizes the within-cluster variance.
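Putting the two steps together gives the whole algorithm. A bare-bones sketch (illustration only: it uses a single random initialization and ignores the empty-cluster edge case that production code handles):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids as k distinct data points chosen at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid by Euclidean distance.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: centroid = mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break  # centroids stopped moving: converged
        centroids = new
    return labels, centroids

X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5],
              [9.0, 9.0], [9.5, 9.0], [9.0, 9.5]])
labels, centroids = kmeans(X, k=2)
print(labels)
print(centroids)
```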
7. Silhouette Score (Evaluation):

This metric measures how similar a data point is to its own cluster compared to other clusters. The silhouette score ranges from -1 to 1, where a high value indicates that the data point is well-matched to its own cluster and poorly matched to neighboring clusters.
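With scikit-learn this is a one-liner (toy data for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Two tight, well-separated clusters.
X = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.0],
              [8.0, 8.0], [8.1, 8.1], [7.9, 8.0]])
labels = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(X)
score = silhouette_score(X, labels)
print(score)  # near 1: points are tight within clusters, far from the other cluster
```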
8. Elbow Method (Evaluation):

This method involves plotting the inertia as a function of the number of clusters and looking for an 'elbow' in the graph. The elbow point, where the rate of decrease sharply changes, can be a good choice for the number of clusters.
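A sketch of the elbow method on synthetic data (three hypothetical blobs; scikit-learn's `inertia_` attribute is the WCSS):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Three tight blobs centered at (0,0), (5,5), and (10,0).
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 2))
               for c in ([0, 0], [5, 5], [10, 0])])

# Inertia (WCSS) for k = 1..6; plot these values to see the elbow.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_
            for k in range(1, 7)}
for k, v in inertias.items():
    print(k, round(v, 1))
```

Inertia always decreases as k grows; the elbow is where the decrease stops being dramatic, which here is after k=3, matching the three blobs.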
9. There's a new problem that has surfaced --

Companies NOW want AI.

AI is the single biggest force of our decade. Yet 99% of data scientists are ignoring it.

That's a huge advantage to you. I'd like to help.
Want to become a Generative AI Data Scientist in 2025 ($200,000 career)?

On Wednesday, Sept 3rd, I'm sharing one of my best AI Projects: How I built an AI Customer Segmentation Agent with Python

Register here (limit 500 seats): learn.business-science.io/registration-a…
That's a wrap! Over the next 24 days, I'm sharing the 24 concepts that helped me become an AI data scientist.

If you enjoyed this thread:

1. Follow me @mdancho84 for more of these
2. RT the tweet below to share this thread with your audience
P.S. Want free AI, Machine Learning, and Data Science Tips with Python code every Sunday?

Don't forget to sign up for my AI/ML Tips Newsletter Here: learn.business-science.io/free-ai-tips
