Supervised Learning maps a set of inputs (features) to an output (target). There are 2 types: Classification and Regression.
Classification:
Identifying the category that something belongs to. I often use Binary Classification for lead scoring to get a class probability: a value from 0 to 1 for how likely each observation belongs to a class. Think non-buyer or buyer. 0 or 1. Binary Classification.
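Here's a minimal sketch of that idea with scikit-learn's LogisticRegression. The lead features and data are made up for illustration:

```python
# Minimal sketch: binary classification for lead scoring.
# The features and data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical lead features: [pages_visited, emails_opened]
X = np.array([[1, 0], [3, 1], [8, 4], [10, 6], [2, 0], [7, 5]])
y = np.array([0, 0, 1, 1, 0, 1])  # 0 = non-buyer, 1 = buyer

model = LogisticRegression().fit(X, y)

# predict_proba returns one row per lead: [P(non-buyer), P(buyer)].
# The second column is the 0-to-1 class probability used for scoring.
new_leads = np.array([[5, 2], [1, 1]])
print(model.predict_proba(new_leads)[:, 1])
```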
Regression:
Predicting a continuous value. I commonly use Regression for predicting future values of sales demand. It's a special type of regression called Forecasting.
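One simple way to frame forecasting in code is regression on a lagged value. Here's a minimal one-step-ahead sketch; the sales figures are invented:

```python
# Minimal sketch: forecasting framed as regression on a lag-1 feature.
import numpy as np
from sklearn.linear_model import LinearRegression

sales = np.array([112, 118, 121, 130, 135, 141, 148, 155], dtype=float)

# Predict each period's sales from the previous period's sales.
X = sales[:-1].reshape(-1, 1)
y = sales[1:]

model = LinearRegression().fit(X, y)
print(model.predict([[sales[-1]]]))  # one-step-ahead forecast
```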
3. Unsupervised Learning:
Unsupervised learning extracts insights from unlabeled data.
The 2 main types I use are clustering and dimensionality reduction.
- K-means is the most common clustering algorithm I use, often for clustering customers based on their similarities.
- PCA is what I use to reduce the number of columns, so downstream supervised machine learning algorithms run more efficiently, and to visualize clusters. (See the combined sketch below.)
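Here's a combined sketch of both on synthetic data. The "customer" features are randomly generated for illustration:

```python
# Minimal sketch: K-means for clustering + PCA for compression/visualization.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))  # 200 "customers", 10 synthetic features

# K-means assigns each customer to one of 3 clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# PCA compresses 10 columns down to 2 for plotting or faster models.
X_2d = PCA(n_components=2).fit_transform(X)
print(X_2d.shape, np.bincount(labels))
```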
4. Reinforcement Learning:
The idea is that the software learns to take actions based on the accumulation of reward. This trial-and-error learning is a core concept behind modern "AI" systems, where the software improves its behavior from experience rather than from explicit rules.
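To make the reward idea concrete, here's a toy epsilon-greedy bandit sketch. The payout rates are invented; the point is that the agent's value estimates improve as reward accumulates:

```python
# Toy sketch: an epsilon-greedy agent learning from accumulated reward.
import random

reward_probs = [0.2, 0.5, 0.8]   # hidden payout rate per action (invented)
values = [0.0, 0.0, 0.0]         # the agent's estimated value per action
counts = [0, 0, 0]

for step in range(1000):
    # Explore 10% of the time; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = values.index(max(values))

    reward = 1.0 if random.random() < reward_probs[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (reward - values[action]) / counts[action]

print(values)  # estimates drift toward the true rates [0.2, 0.5, 0.8]
```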
There's a new problem that has surfaced --
Companies NOW want AI.
AI is the single biggest force of our decade. Yet 99% of data scientists are ignoring it.
That's a huge advantage to you. I'd like to help.
Want to become a Generative AI Data Scientist in 2025 ($200,000 career)?
On Wednesday, Sept 3rd, I'm sharing one of my best AI Projects: How I built an AI Customer Segmentation Agent with Python
K-means is an essential algorithm for Data Science.
But it's confusing for beginners.
Let me demolish your confusion:
1. K-Means
K-means is a popular unsupervised machine learning algorithm used for clustering. It's a core algorithm used for customer segmentation, inventory categorization, market segmentation, and even anomaly detection.
2. Unsupervised:
K-means is an unsupervised algorithm used on data with no labels or predefined outcomes. The goal is not to predict a target output, but to explore the structure of the data by identifying patterns, clusters, or relationships within the dataset.
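A minimal sketch of that: notice that no target labels are passed to fit. The blobs are synthetic data:

```python
# Minimal sketch: K-means finds structure with no labels at all.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)  # labels ignored

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)
print(kmeans.cluster_centers_)  # discovered group centers
print(kmeans.labels_[:10])      # cluster assignment per observation
```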
These 7 statistical analysis concepts have helped me as an AI Data Scientist.
Let's go: 🧵
1. Learn These Descriptive Statistics
Mean, median, mode, variance, standard deviation. Used to summarize data and spot variability. These are key for any data scientist to understand what's in front of them in their data sets.
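Here's a quick sketch of all five on a small made-up sample (the mode call assumes SciPy 1.9+):

```python
# Minimal sketch: core descriptive statistics in Python.
import numpy as np
from scipy import stats

data = np.array([12, 15, 15, 18, 21, 24, 24, 24, 30])

print("mean:  ", np.mean(data))
print("median:", np.median(data))
print("mode:  ", stats.mode(data, keepdims=False).mode)  # SciPy >= 1.9
print("var:   ", np.var(data, ddof=1))  # sample variance
print("std:   ", np.std(data, ddof=1))  # sample standard deviation
```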
2. Learn Probability
Know your distributions (Normal, Binomial) & Bayes' Theorem. The backbone of modeling and reasoning under uncertainty. Central Limit Theorem is a must too.
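Here's a minimal sketch of the Central Limit Theorem: means of samples drawn from a heavily skewed distribution still pile up in a near-normal bell curve:

```python
# Minimal sketch: the Central Limit Theorem with NumPy.
import numpy as np

rng = np.random.default_rng(0)

# The exponential distribution is heavily skewed...
sample_means = rng.exponential(scale=1.0, size=(10_000, 50)).mean(axis=1)

# ...yet the 10,000 sample means are near-normal, centered on the
# true mean (1.0) with spread shrinking like 1/sqrt(sample size).
print(sample_means.mean(), sample_means.std())
```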
Linear Regression is one of the most important tools in a Data Scientist's toolbox.
Yet it's super confusing for beginners.
Let's fix that: 🧵
1. Ordinary Least Squares (OLS) Regression
Most common form of Linear Regression. OLS regression aims to find the best-fitting linear equation that describes the relationship between the dependent variable (often denoted as Y) and independent variables (denoted as X1, X2, ..., Xn).
2. Minimize the Sum of Squares
OLS does this by minimizing the sum of the squares of the differences between the observed dependent variable values and those predicted by the linear model. These differences are called "residuals."
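Here's a minimal sketch with statsmodels on synthetic data. The fit recovers the true slope and intercept, and model.resid holds the residuals whose squared sum OLS minimizes:

```python
# Minimal sketch: OLS regression with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 2.5 * x + 4.0 + rng.normal(scale=2.0, size=100)  # true slope 2.5, intercept 4.0

X = sm.add_constant(x)        # adds the intercept column
model = sm.OLS(y, X).fit()

print(model.params)              # ~[4.0, 2.5]
print((model.resid ** 2).sum())  # the minimized sum of squared residuals
```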
🚨 BREAKING: New Python library for agentic data processing and ETL with AI
Introducing DocETL.
Here's what you need to know:
1. What is DocETL?
It's a tool for creating and executing data processing pipelines, especially suited for complex document processing tasks.
It offers:
- An interactive UI playground
- A Python package for running production pipelines
2. DocWrangler
DocWrangler helps you iteratively develop your pipeline:
- Experiment with different prompts and see results in real-time
- Build your pipeline step by step
- Export your finalized pipeline configuration for production use