πŸ”₯ Matt Dancho (Business Science) πŸ”₯
Sep 3 β€’ 11 tweets β€’ 3 min read
The 3 types of machine learning (that every data scientist should know).

In 3 minutes I'll eviscerate your confusion. Let's go: 🧡
1. The 3 Fundamental Types of Machine Learning:

- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning

Let's break them down:
2. Supervised Learning:

Supervised Learning maps a set of inputs (features) to an output (target). There are 2 types: Classification and Regression.
Classification:

Identifying the category that something belongs to. I often use Binary Classification for lead scoring to get a class probability: the probability (from 0 to 1) that a given observation belongs to a class. Think non-buyer or buyer. 0 or 1. Binary Classification.
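Here's what that looks like in code. This is a minimal, dependency-free sketch of binary classification via logistic regression (in practice I'd use scikit-learn's LogisticRegression and predict_proba); the lead-scoring feature and labels are made up for illustration:

```python
import math

def sigmoid(z):
    # squashes any score into a probability between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=1000):
    # single-feature logistic regression fit by stochastic gradient descent
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w * xi + b)
            grad = p - yi          # gradient of the log-loss for this example
            w -= lr * grad * xi
            b -= lr * grad
    return w, b

# toy lead-scoring data: feature = pages visited, label = bought (1) or not (0)
X = [1, 2, 3, 8, 9, 10]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
prob_buy = sigmoid(w * 9 + b)      # class probability for a new lead
```

The output is exactly the "class probability" above: a number between 0 and 1 you can rank leads by, rather than a hard buyer/non-buyer label.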
Regression:

Predicting a continuous value. I commonly use Regression for predicting future values of sales demand. That specialized application of regression to time-ordered data is called Forecasting.
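A minimal sketch of that idea, assuming a simple linear trend: ordinary least squares fit by hand on made-up monthly sales numbers, then extrapolated one month ahead (real forecasting work would use lags, seasonality, and a proper library):

```python
def fit_line(xs, ys):
    # ordinary least squares for y = slope * x + intercept
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# toy monthly sales demand (units) over 6 months
months = [1, 2, 3, 4, 5, 6]
sales  = [100, 110, 125, 130, 145, 150]
slope, intercept = fit_line(months, sales)
forecast_month_7 = slope * 7 + intercept   # regression output is a continuous value
```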
3. Unsupervised Learning:

Unsupervised learning is extracting insights from unlabelled data.

The 2 main types I use are clustering and dimensionality reduction.

- K-means is the most common clustering algorithm I use, often for clustering customers based on their similarities.

- PCA reduces the number of columns so other supervised machine learning algorithms run more efficiently, and helps visualize clusters.
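The clustering idea can be sketched without any libraries (in practice I'd reach for scikit-learn's KMeans). This is a bare-bones Lloyd's algorithm on toy customer data; the two features, annual spend and monthly visits, are invented for illustration:

```python
import random

def kmeans(points, k, iters=20, seed=42):
    # Lloyd's algorithm on 2-D points: (annual spend, monthly visits)
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # assignment step: each point joins its nearest center's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # update step: move each center to the mean of its cluster
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# two obvious customer groups: low spenders vs. high spenders
customers = [(10, 1), (12, 2), (11, 1), (90, 8), (95, 9), (92, 10)]
centers, clusters = kmeans(customers, k=2)
```

No labels anywhere: the algorithm discovers the two customer segments purely from the structure of the data, which is exactly what "unsupervised" means.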
4. Reinforcement Learning:

The idea is that the software learns to take actions that maximize accumulated reward. This is a core concept behind many "AI" systems, where the software learns behavior through trial and error rather than from labeled examples.
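To make "learning from accumulated reward" concrete, here's a tiny tabular Q-learning sketch on a made-up 5-state corridor: the agent starts at state 0, reaching state 4 pays reward 1, and after enough random exploration the learned greedy policy is to always step right:

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, reward only at state 4.
N_STATES = 5
ACTIONS = (-1, +1)              # step left, step right
alpha, gamma = 0.5, 0.9         # learning rate, discount factor
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(300):                               # episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)                 # pure exploration; Q-learning is off-policy
        s_next = min(max(s + a, 0), N_STATES - 1)  # walls at both ends
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q toward reward + discounted best future value
        target = reward + gamma * max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next

# the learned greedy policy: +1 (step right) in every non-terminal state
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

Note there are no labels and no "correct answer" per step; the reward signal alone, propagated backward through the Q-table, shapes the behavior.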
There's a new problem that has surfaced --

Companies NOW want AI.

AI is the single biggest force of our decade. Yet 99% of data scientists are ignoring it.

That's a huge advantage to you. I'd like to help.
Want to become a Generative AI Data Scientist in 2025 ($200,000 career)?

On Wednesday, Sept 3rd, I'm sharing one of my best AI Projects: How I built an AI Customer Segmentation Agent with Python

Register here (limit 500 seats): learn.business-science.io/registration-a…
That's a wrap! Over the next 24 days, I'm sharing the 24 concepts that helped me become an AI data scientist.

If you enjoyed this thread:

1. Follow me @mdancho84 for more of these
2. RT the tweet below to share this thread with your audience
P.S. I create AI + Data Science tutorials and share them for free. Your πŸ‘ like and ♻️ repost helps keep me going.
