Dealing with imbalanced datasets 🐁 ⚖️ 🐘

Real world datasets are often imbalanced - some of the classes appear much more often than others.

The problem? Your ML model will likely learn to only predict the dominant classes.

What can you do about it? 🤔

Thread 🧵 #RepostFriday
Example 🚦

We will be dealing with an ML model to detect traffic lights for a self-driving car 🤖🚗

Traffic lights are small, so most of the image will consist of regions that are not traffic lights.

Furthermore, yellow lights 🟡 are much rarer than green 🟢 or red 🔴.
The problem ⚑

Imagine we train a model to classify the color of the traffic light. A typical distribution will be:
🔴 - 56%
🟡 - 3%
🟢 - 41%

So, your model can get to 97% accuracy just by learning to distinguish red from green and never predicting yellow at all.

How can we deal with this?
Evaluation measures πŸ“

First, you need to start using evaluation measures other than plain accuracy:
- Precision per class
- Recall per class
- F1 score per class

I also like to look at the confusion matrix to get an overview. Always look at examples from the data as well!
In the traffic lights example above, we will see very poor recall for 🟡 (most real yellow lights are not recognized), while its precision will likely be high.

At the same time, the precision for 🟢 and 🔴 will be lower (🟡 lights get misclassified as 🟢 or 🔴).
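Here is a minimal sketch of how you could compute these per-class metrics with scikit-learn (the toy labels below are made up purely for illustration):

```python
# Per-class precision, recall, F1 and the confusion matrix with scikit-learn
from sklearn.metrics import classification_report, confusion_matrix

labels = ["red", "yellow", "green"]
y_true = ["red", "red", "yellow", "green", "green", "red", "yellow", "green"]
y_pred = ["red", "red", "green", "green", "green", "red", "red", "green"]

# Precision, recall and F1 score for every class in one report
print(classification_report(y_true, y_pred, labels=labels, zero_division=0))

# Rows are the true classes, columns are the predicted classes
print(confusion_matrix(y_true, y_pred, labels=labels))
```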
Get more data πŸ”’

The best thing you can do is to collect more data for the underrepresented classes. This may be hard or even impossible...

You can imagine ways to record more yellow lights, but what if you want to detect a very rare disease in CT images?
Balance your data πŸ”€

The idea is to resample your dataset so it is better balanced.

▪️ Undersampling - throw away some examples of the dominant classes

▪️ Oversampling - get more samples of the underrepresented class
Undersampling ⏬

The easiest way is to just randomly throw away samples from the dominant class.

Even better, you can use some unsupervised clustering method and throw out only samples from the big clusters.

The problem of course is that you are throwing out valuable data...
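As a minimal sketch, random undersampling is a one-liner with the imbalanced-learn package (the data below is random and only meant to show the API):

```python
import numpy as np
from imblearn.under_sampling import RandomUnderSampler

# Fake imbalanced dataset: 56 "red", 3 "yellow" and 41 "green" samples
X = np.random.rand(100, 4)
y = np.array([0] * 56 + [1] * 3 + [2] * 41)

# Randomly drop samples from the dominant classes
rus = RandomUnderSampler(random_state=42)
X_resampled, y_resampled = rus.fit_resample(X, y)

print(np.bincount(y_resampled))  # [3 3 3] - every class shrunk to the rarest
```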
Oversampling ⏫

This is more difficult. You could simply repeat samples, but that usually doesn't work very well.

You can use methods like SMOTE (Synthetic Minority Oversampling Technique) to generate new samples by interpolating between existing ones. This may not be easy for complex data like images.
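A minimal SMOTE sketch, again with imbalanced-learn and random data (the minority class has a few more samples here so SMOTE has enough neighbors to interpolate between):

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Fake imbalanced dataset: class 1 is underrepresented
X = np.random.rand(100, 4)
y = np.array([0] * 56 + [1] * 10 + [2] * 34)

# Synthesize new minority samples by interpolating between nearest neighbors
smote = SMOTE(k_neighbors=5, random_state=42)
X_resampled, y_resampled = smote.fit_resample(X, y)

print(np.bincount(y_resampled))  # [56 56 56] - all classes oversampled
```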
Oversampling ⏫

If you are dealing with images, you can use data augmentation techniques to create new samples by modifying the existing ones (rotation, flipping, skewing, color filters...)

You can also use GANs or simulation to synthesize completely new images.
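A minimal augmentation sketch with torchvision (the file name is just a placeholder for a rare-class image; hue changes are left out on purpose, since they would alter the light's color):

```python
from PIL import Image
from torchvision import transforms

# Small random rotations, flips, skewing and color changes
augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=0, shear=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
])

image = Image.open("yellow_light.png")  # placeholder path
new_samples = [augment(image) for _ in range(10)]  # 10 augmented variants
```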
Adapting your loss πŸ“‰

Another strategy is to modify your loss function to penalize misclassification of the underrepresented classes more than the dominant ones.

In the 🚦 example we can set the weights roughly inversely proportional to each class's frequency:
🔴 - 1.8
🟡 - 33.3
🟢 - 2.4
If you are training a neural network with TensorFlow or PyTorch, you can do this very easily:

▪️ TensorFlow - use the class_weight parameter of the fit() function (tensorflow.org/versions/r2.0/…)

▪️ PyTorch - use the weight parameter of CrossEntropyLoss (pytorch.org/docs/stable/ge…)
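A minimal sketch of both options, using the weights from the 🚦 example (the class order red / yellow / green is an assumption here):

```python
import torch
import torch.nn as nn

# PyTorch: pass the per-class weights directly to the loss
class_weights = torch.tensor([1.8, 33.3, 2.4])  # red, yellow, green
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 3)           # dummy model outputs for 8 samples
targets = torch.randint(0, 3, (8,))  # dummy ground-truth labels
loss = criterion(logits, targets)

# Keras / TensorFlow equivalent: pass a dict to Model.fit()
# model.fit(X_train, y_train, class_weight={0: 1.8, 1: 33.3, 2: 2.4})
```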
Summary 🏁

In practice, you will likely need to combine all of the strategies above to achieve good performance.

Look at different evaluation metrics and start playing with the parameters to find a good balance (pun intended)
Every Friday I repost one of my old threads so more people get the chance to see them. During the rest of the week, I post new content on machine learning and web3.

If you are interested in seeing more, follow me @haltakov
