Let's talk about a common problem in ML - imbalanced data ⚖️
Imagine we want to detect all pixels belonging to a traffic light in a self-driving car's camera image. We train a model that achieves 99.88% accuracy. Pretty cool, right?
Actually, this model is useless ❌
Let me explain 👇
The problem is the data is severely imbalanced - the ratio between traffic light pixels and background pixels is 800:1.
If we don't take any measures, our model will learn to classify every pixel as background, giving us 99.88% accuracy. But it's useless!
What can we do? 👇
Let me tell you about 4 ways of dealing with imbalanced data:
▪️ Choose the right evaluation metric
▪️ Undersampling your dataset
▪️ Oversampling your dataset
▪️ Adapting the loss
Let's dive in 👇
1️⃣ Evaluation metrics
Looking at the overall accuracy is a very bad idea when dealing with imbalanced data. There are other measures that are much better suited:
▪️ Precision
▪️ Recall
▪️ F1 score
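To see why these metrics matter, here is a tiny pure-Python sketch (the pixel counts are made up to match the 800:1 ratio from above): a model that predicts "background" everywhere gets ~99.88% accuracy, but zero precision, recall, and F1 on the traffic-light class.

```python
# Toy labels: 1 = traffic light pixel, 0 = background (made-up counts, 800:1)
y_true = [0] * 800 + [1]
y_pred = [0] * 801  # a model that predicts "background" everywhere

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f"accuracy={accuracy:.4f} precision={precision} recall={recall} f1={f1}")
# → accuracy=0.9988 precision=0.0 recall=0.0 f1=0.0
```

Accuracy looks great, but the three metrics that actually track the traffic-light class immediately expose the model as useless.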
2️⃣ Undersampling
The idea is to throw away samples of the overrepresented classes.
One way to do this is to randomly throw away samples. However, ideally, we want to make sure we are only throwing away samples that look similar.
Here is a strategy to achieve that 👇
Clever Undersampling
▪️ Compute image features for each sample using a pre-trained CNN
▪️ Cluster images by visual appearance using k-means, DBSCAN etc.
▪️ Remove similar samples from the clusters (check out, for example, the Near-Miss or Tomek Links strategies)
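The clustering step can be sketched in a few lines, assuming the CNN features have already been extracted (random vectors stand in for them here) and using scikit-learn's KMeans:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for CNN embeddings of 500 overrepresented-class samples (64-dim)
features = rng.normal(size=(500, 64))

# 1. Cluster the overrepresented class by visual appearance
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(features)

# 2. Keep only a few samples per cluster -- samples in the same cluster
#    look similar, so we lose little information by dropping the rest
keep_per_cluster = 5
kept_indices = []
for cluster_id in range(10):
    members = np.flatnonzero(kmeans.labels_ == cluster_id)
    if len(members):
        take = min(keep_per_cluster, len(members))
        kept_indices.extend(rng.choice(members, size=take, replace=False))

undersampled = features[kept_indices]
print(undersampled.shape)  # at most (50, 64)
```

This is only a sketch: in a real pipeline the features would come from a pre-trained CNN, and the number of clusters and samples per cluster are tuning knobs.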
👇
3️⃣ Oversampling
The idea here is to generate new samples from underrepresented classes. The easiest way to do this is of course to repeat the samples. However, we are not gaining any new information with this.
Some better strategies 👇
Data Augmentation
Create new samples by modifying the existing ones. You can apply many different transformations, for example flipping, rotating, cropping, or changing the brightness and contrast.
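A few of these transformations are easy to sketch directly in NumPy (real projects typically use a library like torchvision or albumentations instead; the image here is a dummy):

```python
import numpy as np

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)  # dummy (H, W, C) image

flipped = image[:, ::-1]     # horizontal flip
cropped = image[4:28, 4:28]  # crop (fixed offsets here; randomize in practice)
brighter = np.clip(image.astype(np.int16) + 40, 0, 255).astype(np.uint8)  # brightness shift

print(flipped.shape, cropped.shape, brighter.shape)
```

Each transformed image is a "new" sample, but keeps the original label.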
SMOTE
The idea is to create new samples by combining two existing ones.
This technique is more common when working with tabular data, but can be used for images as well. For that, we can combine the images in feature space and reconstruct them using an autoencoder.
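The core interpolation idea can be sketched in a few lines of NumPy (in practice you would use the imbalanced-learn package; the sample vectors here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = np.array([1.0, 2.0, 3.0])  # two existing minority-class samples (made up)
x2 = np.array([3.0, 2.0, 1.0])

lam = rng.uniform()               # random interpolation factor in [0, 1)
synthetic = x1 + lam * (x2 - x1)  # new sample on the line between x1 and x2

print(synthetic)
```

For tabular data this interpolation happens directly on the feature columns; for images, as mentioned above, you would interpolate in a learned feature space and decode with an autoencoder.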
Synthetic Data
Another option is to generate synthetic data to add to our dataset. This can be done either using a GAN or using a realistic simulation to render new images.
There are even companies that specialize in this, like paralleldomain.com (not affiliated)
👇
4️⃣ Adapting the loss function
Finally, an easy way to improve the balance is directly in the loss function. We can give samples of the underrepresented class a higher weight, so they contribute more to the loss.
Here is an example of how to do it in the code.
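Here is a minimal NumPy sketch of a weighted cross-entropy loss (in PyTorch you would instead pass a `weight` tensor to `nn.CrossEntropyLoss`, in Keras a `class_weight` dict to `fit`; the 800x weight and the probabilities below are made-up illustration values based on the class ratio):

```python
import numpy as np

# Predicted probabilities for [background, traffic light] and true labels (toy data)
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])
labels = np.array([0, 1, 1])

# Up-weight the rare class so its errors dominate the loss
class_weights = np.array([1.0, 800.0])  # hypothetical, e.g. inverse class frequency

per_sample = -np.log(probs[np.arange(len(labels)), labels])  # cross-entropy per sample
weighted_loss = np.mean(class_weights[labels] * per_sample)
print(weighted_loss)
```

With weights like these, misclassifying a traffic-light pixel hurts the model roughly 800 times more than misclassifying a background pixel.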
So, let's recap the main ideas when dealing with imbalanced data:
▪️ Make sure you are using the right evaluation metric
▪️ Use undersampling and oversampling techniques to improve your dataset
▪️ Use class weights in your loss function
I regularly write threads like this to help people get started with Machine Learning.
If you are interested in seeing more, follow me @haltakov.
Here are some insights I found particularly interesting 👇
"Neural networks are parallel computers"
That is why they are so powerful - you can train a generic computer to solve your problem. This is also the driver behind Software 2.0 - neural networks are becoming more and more capable of solving all kinds of problems.
"Neural networks perform well on tasks that humans can perform very quickly"
Humans don't think much when listening, observing or performing simple tasks.
This means that a neural network can be trained to be good at it as well: NLP, computer vision and reinforcement learning.
My setup for recording videos for my machine learning course 🎥
A lot of people asked about my setup the other day, so here's a short thread on that. It's nothing fancy, but it does a good job 🤷‍♂️
Details 👇
Hardware ⚙️
▪️ MacBook Pro (2015 model) - screen sharing and recording
▪️ iPhone XS - using the back camera for video recording
▪️ Omnidirectional external mic - connected to the iPhone
▪️ Highly professional camera rig - books mostly about cooking and travel 😄
👇
Software 💻
▪️ OBS Studio - recording of the screen and the camera image
▪️ EpocCam - use your iPhone as a webcam. You can connect the iPhone over WiFi or with a cable.
▪️ Google Slides - for presentation
▪️ Jupyter notebooks and Google Colab - for experimenting with code
Here are the main principles behind my machine learning course 👇
▪️ For beginners
▪️ End-to-end
▪️ Practice instead of theory
▪️ Intuition instead of definition
▪️ Minimum math
▪️ Real-world dataset
▪️ Flexible example applications
▪️ Built with community feedback
Let's go through these points 👇
For beginners
Only Python knowledge will be required to do the course. No previous machine learning experience needed.
End-to-end
We will cover the whole pipeline - from collecting and cleaning data to deploying a trained model.
We will also discuss some topics like ethics and bias and problem framing.
I like using VS Code when working with Jupyter notebooks. One pain point has always been automatic code formatting, but now I have a good solution.
You need:
▪️ VS Code 1.60 (August 2021)
▪️ YAPF formatter
Details 👇
VS Code 1.60
The latest VS Code version from August 2021 contains many improvements for the native display of Jupyter notebooks (which came in July 2021). They now support the command Format Cell with which you can automatically format your code.
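For reference, selecting YAPF in the 2021-era Python extension was a `settings.json` entry roughly like this (the exact keys may differ in newer versions, where formatters moved into separate extensions):

```json
{
    "python.formatting.provider": "yapf",
    "python.formatting.yapfArgs": ["--style", "{based_on_style: pep8}"]
}
```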
👇
Keyboard shortcuts
You can quickly format a cell with a keyboard shortcut:
▪️ Windows: Shift + Alt + F
▪️ Mac: Shift + Option + F
▪️ Linux: Ctrl + Shift + I
I had to remap Shift + Option + F on my Mac, because it seems to be a macOS shortcut for some strange character...