ROC curves measure the True Positive Rate (also known as Recall or Sensitivity). So, if you have an imbalanced dataset, the ROC curve will not tell you if your classifier completely ignores the underrepresented class.
ROC curves also measure the False Positive Rate (FPR) according to the following formula: FPR = FP / (FP + TN).
In object detection problems, the True Negatives don't make sense - there are practically endless ways to place a box on the background, so we can't count them.
Let's dive deeper 👇
I'll use my favorite example again - traffic lights detection (for a self-driving car 🧠🚗).
We want to detect the presence and location (using a bounding box) of traffic lights in images. This is a typical Object Detection problem - an important class of CV problems.
👇
The typical way to solve this is to design a neural network that will check many locations in the image and classify them as being a traffic light or not (background).
There is no (practical) way to define True Negatives
These would be all locations in the image that don't contain a traffic light and where our model correctly predicted no traffic lights as well.
Imagine now going through all possible bounding box sizes for every pixel...
👇
Common evaluation metrics for this are Precision and Recall.
▪️ Precision - the percentage of traffic lights the model detected that are indeed real traffic lights
▪️ Recall - the percentage of real traffic lights that the model detected
Confusing? Let's see some examples 👇
High Precision, Low Recall
In this example, the Precision is 100% (all detected lights are correct), but the Recall is only 50% (2 out of 4 lights missed).
👇
Low Precision, High Recall
In this example, the Precision is 67% (4 out of 6 detected lights are correct), while the Recall is 100% (all 4 real lights are detected).
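The numbers in both examples can be checked with two tiny helpers (the toy counts below are taken from the examples above):

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of detections that are real objects."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of real objects that were detected."""
    return tp / (tp + fn)

# Example 1: 2 detections, both correct, 2 real lights missed
print(precision(tp=2, fp=0))  # 1.0  -> 100% Precision
print(recall(tp=2, fn=2))     # 0.5  -> 50% Recall

# Example 2: 6 detections, 4 of them correct, no real lights missed
print(round(precision(tp=4, fp=2), 2))  # 0.67 -> 67% Precision
print(recall(tp=4, fn=0))               # 1.0  -> 100% Recall
```

Note that True Negatives appear in neither formula - that's exactly why these metrics still work for object detection.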
👇
Precision and Recall give us a way to measure both types of errors (FPs and FNs), without requiring us to count TNs.
By the way, that's why they work better for imbalanced data! Object Detection is imbalanced - there is much more background than objects.
Now the curve 👇
For every candidate the network classifies, it gives us the probability of it being a traffic light. We need to choose a threshold on the probability to consider it a detected light.
It could be 50% if we don't want to miss many lights or 99% if we want to be really sure.
👇
For every value of the threshold, we will get different Precision and Recall when evaluating our test dataset.
▪️ High thresholds - high Precision, low Recall
▪️ Low thresholds - low Precision, high Recall
Plotting these values gives us the Precision-Recall curve.
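A minimal sketch of how such a curve could be computed - the confidence scores and labels below are made up for illustration, not from a real detector:

```python
import numpy as np

# Hypothetical detection confidences and ground-truth labels (1 = real light)
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.2])
labels = np.array([1,   1,   0,   1,   0,   1])

def precision_recall_at(threshold):
    pred = scores >= threshold  # keep only detections above the threshold
    tp = int(np.sum(pred & (labels == 1)))
    fp = int(np.sum(pred & (labels == 0)))
    fn = int(np.sum(~pred & (labels == 1)))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn)
    return precision, recall

# Sweep the threshold from high to low to trace out the curve
for t in (0.75, 0.5, 0.1):
    p, r = precision_recall_at(t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

With this toy data, the high threshold (0.75) gives perfect Precision but only 50% Recall, and lowering it trades Precision away for Recall - exactly the behavior described above.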
The Precision-Recall Curve visualizes the trade-off between making False Positives and False Negatives.
Similar to ROC curves, we can also compare different models. If the curve of one model is closer to the upper right corner than another one, then it is a better model.
👇
Summary 🏁
To recap:
▪️ ROC curves don't work for imbalanced data
▪️ ROC curves don't work for object detection (can't count TNs)
▪️ Precision-Recall curves are computed in a similar way to ROC curves
▪️ Precision-Recall curves visualize the trade-off between FNs and FPs
I regularly write threads like this to help people get started with Machine Learning.
If you are interested in seeing more, follow me @haltakov.
Here are some insights I found particularly interesting 👇
"Neural networks are parallel computers"
That is why they are so powerful - you can train a generic computer to solve your problem. This is also the driver behind Software 2.0 - neural networks are becoming more and more capable of solving all kinds of problems.
"Neural networks perform well on tasks that humans can perform very quickly"
Humans don't think much when listening, observing or performing simple tasks.
This means that a neural network can be trained to be good at it as well: NLP, computer vision and reinforcement learning.
My setup for recording videos for my machine learning course 🎥
A lot of people asked about my setup the other day, so here's a short thread on that. It's nothing fancy, but it does a good job 🤷‍♂️
Details 👇
Hardware ⚙️
▪️ MacBook Pro (2015 model) - screen sharing and recording
▪️ iPhone XS - using the back camera for video recording
▪️ Omnidirectional external mic - connected to the iPhone
▪️ Highly professional camera rig - books mostly about cooking and travel 😄
👇
Software 💻
▪️ OBS Studio - recording of the screen and the camera image
▪️ EpocCam - use your iPhone as a webcam. You can connect your iPhone over either WiFi or cable.
▪️ Google Slides - for presentation
▪️ Jupyter notebooks and Google Colab - for experimenting with code
Let's talk about a common problem in ML - imbalanced data ⚖️
Imagine we want to detect all pixels belonging to a traffic light from a self-driving car's camera. We train a model that scores 99.88% accuracy. Pretty cool, right?
Actually, this model is useless ❌
Let me explain 👇
The problem is the data is severely imbalanced - the ratio between background pixels and traffic light pixels is 800:1.
If we don't take any measures, our model will learn to classify each pixel as background giving us 99.88% accuracy. But it's useless!
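To make the trap concrete, here's a toy sketch assuming roughly 1 traffic-light pixel per 800 background pixels (the numbers are illustrative, matching the ratio above):

```python
import numpy as np

# Toy label mask: 1 traffic-light pixel among 800 background pixels
labels = np.zeros(801, dtype=int)
labels[0] = 1

# A "model" that always predicts background, never a traffic light
preds = np.zeros(801, dtype=int)

accuracy = float((preds == labels).mean())
print(f"{accuracy:.2%}")  # ~99.88% accuracy, yet it finds zero traffic lights
```

Its Recall on the traffic-light class is 0% - which the accuracy number completely hides.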
What can we do? 👇
Let me tell you about 4 ways of dealing with imbalanced data:
▪️ Choose the right evaluation metric
▪️ Undersampling your dataset
▪️ Oversampling your dataset
▪️ Adapting the loss