Did you ever want to learn how to read ROC curves? 📈🤔
This is something you will encounter a lot when analyzing the performance of machine learning models.
Let me help you understand them 👇
What does ROC mean?
ROC stands for Receiver Operating Characteristic, but you can just forget about that. It's a military term from the 1940s and doesn't make much sense today.
Think about these curves as True Positive Rate vs. False Positive Rate plots.
Now, let's dive in 👇
The ROC curve visualizes the trade-offs that a binary classifier makes between True Positives and False Positives.
This may sound too abstract, so let's look at an example. After that, I encourage you to come back and read the previous sentence again!
Now the example 👇
We are building a self-driving car and want it to stop at red traffic lights 🚦
(You saw this coming, right 😉?)
We build a classifier to determine if the car should STOP (light is 🔴 or 🟡) or PASS (light is 🟢). I'm using just 2 classes here to keep the example simple.
👇
Now we ask the model - should the car stop at the 🚦?
There are 4 possible cases:
▪️ Light is 🔴, model says STOP - True Positive
▪️ Light is 🔴, model says PASS - False Negative
▪️ Light is 🟢, model says PASS - True Negative
▪️ Light is 🟢, model says STOP - False Positive
👇
Given many examples from our validation/test set, we can compute the following metrics:
▪️ True Positive Rate (TPR) - how often the model correctly tells us to stop
▪️ False Positive Rate (FPR) - how often the model wrongly tells us to stop
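To make these two definitions concrete, here is a minimal sketch in NumPy - the labels and predictions are made up for illustration:

```python
import numpy as np

# Hypothetical ground truth and model decisions: 1 = STOP, 0 = PASS
y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))  # correct stops
fn = np.sum((y_true == 1) & (y_pred == 0))  # missed red lights
fp = np.sum((y_true == 0) & (y_pred == 1))  # false stops
tn = np.sum((y_true == 0) & (y_pred == 0))  # correct passes

tpr = tp / (tp + fn)  # TP out of all real positives
fpr = fp / (fp + tn)  # FP out of all real negatives

print(f"TPR = {tpr:.0%}, FPR = {fpr:.0%}")  # TPR = 80%, FPR = 20%
```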
To get a feeling for it 👇
A high TPR means that we stop at most 🔴 lights.
A low TPR means that we often miss 🔴 lights and pass.
A high FPR means that we often confuse 🟢 lights for 🔴 and wrongly stop.
A low FPR means that we don't have many false stops.
So, we want a high TPR and a low FPR, right? 🙂
Evaluating a model on a validation/test dataset will give us exactly one TPR and one FPR value. Here is an example of a (not so good) classifier:
▪️ TPR = 95%
▪️ FPR = 1%
Wait, but this is just one point on the TPR vs. FPR plot above. So, how do we get the curve now?
👇
Machine learning classifiers usually don't simply output a class - they tell you the probability of each class being the correct one.
You can then define a threshold on this probability to make the decision. For example, stop at a light only if the classifier is 99% sure. Or 90%? Or 80%?
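As a tiny sketch of how a probability plus a threshold becomes a decision (the probabilities below are made up):

```python
import numpy as np

# Hypothetical predicted probabilities that the light requires a STOP
p_stop = np.array([0.99, 0.85, 0.60, 0.30, 0.05])

for threshold in (0.99, 0.90, 0.80):
    decisions = (p_stop >= threshold).astype(int)  # STOP only if sure enough
    print(threshold, decisions)

# 0.99 [1 0 0 0 0]
# 0.9  [1 0 0 0 0]  <- 0.85 is below 0.90, so the car would still PASS
# 0.8  [1 1 0 0 0]
```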
👇
We can now try many different values of the threshold and evaluate the model on our validation/test dataset.
Every threshold gives us a different pair of TPR and FPR values, which we can put on the ROC plot as one point. This is how we get our curve!
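If you use scikit-learn, roc_curve performs exactly this threshold sweep for you. A minimal sketch on synthetic data (the scores are generated just for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Synthetic validation set: true labels and the model's STOP probabilities
y_true = rng.integers(0, 2, size=1000)
y_score = y_true * 0.3 + rng.normal(0.5, 0.25, size=1000)

# One (FPR, TPR) pair per threshold - these are the points of the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)

plt.plot(fpr, tpr)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.show()
```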
So let's look at different thresholds 👇
Here is an example plot. Look at the 3 points I marked on it to see the fundamental trade-off between FPR and TPR.
1️⃣ TPR = 20%, FPR = 2% - with a high threshold (we want to be really sure before stopping), we won't have many false positives, but we will also miss many real 🔴.
👇
2️⃣ TPR = 81%, FPR = 33% - decreasing the threshold improves the detection rate, but now we also get many false detections of 🔴.
3️⃣ TPR = 99%, FPR = 90% - with a very low threshold, the model will detect almost all 🔴, but it will also wrongly classify most 🟢 as 🔴.
👇
Changing the threshold only changes the trade-off; it doesn't make our model better.
However, this is still an important step when you are tuning the model for a specific application. For self-driving cars, it is **very** important not to run red lights - you need a high TPR!
👇
We can, however, train another model using more data, more parameters, or better optimization. But how do we tell that it is really better and not just sitting at a different trade-off point?
The new ROC curve should be closer to the upper left corner of the graph! 👇
A better ROC curve means that we can choose thresholds giving the same TPR for both classifiers, but the better one will have a lower FPR.
Alternatively, for the same FPR, the better classifier will have a higher TPR.
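One way to make this comparison concrete is to interpolate both curves at the same FPR budget. A small sketch with two made-up curves:

```python
import numpy as np

# Hypothetical ROC curves of two models, e.g. as returned by roc_curve
fpr_a, tpr_a = [0.0, 0.1, 0.3, 1.0], [0.0, 0.60, 0.85, 1.0]
fpr_b, tpr_b = [0.0, 0.1, 0.3, 1.0], [0.0, 0.70, 0.90, 1.0]

# At the same FPR budget, the better model reaches a higher TPR
budget = 0.2
print(np.interp(budget, fpr_a, tpr_a))  # model A: 0.725
print(np.interp(budget, fpr_b, tpr_b))  # model B: 0.8
```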
👇
There is one problem, though - in reality, ROC curves are much noisier. At some points the curve of one model may be higher, at others lower. So which one is better in this case?
See this image from a real evaluation (credit to Wikipedia). Which one is best?
👇
To get a single number summarizing the whole ROC curve, we can compute the Area Under the Curve (AUC).
This is again a number between 0 and 1, and it expresses the probability that the model ranks a random positive example higher than a random negative example.
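In scikit-learn this is roc_auc_score. The sketch below also verifies the rank interpretation directly, by comparing every positive score against every negative score (synthetic data again):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = y_true * 0.3 + rng.normal(0.5, 0.25, size=1000)

# Single-number summary of the whole ROC curve
auc = roc_auc_score(y_true, y_score)

# Probability that a random positive scores higher than a random negative
pos, neg = y_score[y_true == 1], y_score[y_true == 0]
rank_prob = (pos[:, None] > neg[None, :]).mean()

print(auc, rank_prob)  # the two numbers match
```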
👇
Summary 👇
To recap quickly:
▪️ ROC curves visualize the trade-off between TPR and FPR
▪️ The curve is created by varying an internal decision threshold
▪️ Models with a curve closer to the upper left corner are better
▪️ Use the Area Under the Curve (AUC) to get a single metric
I regularly write threads like this to help people get started with Machine Learning.
If you are interested in seeing more, follow me @haltakov.
This is a good point! So what you are saying is that in this kind of application you want to minimize the False Positives, so it is important to have a good TPR already at the beginning of the ROC curve?
Yes, great point! I plan to write more about the precision/recall curve, since it is useful in other cases as well - for example in object detection, where TNs don't really make sense.
Here are some insights I found particularly interesting 👇
"Neural networks are parallel computers"
That is why they are so powerful - you can train a generic computer to solve your problem. This is also the driver behind Software 2.0 - neural networks are becoming more and more capable of solving all kinds of problems.
"Neural networks perform well on tasks that humans can perform very quickly"
Humans don't think much when listening, observing, or performing simple tasks.
This means that a neural network can be trained to be good at them as well: NLP, computer vision, and reinforcement learning.
My setup for recording videos for my machine learning course 🎥
A lot of people asked about my setup the other day, so here is a short thread about it. It's nothing fancy, but it does a good job 🤷‍♂️
Details 👇
Hardware ⚙️
▪️ MacBook Pro (2015 model) - screen sharing and recording
▪️ iPhone XS - using the back camera for video recording
▪️ Omnidirectional external mic - connected to the iPhone
▪️ Highly professional camera rig - books, mostly about cooking and travel 😄
👇
Software 💻
▪️ OBS Studio - recording of the screen and the camera image
▪️ EpocCam - use your iPhone as a webcam, connected over WiFi or with a cable
▪️ Google Slides - for the presentations
▪️ Jupyter notebooks and Google Colab - for experimenting with code
Let's talk about a common problem in ML - imbalanced data ⚖️
Imagine we want to detect all pixels belonging to a traffic light in a self-driving car's camera image. We train a model that achieves 99.88% accuracy. Pretty cool, right?
Actually, this model is useless ❌
Let me explain 👇
The problem is that the data is severely imbalanced - there are 800 background pixels for every traffic light pixel.
If we don't take any measures, our model will learn to classify every pixel as background, giving us 99.88% accuracy. But it's useless!
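You can see the trap in a few lines of Python, using the hypothetical 800:1 ratio from above:

```python
import numpy as np

# 800 background pixels (0) for every traffic light pixel (1)
y_true = np.array([0] * 800 + [1])

# A "model" that always predicts background
y_pred = np.zeros_like(y_true)

accuracy = (y_pred == y_true).mean()
print(f"{accuracy:.2%}")  # 99.88% - and it never finds a single traffic light!
```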
What can we do? 👇
Let me tell you about 4 ways of dealing with imbalanced data:
▪️ Choosing the right evaluation metric
▪️ Undersampling your dataset
▪️ Oversampling your dataset
▪️ Adapting the loss (see the sketch below)
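As one example of the last point: in scikit-learn you can re-weight the loss inversely to the class frequencies with class_weight="balanced". A minimal sketch on synthetic data - the dataset and numbers are made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic imbalanced dataset: 100 negatives for every positive
X = rng.normal(size=(10100, 5))
y = np.array([0] * 10000 + [1] * 100)
X[y == 1] += 1.0  # shift the positives so there is something to learn

# Each class now contributes equally to the loss
model = LogisticRegression(class_weight="balanced").fit(X, y)

print((model.predict(X) == 1).mean())  # no longer predicts "all negative"
```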
Here are the main principles behind my machine learning course:
▪️ For beginners
▪️ End-to-end
▪️ Practice instead of theory
▪️ Intuition instead of definition
▪️ Minimum math
▪️ Real-world dataset
▪️ Flexible example applications
▪️ Built with community feedback
Let's go through these points 👇
For beginners
Only Python knowledge is required for the course. No previous machine learning experience is needed.
End-to-end
We will cover the whole pipeline - from collecting and cleaning data to deploying a trained model.
We will also discuss topics like ethics, bias, and problem framing.
I like using VS Code when working with Jupyter notebooks. One pain point has always been automatic code formatting, but now I have a good solution.
You need:
▪️ VS Code 1.60 (August 2021)
▪️ YAPF formatter
Details 👇
VS Code 1.60
The latest VS Code version from August 2021 contains many improvements to the native display of Jupyter notebooks (which came in July 2021). Notebooks now support the Format Cell command, which automatically formats the code in a cell.
👇
Keyboard shortcuts
You can quickly format a cell with a keyboard shortcut:
▪️ Windows: Shift + Alt + F
▪️ Mac: Shift + Option + F
▪️ Linux: Ctrl + Shift + I
I had to remap Shift + Option + F on my Mac, because it seems to be a macOS shortcut for some strange character...