15 Sep, 21 tweets, 6 min read
Have you ever wanted to learn how to read ROC curves? 📈🤔

This is something you will encounter a lot when analyzing the performance of machine learning models.

What does ROC mean?

ROC stands for Receiver Operating Characteristic, but you can just forget that. It's a military term from the 1940s and doesn't make much sense today.

Think about these curves as True Positive Rate vs. False Positive Rate plots.

Now, let's dive in 👇
The ROC curve visualizes the trade-offs that a binary classifier makes between True Positives and False Positives.

This may sound too abstract, so let's look at an example. After that, I encourage you to come back and read the previous sentence again!

Now the example 👇
We are building a self-driving car and want it to stop at red traffic lights 🚦

(You saw this coming, right 😉?)

We build a classifier to determine if the car should STOP (light is 🔴 or 🟡) or PASS (light is 🟢). I'm using just 2 classes here to make the example simpler.

👇
Now we ask the model - should the car stop at the 🚦?

There are 4 possible cases
βͺοΈ Light is π΄, model says STOP - True Positive
βͺοΈ Light is π΄, model says PASS - False Negative
βͺοΈ Light is π’, model says PASS - True Negative
βͺοΈ Light is π’, model says STOP - False Positive

👇
Given many examples from our validation/test set, we can compute the following metrics:

βͺοΈ True Positive Rate (TPR) - how good is our model telling us correctly to stop.
βͺοΈ False Positive Rate (FPR) - how often does our model tell us wrongly to stop

To get a feeling for it 👇
A high TPR means that we stop at most 🔴 lights.
A low TPR means that we often miss 🔴 lights and pass.

A high FPR means that we often confuse 🟢 lights for 🔴 and wrongly stop.
A low FPR means that we don't have many false stops.
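In formulas: TPR = TP / (TP + FN) and FPR = FP / (FP + TN). A minimal sketch with hypothetical counts (chosen only to illustrate the computation):

```python
def tpr_fpr(tp, fn, tn, fp):
    """Compute True Positive Rate and False Positive Rate from raw counts."""
    tpr = tp / (tp + fn)  # fraction of red lights we actually stop at
    fpr = fp / (fp + tn)  # fraction of green lights we wrongly stop at
    return tpr, fpr

# Hypothetical counts: 95 correct stops, 5 missed red lights,
# 99 correct passes, 1 false stop
print(tpr_fpr(tp=95, fn=5, tn=99, fp=1))  # (0.95, 0.01)
```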

So, we want a high TPR and a low FPR, right? 👇
Evaluating a model on a validation/test dataset will give us exactly one TPR and one FPR value. Here's an example of a (not so good) classifier:
▪️ TPR = 95%
▪️ FPR = 1%

Wait, but this is just one point on the TPR vs. FPR plot above. So, how do we get the curve now?

👇
Machine learning classifiers usually don't simply output a class, but they tell you the probability of each class being the correct one.

You can then define a threshold to turn that probability into a decision. For example, stop at a light only if the classifier is 99% sure. Or 90%? 80%?
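One way to picture the thresholding step (`decide` is a hypothetical helper; the model's probability is just a number here):

```python
def decide(p_red, threshold=0.9):
    """Return STOP if the model is at least `threshold` sure the light is red."""
    return "STOP" if p_red >= threshold else "PASS"

# The same model output leads to different decisions at different thresholds
print(decide(0.95))                  # STOP
print(decide(0.95, threshold=0.99))  # PASS - same model, stricter threshold
```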

👇
We can now try many different values of the threshold and evaluate on our validation/test dataset.

Each time, we will get different values for TPR and FPR, and we can put them on the ROC plot. This is how we get our curve!
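Here's a sketch of that sweep in plain Python (toy labels and scores; `roc_points` is a made-up helper, not a library function):

```python
def roc_points(y_true, scores, thresholds):
    """For each threshold, decide STOP when score >= threshold,
    then compute the resulting (FPR, TPR) point."""
    points = []
    for t in thresholds:
        preds = [1 if s >= t else 0 for s in scores]
        tp = sum(1 for y, p in zip(y_true, preds) if y == 1 and p == 1)
        fn = sum(1 for y, p in zip(y_true, preds) if y == 1 and p == 0)
        tn = sum(1 for y, p in zip(y_true, preds) if y == 0 and p == 0)
        fp = sum(1 for y, p in zip(y_true, preds) if y == 0 and p == 1)
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return points

# Toy scores: higher means "more likely red/yellow"
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
thresholds = [0.85, 0.5, 0.2]
for t, (fpr, tpr) in zip(thresholds, roc_points(y_true, scores, thresholds)):
    print(f"threshold={t}: TPR={tpr:.2f}, FPR={fpr:.2f}")
# threshold=0.85: TPR=0.33, FPR=0.00
# threshold=0.5: TPR=0.67, FPR=0.33
# threshold=0.2: TPR=1.00, FPR=0.67
```

Lowering the threshold moves the point up and to the right - exactly the trade-off the curve shows.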

So let's look at different thresholds 👇
Here is an example plot. Look at the 3 points I marked on it to see the fundamental trade-off between FPR and TPR.

1️⃣ TPR = 20%, FPR = 2% - setting a high threshold (we want to be really sure before stopping), we won't have many FPs, but we will also miss many real 🔴.

👇
2️⃣ TPR = 81%, FPR = 33% - decreasing the threshold improves the detection rate, but now we also have many false detections of 🔴.

3️⃣ TPR = 99%, FPR = 90% - a model with a very low threshold will detect almost all 🔴, but will wrongly classify most 🟢 as 🔴 as well.

👇
Changing the threshold only changes the trade-off - it doesn't make our model better.

However, this is still an important step when tuning the model for a specific application. For self-driving cars, it is **very** important not to run red lights - you need a high TPR!

👇
We can, however, train another model using more data, more parameters and better optimization. But how do we tell that it is really better and not just at a different trade-off point?

The new ROC curve should be closer to the upper left corner of the graph! 👇
A better ROC curve means that we can choose thresholds that give us the same TPR for both classifiers, but the better one will have a lower FPR.

Alternatively, for the same FPR, the better classifier will have a higher TPR.

👇
There is one problem, though - in reality, ROC curves are much noisier. At some points the curve of one model may be higher, at others lower. So which one is better in this case?

See this image from a real evaluation (credit to Wikipedia). Which one is best?

👇
To get a single number summarizing the whole ROC curve, we can compute the Area Under the Curve (AUC).

This is again a number between 0 and 1, and it expresses the probability that the model ranks a random positive example higher than a random negative example.
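That ranking interpretation can be checked directly on toy data (`auc_by_ranking` is an illustrative name, not a library function):

```python
from itertools import product

def auc_by_ranking(y_true, scores):
    """AUC as the probability that a random positive example is scored
    higher than a random negative one (ties count half)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Toy data: 8 of the 9 positive/negative pairs are ranked correctly
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(round(auc_by_ranking(y_true, scores), 3))  # 0.889
```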

👇
Summary 👇

To recap quickly:

βͺοΈ ROC curves visualize the trade-off between TPR and FPR
βͺοΈ The curve is created by varying an internal decision threshold
βͺοΈ Models with a curve closer to the upper left corner are better
βͺοΈ Use the Area under the Curve to get a single metric
I regularly write threads like this to help people get started with Machine Learning.

If you are interested in seeing more, follow me @haltakov.
This is a good point! So what you are saying is that in this kind of application you want to minimize the False Positives, so it is important to have a good TPR already at the beginning of the ROC curve?

Yes, great point! I plan to write more about the precision/recall curve, since it is useful in other cases as well - for example in object detection, where TNs don't really make sense.

β’ β’ β’

Missing some Tweet in this thread? You can try to force a refresh
γ

This Thread may be Removed Anytime!

Twitter may remove this content at anytime! Save it as PDF for later use!

# More from @haltakov

14 Sep
Most people seem to use matplotlib as a Python plotting library, but is it really the best choice? 🤔

We are going to compare 5 free and popular libraries:
▪️ Matplotlib
▪️ Seaborn
▪️ Plotly
▪️ Bokeh
▪️ Altair

Which one is the best? Find out below 👇
In a survey I did the other day, matplotlib had the most users by a large margin. This was quite surprising to me since I don't really like it...

But let's first look at each library 👇
Matplotlib 👇

Matplotlib is one of the most popular libraries out there.

β Supports many types of plots
β Lots of customization options

β Plots look ugly
β Limited interactivity
β Not very intuitive to use
9 Sep
I highly recommend listening to the latest episode of @therobotbrains podcast with @ilyasut.

therobotbrains.ai/podcasts/episo…

Here are some insights I found particularly interesting 👇
"Neural networks are parallel computers"

That is why they are so powerful - you can train a generic computer to solve your problem. This is also the driver behind Software 2.0 - neural networks are becoming more and more capable of solving all kinds of problems.
"Neural networks perform well on tasks that humans can perform very quickly"

Humans don't think much when listening, observing or performing simple tasks.

This means that a neural network can be trained to be good at it as well: NLP, computer vision and reinforcement learning.
9 Sep
My setup for recording videos for my machine learning course 🎥

A lot of people asked about my setup the other day, so here's a short thread on that. It's nothing fancy, but it does a good job 🤷‍♂️

Details 👇
Hardware ⚙️

▪️ MacBook Pro (2015 model) - screen sharing and recording
▪️ iPhone XS - using the back camera for video recording
▪️ Omnidirectional external mic - connected to the iPhone
▪️ Highly professional camera rig - books, mostly about cooking and travel 😄

👇
Software 💻

▪️ OBS Studio - recording of the screen and the camera image
▪️ EpocCam - use your iPhone as a webcam, connected over WiFi or cable
▪️ Google Slides - for presentations
▪️ Jupyter notebooks and Google Colab - for experimenting with code

👇
7 Sep
Let's talk about a common problem in ML - imbalanced data ⚖️

Imagine we want to detect all pixels belonging to a traffic light from a self-driving car's camera. We train a model with 99.88% performance. Pretty cool, right?

Actually, this model is useless ❌

Let me explain 👇
The problem is that the data is severely imbalanced - the ratio between background pixels and traffic light pixels is 800:1.

If we don't take any measures, our model will learn to classify each pixel as background, giving us 99.88% accuracy. But it's useless!
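A quick sanity check of that number, assuming the 800:1 ratio from above:

```python
# A constant classifier that always predicts "background" is right
# on 800 out of every 801 pixels at an 800:1 class ratio.
background, traffic_light = 800, 1
accuracy = background / (background + traffic_light)
print(f"{accuracy:.2%}")  # 99.88%
```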

What can we do? 👇
Let me tell you about 3 ways of dealing with imbalanced data:

▪️ Choose the right evaluation metric

Let's dive in 👇
7 Sep
Goals for my ML course

βͺοΈ For beginners
βͺοΈ End-to-end
βͺοΈ Minimum math
βͺοΈ Real-world dataset
βͺοΈ Flexible example applications
βͺοΈ Built with community feedback

Let's go through these points π
For beginners

Only Python knowledge will be required to do the course. No previous machine learning experience needed.
End-to-end

We will cover the whole pipeline - from collecting and cleaning data to deploying a trained model.

We will also discuss topics like ethics, bias and problem framing.
6 Sep
How I format my Jupyter notebooks in VS Code 👇

I like using VS Code when working with Jupyter notebooks. One pain point has always been automatic code formatting, but now I have a good solution.

You need:
βͺοΈ VS Code 1.60 (August 2021)
βͺοΈ YAPF formatter

Details π
VS Code 1.60

The latest VS Code version from August 2021 contains many improvements for the native display of Jupyter notebooks (which came in July 2021). They now support the command Format Cell with which you can automatically format your code.

👇
Keyboard shortcuts

You can quickly do it with a keyboard shortcut.

βͺοΈ Windows: Shift + Alt + F
βͺοΈ Mac: Shift + Option + F
βͺοΈ Linux: Ctrl + Shift + I

I had to remap Shift + Option + F on my Mac, because it seems to be a macOS shortcut for some strange character...