Did you ever want to learn how to read ROC curves? πŸ“ˆπŸ€”

This is something you will encounter a lot when analyzing the performance of machine learning models.

Let me help you understand them πŸ‘‡
What does ROC mean?

ROC stands for Receiver Operating Characteristic, but you can safely forget that. It's a military term from the 1940s that doesn't make much sense today.

Think about these curves as True Positive Rate vs. False Positive Rate plots.

Now, let's dive in πŸ‘‡
The ROC curve visualizes the trade-offs that a binary classifier makes between True Positives and False Positives.

This may sound too abstract, so let's look at an example. After that, I encourage you to come back and read the previous sentence again!

Now the example πŸ‘‡
We are building a self-driving car and want it to stop at red traffic lights 🚦

(You saw this coming, right 😁?)

We build a classifier to determine if the car should STOP (light is πŸ”΄ or 🟑) or PASS (light is 🟒). I'm using just 2 classes here to make the example simpler.

Now we ask the model - should the car stop at the 🚦?

There are 4 possible cases:
β–ͺ️ Light is πŸ”΄, model says STOP - True Positive
β–ͺ️ Light is πŸ”΄, model says PASS - False Negative
β–ͺ️ Light is 🟒, model says PASS - True Negative
β–ͺ️ Light is 🟒, model says STOP - False Positive

Given many examples from our validation/test set, we can compute the following metrics:

β–ͺ️ True Positive Rate (TPR) - how good is our model telling us correctly to stop.
β–ͺ️ False Positive Rate (FPR) - how often does our model tell us wrongly to stop

To get a feeling for it πŸ‘‡
A high TPR means that we stop at most πŸ”΄ lights.
A low TPR means that we often miss πŸ”΄ lights and pass.

A high FPR means that we often confuse 🟒 lights for πŸ”΄ and wrongly stop.
A low FPR means that we don't have many false stops.
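As a tiny sketch (with made-up counts for our traffic light example), the two rates are computed like this:

```python
# Hypothetical counts from evaluating a classifier on a test set
tp = 95   # light is red, model says STOP
fn = 5    # light is red, model says PASS
tn = 99   # light is green, model says PASS
fp = 1    # light is green, model says STOP

tpr = tp / (tp + fn)  # share of red lights we correctly stop at
fpr = fp / (fp + tn)  # share of green lights we wrongly stop at

print(f"TPR = {tpr:.0%}, FPR = {fpr:.0%}")  # TPR = 95%, FPR = 1%
```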

So, we want a high TPR and low FPR, right? πŸ‘‡
Evaluating a model on a validation/test dataset will give us exactly one TPR and one FPR value. Here's an example of a (pretty good) classifier:
▪️ TPR = 95%
▪️ FPR = 1%

Wait, but this is just one point on the TPR vs. FPR plot above. So, how do we get the curve now?

Machine learning classifiers usually don't just output a class - they give you the probability of each class being the correct one.

You can then define a threshold to make the decision. For example, stop at a light only if the classifier is 99% sure. Or 90%? 80%?

We can now try many different values of the threshold and evaluate on our validation/test dataset.

Each time we will get different TPR and FPR values, which we can put on the ROC plot. This is how we get our curve!
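This sweep can be sketched in a few lines of plain NumPy. The labels and scores below are made up for illustration (in practice, sklearn.metrics.roc_curve does this for you):

```python
import numpy as np

# Hypothetical data: 1 = red light (STOP), 0 = green light (PASS)
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0])
# Model's predicted probability that the light is red
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1])

for threshold in [0.9, 0.5, 0.1]:
    y_pred = y_score >= threshold  # STOP if the model is sure enough
    tpr = ((y_pred == 1) & (y_true == 1)).sum() / (y_true == 1).sum()
    fpr = ((y_pred == 1) & (y_true == 0)).sum() / (y_true == 0).sum()
    print(f"threshold={threshold}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```

Lowering the threshold moves the point up and to the right - both TPR and FPR increase.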

So let's look at different thresholds πŸ‘‡
Here is an example plot. Look at the 3 points I marked on it to see the fundamental trade-off between FPR and TPR.

1️⃣ TPR = 20%, FPR = 2% - setting a high threshold (we want to be really sure before stopping), we won't have many FPs, but we will also miss many real πŸ”΄.

2️⃣ TPR = 81%, FPR = 33% - decreasing the threshold improves the detection rate, but now we also have many false detections of πŸ”΄.

3️⃣ TPR = 99%, FPR = 90% - a model with a very low threshold will detect almost all πŸ”΄, but will wrongly classify most 🟒 as πŸ”΄ as well.

Changing the threshold will only change the trade-off, not make our model better.

However, this is still an important step when you are tuning the model for a specific application. For self-driving cars, it is **very** important not to run red lights - you need a high TPR!

We can, however, train another model using more data, more parameters, or better optimization. But how do we tell that it is really better and not just at a different trade-off point?

The new ROC curve should be closer to the upper left corner of the graph! πŸ‘‡
A better ROC curve means that we can choose thresholds that give us the same TPR for both classifiers, but the better one will have a lower FPR.

Alternatively, for the same FPR, the better classifier will have a higher TPR.

There is one problem, though - in reality, ROC curves are much noisier. At some points the curve of one model may be higher, at others lower. So which one is better in this case?

See this image from a real evaluation (credit to Wikipedia). Which one is best?

To get a single number summarizing the whole ROC curve, we can compute the Area Under the Curve (AUC).

This will again be a number between 0 and 1, and it expresses the probability that the model ranks a random positive example higher than a random negative one.
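That interpretation can be checked directly: compare every positive-negative pair and count how often the positive gets the higher score. A sketch with made-up scores (ties would count as 0.5; omitted here for brevity):

```python
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1])

pos = y_score[y_true == 1]  # scores of the red lights
neg = y_score[y_true == 0]  # scores of the green lights

# Fraction of positive-negative pairs where the positive scores higher
auc = (pos[:, None] > neg[None, :]).mean()
print(f"AUC = {auc:.3f}")  # AUC = 0.875
```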

Summary 🏁

To recap quickly:

β–ͺ️ ROC curves visualize the trade-off between TPR and FPR
β–ͺ️ The curve is created by varying an internal decision threshold
β–ͺ️ Models with a curve closer to the upper left corner are better
β–ͺ️ Use the Area under the Curve to get a single metric
I regularly write threads like this to help people get started with Machine Learning.

If you are interested in seeing more, follow me @haltakov.
This is a good point! So what you are saying is that in this kind of application you want to minimize the False Positives, so it is important to have a good TPR already at the beginning of the ROC curve?

Yes, great point! I plan to write more about the precision/recall curve since it is useful in other cases as well. For example, in object detection, where True Negatives don't really make sense.

β€’ β€’ β€’

Missing some Tweet in this thread? You can try to force a refresh

Keep Current with Vladimir Haltakov

Vladimir Haltakov Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!


Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @haltakov

14 Sep
Most people seem to use matplotlib as a Python plotting library, but is it really the best choice? πŸ€”

We are going to compare 5 free and popular libraries:
β–ͺ️ Matplotlib
β–ͺ️ Seaborn
β–ͺ️ Plotly
β–ͺ️ Bokeh
β–ͺ️ Altair

Which one is the best? Find out below πŸ‘‡
In a survey I did the other day, matplotlib had the most users by a large margin. This was quite surprising to me since I don't really like it...

But let's first look at each library πŸ‘‡
Matplotlib πŸ“ˆ

Matplotlib is one of the most popular libraries out there.

βœ… Supports many types of plots
βœ… Lots of customization options

❌ Plots look ugly
❌ Limited interactivity
❌ Not very intuitive to use
Read 11 tweets
9 Sep
I highly recommend listening to the latest eposide of @therobotbrains podcast with @ilyasut.


Here are some insights I found particulalry interesting πŸ‘‡
"Neural networks are parallel computers"

That is why they are so powerful - you can train a generic computer to solve your problem. This is also the driver behind Software 2.0 - neural network are becoming more and more capable of solving all kinds of problems.
"Neural networks perform well on tasks that humans can perform very quickly"

Humans don't think much when listening, observing or performing simple tasks.

This means that a neural network can be trained to be good at it as well: NLP, computer vision and reinforcement learning.
Read 4 tweets
9 Sep
My setup for recording videos for my machine learning course πŸŽ₯

A lot of people asked about my setup the other day, so here a short thread on that. It's nothing fancy, but it does a good job πŸ€·β€β™‚οΈ

Details πŸ‘‡
Hardware βš™οΈ

β–ͺ️ MacBook Pro (2015 model) - screen sharing and recording
β–ͺ️ iPhone XS - using the back camera for video recording
β–ͺ️ Omnidiretional external mic - connected to the iPhone
β–ͺ️ Highly professional camera rig - books mostly about cooking and travel πŸ˜„

Software πŸ’»

β–ͺ️ OBS Studio - recording of the screen and the camera image
β–ͺ️ EpocCam - use your iPhone as a web cam. You can connect your iPhone both over WiFi and cable.
β–ͺ️ Google Slides - for presentation
β–ͺ️ Jupyter notebooks and Google Colab - for experimenting with code

Read 5 tweets
7 Sep
Let's talk about a common problem in ML - imbalanced data βš–οΈ

Imagine we want to detect all pixels belonging to a traffic light from a self-driving car's camera. We train a model with 99.88% performance. Pretty cool, right?

Actually, this model is useless ❌

Let me explain πŸ‘‡
The problem is the data is severely imbalanced - the ratio between traffic light pixels and background pixels is 800:1.

If we don't take any measures, our model will learn to classify each pixel as background giving us 99.88% accuracy. But it's useless!

What can we do? πŸ‘‡
Let me tell you about 3 ways of dealing with imbalanced data:

β–ͺ️ Choose the right evaluation metric
β–ͺ️ Undersampling your dataset
β–ͺ️ Oversampling your dataset
β–ͺ️ Adapting the loss

Let's dive in πŸ‘‡
Read 13 tweets
7 Sep
Goals for my ML course

β–ͺ️ For beginners
β–ͺ️ End-to-end
β–ͺ️ Practice instead of theory
β–ͺ️ Intuition instead of definition
β–ͺ️ Minimum math
β–ͺ️ Real-world dataset
β–ͺ️ Flexible example applications
β–ͺ️ Built with community feedback

Let's go through these points πŸ‘‡
For beginners

Only Python knowledge will be required to do the course. No previous machine learning experience needed.

We will cover the whole pipeline - from collecting and cleaning data to deploying a trained model.

We will also discuss some topics like ethics and bias and problem framing.
Read 9 tweets
6 Sep
How I format my Jupyter notebooks in VS Code πŸ“’

I like using VS Code when working with Jupyter notebooks. One pain point has always been automatic code formatting, but now I have a good solution.

You need:
β–ͺ️ VS Code 1.60 (August 2021)
β–ͺ️ YAPF formatter

Details πŸ‘‡
VS Code 1.60

The latest VS Code version from August 2021 contains many improvements for the native display of Jupyter notebooks (which came in July 2021). They now support the command Format Cell with which you can automatically format your code.

Keyboard shortcuts

You can quickly do it with a keyboard shortcut.

β–ͺ️ Windows: Shift + Alt + F
β–ͺ️ Mac: Shift + Option + F
β–ͺ️ Linux: Ctrl + Shift + I

I had to remap Shift + Option + F on my Mac, because it seems to be a macOS shortcut for some strange character...
Read 8 tweets

Did Thread Reader help you today?

Support us! We are indie developers!

This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Too expensive? Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal Become our Patreon

Thank you for your support!

Follow Us on Twitter!