There are two problems with ROC curves

❌ They can be misleading on imbalanced datasets
❌ They don't work for object detection problems

So what do we do to evaluate our machine learning models properly in these cases?

We use a Precision-Recall curve.

Another one of my threads 👇
Last week I wrote a detailed thread on ROC curves. I recommend reading it first if you don't know what they are.



Then go on 👇
❌ Problem 1 - Imbalanced Data

ROC curves plot the True Positive Rate (also known as Recall or Sensitivity) against the False Positive Rate. On an imbalanced dataset, the huge number of True Negatives keeps the FPR deceptively low, so the ROC curve won't tell you if your classifier is drowning the underrepresented class in False Positives.

More details:

👇
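To see the effect in numbers, here is a small hypothetical example (the counts are made up for illustration):

```python
# Imbalanced example: 10 positives vs. 10,000 negatives.
# The classifier finds 9 of the 10 positives but also raises 90 false alarms.
tp, fn = 9, 1
fp, tn = 90, 9_910

tpr = tp / (tp + fn)        # True Positive Rate (Recall)
fpr = fp / (fp + tn)        # False Positive Rate
precision = tp / (tp + fp)  # Precision

print(f"TPR = {tpr:.3f}")              # 0.900 -> looks great on a ROC curve
print(f"FPR = {fpr:.3f}")              # 0.009 -> looks great too
print(f"Precision = {precision:.3f}")  # 0.091 -> only 9% of the alarms are real!
```

The ROC curve sees an excellent classifier, while Precision exposes the false-alarm problem.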
❌ Problem 2 - Object Detection Problems

ROC curves measure the False Positive Rate (FPR):

FPR = FP / (FP + TN)

In object detection problems, True Negatives don't make sense. There are practically infinitely many ways to place a box over the background, so you can't count them.

Let's dive deeper 👇
I'll use my favorite example again - traffic light detection (for a self-driving car 🧠🚗).

We want to detect the presence and location (using a bounding box) of traffic lights in images. This is a typical Object Detection problem - an important class of CV problems.

👇
The typical way to solve this is to design a neural network that will check many locations in the image and classify them as being a traffic light or not (background).

For more details, check out this thread:

Now on the actual problem 👇
There is no (practical) way to define True Negatives

These would be all locations in the image that don't contain a traffic light and where our model correctly predicted no traffic lights as well.

Imagine now going through all possible bounding box sizes for every pixel...

👇
Common evaluation metrics for this are Precision and Recall.

▪️ Precision - the percentage of the model's detections that are indeed real traffic lights: TP / (TP + FP)
▪️ Recall - the percentage of real traffic lights that the model detected: TP / (TP + FN)

Confusing? Let's see some examples 👇
High Precision, Low Recall

In this example, the Precision is 100% (all detected lights are correct), but the Recall is only 50% (2 out of 4 lights missed).

👇
Low Precision, High Recall

In this example, the Precision is 67% (4 out of 6 detected lights are correct), while the Recall is 100% (all 4 real lights are detected).

👇
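The two examples above can be checked with a few lines of Python (the counts are taken straight from the examples):

```python
def precision(tp, fp):
    """Fraction of detections that are real traffic lights."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of real traffic lights that were detected."""
    return tp / (tp + fn)

# Example 1: 2 detections, both correct, but 2 of the 4 real lights missed
print(precision(tp=2, fp=0), recall(tp=2, fn=2))  # 1.0 0.5

# Example 2: 6 detections, 4 of them correct, all 4 real lights found
print(round(precision(tp=4, fp=2), 2), recall(tp=4, fn=0))  # 0.67 1.0
```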
Precision and Recall give us a way to measure both types of errors (FPs and FNs), without requiring us to count TNs.

By the way, that's why they work better for imbalanced data! Object Detection is imbalanced - there is much more background than objects.

Now the curve 👇
For every candidate the network classifies, it gives us the probability of it being a traffic light. We need to choose a threshold on the probability to consider it a detected light.

It could be 50% if we don't want to miss many lights or 99% if we want to be really sure.

👇
For every value of the threshold, we will get different Precision and Recall when evaluating our test dataset.

▪️ High thresholds - high Precision, low Recall
▪️ Low thresholds - low Precision, high Recall

Plotting these values gives us the Precision-Recall curve.
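The sweep can be sketched in a few lines of Python - the scores and labels here are made-up toy data (1 = real traffic light), not output from a real model:

```python
# Sweep the decision threshold and collect (Recall, Precision) points.
scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30]  # model probabilities
labels = [1, 1, 0, 1, 0, 1]                    # ground truth

curve = []
for t in sorted(set(scores), reverse=True):
    tp = sum(s >= t and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= t and y == 0 for s, y in zip(scores, labels))
    fn = sum(s < t and y == 1 for s, y in zip(scores, labels))
    curve.append((tp / (tp + fn), tp / (tp + fp)))  # (Recall, Precision)

print(curve[0])   # highest threshold: (0.25, 1.0) -> high Precision, low Recall
print(curve[-1])  # lowest threshold: Recall 1.0, Precision ~0.67
```

Plotting the collected points gives exactly the Precision-Recall curve.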
The Precision-Recall Curve visualizes the trade-off between making False Positives and False Negatives.

Similar to ROC curves, we can also use them to compare different models. If one model's curve is closer to the upper right corner than another's, it is the better model.

👇
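To reduce the curve to a single number for comparing models, the area under the PR curve (Average Precision) is commonly used - it is the basis of the mAP metric in object detection. A minimal sketch, assuming the points are sorted by increasing Recall:

```python
def average_precision(points):
    """Step-wise area under a Precision-Recall curve.

    `points` is a list of (recall, precision) pairs sorted by increasing
    recall, as produced by sweeping the threshold from high to low.
    """
    ap, prev_recall = 0.0, 0.0
    for rec, prec in points:
        ap += (rec - prev_recall) * prec
        prev_recall = rec
    return ap

# A perfect detector keeps Precision at 1.0 for every Recall level -> AP = 1.0
print(average_precision([(0.5, 1.0), (1.0, 1.0)]))  # 1.0
# A weaker detector loses Precision as Recall grows -> AP < 1.0
print(average_precision([(0.5, 1.0), (1.0, 0.5)]))  # 0.75
```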
Summary 🏁

To recap:
▪️ ROC curves can be misleading for imbalanced data
▪️ ROC curves don't work for object detection (you can't count TNs)
▪️ Precision-Recall curves are computed in a similar way to ROC curves
▪️ Precision-Recall curves visualize the trade-off between FNs and FPs
I regularly write threads like this to help people get started with Machine Learning.

If you are interested in seeing more, follow me @haltakov.
