Vladimir Haltakov
Feb 25 · 20 tweets · 7 min read
There are two problems with ROC curves

❌ They don't work for imbalanced datasets
❌ They don't work for object detection problems

So what do we do to evaluate our machine learning models properly in these cases?

We use a Precision-Recall curve.

Thread 👇

#RepostFriday
Last week I wrote a detailed thread on ROC curves. If you don't know what they are, I recommend reading it first.

Then read on 👇
❌ Problem 1 - Imbalanced Data

ROC curves are built from the True Positive Rate (also known as Recall or Sensitivity) and the False Positive Rate. On an imbalanced dataset, the FPR denominator (FP + TN) is dominated by the huge number of True Negatives, so the curve can look good even when the classifier performs badly on the underrepresented class.

Let's take an example confusion matrix 👇
We are dealing with a severely imbalanced dataset: 1,000 samples of class 0 and only 10 of class 1 (concretely: TN = 950, FP = 50, FN = 1, TP = 9). This is not uncommon in practice.

The classifier is also not particularly good, because it produces a lot of false positives.

Let's compute some metrics 👇
ROC

To draw a ROC curve we need to compute the True Positive Rate (TPR) and the False Positive Rate (FPR). In this case:

TPR = TP / (TP + FN) = 9 / 10 = 90%
FPR = FP / (FP + TN) = 50 / 1000 = 5%

Remember, a good classifier has a high TPR and a low FPR (upper left corner of the ROC plot). This looks quite OK.
Or is it? 👇
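If you want to verify these numbers yourself, here is a minimal Python sketch. The individual counts (TP = 9, FN = 1, FP = 50, TN = 950) are reconstructed from the class sizes and rates above, not copied from the original image.

```python
# Counts reconstructed from the example: 10 positives with TPR = 90%
# gives TP = 9, FN = 1; 1000 negatives with FPR = 5% gives FP = 50, TN = 950.
TP, FN, FP, TN = 9, 1, 50, 950

tpr = FP and TP / (TP + FN)  # True Positive Rate (a.k.a. Recall, Sensitivity)
tpr = TP / (TP + FN)         # True Positive Rate (a.k.a. Recall, Sensitivity)
fpr = FP / (FP + TN)         # False Positive Rate

print(f"TPR = {tpr:.0%}")  # TPR = 90%
print(f"FPR = {fpr:.0%}")  # FPR = 5%
```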
Precision-Recall

Let's compute the metrics for the Precision-Recall curve:

Precision = TP / (TP + FP) = 9 / 59 ≈ 15%
Recall = TP / (TP + FN) = 90% (same as the TPR)

Now, this is a different story. We want both Precision and Recall to be high (upper right corner of the curve), so this classifier clearly isn't good!

👇
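Continuing the sketch from above with the same reconstructed counts:

```python
TP, FN, FP, TN = 9, 1, 50, 950  # same reconstructed counts as before

precision = TP / (TP + FP)  # what fraction of predicted positives are real
recall = TP / (TP + FN)     # what fraction of real positives were found

print(f"Precision = {precision:.1%}")  # Precision = 15.3%
print(f"Recall = {recall:.0%}")        # Recall = 90%
```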
A good way to understand this is to look at which cells of the confusion matrix each metric actually uses.

Precision and Recall never touch the True Negatives at all. So it doesn't matter whether the negatives are as numerous as the positives or vastly more numerous!
❌ Problem 2 - Object Detection Problems

ROC curves need the False Positive Rate:

FPR = FP / (FP + TN)

In object detection problems, True Negatives don't make sense: there are practically infinitely many ways to correctly predict the background class.

Let's dive deeper 👇
I'll use my favorite example again - traffic light detection (for a self-driving car 🧠🚗).

We want to detect the presence and location (as a bounding box) of traffic lights in images. This is a typical Object Detection problem - an important class of computer vision problems.

👇
The typical way to solve this is to design a neural network that checks many locations in the image and classifies each one as traffic light or background.

For more details, check out this thread:

Now on to the actual problem 👇
There is no (practical) way to define True Negatives

These would be all locations in the image that contain no traffic light and where our model also correctly predicted none.

Imagine enumerating every possible bounding box position and size for every pixel...

👇
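To get a feel for the scale, here is a quick back-of-envelope count of all axis-aligned boxes in a single image. The 1920×1080 resolution is an assumed example, not from the original thread.

```python
# A box is defined by picking two distinct x boundaries and two distinct
# y boundaries, i.e. C(W+1, 2) * C(H+1, 2) possibilities.
W, H = 1920, 1080  # assumed Full HD image

boxes = (W * (W + 1) // 2) * (H * (H + 1) // 2)
print(f"{boxes:.2e}")  # ~1.08e+12 candidate boxes per image
```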
Common evaluation metrics for this are Precision and Recall.

▪️ Precision - the percentage of traffic lights the model detected that are indeed real traffic lights
▪️ Recall - the percentage of real traffic lights that the model detected

Confusing? Let's see some examples 👇
High Precision, Low Recall

In this example, the Precision is 100% (every detected light is a real one), but the Recall is only 50% (2 of the 4 real lights are missed).

👇
Low Precision, High Recall

In this example, the Precision is 67% (4 out of 6 detected lights are correct), while the Recall is 100% (all 4 real lights are detected).

👇
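Both examples can be reproduced with a tiny sketch that computes the metrics from detection counts (the counts below come from the two examples above):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from detection counts: tp = correct detections,
    fp = wrong detections, fn = missed real objects."""
    return tp / (tp + fp), tp / (tp + fn)

# High Precision, Low Recall: 2 correct detections, 0 wrong, 2 lights missed
print(precision_recall(tp=2, fp=0, fn=2))  # (1.0, 0.5)

# Low Precision, High Recall: 4 correct, 2 wrong, 0 missed
print(precision_recall(tp=4, fp=2, fn=0))  # (0.666..., 1.0)
```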
Precision and Recall give us a way to measure both types of errors (FPs and FNs), without requiring us to count TNs.

By the way, that's why they work better for imbalanced data! Object Detection is imbalanced - there is much more background than objects.

Now the curve 👇
For every candidate it classifies, the network gives us the probability of it being a traffic light. We need to choose a threshold above which we count the candidate as a detected light.

It could be 50% if we don't want to miss many lights, or 99% if we want to be really sure.

👇
For every value of the threshold, we will get different Precision and Recall when evaluating our test dataset.

▪️ High thresholds - high Precision, low Recall
▪️ Low thresholds - low Precision, high Recall

Plotting these values gives us the Precision-Recall curve.
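In practice, you rarely sweep the threshold by hand. Here is a minimal sketch using scikit-learn's precision_recall_curve (the labels and scores below are made-up toy values):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

# Toy data: 1 = real traffic light, 0 = background candidate
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_score = [0.95, 0.90, 0.60, 0.40, 0.70, 0.30, 0.20, 0.15, 0.10, 0.05]

# One (precision, recall) point per distinct score threshold
precision, recall, thresholds = precision_recall_curve(y_true, y_score)

plt.plot(recall, precision, marker="o")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.show()
```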
The Precision-Recall Curve visualizes the trade-off between making False Positives and False Negatives.

Similar to ROC curves, we can use the curve to compare models: if one model's curve is closer to the upper right corner than another's, it is the better model.

👇
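If you need a single number for the comparison, a common summary of the curve is Average Precision, which approximates the area under it. A sketch with assumed toy scores for two models:

```python
from sklearn.metrics import average_precision_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores_a = [0.95, 0.90, 0.60, 0.40, 0.70, 0.30, 0.20, 0.15, 0.10, 0.05]
scores_b = [0.80, 0.60, 0.55, 0.30, 0.70, 0.50, 0.40, 0.20, 0.10, 0.05]

# The model whose curve hugs the upper right corner scores higher
print(average_precision_score(y_true, scores_a))
print(average_precision_score(y_true, scores_b))
```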
Summary 🏁

To recap:
▪️ ROC curves don't work for imbalanced data
▪️ ROC curves don't work for object detection (can't count TNs)
▪️ Precision-Recall curves are computed in a similar way to ROC curves
▪️ Precision-Recall curves visualize the trade-off between FNs and FPs
Every Friday I repost one of my old threads so more people get the chance to see them. During the rest of the week, I post new content on machine learning and web3.

If you are interested in seeing more, follow me @haltakov
