Have you ever wanted to learn how to read ROC curves? 📈🤔

This is something you will encounter a lot when analyzing the performance of machine learning models.

Let me help you understand them 👇

#RepostFriday
What does ROC mean?

ROC stands for Receiver Operating Characteristic, but you can just forget about that. It is a military term from the 1940s and doesn't make much sense today.

Think about these curves as True Positive Rate vs. False Positive Rate plots.

Now, let's dive in 👇
The ROC curve visualizes the trade-offs that a binary classifier makes between True Positives and False Positives.

This may sound too abstract, so let's look at an example. After that, I encourage you to come back and read the previous sentence again!

Now the example 👇
We are building a self-driving car and want it to stop at red traffic lights 🚦

(You saw this coming, right 😏?)

We build a classifier to determine if the car should STOP (light is 🔴 or 🟡) or PASS (light is 🟢). I'm using just 2 classes here to make the example simpler.

👇
Now we ask the model - should the car stop at the 🚦?

There are 4 possible cases:
▪️ Light is 🔴, model says STOP - True Positive
▪️ Light is 🔴, model says PASS - False Negative
▪️ Light is 🟢, model says PASS - True Negative
▪️ Light is 🟢, model says STOP - False Positive

👇
Given many examples from our validation/test set, we can compute the following metrics:

โ–ช๏ธ True Positive Rate (TPR) - how good is our model telling us correctly to stop.
โ–ช๏ธ False Positive Rate (FPR) - how often does our model tell us wrongly to stop

To get a feeling for it 👇
A high TPR means that we stop at most 🔴 lights.
A low TPR means that we often miss 🔴 lights and pass.

A high FPR means that we often confuse 🟢 lights for 🔴 and wrongly stop.
A low FPR means that we don't have many false stops.

So, we want a high TPR and low FPR, right? 👇
Evaluating a model on a validation/test dataset will give us exactly one TPR and FPR value. Here is an example of a (not so good) classifier:
▪️ TPR = 95%
▪️ FPR = 1%

Wait, but this is just one point on the TPR vs. FPR plot above. So, how do we get the curve now?

👇
Machine learning classifiers usually don't simply output a class; they give you the probability of each class being the correct one.

You can then define a decision threshold. For example, stop at a light only if the classifier is 99% sure it is red. Or 90%? 80%?

👇
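A 99% rule could look like this toy sketch (the function name and probabilities are just illustrative):

```python
def decide(prob_red, threshold=0.99):
    # Stop only if the classifier is at least `threshold` sure the light is red.
    return "STOP" if prob_red >= threshold else "PASS"

print(decide(0.995))        # STOP
print(decide(0.90))         # PASS - not sure enough for the default 99% threshold
print(decide(0.90, 0.80))   # STOP - a more permissive threshold
```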
We can now try many different values of the threshold and evaluate on our validation/test dataset.

Each threshold gives us different values for TPR and FPR, and we can put each pair as a point on the ROC plot. This is how we get our curve!

So let's look at different thresholds 👇
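The threshold sweep can be sketched like this (toy labels and probabilities, purely illustrative; in practice a library routine such as scikit-learn's roc_curve does this for you):

```python
def roc_points(y_true, probs, thresholds):
    """One (FPR, TPR) point per threshold - these points form the ROC curve."""
    points = []
    for thr in thresholds:
        preds = [1 if p >= thr else 0 for p in probs]
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, preds))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, preds))
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, preds))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, preds))
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return points

y_true = [1, 1, 1, 0, 0, 0]                # 1 = red light, 0 = green light
probs  = [0.95, 0.8, 0.4, 0.6, 0.3, 0.05]  # model's probability of "red"
for fpr, tpr in roc_points(y_true, probs, [0.9, 0.5, 0.1]):
    print(round(fpr, 2), round(tpr, 2))
# 0.0 0.33    (high threshold: few false stops, many missed reds)
# 0.33 0.67
# 0.67 1.0    (low threshold: catch every red, many false stops)
```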
Here is an example plot. Look at the 3 points I marked on it to see the fundamental trade-off between FPR and TPR.

1๏ธโƒฃ TPR = 20%, FPR = 2% - setting a high threshold (we want to be really sure before stopping), we won't have many FPs, but we will also miss many real ๐Ÿ”ด.

👇
2️⃣ TPR = 81%, FPR = 33% - decreasing the threshold improves the detection rate, but now we also have many false detections of 🔴.

3️⃣ TPR = 99%, FPR = 90% - a model with a very low threshold will detect almost all 🔴 lights, but will wrongly classify most 🟢 lights as 🔴 as well.

👇
Changing the threshold only changes the trade-off; it doesn't make our model better.

However, this is still an important step when you are tuning the model for a specific application. For self-driving cars, it is **very** important not to run red lights - you need a high TPR!

👇
We can, however, train another model with more data, more parameters, or better optimization. But how do we tell that it is really better and not just at a different trade-off point?

The new ROC curve should be closer to the upper left corner of the graph! 👇
A better ROC curve means that at the same TPR, the better classifier will have a lower FPR.

Alternatively, at the same FPR, the better classifier will have a higher TPR.

๐Ÿ‘‡
There is one problem, though - in reality, ROC curves are much noisier. At some points the curve of one model may be higher, at others lower. So which one is better in this case?

See this image from a real evaluation (credit to Wikipedia). Which one is best?

👇
To get a single number summarizing the whole ROC curve, we can compute the Area Under the Curve (AUC).

This is again a number between 0 and 1, and it expresses the probability that the model ranks a random positive example higher than a random negative example.

👇
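That rank interpretation gives a direct (if quadratic-time) way to compute the AUC without even drawing the curve. A sketch with made-up scores:

```python
def auc(y_true, probs):
    # Probability that a random positive example is ranked above
    # a random negative one (ties count as half).
    pos = [p for t, p in zip(y_true, probs) if t == 1]
    neg = [p for t, p in zip(y_true, probs) if t == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

print(round(auc([1, 1, 1, 0, 0, 0], [0.95, 0.8, 0.4, 0.6, 0.3, 0.05]), 3))  # 0.889
```

In practice you would use a library implementation such as sklearn.metrics.roc_auc_score, which computes the same quantity efficiently.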
Summary 🏁

To recap quickly:

โ–ช๏ธ ROC curves visualize the trade-off between TPR and FPR
โ–ช๏ธ The curve is created by varying an internal decision threshold
โ–ช๏ธ Models with a curve closer to the upper left corner are better
โ–ช๏ธ Use the Area under the Curve to get a single metric
Every Friday I repost one of my old threads so more people get the chance to see them. During the rest of the week, I post new content on machine learning and web3.

If you are interested in seeing more, follow me @haltakov.
