ROC stands for Receiver Operating Characteristic, but you can just forget about that. It's a military term from the 1940s and doesn't tell you much today.
Think about these curves as True Positive Rate vs. False Positive Rate plots.
Now, let's dive in 👇
The ROC curve visualizes the trade-offs that a binary classifier makes between True Positives and False Positives.
This may sound too abstract, so let's look at an example. After that, I encourage you to come back and read the previous sentence again!
Now the example 👇
We are building a self-driving car and want it to stop at red traffic lights 🚦
(You saw this coming, right 😄?)
We build a classifier to determine if the car should STOP (light is 🔴 or 🟡) or PASS (light is 🟢). I'm using just 2 classes here to make the example simpler.
👇
Now we ask the model - should the car stop at the 🚦?
There are 4 possible cases:
▪️ Light is 🔴, model says STOP - True Positive
▪️ Light is 🔴, model says PASS - False Negative
▪️ Light is 🟢, model says PASS - True Negative
▪️ Light is 🟢, model says STOP - False Positive
👇
Given many examples from our validation/test set, we can compute the following metrics:
▪️ True Positive Rate (TPR) - how often the model correctly tells us to stop when we actually should (🔴/🟡).
▪️ False Positive Rate (FPR) - how often the model wrongly tells us to stop at a 🟢 light.
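If you prefer code over words, here is a minimal sketch of both metrics (the arrays and variable names are made up for illustration; 1 = STOP, 0 = PASS):

```python
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # ground truth: 1 = should stop (red/yellow), 0 = green
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])  # model decision: 1 = STOP, 0 = PASS

tp = np.sum((y_true == 1) & (y_pred == 1))   # stopped at a red light
fn = np.sum((y_true == 1) & (y_pred == 0))   # missed a red light
tn = np.sum((y_true == 0) & (y_pred == 0))   # passed a green light
fp = np.sum((y_true == 0) & (y_pred == 1))   # stopped at a green light

tpr = tp / (tp + fn)  # True Positive Rate = TP / (TP + FN)
fpr = fp / (fp + tn)  # False Positive Rate = FP / (FP + TN)
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")   # TPR = 0.75, FPR = 0.25 for this toy data
```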
To get a feeling for it 👇
A high TPR means that we stop at most 🔴 lights.
A low TPR means that we often miss 🔴 lights and pass.
A high FPR means that we often confuse 🟢 lights for 🔴 and wrongly stop.
A low FPR means that we don't have many false stops.
So, we want a high TPR and low FPR, right? 👇
Evaluating a model on a validation/test dataset gives us exactly one TPR and one FPR value. Here is an example of a (not so good) classifier:
▪️ TPR = 95%
▪️ FPR = 1%
Wait, but this is just one point on the TPR vs. FPR plot above. So, how do we get the curve now?
👇
Machine learning classifiers usually don't simply output a class, but they tell you the probability of each class being the correct one.
You can then define a threshold for the decision. For example, stop at a light only if the classifier is 99% sure. Or 90%? Or 80%?
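Here is a tiny sketch of that decision step, assuming the model outputs a probability for the STOP class (the array is made up):

```python
import numpy as np

p_stop = np.array([0.97, 0.65, 0.10, 0.85, 0.55, 0.03])  # model's probability that it should STOP

threshold = 0.9                 # only stop if the model is at least 90% sure
decision = p_stop >= threshold  # True = STOP, False = PASS
print(decision)                 # [ True False False False False False]
```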
👇
We can now try many different values of the threshold and evaluate the model on our validation/test dataset.
Each threshold gives us a different pair of TPR and FPR values that we can put on the ROC plot. This is how we get the curve!
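A rough sketch of that sweep with toy data (in practice you would let scikit-learn's roc_curve do this for you):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0])                    # 1 = should stop, 0 = should pass
p_stop = np.array([0.97, 0.65, 0.10, 0.85, 0.55, 0.03])  # model's probability of STOP

points = []
for t in np.linspace(0.0, 1.0, 11):                      # try thresholds 0.0, 0.1, ..., 1.0
    y_pred = (p_stop >= t).astype(int)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    points.append((fp / (fp + tn), tp / (tp + fn)))       # one (FPR, TPR) point per threshold

# Plotting the points in `points` gives the ROC curve.
```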
So let's look at different thresholds 👇
Here is an example plot. Look at the 3 points I marked on it to see the fundamental trade-off between FPR and TPR.
1️⃣ TPR = 20%, FPR = 2% - with a high threshold (we want to be really sure before stopping) we won't have many false positives, but we will also miss many real 🔴.
👇
2️⃣ TPR = 81%, FPR = 33% - decreasing the threshold improves the detection rate, but now we also have many false detections of 🔴.
3️⃣ TPR = 99%, FPR = 90% - a model with a very low threshold will detect almost all 🔴, but will wrongly classify most 🟢 as 🔴 as well.
👇
Changing the threshold will only change the trade-off, not make our model better.
However, this is still an important step when you are tuning the model for a specific application. For self-driving cars, it is **very** important not to run red lights - you need a high TPR!
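For illustration, here is a hypothetical way to pick such an operating point from a threshold sweep (the three arrays are made-up sweep results, loosely based on the numbers above):

```python
import numpy as np

thresholds = np.array([0.90, 0.70, 0.50, 0.30, 0.10])
tpr        = np.array([0.20, 0.55, 0.81, 0.95, 0.99])   # detection rate at each threshold
fpr        = np.array([0.02, 0.10, 0.33, 0.60, 0.90])   # false stop rate at each threshold

ok   = tpr >= 0.99                            # operating points that meet the safety requirement
best = np.argmin(np.where(ok, fpr, np.inf))   # among those, take the lowest FPR
print(f"threshold = {thresholds[best]}, TPR = {tpr[best]}, FPR = {fpr[best]}")
```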
👇
We can, however, train another model using more data, more parameters, or better optimization. But how do we tell that it is really better and not just sitting at a different trade-off point?
The new ROC curve should be closer to the upper left corner of the graph! 👇
A better ROC curve means that we can choose thresholds that give the same TPR for both classifiers, but the better one will have a lower FPR.
Alternatively, for the same FPR, the better classifier will have a higher TPR.
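To make the comparison concrete, here is a sketch that overlays the ROC curves of two models on the same axes (the labels and score arrays are made-up placeholders; scikit-learn's roc_curve and matplotlib do the heavy lifting):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve  # computes (FPR, TPR) pairs over all thresholds

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p_old  = np.array([0.60, 0.50, 0.40, 0.80, 0.30, 0.70, 0.90, 0.20])  # old model's STOP scores
p_new  = np.array([0.90, 0.30, 0.70, 0.80, 0.20, 0.40, 0.95, 0.10])  # new model's STOP scores

for name, scores in [("old model", p_old), ("new model", p_new)]:
    fpr, tpr, _ = roc_curve(y_true, scores)
    plt.plot(fpr, tpr, label=name)

plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()
```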
👇
There is one problem, though - in reality, ROC curves are much noisier. At some points the curve of one model may be higher, at others lower. So which one is better in that case?
See this image from a real evaluation (credit to Wikipedia). Which one is best?
👇
To get a single number summarizing the whole ROC curve, we can compute the Area Under the Curve (AUC).
This is again a number between 0 and 1, and it expresses the probability that the model ranks a random positive example higher than a random negative one.
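Here is a minimal sketch of that ranking interpretation with toy data (scikit-learn's roc_auc_score computes the same quantity directly from the scores):

```python
import numpy as np

def auc_by_ranking(y_true, scores):
    pos = scores[y_true == 1]  # scores of positive examples
    neg = scores[y_true == 0]  # scores of negative examples
    # compare every positive score with every negative score; ties count as 0.5
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

print(auc_by_ranking(np.array([1, 0, 1, 1, 0, 0]),
                     np.array([0.97, 0.65, 0.10, 0.85, 0.55, 0.03])))  # ~0.78
```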
👇
Summary 👇
To recap quickly:
▪️ ROC curves visualize the trade-off between TPR and FPR
▪️ The curve is created by varying an internal decision threshold
▪️ Models with a curve closer to the upper left corner are better
▪️ Use the Area Under the Curve (AUC) to get a single metric
Every Friday I repost one of my old threads so more people get the chance to see them. During the rest of the week, I post new content on machine learning and web3.
If you are interested in seeing more, follow me @haltakov.