Your accuracy is 97%, so this is pretty good, right? Right? No! ❌
Just looking at the model accuracy is not enough. Let me tell you about some other metrics:
▪️ Recall
▪️ Precision
▪️ F1 score
▪️ Confusion matrix
We'll use the same example throughout the thread - classifying traffic light colors (e.g. for a self-driving car).
Yellow traffic lights appear much less often, so our dataset may look like this.
This means our model could reach 97% accuracy simply by ignoring all 🟡 lights. Not good!
👇
Let's assume now that we trained our model and we get the following predictions.
Do you think this model is good? How can we quantitatively evaluate its performance? How should it be improved?
Let's first discuss the possible error types 👇
Let's evaluate how well our model classifies 🟡 lights. There are 3 possible cases:
✅ True Positive - our model correctly classifies the 🟡
❌ False Negative - our model classifies 🟡 as another color
❌ False Positive - our model classifies another color as 🟡
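To make the three cases concrete, here's a minimal Python sketch. The labels y_true and y_pred below are made up for illustration (not the thread's actual data), but they roughly mirror the numbers discussed later:

```python
# Hypothetical ground truth and predictions (illustration only).
y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = ["red"] * 7 + ["green"] * 4 + ["red", "yellow"] + ["red", "red", "yellow"]

# Count the three cases for the yellow class.
tp = sum(t == "yellow" and p == "yellow" for t, p in zip(y_true, y_pred))  # correct 🟡
fn = sum(t == "yellow" and p != "yellow" for t, p in zip(y_true, y_pred))  # 🟡 missed
fp = sum(t != "yellow" and p == "yellow" for t, p in zip(y_true, y_pred))  # wrongly predicted 🟡

print(tp, fn, fp)  # 1 2 1
```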
👇
Accuracy
We can now just take the percentage of correctly classified samples - this is called the accuracy. In this case, it is 75%.
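A quick sketch of how that number is computed, reusing the same hypothetical labels as above and scikit-learn's accuracy_score:

```python
from sklearn.metrics import accuracy_score

# Same hypothetical labels as above.
y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = ["red"] * 7 + ["green"] * 4 + ["red", "yellow"] + ["red", "red", "yellow"]

print(accuracy_score(y_true, y_pred))  # 0.75 -> 75% of all samples classified correctly
```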
What is the problem? 🤔
We have no idea in which cases the model has problems and needs to be improved.
We need to look at other metrics 👇
Recall
Recall is computed separately for each color as the percentage of samples of that color that are classified correctly (per-class accuracy).
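In code, this is a single scikit-learn call; a sketch with the same hypothetical labels as above:

```python
from sklearn.metrics import recall_score

y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = ["red"] * 7 + ["green"] * 4 + ["red", "yellow"] + ["red", "red", "yellow"]

# Recall per class = TP / (TP + FN), i.e. how many lights of each color we actually find.
print(recall_score(y_true, y_pred, labels=["red", "green", "yellow"], average=None).round(2))
# [1.   0.67 0.33] -> 🔴 perfect, 🟢 needs work, 🟡 is a big problem
```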
We now see that we have a big problem with 🟡 and we should also improve 🟢.
On the other hand, 🔴 looks perfect. But it isn't... 👇
Precision
The precision tells us how many of the model's predictions for a certain class were correct. The precision for 🔴 is low because the model wrongly classifies 🟢 and 🟡 lights as 🔴.
The 🟢 looks much better, while 🟡 is bad again.
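Again a one-call sketch with the same hypothetical labels as before:

```python
from sklearn.metrics import precision_score

y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = ["red"] * 7 + ["green"] * 4 + ["red", "yellow"] + ["red", "red", "yellow"]

# Precision per class = TP / (TP + FP), i.e. how many predictions of each color were right.
print(precision_score(y_true, y_pred, labels=["red", "green", "yellow"], average=None).round(2))
# [0.7 1.  0.5] -> 🔴 is low because 🟢 and 🟡 lights get classified as 🔴
```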
Now, let's combine both 👇
F1 Score
The problem with Recall is that it ignores False Positives (FPs), while Precision ignores False Negatives (FNs). The F1 score is another metric that considers both - it is the harmonic mean of Precision and Recall.
We can see that 🟡 is indeed quite bad, but we also see that both 🔴 and 🟢 need to be improved.
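A sketch of the per-class F1 score, with the same hypothetical labels:

```python
from sklearn.metrics import f1_score

y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = ["red"] * 7 + ["green"] * 4 + ["red", "yellow"] + ["red", "red", "yellow"]

# F1 = 2 * precision * recall / (precision + recall), computed per class.
print(f1_score(y_true, y_pred, labels=["red", "green", "yellow"], average=None).round(2))
# [0.82 0.8  0.4 ] -> 🟡 is clearly the worst, but 🔴 and 🟢 aren't perfect either
```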
👇
Confusion Matrix
We can also get an overview of both FPs and FNs by looking at the Confusion Matrix. It breaks down, for each class, how its samples were classified.
For example, looking at the 🟡 row, we see that 67% of the 🟡 lights were classified as 🔴, 33% as 🟡 and none as 🟢.
👇
If you look closely, you'll see that the diagonal of the matrix is exactly the Recall.
However, in the Confusion Matrix we can also see where we have FPs - for example, the 🟡 and 🟢 lights that are classified as 🔴.
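Here's a sketch that produces the row-normalized Confusion Matrix with scikit-learn, using the same hypothetical labels as before:

```python
from sklearn.metrics import confusion_matrix

y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = ["red"] * 7 + ["green"] * 4 + ["red", "yellow"] + ["red", "red", "yellow"]

# Rows = true class, columns = predicted class, normalized per row,
# so the diagonal is exactly the per-class recall.
cm = confusion_matrix(y_true, y_pred, labels=["red", "green", "yellow"], normalize="true")
print(cm.round(2))
# [[1.   0.   0.  ]
#  [0.17 0.67 0.17]
#  [0.67 0.   0.33]]
```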
That's why I like to use the Confusion Matrix.
👇
So, let's summarize:
▪️ Accuracy - doesn't tell us where the problems are
▪️ Recall - ignores FPs
▪️ Precision - ignores FNs
▪️ F1 score - combines Recall and Precision
▪️ Confusion matrix - overview of all error types
You need to choose the best metrics for your application!
Every Friday, I'm reposting some of my best threads from the past year. On the other days, I regularly write threads like this to help people get started with Machine Learning and web3.
If you are interested in seeing more, follow me @haltakov.
• • •
First officially approved Level 3 self-driving system in Germany.
This is significant because it is the first time an autonomous system that takes the *driving responsibility* from the driver is approved for mass production!
The main difference between Level 2 and Level 3 systems is that with Level 3, the self-driving system becomes legally responsible for the car's actions while in autonomous mode!
All driver assist systems on the market now (including Tesla) are Level 2 systems.
While Waymo and Cruise have Level 4 systems running as a beta in some cities, it is a different challenge to put this tech into consumer vehicles - cars that don't have a huge sensor rack costing tens of thousands of dollars on the roof.
Let's talk about a common problem in ML - imbalanced data ⚖️
Imagine we want to detect all pixels belonging to a traffic light in a self-driving car's camera image. We train a model that reaches 99.88% accuracy. Pretty cool, right?
Actually, this model is useless ❌
Let me explain 👇
The problem is that the data is severely imbalanced - the ratio between background pixels and traffic light pixels is 800:1.
If we don't take any measures, our model will learn to classify every pixel as background, giving us 99.88% accuracy. But it's useless!
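A minimal sketch of why this happens; the pixel counts below are made up to match the roughly 800:1 ratio:

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical pixel labels: 0 = background, 1 = traffic light, roughly 800:1.
y_true = np.array([0] * 800_000 + [1] * 1_000)
y_pred = np.zeros_like(y_true)  # a "model" that predicts background for every pixel

print(accuracy_score(y_true, y_pred))  # ~0.9988 -> looks impressive
print(recall_score(y_true, y_pred))    # 0.0    -> it never finds a single traffic light
```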
What can we do? 👇
Let me tell you about 4 ways of dealing with imbalanced data (a small sketch of the last two follows the list):
▪️ Choose the right evaluation metric
▪️ Undersampling your dataset
▪️ Oversampling your dataset
▪️ Adapting the loss
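Here's a minimal Python sketch of oversampling and of adapting the loss via class weights. The label counts and the use of scikit-learn's compute_class_weight are illustrative assumptions, not the thread's actual setup:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)

# Hypothetical imbalanced labels: 0 = background, 1 = traffic light.
y = np.array([0] * 8_000 + [1] * 10)

# Oversampling: repeat minority-class samples until the classes are roughly balanced.
minority_idx = np.where(y == 1)[0]
balanced_idx = np.concatenate([np.where(y == 0)[0],
                               rng.choice(minority_idx, size=8_000, replace=True)])

# Adapting the loss: weight each class inversely to its frequency,
# so mistakes on the rare class cost much more.
weights = compute_class_weight(class_weight="balanced", classes=np.array([0, 1]), y=y)
print(weights)  # [~0.5, ~400.5] -> the rare class gets a much larger weight
```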
The creator and lead dev of the popular NFT exchange Hic Et Nunc on the Tezos blockchain decided to shut down the project. He pulled the plug on the whole website and the official Twitter account.
Yet, the damage is not fatal 👇
How come?
✅ NFTs are fine - they are stored on the blockchain
✅ NFT metadata is fine - stored on IPFS
✅ Exchange backend code is fine - it is in an immutable smart contract
✅ The website is back online - it is open-source, so a clone was quickly deployed by the community
👇
Of course, this is a dramatic event, and the quick recovery was only possible because of the immense effort of the community. But it was possible, and it took basically one day.
Imagine the damage the creator and lead dev could do if they wanted to destroy a Web 2.0 company!