Your accuracy is 97%, so this is pretty good, right? Right? No! ❌
Just looking at the model accuracy is not enough. Let me tell you about some other metrics:
▪️ Recall
▪️ Precision
▪️ F1 score
▪️ Confusion matrix
We'll use this example throughout the thread - classifying traffic light colors (e.g. for a self-driving car).
Yellow traffic lights appear much less often, so our dataset may look like this.
This means our model could reach 97% accuracy by ignoring all 🟡 lights. Not good!
👇
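To see why plain accuracy can mislead here, here's a quick sketch in Python. The class counts are made up for illustration (the thread's dataset image isn't included):

```python
# Hypothetical class counts for an imbalanced traffic light dataset
counts = {"red": 485, "green": 485, "yellow": 30}  # only 3% yellow

# A degenerate model that always gets red and green right,
# but never predicts yellow at all
correct = counts["red"] + counts["green"]
total = sum(counts.values())

print(f"Accuracy: {correct / total:.0%}")  # Accuracy: 97%
```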
Let's now assume that we trained our model and got the following predictions.
Do you think this model is good? How can we quantitatively evaluate its performance? How should it be improved?
Let's first discuss the possible error types 👇
Let's evaluate how well our model classifies 🟡 lights. There are 3 possible cases:
✅ True Positive - our model correctly classifies the 🟡
❌ False Negative - our model classifies the 🟡 as another color
❌ False Positive - our model classifies another color as 🟡
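Here's how you could count these cases in Python. The labels and predictions below are invented to match the percentages quoted later in the thread (the original prediction image isn't included):

```python
# Toy ground truth and predictions (assumed for illustration)
y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = (["red"] * 7
          + ["green", "green", "green", "green", "red", "yellow"]
          + ["yellow", "red", "red"])

# Count the three cases for the yellow class
tp = sum(t == "yellow" and p == "yellow" for t, p in zip(y_true, y_pred))
fn = sum(t == "yellow" and p != "yellow" for t, p in zip(y_true, y_pred))
fp = sum(t != "yellow" and p == "yellow" for t, p in zip(y_true, y_pred))

print(f"TP: {tp}, FN: {fn}, FP: {fp}")  # TP: 1, FN: 2, FP: 1
```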
👇
Accuracy
We can now just take the percentage of correctly classified samples - this is called the accuracy. In this case, it is 75%.
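Computed on the same toy predictions as above (again, invented numbers chosen to match the thread's figures):

```python
y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = (["red"] * 7
          + ["green", "green", "green", "green", "red", "yellow"]
          + ["yellow", "red", "red"])

# Fraction of samples where the prediction matches the ground truth
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(f"Accuracy: {accuracy:.0%}")  # Accuracy: 75%
```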
What is the problem? 🤔
We have no idea in which cases the model has problems and needs to be improved.
We need to look at other metrics 👇
Recall
The recall is computed separately for each color as the percentage of samples of that color classified correctly (per-class accuracy).
We now see that we have a big problem with 🟡 and that we should also improve 🟢.
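A minimal sketch of per-class recall on the same toy data:

```python
y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = (["red"] * 7
          + ["green", "green", "green", "green", "red", "yellow"]
          + ["yellow", "red", "red"])

for c in ["red", "green", "yellow"]:
    tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
    fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
    print(f"{c}: recall = {tp / (tp + fn):.0%}")

# red: recall = 100%
# green: recall = 67%
# yellow: recall = 33%
```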
On the other hand, 🔴 looks perfect. But it isn't... 👇
Precision
The precision tells us how many of the model's predictions for a certain class were correct. The precision for 🔴 is low because the model wrongly classifies 🟢 and 🟡 lights as 🔴.
The 🟢 looks much better, while 🟡 is bad again.
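The same sketch, but now counting False Positives instead of False Negatives:

```python
y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = (["red"] * 7
          + ["green", "green", "green", "green", "red", "yellow"]
          + ["yellow", "red", "red"])

for c in ["red", "green", "yellow"]:
    tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
    fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
    print(f"{c}: precision = {tp / (tp + fp):.0%}")

# red: precision = 70%
# green: precision = 100%
# yellow: precision = 50%
```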
Now, let's combine both 👇
F1 Score
The problem with Recall is that it ignores False Positives (FPs), while Precision ignores False Negatives (FNs). The F1 score is another metric that considers both.
We can see that 🟡 is indeed quite bad, but we also see that both 🔴 and 🟢 need to be improved.
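Concretely, the F1 score is the harmonic mean of Precision and Recall: F1 = 2·P·R / (P + R). With scikit-learn, on the same toy data:

```python
from sklearn.metrics import f1_score

y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = (["red"] * 7
          + ["green", "green", "green", "green", "red", "yellow"]
          + ["yellow", "red", "red"])

labels = ["red", "green", "yellow"]
scores = f1_score(y_true, y_pred, labels=labels, average=None)
for label, score in zip(labels, scores):
    print(f"{label}: F1 = {score:.2f}")

# red: F1 = 0.82
# green: F1 = 0.80
# yellow: F1 = 0.40
```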
👇
Confusion Matrix
We can also get an overview of both FPs and FNs by looking at the Confusion Matrix. It breaks down, for each class, how its samples were classified.
For example, looking at the 🟡 row, we see that 67% of the 🟡 lights were classified as 🔴, 33% as 🟡 and none as 🟢.
👇
If you look closely, you'll see that the diagonal of the matrix is exactly the Recall.
However, in the Confusion Matrix we can also see where we have FPs - for example, the 🟡 and 🟢 lights that are classified as 🔴.
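With scikit-learn you can row-normalize the matrix so each row shows how that class's samples were distributed across the predictions (same toy data as before):

```python
from sklearn.metrics import confusion_matrix

y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = (["red"] * 7
          + ["green", "green", "green", "green", "red", "yellow"]
          + ["yellow", "red", "red"])

labels = ["red", "green", "yellow"]
cm = confusion_matrix(y_true, y_pred, labels=labels, normalize="true")
print(cm.round(2))
# [[1.   0.   0.  ]   <- red row: the diagonal is exactly the recall
#  [0.17 0.67 0.17]   <- green row
#  [0.67 0.   0.33]]  <- yellow row
```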
That's why I like to use the Confusion Matrix.
👇
So, let's summarize:
▪️ Accuracy - doesn't tell us where the problems are
▪️ Recall - ignores FPs
▪️ Precision - ignores FNs
▪️ F1 score - combines Recall and Precision
▪️ Confusion matrix - an overview of all error types
You need to choose the best metrics for your application!
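As a practical starting point, scikit-learn's classification_report prints precision, recall, and F1 for every class in one go (shown here on the same toy data as above):

```python
from sklearn.metrics import classification_report

y_true = ["red"] * 7 + ["green"] * 6 + ["yellow"] * 3
y_pred = (["red"] * 7
          + ["green", "green", "green", "green", "red", "yellow"]
          + ["yellow", "red", "red"])

print(classification_report(y_true, y_pred))
```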
I'm reposting some of my best threads of the year every Friday. On the other days, I regularly write threads like this to help people get started with Machine Learning and web3.
If you are interested in seeing more, follow me @haltakov.
How can I prove to you that I know a secret, without revealing any information about the secret itself?
This is called a zero-knowledge proof and it is a super interesting area of cryptography! But how does it work?
Thread 🧵
Let's start with an example
Peggie and Victor travel between cities A and B. There are two paths - a long path and a short path. The problem is that there is a gate on the short path for which you need a password.
Peggie knows the password, but Victor doesn't.
👇
Victor wants to buy the password from Peggie so he can use the short path.
But what if Victor pays Peggie and it turns out she lied and doesn't actually know the password? How can Peggie prove to Victor that she knows the password, without actually revealing it?
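The core trick behind such interactive proofs is repeated random challenges: a cheater survives each round only by luck, so the odds of fooling the verifier shrink exponentially. Here's a toy simulation of that idea (my own illustrative sketch, not the exact protocol this thread continues with):

```python
import random

def run_protocol(knows_password: bool, rounds: int = 20) -> bool:
    """Each round, Victor issues a random challenge with two possible
    answers. Peggie can always answer correctly if she knows the
    password; otherwise she has to guess (50% chance per round)."""
    for _ in range(rounds):
        challenge = random.choice([0, 1])
        answer = challenge if knows_password else random.choice([0, 1])
        if answer != challenge:
            return False  # caught cheating
    return True

print(run_protocol(knows_password=True))   # always True
print(run_protocol(knows_password=False))  # True with probability 2**-20
```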
Rescue Toadz looks like a regular NFT collection at first - you can mint a toad and you get an NFT in your wallet.
100% of the mint fee is directly sent to @Unchainfund - an organization that provides humanitarian aid to Ukraine and that has already raised $9M!
👇
The process is completely trustless and automatic! All the logic is coded in the smart contract, which cannot be changed and which everybody can inspect.
You trust the code, not us! We have no way to steal the funds even if we wanted to (we don't 😀).
Principal Component Analysis is a commonly used method for dimensionality reduction.
It's a good example of how fairly complex math can have an intuitive explanation and be easy to use in practice.
Let's start with the application of PCA 👇
Dimensionality Reduction
This is one of the common uses of PCA in machine learning.
Imagine you want to predict house prices. You get a large table of many houses and different features for them like size, number of rooms, location, age, etc.
Some features seem correlated 👇
Correlated features
For example, the size of the house is correlated with the number of rooms. Bigger houses tend to have more rooms.
Another example could be the age and the year the house was built - they give us pretty much the same information.
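Here's a quick sketch of what PCA does with such correlated features, using made-up house data (size in m² and room count, where rooms roughly follow size):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic, correlated features: bigger houses tend to have more rooms
size = rng.normal(150, 40, 500)              # house size in m^2
rooms = size / 30 + rng.normal(0, 0.5, 500)  # room count driven by size
X = np.column_stack([size, rooms])

# Standardize, then find the principal components
X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)

print(pca.explained_variance_ratio_)
# The first component captures ~97% of the variance, so we can keep
# one dimension instead of two with almost no information loss
```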
For regression problems you can use one of several loss functions:
▪️ MSE
▪️ MAE
▪️ Huber loss
But which one is best? When should you prefer one over the others?
Thread 🧵
Let's first quickly recap what each of the loss functions does. After that, we can compare them and see the differences based on some examples.
👇
Mean Square Error (MSE)
For every sample, MSE takes the difference between the ground truth and the model's prediction and computes its square. Then, the average over all samples is computed.
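Here are minimal NumPy versions of all three losses for reference (MAE and Huber are covered in the rest of the original thread; the sample numbers below are made up):

```python
import numpy as np

def mse(y_true, y_pred):
    # Square each error, then average
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def mae(y_true, y_pred):
    # Take the absolute value of each error, then average
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def huber(y_true, y_pred, delta=1.0):
    # Quadratic for small errors, linear for large ones
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return np.mean(np.where(err <= delta,
                            0.5 * err ** 2,
                            delta * (err - 0.5 * delta)))

y_true, y_pred = [3.0, 5.0], [2.5, 7.0]
print(mse(y_true, y_pred))    # (0.25 + 4.0) / 2 = 2.125
print(mae(y_true, y_pred))    # (0.5 + 2.0) / 2 = 1.25
print(huber(y_true, y_pred))  # (0.125 + 1.5) / 2 = 0.8125
```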