How to evaluate your ML model? πŸ“

Your accuracy is 97%, so this is pretty good, right? Right? No! ❌

Just looking at the model accuracy is not enough. Let me tell you about some other metrics:
β–ͺ️ Recall
β–ͺ️ Precision
β–ͺ️ F1 score
β–ͺ️ Confusion matrix

Let's go πŸ‘‡

#RepostFriday
We'll use this example throughout the thread - classifying traffic light colors (e.g. for a self-driving car).

Yellow traffic lights appear much less often than red or green ones, so our dataset may be heavily imbalanced - for example, only about 3% 🟑 samples.

This means our model could reach 97% accuracy simply by ignoring all 🟑 lights. Not good!

πŸ‘‡
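To make this concrete, here's a minimal Python sketch. The exact class split (97%/3%) is assumed for illustration - it shows how a model that never predicts 🟑 still scores high accuracy:

```python
# Hypothetical imbalanced dataset: 97% red/green lights, only 3% yellow.
labels = ["red"] * 500 + ["green"] * 470 + ["yellow"] * 30

# A lazy model that never predicts yellow: red for red, green for everything else.
predictions = ["red" if lab == "red" else "green" for lab in labels]

correct = sum(p == t for p, t in zip(predictions, labels))
accuracy = correct / len(labels)
print(f"accuracy = {accuracy:.0%}")  # 97%, yet every single yellow light is missed
```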
Now let's assume we've trained our model and get the following predictions.

Do you think this model is good? How can we quantitatively evaluate its performance? How should it be improved?

Let's first discuss the possible error types πŸ‘‡
Let's evaluate how well our model classifies 🟑 lights. There are 3 cases we care about (the fourth, True Negative - correctly classifying another color as not 🟑 - is less interesting here):

βœ… True Positive - our model correctly classifies a 🟑
❌ False Negative - our model classifies a 🟑 as another color
❌ False Positive - our model classifies another color as 🟑

πŸ‘‡
Accuracy

We can now just take the percentage of correctly classified samples - this is called the accuracy. In this case, it is 75%.

What is the problem? πŸ€”

We have no idea in which cases the model has problems and needs to be improved.

We need to look at other metrics πŸ‘‡
Recall

The recall is computed separately for each color as the percentage of samples of that color that were classified correctly (per-class accuracy).

We now see that we have a big problem with 🟑 and we should also improve 🟒.
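As a sketch, per-class recall can be computed directly from the labels. The toy labels below are invented for illustration (chosen to roughly match the thread's figures - they are not the actual dataset), and the `recall` helper is ours:

```python
# Illustrative toy labels: 12 red, 6 green, 6 yellow traffic lights.
y_true = ["red"] * 12 + ["green"] * 6 + ["yellow"] * 6
y_pred = (["red"] * 12                        # all red lights classified correctly
          + ["green"] * 4 + ["red", "yellow"]  # 2 green lights misclassified
          + ["red"] * 4 + ["yellow"] * 2)      # 4 yellow lights misclassified

def recall(cls, y_true, y_pred):
    # fraction of samples that truly are `cls` and were predicted as `cls`
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    actual = sum(t == cls for t in y_true)
    return tp / actual

for cls in ["red", "green", "yellow"]:
    print(f"recall({cls}) = {recall(cls, y_true, y_pred):.0%}")
# red looks perfect (100%), green needs work (67%), yellow is bad (33%)
```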

On the other hand πŸ”΄ looks perfect. But it isn't... πŸ‘‡
Precision

The precision tells us how many of the model's predictions for a certain class were correct. The precision for πŸ”΄ is low because the model wrongly classifies 🟒 and 🟑 lights as πŸ”΄.

The 🟒 looks much better, while 🟑 is bad again.
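Precision only changes the denominator: instead of dividing by the number of true samples of a class, we divide by the number of predictions of that class. A sketch with the same illustrative toy labels (not the thread's real dataset):

```python
# Same illustrative toy labels as in the recall example.
y_true = ["red"] * 12 + ["green"] * 6 + ["yellow"] * 6
y_pred = (["red"] * 12
          + ["green"] * 4 + ["red", "yellow"]
          + ["red"] * 4 + ["yellow"] * 2)

def precision(cls, y_true, y_pred):
    # fraction of predictions of `cls` that really are `cls`
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    predicted = sum(p == cls for p in y_pred)
    return tp / predicted

for cls in ["red", "green", "yellow"]:
    print(f"precision({cls}) = {precision(cls, y_true, y_pred):.0%}")
# red drops to ~71%: green and yellow lights get wrongly classified as red
```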

Now, let's combine both πŸ‘‡
F1 Score

The problem with Recall is that it ignores False Positives (FPs), while Precision ignores False Negatives (FNs). The F1 score considers both: it is the harmonic mean of Precision and Recall.

We can see that 🟑 is indeed quite bad, but we also see that both πŸ”΄ and 🟒 need to be improved.
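A sketch of the F1 computation, again on the same invented toy labels (the `f1` helper is illustrative, not a library function):

```python
# Same illustrative toy labels as before.
y_true = ["red"] * 12 + ["green"] * 6 + ["yellow"] * 6
y_pred = (["red"] * 12
          + ["green"] * 4 + ["red", "yellow"]
          + ["red"] * 4 + ["yellow"] * 2)

def f1(cls, y_true, y_pred):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    prec = tp / sum(p == cls for p in y_pred)
    rec = tp / sum(t == cls for t in y_true)
    # harmonic mean: high only when BOTH precision and recall are high
    return 2 * prec * rec / (prec + rec)

for cls in ["red", "green", "yellow"]:
    print(f"F1({cls}) = {f1(cls, y_true, y_pred):.2f}")
```

Note how red's perfect 100% recall no longer hides its weak precision: its F1 lands below 1.0, while yellow scores worst of all.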

πŸ‘‡
Confusion Matrix

We can also get an overview of both FPs and FNs by looking at the Confusion Matrix. It breaks down, for each class, how its samples were classified.

For example, looking at the 🟑 row, we see that 67% of the 🟑 lights were classified as πŸ”΄, 33% as 🟑, and none as 🟒.

πŸ‘‡
If you look closely, you'll see that the diagonal of the matrix is exactly the Recall.

However, in the Confusion Matrix we can also see where we have FPs - for example, the 🟑 and 🟒 lights that are classified as πŸ”΄.

That's why I like to use the Confusion Matrix.
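A minimal sketch of building such a row-normalized confusion matrix with the standard library, using the same illustrative toy labels as above:

```python
from collections import Counter

# Same illustrative toy labels (not the thread's actual dataset).
y_true = ["red"] * 12 + ["green"] * 6 + ["yellow"] * 6
y_pred = (["red"] * 12
          + ["green"] * 4 + ["red", "yellow"]
          + ["red"] * 4 + ["yellow"] * 2)
classes = ["red", "green", "yellow"]

counts = Counter(zip(y_true, y_pred))  # counts[(true, predicted)]

# matrix[true][pred] = share of `true` samples that were predicted as `pred`
matrix = {}
for t in classes:
    total = sum(counts[(t, p)] for p in classes)
    matrix[t] = {p: counts[(t, p)] / total for p in classes}
    print(t, {p: f"{v:.0%}" for p, v in matrix[t].items()})
# the diagonal entries matrix[c][c] are exactly the per-class recall values,
# while off-diagonal entries show where the FPs come from
```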

πŸ‘‡
So, let's summarize:

β–ͺ️ Accuracy - doesn't tell us where the problems are
β–ͺ️ Recall - ignores FPs
β–ͺ️ Precision - ignores FNs
β–ͺ️ F1 score - combines Recall and Precision
β–ͺ️ Confusion matrix - overview of all error types

You need to choose the best metrics for your application!
I'm reposting some of my best threads from over the year every Friday. On the other days, I regularly write threads like this to help people get started with Machine Learning and web3.

If you are interested in seeing more, follow me @haltakov.
