Discover and read the best of Twitter Threads about #RepostFriday


Dealing with imbalanced datasets 🐁 ⚖️ 🐘

Real-world datasets are often imbalanced - some of the classes appear much more often than others.

The problem? Your ML model will likely learn to only predict the dominant classes.

What can you do about it? 🤔

Thread 🧡 #RepostFriday
Example 🚦

We will be dealing with an ML model to detect traffic lights for a self-driving car 🤖🚗

Traffic lights are small, so most of the image will consist of regions that are not traffic lights.

Furthermore, yellow lights 🟡 are much rarer than green 🟢 or red 🔴.
The problem ⚡

Imagine we train a model to classify the color of the traffic light. A typical distribution will be:
🔴 - 56%
🟡 - 3%
🟢 - 41%

So, your model can get to 97% accuracy just by learning to distinguish red from green.
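That 97% figure checks out with quick arithmetic. A minimal sketch (the distribution is from above; the per-class behavior of the "ignore yellow" classifier is an assumption for illustration):

```python
# Class distribution from the thread: red 56%, yellow 3%, green 41%.
distribution = {"red": 0.56, "yellow": 0.03, "green": 0.41}

# Hypothetical classifier that distinguishes red from green perfectly
# but mislabels every yellow light.
per_class_accuracy = {"red": 1.0, "yellow": 0.0, "green": 1.0}

# Overall accuracy is the class-frequency-weighted average.
overall_accuracy = sum(
    distribution[c] * per_class_accuracy[c] for c in distribution
)
print(f"{overall_accuracy:.0%}")  # 97%
```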

How can we deal with this?
Is this formula difficult? 🤔

This is the formula for Gradient Descent with Momentum as presented in Wikipedia.

It may look intimidating at first, but I promise you that by the end of this thread it will be easy to understand!

Thread 👇

#RepostFriday
The Basis ◻️

Let's break it down! The basis is this simple formula describing an iterative optimization method.

We have some weights (parameters) and we iteratively update them in some way to reach a goal.

Iterative methods are used when we cannot compute the solution directly.
Gradient Descent Update 📉

We define a loss function describing how good our model is. We want to find the weights that minimize the loss (make the model better).

We compute the gradient of the loss and update the weights by a small amount (learning rate) against the gradient.
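The update rule described above can be sketched in a few lines. The toy quadratic loss and the hyperparameter values here are my own choices for illustration, not from the thread:

```python
# Gradient descent with momentum on a toy loss L(w) = (w - 3)^2,
# whose minimum is at w = 3.
def grad(w):
    return 2.0 * (w - 3.0)  # dL/dw

lr = 0.1        # learning rate (step size)
momentum = 0.9  # fraction of the previous step to keep
w, velocity = 0.0, 0.0

for _ in range(500):
    # Keep some of the previous step, then move against the gradient.
    velocity = momentum * velocity - lr * grad(w)
    w += velocity

print(round(w, 6))  # 3.0
```

With momentum set to 0 this reduces to plain gradient descent; the extra term is what smooths out oscillations and speeds up progress along consistent directions.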
Machine Learning in the Real World 🧠 🤖

ML for real-world applications is much more than designing fancy networks and fine-tuning parameters.

In fact, you will spend most of your time curating a good dataset.

Let's go through the process together 👇

#RepostFriday
Collect Data 💽

We need to represent the real world as accurately as possible. If some situations are underrepresented, we are introducing Sampling Bias.

Sampling Bias is nasty because we'll have high test accuracy, but our model will perform badly when deployed.

👇
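One cheap sanity check for sampling bias (not from the thread) is to compare the distribution of conditions in the collected data against what you expect at deployment. The condition labels and proportions below are hypothetical:

```python
from collections import Counter

# Hypothetical condition tags: what we collected vs. what the car
# actually encounters on the road (illustrative proportions).
collected = ["day"] * 90 + ["night"] * 10
deployed = ["day"] * 60 + ["night"] * 40

def proportions(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

train_p = proportions(collected)
deploy_p = proportions(deployed)

# A large gap for any condition is a red flag for sampling bias.
gaps = {k: abs(train_p[k] - deploy_p.get(k, 0.0)) for k in train_p}
print(gaps)  # night is off by 30 percentage points
```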
Traffic Lights 🚦

Let's build a model to recognize traffic lights for a self-driving car. We need to collect data for different:

▪️ Lighting conditions
▪️ Weather conditions
▪️ Distances and viewpoints
▪️ Strange variants

And if we sample only 🚦 we won't detect 🚥 🤷‍♂️

👇
Machine Learning Formulas Explained! 👨‍🏫

This is the formula for the Binary Cross Entropy Loss. It is commonly used for binary classification problems.

It may look super confusing, but I promise you that it is actually quite simple!

Let's go step by step 👇

#RepostFriday
The Cross-Entropy Loss function is one of the most used losses for classification problems. It tells us how well a machine learning model classifies a dataset compared to the ground truth labels.

The Binary Cross-Entropy Loss is a special case when we have only 2 classes.

👇
The most important part to understand is this one - this is the core of the whole formula!

Here, Y denotes the ground-truth label, while Ŷ is the predicted probability of the classifier.

Let's look at a simple example before we talk about the logarithm... 👇
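The core term translates directly into code. A minimal sketch (the helper name `bce` is mine) showing the per-sample loss for a confident-correct vs. a confident-wrong prediction:

```python
import math

# Binary Cross-Entropy for one sample:
# loss = -(y * log(y_hat) + (1 - y) * log(1 - y_hat))
# y is the ground-truth label (0 or 1), y_hat the predicted probability.
def bce(y, y_hat):
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# Confident and correct -> small loss.
print(round(bce(1, 0.9), 3))  # 0.105
# Confident and wrong -> large loss.
print(round(bce(1, 0.1), 3))  # 2.303
```

Note that only one of the two terms is active for any sample: y = 1 keeps the first, y = 0 keeps the second.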
There are two problems with ROC curves

❌ They don't work for imbalanced datasets
❌ They don't work for object detection problems

So what do we do to evaluate our machine learning models properly in these cases?

We use a Precision-Recall curve.

Thread 👇

#RepostFriday
Last week I wrote another detailed thread on ROC curves. I recommend that you read it first if you don't know what they are.

Then go on 👇
❌ Problem 1 - Imbalanced Data

ROC curves measure the True Positive Rate (also known as Recall or Sensitivity). So, if you have an imbalanced dataset, the ROC curve will not tell you if your classifier completely ignores the underrepresented class.

Let's take an example confusion matrix 👇
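To make the failure mode concrete, here is a hypothetical imbalanced confusion matrix (my numbers, not the thread's). The ROC point looks close to harmless, while precision exposes how badly the positives are handled:

```python
# Hypothetical counts: 1000 negatives, 10 positives; the classifier
# predicts almost everything as negative.
tp, fn = 1, 9      # positives: 1 caught, 9 missed
fp, tn = 10, 990   # negatives: 10 false alarms, 990 correct

tpr = tp / (tp + fn)       # true positive rate (recall), ROC y-axis
fpr = fp / (fp + tn)       # false positive rate, ROC x-axis
precision = tp / (tp + fp) # fraction of positive predictions that are right

print(f"TPR={tpr:.2f} FPR={fpr:.2f} precision={precision:.3f}")
# TPR=0.10 FPR=0.01 precision=0.091
```

The ROC point (0.01, 0.10) sits near the origin and hides the problem; a precision of ~9% on a Precision-Recall curve makes it obvious.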
Did you ever want to learn how to read ROC curves? 📈🤔

This is something you will encounter a lot when analyzing the performance of machine learning models.

Let me help you understand them 👇

#RepostFriday
What does ROC mean?

ROC stands for Receiver Operating Characteristic but just forget about it. This is a military term from the 1940s and doesn't make much sense today.

Think about these curves as True Positive Rate vs. False Positive Rate plots.

Now, let's dive in 👇
The ROC curve visualizes the trade-offs that a binary classifier makes between True Positives and False Positives.

This may sound too abstract for you so let's look at an example. After that, I encourage you to come back and read the previous sentence again!

Now the example 👇
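The trade-off described above is exactly what a ROC curve traces as the decision threshold sweeps from high to low. A minimal sketch with made-up labels and scores (not the thread's example):

```python
# Illustrative ground-truth labels and classifier scores.
labels = [0, 0, 1, 1, 0, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9]

def roc_point(threshold):
    """(FPR, TPR) when predicting positive for score >= threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    p = sum(labels)
    n = len(labels) - p
    return fp / n, tp / p

# Sweeping the threshold traces the curve from (1, 1) down to (0, 0).
for t in (0.0, 0.5, 1.0):
    print(t, roc_point(t))
```

At threshold 0 everything is predicted positive (point (1, 1)); at a threshold above the highest score nothing is (point (0, 0)); the interesting trade-offs lie between.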
How to evaluate your ML model? 📏

Your accuracy is 97%, so this is pretty good, right? Right? No! ❌

Just looking at the model accuracy is not enough. Let me tell you about some other metrics:
▪️ Recall
▪️ Precision
▪️ F1 score
▪️ Confusion matrix

Let's go 👇

#RepostFriday
We'll use this example in the whole thread - classifying traffic light colors (e.g. for a self-driving car).

Yellow traffic lights appear much less often, so our dataset may look like this.

This means our model could reach 97% accuracy by ignoring all 🟡 lights. Not good!

👇
Let's assume now that we trained our model and we get the following predictions.

Do you think this model is good? How can we quantitatively evaluate its performance? How should it be improved?

Let's first discuss the possible error types 👇
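The per-class metrics listed above fall out of counting the error types (false positives and false negatives) per class. A minimal sketch for the traffic-light example; the labels and predictions here are illustrative only:

```python
# Hypothetical ground truth and model predictions.
truth = ["red"] * 5 + ["yellow"] * 3 + ["green"] * 4
preds = ["red"] * 5 + ["red", "yellow", "green"] + ["green"] * 4

def metrics(cls):
    """Per-class precision, recall, and F1 from raw predictions."""
    tp = sum(1 for t, p in zip(truth, preds) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(truth, preds) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(truth, preds) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# The rare yellow class: every yellow prediction was right (precision 1.0),
# but two of three yellow lights were missed (recall 1/3).
print(metrics("yellow"))
```

Plain accuracy (10 of 12 correct here) hides exactly this kind of per-class failure.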