Machine Learning Formulas Explained πŸ‘¨β€πŸ«

For regression problems you can use one of several loss functions:
β–ͺ️ MSE
β–ͺ️ MAE
β–ͺ️ Huber loss

But which one is best? When should you prefer one instead of the other?

Thread 🧡
Let's first quickly recap what each of the loss functions does. After that, we can compare them and see the differences based on some examples.

πŸ‘‡
Mean Square Error (MSE)

For every sample, MSE takes the difference between the ground truth and the model's prediction and computes its square. Then, the average over all samples is computed.
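
Here is a minimal sketch of MSE in plain NumPy (the function name and example numbers are mine, just for illustration):

```python
import numpy as np

def mse(y_true, y_pred):
    # Squared difference per sample, then the average over all samples
    return np.mean((y_true - y_pred) ** 2)

# Example: two samples, the second prediction is far off
print(mse(np.array([3.0, 5.0]), np.array([2.5, 8.0])))  # 4.625
```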

For details, check out this thread:


πŸ‘‡
Mean Absolute Error (MAE)

MAE is very similar to MSE, but instead of taking the square of the difference between the ground truth and the model's prediction, it takes the absolute value.
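
A matching sketch for MAE (same caveats: my own names and toy numbers):

```python
import numpy as np

def mae(y_true, y_pred):
    # Absolute difference per sample, then the average over all samples
    return np.mean(np.abs(y_true - y_pred))

# Same example as above: the large error is not amplified by squaring
print(mae(np.array([3.0, 5.0]), np.array([2.5, 8.0])))  # 1.75
```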

πŸ‘‡
Huber Loss

The Huber loss is a combination of both ideas. It behaves like a square function (MSE) for small differences and like a linear function (MAE) for large differences.
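
As a rough sketch of the standard Huber definition with threshold δ (names and numbers are mine):

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    # Quadratic (MSE-like) for |error| <= delta, linear (MAE-like) beyond that
    error = y_true - y_pred
    small = np.abs(error) <= delta
    squared = 0.5 * error ** 2
    linear = delta * (np.abs(error) - 0.5 * delta)
    return np.mean(np.where(small, squared, linear))

# Same example as above
print(huber(np.array([3.0, 5.0]), np.array([2.5, 8.0]), delta=1.0))  # 1.3125
```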

I've written in detail about the Huber loss here:


πŸ‘‡
Alright, these loss functions are very similar in some aspects:

β–ͺ️ They are always positive or 0
β–ͺ️ The loss function increases if our model makes worse predictions
β–ͺ️ They can be used for training machine learning models for regression

But there are important differences πŸ‘‡
Handling outliers

Big differences between the ground truth and the model's prediction are amplified much more by MSE than by the other two losses.

Imagine predicting a house price of 100K while the real price is 50K. Measuring prices in thousands, the squared error is 50² = 2500, while the absolute error is only 50; the Huber loss also grows only linearly with the error.
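
A quick check of those numbers (prices in thousands; the Huber value depends on the δ you pick, and the δ = 20 below is an arbitrary choice of mine):

```python
# Prices measured in thousands: prediction = 100, ground truth = 50
error = 100 - 50

squared_error = error ** 2    # 2500 -> this is what MSE sees
absolute_error = abs(error)   # 50   -> this is what MAE sees
# Huber for |error| > delta is delta * (|error| - delta / 2),
# e.g. with delta = 20 it gives 20 * (50 - 10) = 800 -
# much smaller than 2500 and growing only linearly with the error.
print(squared_error, absolute_error)
```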

πŸ‘‡
Therefore, a model trained with MSE as a loss function will focus on reducing the largest errors, because they dominate the loss function. Samples with a smaller error will practically be ignored.

This can be problematic if we don't care about big outliers.
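
A toy illustration of that dominance (numbers are made up for the example): with one badly predicted sample out of five, almost the entire MSE comes from that single sample, while its share of the MAE is much smaller.

```python
import numpy as np

y_true = np.array([10.0, 12.0, 11.0, 13.0, 50.0])
y_pred = np.array([ 9.0, 13.0, 11.5, 12.0, 10.0])  # the last sample is a big miss

squared_errors = (y_true - y_pred) ** 2
absolute_errors = np.abs(y_true - y_pred)

print(squared_errors[-1] / squared_errors.sum())    # ~0.998 -> the outlier dominates MSE
print(absolute_errors[-1] / absolute_errors.sum())  # ~0.92  -> less extreme for MAE
```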

πŸ‘‡
Behavior around 0

MSE and the Huber loss are smooth around 0. This means that their derivative gets gradually smaller as the error approaches 0, which allows the optimization to converge nicely.

πŸ‘‡
MAE, on the other hand, has a gradient of constant magnitude that abruptly flips sign at 0. This means that the optimization process may start oscillating when the error becomes small.
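
A small sketch of this, using the per-sample losses e² and |e|, where e is the prediction error (my own toy numbers):

```python
import numpy as np

errors = np.array([1.0, 0.1, 0.01, 0.001])  # prediction error shrinking towards 0

mse_grad = 2 * errors        # [2.0, 0.2, 0.02, 0.002] -> fades out smoothly near 0
mae_grad = np.sign(errors)   # [1.0, 1.0, 1.0, 1.0]    -> constant magnitude, flips sign at 0

print(mse_grad)
print(mae_grad)
```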

πŸ‘‡
OK, but which one is best?

Well, there is no universal answer to this question (you saw that coming, didn't you 😉). It really depends on your application and especially on how you want to handle outliers.

Let's discuss two examples:
β–ͺ️ House price predictor
β–ͺ️ Trading bot

πŸ‘‡
House prices

Imagine you are a real estate agent. You want an ML model that can estimate the price of a house given properties like its size, location, age etc. You want your model to predict the price so you can compare it with the listing price and find good deals.

πŸ‘‡
You will run your model over many listings and let it filter out potential good deals, which you will then review manually before deciding to buy.

A big outlier is not a problem here - if the model makes a very wrong prediction, you will spot it and ignore it.

πŸ‘‡
In this case, it is probably a good idea to use MAE or the Huber loss, because you don't need to avoid huge errors at any cost.

On the other hand, you want the model to be fairly accurate for most predictions, so you don't waste your time reviewing bad deals.

πŸ‘‡
Automatic trading bot

Now imagine you are programming an ML model to predict stock prices and automatically buy and sell.

Things are different now - a big error of the model may lead to huge losses in a single trade! You don't want that.

πŸ‘‡
In this case, you should rather use MSE, because it heavily penalizes such huge errors. The model may make smaller mistakes more often, but as long as it stays profitable overall, that matters much less.

πŸ‘‡
So, you see? Two different applications where we need different approaches. Sometimes MSE is better and sometimes MAE is better.

πŸ‘‡
And the Huber loss?

It kind of combines the advantages of MAE for outliers and MSE for smoothness. However, it adds complexity through the additional hyperparameter δ, which essentially defines which errors are counted as outliers.
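
A rough illustration of how δ acts as the outlier threshold (the δ values and the error of 10 are arbitrary picks of mine):

```python
import numpy as np

def huber(error, delta):
    # Quadratic inside [-delta, delta], linear outside
    return np.where(np.abs(error) <= delta,
                    0.5 * error ** 2,
                    delta * (np.abs(error) - 0.5 * delta))

error = 10.0
print(huber(error, delta=1.0))   # 9.5  -> treated as an outlier, penalized only linearly
print(huber(error, delta=20.0))  # 50.0 -> treated as a normal error, penalized quadratically
```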

It is one more parameter you need to tune...

πŸ‘‡
Let me summarize

β–ͺ️ MSE is good to avoid having large outliers
β–ͺ️ MAE is better if you don't care about the outliers too much
β–ͺ️ The Huber loss behaves like MAE for outliers but is smooth like MSE around 0
β–ͺ️ The Huber loss comes with an additional hyperparameter to tune

πŸ‘‡
I regularly write threads to explain complex concepts in machine learning and web3 in a simple manner.

Follow me @haltakov for more

β€’ β€’ β€’

Missing some Tweet in this thread? You can try to force a refresh
γ€€

Keep Current with haltakov.eth 🌍 πŸ‡ΊπŸ‡¦

haltakov.eth 🌍 πŸ‡ΊπŸ‡¦ Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @haltakov

Mar 18
s this formula difficult? πŸ€”

This is the formula for Gradient Descent with Momentum as presented in Wikipedia.

It may look intimidating at first, but I promise you that by the end of this thread it will be easy to understand!

Thread πŸ‘‡

#RepostFriday Image
The Basis ◻️

Let's break it down! The basis is this simple formula describing an iterative optimization method.

We have some weights (parameters) and we iteratively update them in some way to reach a goal

Iterative methods are used when we cannot compute the solution directly Image
Gradient Decent Update πŸ“‰

We define a loss function describing how good our model is. We want to find the weights that minimize the loss (make the model better).

We compute the gradient of the loss and update the weights by a small amount (learning rate) against the gradient. Image
Read 8 tweets
Mar 11
Machine Learning in the Real World 🧠 πŸ€–

ML for real-world applications is much more than designing fancy networks and fine-tuning parameters.

In fact, you will spend most of your time curating a good dataset.

Let's go through the process together πŸ‘‡

#RepostFriday Image
Collect Data πŸ’½

We need to represent the real world as accurately as possible. If some situations are underrepresented we are introducing Sampling Bias.

Sampling Bias is nasty because we'll have high test accuracy, but our model will perform badly when deployed.

πŸ‘‡
Traffic Lights 🚦

Let's build a model to recognize traffic lights for a self-driving car. We need to collect data for different:

β–ͺ️ Lighting conditions
β–ͺ️ Weather conditions
β–ͺ️ Distances and viewpoints
β–ͺ️ Strange variants

And if we sample only 🚦 we won't detect πŸš₯ πŸ€·β€β™‚οΈ

πŸ‘‡ Image
Read 16 tweets
Mar 8
Machine Learning Formulas Explained πŸ‘¨β€πŸ«

This is the Huber loss - another complicated-looking formula...

Yet again, if you break it down and understand the individual, it becomes really easy.

Let me show you πŸ‘‡ Image
Background

The Huber loss is a loss function that is similar to the Mean Squared Error (MSE) but it is designed to be more robust to outliers.

MSE suffers from the problem that if there is a small number of severe outliers they can dominate the whole loss

How does it work? πŸ‘‡
The key to understanding math formulas is not to try to understand everything at the same time.

Try looking at the terms inside the formula. Try to understand them from the inside to the outside...

Here, we can quickly see that one term is repeated several times...

πŸ‘‡ Image
Read 13 tweets
Mar 4
Machine Learning Formulas Explained! πŸ‘¨β€πŸ«

This is the formula for the Binary Cross Entropy Loss. It is commonly used for binary classification problems.

It may look super confusing, but I promise you that it is actually quite simple!

Let's go step by step πŸ‘‡

#RepostFriday
The Cross-Entropy Loss function is one of the most used losses for classification problems. It tells us how well a machine learning model classifies a dataset compared to the ground truth labels.

The Binary Cross-Entropy Loss is a special case when we have only 2 classes.

πŸ‘‡
The most important part to understand is this one - this is the core of the whole formula!

Here, Y denotes the ground-truth label, while ΕΆ is the predicted probability of the classifier.

Let's look at a simple example before we talk about the logarithm... πŸ‘‡
Read 13 tweets
Mar 3
When machine learning met crypto art... they fell in love ❀️

The Decentralized Autonomous Artist (DAA) is a concept that is uniquely enabled by these technologies.

Meet my favorite DAA - Botto.

Let me tell you how it works πŸ‘‡
Botto uses a popular technique to create images - VQGAN+CLIP

In simple terms, it uses a neural network model generating images (VQCAN) guided by the powerful CLIP model which can relate images to text.

This method can create stunning visuals from a simple text prompt!

πŸ‘‡
Creating amazing images, though, requires finding the right text prompt

Botto is programmed by its creator - artist Mario Klingemann (@quasimondo), but it creates all art itself. There is no human intervention in the creation of the images!

Botto is trained by the community πŸ‘‡
Read 11 tweets
Feb 25
There are two problems with ROC curves

❌ They don't work for imbalanced datasets
❌ They don't work for object detection problems

So what do we do to evaluate our machine learning models properly in these cases?

We use a Precision-Recall curve.

Thread πŸ‘‡

#RepostFriday
Last week I wrote another detailed thread on ROC curves. I recommend that you read it first if you don't know what they are.



Then go on πŸ‘‡
❌ Problem 1 - Imbalanced Data

ROC curves measure the True Positive Rate (also known as Accuracy). So, if you have an imbalanced dataset, the ROC curve will not tell you if your classifier completely ignores the underrepresented class.

Let's take an example confusion matrix πŸ‘‡
Read 20 tweets

Did Thread Reader help you today?

Support us! We are indie developers!


This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Don't want to be a Premium member but still want to support us?

Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal

Or Donate anonymously using crypto!

Ethereum

0xfe58350B80634f60Fa6Dc149a72b4DFbc17D341E copy

Bitcoin

3ATGMxNzCUFzxpMCHL5sWSt4DVtS8UqXpi copy

Thank you for your support!

Follow Us on Twitter!

:(