Machine Learning Formulas Explained 👨‍🏫

This is the Huber loss - another complicated-looking formula...

Yet again, if you break it down and understand the individual parts, it becomes really easy.

Let me show you 👇
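The formula itself was in the image, so for reference, here is the standard definition of the Huber loss (in the same notation used below):

L_δ(y, f(x)) = 0.5 * (y - f(x))^2            if |y - f(x)| ≤ δ
L_δ(y, f(x)) = δ * (|y - f(x)| - 0.5 * δ)    otherwise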
Background

The Huber loss is a loss function similar to the Mean Squared Error (MSE), but designed to be more robust to outliers.

MSE suffers from the problem that a small number of severe outliers can dominate the whole loss.
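A quick sketch of the problem with made-up numbers: a single severe outlier contributes almost all of the MSE.

import numpy as np

residuals = np.array([0.1, -0.2, 0.1, 8.0])  # hypothetical errors y - f(x), one severe outlier
mse = np.mean(residuals ** 2)                # ~16.0, dominated by the outlier
outlier_share = residuals[-1] ** 2 / np.sum(residuals ** 2)  # ~0.999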

How does it work? 👇
The key to understanding math formulas is not to try to understand everything at the same time.

Try looking at the terms inside the formula. Try to understand them from the inside to the outside...

Here, we can quickly see that one term is repeated several times...

👇
Here, y is the ground truth value we compare to, and f(x) is the result provided by our model.

You can think, for example, about estimating house prices. Then y is the real price and f(x) is the price our machine learning model predicted.

We can then simplify the formula a bit 👇
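With the shorthand α = y - f(x), the simplified formula (presumably what the image showed) is:

L_δ(α) = 0.5 * α^2              if |α| ≤ δ
L_δ(α) = δ * (|α| - 0.5 * δ)    otherwise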
The next thing we see is that the formula has two parts.

The first part is a simple quadratic term α^2 (with a constant factor of 0.5).

The second part looks a bit convoluted with a couple of constants, but at its core it is an absolute (linear) term - |α|.

Let's simplify further...
The parameter δ determines when we choose one part and when the other. Let's set it to a fixed value for now, so that we can simplify things. Setting δ = 1 gives us the simplest form.

OK, now if you ignore the constants (I'll come back to them later), it is quite simple.

👇
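Concretely, with δ = 1 and the constants dropped, what remains is just:

L(α) ≈ α^2    if |α| ≤ 1
L(α) ≈ |α|    otherwise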
What the formula now tells us is that we take the square of α when it is close to 0 and the absolute value otherwise.

Let's quickly implement the function in Python and plot it for δ = 1.

Take a look at the plot - do you see the quadratic and the linear part?

👇
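The original code was in the image, so here is a minimal sketch of what the implementation and plot could look like (function and variable names are mine):

import numpy as np
import matplotlib.pyplot as plt

def huber(alpha, delta=1.0):
    # Quadratic part close to 0, linear part further away
    return np.where(
        np.abs(alpha) <= delta,
        0.5 * alpha ** 2,
        delta * (np.abs(alpha) - 0.5 * delta),
    )

alpha = np.linspace(-3, 3, 500)
plt.plot(alpha, huber(alpha, delta=1.0))  # plot for δ = 1
plt.xlabel("α = y - f(x)")
plt.ylabel("Huber loss")
plt.show()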
Alright, let me annotate the image a little bit.

You can clearly see how the Huber loss behaves like a quadratic function close to 0 and like the absolute value further away.

OK, now we understand the core of the formula. Let's go back and undo the simplifications...

👇
First, what's with these deltas?

We want our loss function to be continuous, so at the border between the two parts (when |α| = δ) they need to have the same value.

What the constants in the linear term do is just make sure that it equals the quadratic term when |α| = δ!

👇
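You can verify this directly: at |α| = δ, the quadratic part gives 0.5 * δ^2, and the linear part gives δ * (δ - 0.5 * δ) = 0.5 * δ^2, exactly the same value. A one-line sanity check in Python (any positive δ works):

delta = 2.0
assert 0.5 * delta ** 2 == delta * (delta - 0.5 * delta)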
Finally, why is this constant 0.5 everywhere? Do we really need it?

The thing is that we typically compute the derivative of the loss function to optimize our weights. And the derivative of 0.5 * α^2 is... simply α.

We use the 0.5 just to make the derivative simpler 🤷‍♂️

👇
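Here is a quick check with sympy (a hypothetical snippet, not from the thread):

import sympy as sp

alpha = sp.symbols("alpha")
print(sp.diff(sp.Rational(1, 2) * alpha ** 2, alpha))  # prints: alpha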
And if you want to use the Huber loss, you probably don't need to implement it yourself - popular ML libraries already have it implemented:

โ–ช๏ธ PyTorch: torch.nn.HuberLoss
โ–ช๏ธ TensorFlow: tf.keras.losses.Huber
โ–ช๏ธscikit-learn: sklearn.linear_model.HuberRegressor
Summary

The Huber loss takes the form of a quadratic function (like MSE) close to 0 and of a linear function (like MAE) away from zero. This makes it more robust to outliers while keeping it smooth around 0. You control the balance with the parameter δ.

Simple, right? 😁
I regularly write threads to explain complex concepts in machine learning and web3 in a simple manner.

Follow me @haltakov for more

