haltakov.eth πŸ§±πŸ”¨
Making web3 accessible to everyone. VP of Engineering @FR0NTIER_X. Used to program self-driving cars. Side-project @0xbnomial.

Mar 8, 2022, 13 tweets

Machine Learning Formulas Explained πŸ‘¨β€πŸ«

This is the Huber loss - another complicated-looking formula...
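Written out (the standard definition, in the notation used throughout the thread):

L_Ξ΄(y, f(x)) = 0.5 Β· (y - f(x))^2            if |y - f(x)| ≀ Ξ΄
L_Ξ΄(y, f(x)) = Ξ΄ Β· (|y - f(x)| - 0.5 Β· Ξ΄)    otherwise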

Yet again, if you break it down and understand the individual terms, it becomes really easy.

Let me show you πŸ‘‡

Background

The Huber loss is a loss function that is similar to the Mean Squared Error (MSE) but it is designed to be more robust to outliers.

MSE suffers from the problem that even a small number of severe outliers can dominate the whole loss.
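A tiny made-up example shows the effect (the numbers are just for illustration):

# Made-up prediction errors for 5 houses - one of them is a severe outlier
errors = [10, -5, 8, -12, 300]

# Squaring blows the outlier up: it contributes 90000 of the total of about 90333
squared = [e ** 2 for e in errors]
print(sum(squared) / len(squared))  # MSE β‰ˆ 18067, dominated by the single outlier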

How does it work? πŸ‘‡

The key to understanding math formulas is not to try to understand everything at the same time.

Try looking at the terms inside the formula. Try to understand them from the inside to the outside...

Here, we can quickly see that one term is repeated several times...

πŸ‘‡

Here y is the ground truth value we compare to and f(x) is the result provided by our model.

You can think, for example, about estimating house prices: then y is the real price and f(x) is the price our machine learning model predicted.

We can then simplify the formula a bit πŸ‘‡
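Writing Ξ± = y - f(x) for the error, the formula becomes:

L_Ξ΄(Ξ±) = 0.5 Β· Ξ±^2                 if |Ξ±| ≀ Ξ΄
L_Ξ΄(Ξ±) = Ξ΄ Β· (|Ξ±| - 0.5 Β· Ξ΄)       otherwise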

The next thing we see is that the formula has two parts.

The first part is a simple quadratic term Ξ±^2 (with a constant of 0.5).

The second part is a bit convoluted with a couple of constants, but it is an absolute (linear) term: |Ξ±|.

Let's simplify further...

The parameter Ξ΄ determines when we choose one part and when the other. Let's try to set it to a fixed value for now, so that we can simplify things. Setting Ξ΄ = 1 gives us the simplest form.
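With Ξ΄ = 1 the formula reduces to:

L_1(Ξ±) = 0.5 Β· Ξ±^2       if |Ξ±| ≀ 1
L_1(Ξ±) = |Ξ±| - 0.5       otherwise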

OK, now if you ignore the constants (I'll come back to them later), it is quite simple

πŸ‘‡

What the formula now tells us is that we take the square of Ξ± when it is close to 0 and the absolute value otherwise.

Let's quickly implement the function in Python and plot it for Ξ΄ = 1.
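A minimal sketch with NumPy and Matplotlib (the function name and plotting details are just one way to do it):

import numpy as np
import matplotlib.pyplot as plt

def huber(a, delta=1.0):
    # Quadratic close to 0, linear (absolute value) further away
    return np.where(np.abs(a) <= delta,
                    0.5 * a ** 2,
                    delta * (np.abs(a) - 0.5 * delta))

a = np.linspace(-3, 3, 200)
plt.plot(a, huber(a, delta=1.0))
plt.xlabel("Ξ± = y - f(x)")
plt.ylabel("Huber loss")
plt.show()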

Take a look at the plot - do you see the quadratic and the linear part?

πŸ‘‡

Alright, let me annotate the image a little bit.

You can clearly see how the Huber loss behaves like a quadratic function close to 0 and like the absolute value further away.

OK, now we understand the core of the formula. Let's go back and undo the simplifications...

πŸ‘‡

First, what's with these deltas?

We want our loss function to be continuous, so at the border between the two parts (when |Ξ±| = Ξ΄) they need to have the same value.

What the constants in the linear term do is just make sure that it equals the quadratic term when |Ξ±| = Ξ΄!
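Quick check: at the border |Ξ±| = Ξ΄, the quadratic part gives 0.5 Β· Ξ΄^2 and the linear part gives Ξ΄ Β· (Ξ΄ - 0.5 Β· Ξ΄) = 0.5 Β· Ξ΄^2. The two parts match, so the curve has no jump.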

πŸ‘‡

Finally, why is this constant 0.5 everywhere? Do we really need it?

The thing is that during training we compute the derivative of the loss function to optimize our weights. And the derivative of 0.5*Ξ±^2 is... simply Ξ±.

We use the 0.5 just to make the derivative simpler πŸ€·β€β™‚οΈ
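In other words: d/dΞ± (0.5 Β· Ξ±^2) = 0.5 Β· 2 Β· Ξ± = Ξ±. The derivative of the linear part is simply ±δ, so the gradient stays bounded no matter how large the error gets.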

πŸ‘‡

And if you want to use the Huber loss, you probably don't need to implement it yourself - popular ML libraries already have it implemented:

β–ͺ️ PyTorch: torch.nn.HuberLoss
β–ͺ️ TensorFlow: tf.keras.losses.Huber
β–ͺ️ scikit-learn: sklearn.linear_model.HuberRegressor
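For example, a quick sketch with PyTorch (delta here is the Ξ΄ from the formula):

import torch

loss_fn = torch.nn.HuberLoss(delta=1.0)
predicted = torch.tensor([2.5, 0.0, 2.0])
target = torch.tensor([3.0, -0.5, 2.0])
print(loss_fn(predicted, target))  # mean Huber loss over the three values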

Summary

The Huber loss takes the form of a quadratic function (like MSE) close to 0 and of a linear function (like MAE) away from zero. This makes it more robust to outliers while keeping it smooth around 0. You control the balance with the parameter Ξ΄.

Simple, right? 😁

I regularly write threads to explain complex concepts in machine learning and web3 in a simple manner.

Follow me @haltakov for more
