haltakov.eth πŸ§±πŸ”¨
Mar 8, 2022 β€’ 13 tweets
Machine Learning Formulas Explained πŸ‘¨β€πŸ«

This is the Huber loss - another complicated-looking formula...

Yet again, if you break it down and understand the individual parts, it becomes really easy.

Let me show you πŸ‘‡
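For reference, the standard definition of the Huber loss is:

$$
L_\delta(y, f(x)) =
\begin{cases}
\frac{1}{2}\,\big(y - f(x)\big)^2 & \text{for } |y - f(x)| \le \delta \\
\delta \cdot \big(|y - f(x)| - \frac{1}{2}\,\delta\big) & \text{otherwise}
\end{cases}
$$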
Background

The Huber loss is a loss function that is similar to the Mean Squared Error (MSE) but it is designed to be more robust to outliers.

MSE suffers from the problem that a small number of severe outliers can dominate the whole loss. Squaring makes big errors huge: a single error of 100 contributes 10,000 to the loss and can outweigh hundreds of small errors.

How does it work? πŸ‘‡
The key to understanding math formulas is not to try to understand everything at the same time.

Try looking at the terms inside the formula. Try to understand them from the inside to the outside...

Here, we can quickly see that one term is repeated several times: y - f(x)...

πŸ‘‡
Here y is the ground truth value we compare to and f(x) is the result provided by our model.

You can think, for example, about estimating house prices. Then y is the real price and f(x) is the price our machine learning model predicts.

We can then simplify the formula a bit by writing Ξ± = y - f(x) πŸ‘‡
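Written with this substitution, the formula becomes:

$$
L_\delta(\alpha) =
\begin{cases}
\frac{1}{2}\,\alpha^2 & \text{for } |\alpha| \le \delta \\
\delta \cdot \big(|\alpha| - \frac{1}{2}\,\delta\big) & \text{otherwise}
\end{cases}
$$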
The next thing we see is that the formula has two parts.

The first part is a simple quadratic term Ξ±^2 (with a constant of 0.5).

The second part is a bit convoluted with a couple of constants, but it is an absolute (linear) term - |Ξ±|.

Let's simplify further...
The parameter Ξ΄ determines when we choose one part and when the other. Let's set it to a fixed value for now, so that we can simplify things. Setting Ξ΄ = 1 gives us the simplest form.

OK, now if you ignore the constants (I'll come back to them later), it is quite simple

πŸ‘‡
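With Ξ΄ = 1 the two cases reduce to:

$$
L_1(\alpha) =
\begin{cases}
\frac{1}{2}\,\alpha^2 & \text{for } |\alpha| \le 1 \\
|\alpha| - \frac{1}{2} & \text{otherwise}
\end{cases}
$$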
What the formula now tells us is that we take the square of Ξ± close to 0 and the absolute value otherwise.

Let's quickly implement the function in Python and plot it for Ξ΄ = 1.

Take a look at the plot - do you see the quadratic and the linear part?

πŸ‘‡
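A minimal sketch of such an implementation with NumPy and Matplotlib (the function and variable names here are my own choices):

```python
import numpy as np
import matplotlib.pyplot as plt

def huber(alpha, delta=1.0):
    """Huber loss as a function of the error alpha = y - f(x)."""
    # Quadratic part for |alpha| <= delta, linear part otherwise
    return np.where(
        np.abs(alpha) <= delta,
        0.5 * alpha**2,
        delta * (np.abs(alpha) - 0.5 * delta),
    )

alpha = np.linspace(-3, 3, 601)
plt.plot(alpha, huber(alpha, delta=1.0))
plt.xlabel("Ξ± = y - f(x)")
plt.ylabel("Huber loss (Ξ΄ = 1)")
plt.show()
```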
Alright, let me annotate the plot a little bit.

You can clearly see how the Huber loss behaves like a quadratic function close to 0 and like the absolute value further away.

OK, now we understand the core of the formula. Let's go back and undo the simplifications...

πŸ‘‡
First, what's with these deltas?

We want our loss function to be continuous, so at the border between the two parts (when Ξ± = Ξ΄) they need to have the same value.

What the constants in the linear term do is just make sure that it equals the quadratic term when Ξ± = Ξ΄!

πŸ‘‡
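You can check this directly by plugging Ξ± = Ξ΄ into both parts:

$$
\underbrace{\frac{1}{2}\,\delta^2}_{\text{quadratic part}}
\qquad
\underbrace{\delta \cdot \big(\delta - \tfrac{1}{2}\,\delta\big) = \frac{1}{2}\,\delta^2}_{\text{linear part}}
$$

Same value, so the two pieces join without a jump.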
Finally, why is this constant 0.5 everywhere? Do we really need it?

The thing is that we typically take the derivative of the loss function to optimize our weights. And the derivative of 0.5Ξ±Β² is... simply Ξ±.

We use the 0.5 just to make the derivative simpler πŸ€·β€β™‚οΈ

πŸ‘‡
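Spelled out for both parts:

$$
\frac{d}{d\alpha}\left(\frac{1}{2}\,\alpha^2\right) = \alpha,
\qquad
\frac{d}{d\alpha}\,\delta\big(|\alpha| - \tfrac{1}{2}\,\delta\big) = \delta \cdot \operatorname{sign}(\alpha)
$$

Note that the gradient of the linear part is capped at ±δ, which is exactly what limits the influence of outliers.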
And if you want to use the Huber loss, you probably don't need to implement it yourself - popular ML libraries already have it implemented:

β–ͺ️ PyTorch: torch.nn.HuberLoss
β–ͺ️ TensorFlow: tf.keras.losses.Huber
β–ͺ️ scikit-learn: sklearn.linear_model.HuberRegressor
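For example, in PyTorch (the tensor values below are made up for illustration):

```python
import torch

y_true = torch.tensor([1.0, 2.0, 30.0])  # note the outlier
y_pred = torch.tensor([1.1, 1.9, 3.0])

loss_fn = torch.nn.HuberLoss(delta=1.0)  # delta is the Ξ΄ from the formula
print(loss_fn(y_pred, y_true))
```

One detail: scikit-learn's HuberRegressor calls its threshold parameter epsilon rather than delta.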
Summary

The Huber loss takes the form of a quadratic function (like MSE) close to 0 and of a linear function (like MAE) away from zero. This makes it more robust to outliers while keeping it smooth around 0. You control the balance with the parameter Ξ΄.

Simple, right? 😁
I regularly write threads to explain complex concepts in machine learning and web3 in a simple manner.

Follow me @haltakov for more

