This is the Huber loss - another complicated-looking formula...
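For reference, here is its standard form (δ is a tunable parameter):

L_δ(y, f(x)) = 0.5 * (y - f(x))^2             if |y - f(x)| ≤ δ
L_δ(y, f(x)) = δ * (|y - f(x)| - 0.5 * δ)     otherwise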
Yet again, if you break it down and understand the individual parts, it becomes really easy.
Let me show you 👇
Background
The Huber loss is a loss function that is similar to the Mean Squared Error (MSE) but it is designed to be more robust to outliers.
MSE suffers from the problem that a small number of severe outliers can dominate the whole loss.
How does it work? 👇
The key to understanding math formulas is not to try to understand everything at the same time.
Look at the terms inside the formula and try to understand them from the inside out...
Here, we can quickly see that one term is repeated several times...
👇
Here y is the ground truth value we compare to and f(x) is the result provided by our model.
You can think for example about estimating house prices. Then y is the real price and f(x) is the price our machine learning model predicted.
We can then simplify the formula a bit by writing α = y - f(x) 👇
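With that substitution, the two cases read:

L_δ(α) = 0.5 * α^2              if |α| ≤ δ
L_δ(α) = δ * (|α| - 0.5 * δ)    otherwise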
The next thing we see is that the formula has two parts.
The first part is a simple quadratic term α^2 (with a constant of 0.5).
The second part is a bit convoluted with a couple of constants, but it is an absolute (linear) term - |α|.
Let's simplify further...
The parameter δ determines when we choose one part and when the other. Let's set it to a fixed value for now, so that we can simplify things. Setting δ = 1 gives us the simplest form.
OK, now if you ignore the constants (I'll come back to them later), it is quite simple
👇
What the formula now tells us is that we take the square of α when it is close to 0 and its absolute value otherwise.
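So, with δ = 1 and the constants dropped, it is roughly:

L(α) ≈ α^2    if |α| ≤ 1
L(α) ≈ |α|    otherwise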
Let's quickly implement the function in Python and plot it for ฮด = 1.
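Here is a minimal sketch of such an implementation (function and variable names are my own; it needs NumPy and Matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

def huber(alpha, delta=1.0):
    """Huber loss for the residual alpha = y - f(x)."""
    quadratic = 0.5 * alpha**2                      # used close to 0
    linear = delta * (np.abs(alpha) - 0.5 * delta)  # used for large |alpha|
    return np.where(np.abs(alpha) <= delta, quadratic, linear)

alpha = np.linspace(-3, 3, 500)
plt.plot(alpha, huber(alpha, delta=1.0), label="Huber (delta=1)")
plt.plot(alpha, 0.5 * alpha**2, "--", label="0.5 * alpha^2 (MSE-like)")
plt.plot(alpha, np.abs(alpha), ":", label="|alpha| (MAE-like)")
plt.xlabel("alpha = y - f(x)")
plt.ylabel("loss")
plt.legend()
plt.show()
```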
Take a look at the plot - do you see the quadratic and the linear part?
👇
Alright, let me annotate the image a little bit.
You can clearly see how the Huber loss behaves like a quadratic function close to 0 and like the absolute value further away.
OK, now we understand the core of the formula. Let's go back and undo the simplifications...
👇
First, what's with these deltas?
We want our loss function to be continuous, so at the border between the two parts (when |α| = δ) they need to have the same value.
What the constants in the linear term do is just make sure that it equals the quadratic term when |α| = δ!
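You can check it quickly: at the boundary |α| = δ, the quadratic part gives 0.5 * δ^2, and the linear part gives δ * (δ - 0.5 * δ) = 0.5 * δ^2. Both parts meet at the same value, so the loss is continuous.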
👇
Finally, why is this constant 0.5 everywhere? Do we really need it?
The thing is that we typically use a loss function to compute its derivative and optimize our weights. And the derivative of 0.5*α^2 is... simply α.
We use the 0.5 just to make the derivative simpler 🤷‍♂️
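Compare the two: d/dα (α^2) = 2 * α, while d/dα (0.5 * α^2) = α. The 0.5 simply cancels the 2 coming from the power rule, so the gradient stays clean.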
👇
And if you want to use the Huber loss, you probably don't need to implement it yourself - popular ML libraries already have it implemented:
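For example, recent versions of PyTorch and TensorFlow expose it directly; here is a quick sketch (double-check the exact arguments against the docs of the version you use):

```python
import torch
import tensorflow as tf

# PyTorch: torch.nn.HuberLoss - delta controls where the quadratic part switches to linear
huber_torch = torch.nn.HuberLoss(delta=1.0)
loss_torch = huber_torch(torch.tensor([2.5, 0.0]), torch.tensor([3.0, 0.5]))  # (prediction, target)

# TensorFlow/Keras: tf.keras.losses.Huber
huber_tf = tf.keras.losses.Huber(delta=1.0)
loss_tf = huber_tf([3.0, 0.5], [2.5, 0.0])  # (y_true, y_pred)
```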
The Huber loss takes the form of a quadratic function (like MSE) close to 0 and of a linear function (like MAE) away from zero. This makes it more robust to outliers while keeping it smooth around 0. You control the balance with the parameter ฮด.
Simple, right? 😉
I regularly write threads to explain complex concepts in machine learning and web3 in a simple manner.
The Cross-Entropy Loss function is one of the most used losses for classification problems. It tells us how well a machine learning model classifies a dataset compared to the ground truth labels.
The Binary Cross-Entropy Loss is a special case when we have only 2 classes.
👇
The most important part to understand is this one - this is the core of the whole formula!
Here, Y denotes the ground-truth label, while Ŷ is the probability predicted by the classifier.
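Written out for a single example (my reconstruction of the formula, using the notation above), that core part is:

BCE(Y, Ŷ) = -( Y * log(Ŷ) + (1 - Y) * log(1 - Ŷ) )

The full loss is then just the average of this expression over all examples in the dataset.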
Let's look at a simple example before we talk about the logarithm... 👇
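As a hypothetical worked example (the numbers are mine), notice how Y acts as a switch that selects one of the two terms, and how the loss explodes for confident wrong predictions:

```python
import numpy as np

def bce(y, y_hat):
    """Binary cross-entropy for one example: y is the label (0 or 1), y_hat the predicted probability."""
    # When y = 1 only the first term is active, when y = 0 only the second one
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

print(bce(1, 0.9))  # confident and correct -> ~0.105 (small loss)
print(bce(1, 0.1))  # confident but wrong   -> ~2.303 (large loss)
```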
When machine learning met crypto art... they fell in love ❤️
The Decentralized Autonomous Artist (DAA) is a concept that is uniquely enabled by these technologies.
Meet my favorite DAA - Botto.
Let me tell you how it works 👇
Botto uses a popular technique to create images - VQGAN+CLIP
In simple terms, it uses a neural network that generates images (VQGAN), guided by the powerful CLIP model, which can relate images to text.
This method can create stunning visuals from a simple text prompt!
👇
Creating amazing images, though, requires finding the right text prompt.
Botto is programmed by its creator - artist Mario Klingemann (@quasimondo), but it creates all art itself. There is no human intervention in the creation of the images!
ROC curves are built from the True Positive Rate (also known as Recall or Sensitivity) and the False Positive Rate. So, if you have an imbalanced dataset, the ROC curve alone may not reveal that your classifier performs poorly on the underrepresented class.
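A small sketch of why that happens (the dataset and classifier here are made up for illustration): the ROC-style rates can look fine while the precision on the rare class is terrible.

```python
import numpy as np

# Hypothetical imbalanced dataset: 990 negatives, 10 positives
y_true = np.concatenate([np.zeros(990), np.ones(10)])
# A sloppy classifier: it finds all 10 positives, but also flags 90 negatives as positive
y_pred = np.concatenate([np.ones(90), np.zeros(900), np.ones(10)])

tp = np.sum((y_pred == 1) & (y_true == 1))  # 10
fp = np.sum((y_pred == 1) & (y_true == 0))  # 90
fn = np.sum((y_pred == 0) & (y_true == 1))  # 0
tn = np.sum((y_pred == 0) & (y_true == 0))  # 900

tpr = tp / (tp + fn)        # 1.00  - looks perfect on the ROC axis
fpr = fp / (fp + tn)        # ~0.09 - also looks fine
precision = tp / (tp + fp)  # 0.10  - 9 out of 10 predicted positives are wrong
print(tpr, fpr, precision)
```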
Is your machine learning model performing well? What about in 6 months? 🤔
If you are wondering why I'm asking this, you need to learn about concept drift and data drift.
Let me explain this to you using two real world examples.
Thread 👇
Imagine you are developing a model for a self-driving car to detect other vehicles at night.
Well, this is not too difficult, since vehicles have two red tail lights and it is easy to get a lot of data. Your model works great!
But then... 👇
Car companies decide to experiment with red horizontal bars instead of two individual lights.
Now your model fails to detect these cars because it has never seen this kind of tail light.
Your model is suffering from concept drift.
Math is not very important when you are using a machine learning method to solve your problem.
Everybody who disagrees should study the 92-page appendix of the Self-Normalizing Networks (SNN) paper before using torch.nn.SELU.
And the core idea of SNN is actually simple 👇
SNNs use an activation function called Scaled Exponential Linear Unit (SELU) that is pretty simple to define.
It has the advantage that the activations converge to zero mean and unit variance, which allows training of deeper networks and employing strong regularization.
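Here is what the definition looks like as a quick NumPy sketch (the two constants are the ones derived in the paper):

```python
import numpy as np

# Constants derived in the SNN paper (Klambauer et al., 2017)
ALPHA = 1.6732632423543772848170429916717
SCALE = 1.0507009873554804934193349852946

def selu(x):
    """SELU: scale * x for x > 0, scale * alpha * (exp(x) - 1) for x <= 0."""
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))
```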
👇
There are implementations both in PyTorch (torch.nn.SELU) and TensorFlow (tf.keras.activations.selu).
You need to be careful to use the correct initialization function and dropout, but this is well documented.
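A minimal PyTorch sketch of how this could look (layer sizes are made up; the init choice reflects the usual LeCun-normal recommendation, so verify it against the docs for your setup):

```python
import torch.nn as nn

# Self-normalizing MLP: SELU activations + AlphaDropout,
# which is designed to preserve the zero-mean / unit-variance property
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.SELU(),
    nn.AlphaDropout(p=0.05),
    nn.Linear(64, 10),
)

# LeCun-normal weight init; in PyTorch this corresponds to kaiming_normal_
# with fan_in mode and a 'linear' nonlinearity (gain = 1)
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, mode="fan_in", nonlinearity="linear")
        nn.init.zeros_(m.bias)
```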
This is a special edition BMW 8 series painted by the famous artist Jeff Koons. A limited-edition of 99 with a price of $350K - about $200K more than the regular M850i.
If you think about it, you'll see many similarities with NFTs
👇
Artificially scarce
BMW can surely produce (mint 😉) more than 99 cars with this paint. The collection size is limited artificially in order to make it more exclusive.
Same as most NFT collections - they create artificial scarcity.
👇
Its price comes from the story
The $200K premium for the "paint" is purely motivated by the story around this car - it is exclusive, it is created by a famous artist, it is a BMW Art Car.
It is not faster, more reliable, or more economical. You are paying for the story.