Let's break it down! The basis is a simple formula describing an iterative optimization method.
We have some weights (parameters), and we iteratively update them step by step to reach a goal.
Iterative methods are used when we cannot compute the solution directly.
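Roughly, in symbols: w(t+1) = w(t) + Δw(t), where Δw(t) is the update applied at step t. Everything below is about how that update is chosen.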
Gradient Descent Update 👇
We define a loss function describing how good our model is. We want to find the weights that minimize the loss (make the model better).
We compute the gradient of the loss and update the weights by a small step (scaled by the learning rate) against the gradient.
Here is an illustration of how it works.
The gradient tells us if the loss will decrease (negative gradient) or increase (positive gradient) if we increase the weight.
The learning rate defines how big a step we take against the gradient in the current iteration of the optimization.
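To make this concrete, here is a minimal sketch in Python (the quadratic toy loss, the starting weight, and the learning rate value are illustrative assumptions, not from the thread):

```python
# Minimal gradient descent sketch on a toy loss: loss(w) = (w - 3)^2.
# The loss, starting weight, and learning rate are illustrative choices.

def grad(w):
    return 2.0 * (w - 3.0)  # derivative of (w - 3)^2 with respect to w

w = 0.0               # initial weight
learning_rate = 0.1

for step in range(50):
    w = w - learning_rate * grad(w)  # small step against the gradient

print(w)  # w ends up close to 3, where the loss is minimal
```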
Momentum ⚽️
Now we add momentum. It is defined as the weight update from the previous step times a decay factor.
The decay factor is just a number between 0 and 1 defining how much of the previous update is taken into account. α = 0 means no momentum, and α = 1 means the previous update is carried over in full.
A useful analogy is a ball rolling down a hill. If the hill is steep, the ball accelerates (we update the weights more).
This helps the ball jump over small local minima and continue down the hill (to a smaller loss).
More momentum means a heavier ball with higher inertia.
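Here is the same toy example extended with momentum (α = 0.9 is just an illustrative value):

```python
# Gradient descent with momentum on the same toy loss (w - 3)^2.
# Each update mixes the current gradient step with the previous update.

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0
learning_rate = 0.1
alpha = 0.9        # decay factor: 0 = no momentum, close to 1 = heavy ball
update = 0.0       # the previous update, zero at the start

for step in range(100):
    update = alpha * update - learning_rate * grad(w)
    w = w + update

print(w)  # still converges to ~3, but can roll over small bumps on the way
```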
Putting it all together 👇
So, in the original formula we update the weights using two terms.
The *gradient descent* term pushes us down the slope of the loss function.
The *momentum* term helps us accelerate and jump over small local minima.
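Written out in one line (a sketch combining both terms): w(t+1) = w(t) − learning_rate · ∇loss(w(t)) + α · Δw(t−1), where Δw(t−1) is the previous weight update.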
Not that hard, right?
Every Friday I repost one of my old threads so more people get the chance to see them. During the rest of the week, I post new content on machine learning and web3.
If you are interested in seeing more, follow me @haltakov
How can I prove to you that I know a secret, without revealing any information about the secret itself?
This is called a zero-knowledge proof and it is a super interesting area of cryptography! But how does it work?
Thread 🧵
Let's start with an example.
Peggie and Victor travel between cities A and B. There are two paths - a long path and a short path. The problem is that there is a gate on the short path for which you need a password.
Peggie knows the password, but Victor doesn't.
👇
Victor wants to buy the password from Peggie so he can use the short path.
But what if Victor pays Peggie and it turns out she lied and didn't actually know the password? How can Peggie prove to Victor that she knows the password, without actually revealing it?
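One classic way this plays out (a sketch of the standard challenge-response idea; the details here are my assumption, in the spirit of the well-known story): Victor repeats a simple test many times. Peggie secretly picks a path, and Victor then demands she emerge from a randomly chosen one. With the password she can always comply; without it, she only survives each round half the time:

```python
import random

# Sketch of the repeated challenge-response idea (an assumption about
# how the example resolves, not spelled out in this part of the thread).

def one_round(knows_password):
    peggies_path = random.choice(["short", "long"])    # Peggie commits secretly
    victors_demand = random.choice(["short", "long"])  # Victor challenges
    if knows_password:
        return True  # the password lets her pass the gate and switch paths
    return peggies_path == victors_demand  # a cheater must have guessed right

def passes(knows_password, rounds=20):
    return all(one_round(knows_password) for _ in range(rounds))

print(passes(True))  # an honest Peggie always passes
print(0.5 ** 20)     # a cheater slips through with probability ~1 in a million
```

After enough rounds Victor becomes arbitrarily confident that Peggie knows the password, yet he never learns anything about the password itself.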
Rescue Toadz looks like a regular NFT collection at first - you can mint a toad and you get an NFT in your wallet.
100% of the mint fee is directly sent to @Unchainfund - an organization that provides humanitarian aid to Ukraine and that has already raised $9M!
👇
The process is completely trustless and automatic! All the logic is coded in the smart contract which cannot be changed and which everybody can inspect.
You trust the code, not us! We have no way to steal the funds even if we wanted to (we don't 😀).
Principal Component Analysis is a commonly used method for dimensionality reduction.
It's a good example of how fairly complex math can have an intuitive explanation and be easy to use in practice.
Let's start with the applications of PCA 👇
Dimensionality Reduction
This is one of the common uses of PCA in machine learning.
Imagine you want to predict house prices. You get a large table with many houses and different features for each of them, like size, number of rooms, location, age, etc.
Some features seem correlated 👇
Correlated features
For example, the size of the house is correlated with the number of rooms. Bigger houses tend to have more rooms.
Another example could be the age and the year the house was built - they give us pretty much the same information.
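To see what this looks like in practice, here is a minimal sketch with scikit-learn (the toy data with correlated size and room count is an illustrative assumption):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy house data: size in square meters, and a room count correlated with it.
size = rng.uniform(50, 250, size=200)
rooms = size / 40 + rng.normal(0, 0.5, size=200)  # bigger houses -> more rooms
X = np.column_stack([size, rooms])

# Standardize first so both features contribute on the same scale.
X_scaled = StandardScaler().fit_transform(X)

# Reduce the two correlated features to a single principal component.
pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                # (200, 1): one feature instead of two
print(pca.explained_variance_ratio_)  # most of the variance survives
```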
For regression problems you can use one of several loss functions:
▪️ MSE
▪️ MAE
▪️ Huber loss
But which one is best? When should you prefer one over the others?
Thread 🧵
Let's first quickly recap what each of the loss functions does. After that, we can compare them and see the differences based on some examples.
👇
Mean Square Error (MSE)
For every sample, MSE takes the difference between the ground truth and the model's prediction and computes its square. Then, the average over all samples is computed.
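A small sketch of MSE in NumPy, alongside the other two losses from the list above for comparison (the Huber delta of 1.0 is an illustrative choice):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Square Error: average of the squared differences.
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # Mean Absolute Error: average of the absolute differences.
    return np.mean(np.abs(y_true - y_pred))

def huber(y_true, y_pred, delta=1.0):
    # Huber loss: quadratic for small errors, linear for large ones.
    err = y_true - y_pred
    small = np.abs(err) <= delta
    return np.mean(np.where(small,
                            0.5 * err ** 2,
                            delta * (np.abs(err) - 0.5 * delta)))

y_true = np.array([3.0, 5.0, 2.0, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])
print(mse(y_true, y_pred), mae(y_true, y_pred), huber(y_true, y_pred))
```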