Machine Learning Formulas Explained 👨‍🏫

This is the formula for Mean Squared Error (MSE) as defined on Wikipedia. It represents a very simple concept, but it may not be easy to read if you are just starting with ML.
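Written out in plain text, the Wikipedia definition is:

MSE = (1/n) · Σ (Y_i − Ŷ_i)², summing over all samples i = 1 … n

Here n is the number of samples, Y_i is the ground truth label and Ŷ_i is the model's prediction for sample i.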

Read below and it will be a piece of cake! 🍰

Thread 👇
The core ⚫

Let's unpack it from the inside out. MSE measures how close your model's predictions Ŷ are to the ground truth labels Y. You want the error to go to 0.

If you are predicting house prices, the error could be the difference between the predicted and the actual price.
Why squared? 2️⃣

Simply subtracting the prediction from the label is not enough. The error may be negative or positive, which is a problem when summing up samples: errors with opposite signs cancel each other out.

You can take either the absolute value or the square of the error. The square has the property that it punishes bigger errors more: an error of 10 contributes 100, while an error of 2 contributes only 4.
Why squared? Example 2️⃣

Imagine your prediction for the price of two houses looks like this:

🏡 House 1: actual 120K, predicted 100K -> error 20K
🏠 House 2: actual 60K, predicted 80K -> error -20K

If you sum these up, the total error will be 0, which is obviously wrong...
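Here is the same example in a few lines of Python (a small sketch using the numbers from above):

```python
actual = [120_000, 60_000]     # 🏡 House 1, 🏠 House 2
predicted = [100_000, 80_000]

# Raw (signed) errors: +20K and -20K cancel each other out
raw_errors = [a - p for a, p in zip(actual, predicted)]
print(sum(raw_errors))  # 0 -> looks perfect, but clearly isn't

# Squared errors are always >= 0, so they can never cancel out
squared_errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
print(sum(squared_errors))  # 800000000 -> the error becomes visible again
```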
Summing up all samples ➕

When training your model, you will have many samples (n) in your batch. We need to calculate the error for each one and sum them all up.

Again, having the error be always ≥ 0 is important here.
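In code this is just a sum over the batch (a sketch with made-up labels and predictions):

```python
y_true = [120_000, 60_000, 250_000]  # ground truth labels (made-up values)
y_pred = [100_000, 80_000, 240_000]  # model predictions

# Sum of squared errors over all n samples in the batch - always >= 0
sse = sum((y - y_hat) ** 2 for y, y_hat in zip(y_true, y_pred))
print(sse)  # 900000000
```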
Taking the average ➗

You are good to go now!

However, if you want to compare the errors of batches of different sizes, you need to normalize for the number of samples - you take the average.

For example, you may want to see which batch size produces a lower error.
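Continuing the sketch from above, the normalization is just a division by the number of samples:

```python
y_true = [120_000, 60_000, 250_000]  # same made-up batch as above
y_pred = [100_000, 80_000, 240_000]

sse = sum((y - y_hat) ** 2 for y, y_hat in zip(y_true, y_pred))

# Dividing by n makes the value comparable across batches of different sizes
mse = sse / len(y_true)
print(mse)  # 300000000.0
```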
Mean Squared Error 📉

Now it should be easier to understand the formula!

MSE is a commonly used statistical measure and loss function in ML regression models (e.g. linear regression).

You should look into the Mean Absolute Error (MAE) as well, which handles outliers better.
For the people who can "think" better in code, here is a small Python example for calculating the Mean Squared Error. 👨‍💻
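A minimal version could look like this (plain Python plus an equivalent NumPy one-liner):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error between the ground truth labels and the predictions."""
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    return sum((y - y_hat) ** 2 for y, y_hat in zip(y_true, y_pred)) / len(y_true)

def mse_numpy(y_true, y_pred):
    """The same calculation as a NumPy one-liner."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

print(mse([120_000, 60_000], [100_000, 80_000]))        # 400000000.0
print(mse_numpy([120_000, 60_000], [100_000, 80_000]))  # 400000000.0
```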

Thanks to @ArpJann for the idea!

More from @haltakov

28 Jan
Is this formula difficult? 🤔

This is the formula for Gradient Descent with Momentum as presented in Wikipedia.

It may look intimidating at first, but I promise you that by the end of this thread it will be easy to understand!

Thread 👇
The Basis ◻️

Let's break it down! The basis is this simple formula describing an iterative optimization method.

We have some weights (parameters) and we iteratively update them in some way to reach a goal.

Iterative methods are used when we cannot compute the solution directly.
Gradient Descent Update 📉

We define a loss function describing how good our model is. We want to find the weights that minimize the loss (make the model better).

We compute the gradient of the loss and update the weights by a small amount (learning rate) against the gradient.
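A small Python sketch of these update rules (the toy loss, learning rate and momentum factor are made up for illustration; the momentum version uses one common formulation of the formula):

```python
def gd_step(w, grad_fn, lr=0.1):
    """Plain gradient descent: move the weights a small step against the gradient."""
    g = grad_fn(w)
    return [wi - lr * gi for wi, gi in zip(w, g)]

def momentum_step(w, v, grad_fn, lr=0.1, beta=0.9):
    """Gradient descent with momentum: the velocity v accumulates past
    gradients and smooths out the updates (one common formulation)."""
    g = grad_fn(w)
    v = [beta * vi - lr * gi for vi, gi in zip(v, g)]
    w = [wi + vi for wi, vi in zip(w, v)]
    return w, v

# Toy loss f(w) = w1^2 + w2^2 with gradient 2*w - its minimum is at [0, 0]
def grad_fn(w):
    return [2 * wi for wi in w]

w_gd = [5.0, -3.0]
for _ in range(200):
    w_gd = gd_step(w_gd, grad_fn)
print(w_gd)  # plain gradient descent converges towards [0, 0]

w, v = [5.0, -3.0], [0.0, 0.0]
for _ in range(200):
    w, v = momentum_step(w, v, grad_fn)
print(w)  # the momentum version also ends up very close to [0, 0]
```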
Read 7 tweets
27 Jan
How to add new classes to your ML model? 🍏🍎🍊... 🍌?

You have a large multi-class NN in production.

You discover a new important class and want to add support for it *quickly* and with *low* risk.

Example: traffic sign recognition for self-driving cars 🛑🚗

Thread 👇
The naive approach 🤷‍♂️

Collect examples of the new class (for example a new traffic sign), label them and retrain the whole NN.

✅ It will probably work

❌ It will be time consuming, especially for big models.
❌ Risk of unintended regressions
Freezing the first layers 🥶

Typical CNNs learn generic image features in the initial layers and they will likely apply to the new sign as well.

You can freeze the weights of the initial layers and only retrain the last fully connected layer(s).
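In PyTorch this can look roughly like the sketch below (assuming a torchvision-style CNN with a convolutional `features` backbone and a fully connected `classifier` head; the class count is made up):

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical example: reuse a pre-trained CNN and only retrain the head
model = models.vgg16(pretrained=True)

# Freeze the initial (convolutional) layers - their generic image features stay fixed
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last fully connected layer so it has room for the new class
num_classes = 43 + 1  # e.g. the known traffic signs plus the new one (made-up count)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)

# Only the unfrozen parameters are updated during retraining
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
```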
Read 10 tweets
26 Jan
Machine Learning Interview Question #7 🤖🧠🧐

This is a more difficult and more open question...

❓ You are developing a traffic sign detector for a self-driving car.

How would you design it in a way that lets you quickly add support for new signs you didn't see before? ❓
🌟 BONUS QUESTION 🌟:

Can you do this with minimal retraining of your neural network?
Looking forward to some creative answers! 😃

Answer in the replies. Read the rules 👇

Read 4 tweets
12 Jan
What are typical challenges when training deep neural networks? ⁉️

▪️ Overfitting
▪️ Underfitting
▪️ Lack of training data
▪️ Vanishing gradients
▪️ Exploding gradients
▪️ Dead ReLUs
▪️ Network architecture design
▪️ Hyperparameter tuning

How to solve them 👇
Overfitting 🐘

Your model performs well during training, but poorly during test.

Possible solutions:
- Reduce the size of your model
- Add more data
- Increase dropout
- Stop the training early
- Add regularization to your loss
- Decrease batch size
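A couple of these knobs in one place, as a PyTorch sketch (the layer sizes, dropout probability and weight decay are made-up values):

```python
import torch
import torch.nn as nn

# Hypothetical small regression model with dropout between the layers
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # increase p to fight overfitting harder
    nn.Linear(128, 1),
)

# weight_decay adds L2 regularization on the weights to the loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```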
Underfitting 🐁

Your model performs poorly both during training and test.

Possible solutions:
- Increase the size of your model
- Add more data
- Train for a longer time
- Start with a pre-trained network
Read 11 tweets
28 Dec 20
Are you feeling overwhelmed when learning something new? 😫

There is so much information out there and you don't know where to start? 🥴

Here is my strategy to learn new concepts that has helped me a lot in my career...

👇 Thread 👇
The problem with complex topics? 🤔

Today, the problem is not the availability of the information, but its discovery! 🔭

You need to avoid going down the rabbit hole before you are sure it is the right rabbit hole 😀

Learn to focus and prioritize how to spend your time!
Get a rough overview 🗺️

Research the topic you are trying to learn and get a rough idea of the existing concepts. Don't try to understand everything yet!

The goal is to only have an overview of what is out there.

Survey papers about a specific topic are a good example.
Read 7 tweets
27 Dec 20
Artificial Intelligence and Machine Learning trends in 2020 🧠🤖

A short overview of the fields where AI and ML are growing fast.

👇 Thread 👇
Robotics 🤖

Traditional robotics algorithms like localization, mapping, path planning and vehicle/robot control are being successfully replaced by AI versions.

Reinforcement learning is also a big topic here!
Computer Vision 📷

Computer vision grew massively in recent years after great improvements in deep learning. This is one of the fields that benefited most from CNNs.

While computer vision problems are starting to get commoditized, there are still many interesting challenges.
Read 6 tweets
