Day 6 of #31DaysofML

πŸ•Ή Training & Loss πŸ•Ή

In supervised learning, training a model means learning good values for the weights & bias from labeled examples. In doing so, the training algorithm attempts to find a model that minimizes loss. This process is called empirical risk minimization

A 🧡
2/4
πŸ“Œ Loss indicates how bad model's prediction was on an example
πŸ“Œ Loss = 0 if the model's prediction is perfect otherwise it's greater.
πŸ“Œ Goal of training is to find weights & biases that have low loss, on average, across dataset

#31DaysofML
3/4
Squared loss, or L2 loss, is a popular loss function; averaging the individual L2 losses over the dataset gives the MSE described below

L2 loss for a single example = (observation - prediction)^2

#31DaysofML
4/4
Mean squared error (MSE) is the average squared loss per example over the whole dataset. To calculate MSE, sum up all the squared losses for the individual examples and then divide by the number of examples. It looks like this 👇

MSE = (1/N) * Σ (observation - prediction)^2
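A minimal sketch in NumPy, with made-up numbers, showing the per-example L2 loss and the MSE:

```python
# Minimal sketch: per-example squared (L2) loss and MSE, made-up numbers.
import numpy as np

observations = np.array([3.0, -0.5, 2.0, 7.0])
predictions = np.array([2.5, 0.0, 2.0, 8.0])

squared_losses = (observations - predictions) ** 2  # L2 loss per example
mse = squared_losses.mean()  # average squared loss over the dataset
print(mse)  # 0.375
```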

#31DaysofML
Here's a fun read on loss functions by @quaesita 👉 bit.ly/quaesita_emper…

#31DaysofML

β€’ β€’ β€’

Missing some Tweet in this thread? You can try to force a refresh
γ€€

Keep Current with Priyanka Vergadia

Priyanka Vergadia Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @pvergadia

28 Feb
Experimented with Teachable Machine today and created a #nocode classification model in less than 5 mins!

It's a web-based tool that makes building #machinelearning models fast, easy, and accessible to everyone.

See how I did it πŸ§΅πŸ‘‡

teachablemachine.withgoogle.com

#nocode #31DaysofML
How do I use it?

πŸ“ŒGather data (upload it)
πŸ“ŒTrain model (in the web interface)
πŸ“ŒExport the model (use it in your app) Image
What can I use to teach it?

πŸ“ŒImages
πŸ“ŒSounds
πŸ“ŒPoses

We can use files or capture examples live through webcam/microphone.
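Here's a minimal sketch of the export step in Python, assuming you chose the Keras export and that it produced keras_model.h5 and labels.txt (hypothetical file names; check what your download actually contains):

```python
# Minimal sketch: using a Teachable Machine image-model export in an app.
# keras_model.h5 and labels.txt are assumed export file names.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.models.load_model("keras_model.h5", compile=False)
labels = [line.strip() for line in open("labels.txt")]

# Teachable Machine image models typically take 224x224 RGB input
# scaled to [-1, 1]; verify against the snippet shown in the export dialog.
img = Image.open("example.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0

probs = model.predict(x[np.newaxis, ...])[0]
print(labels[int(np.argmax(probs))], float(probs.max()))
```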
15 Feb
Day 14 #31DaysofML

πŸ€” How to pick the right #GoogleCloud #MachineLearning tool for your application?

Answer these questions
❓ What's your team's ML expertise?
❓ How much control/abstraction do you need?
❓ Would you like to handle the infrastructure components?

🧡 πŸ‘‡
@SRobTweets created this pyramid to explain the idea.
As you move up the pyramid, less ML expertise is required, and you also don’t need to worry as much about the infrastructure behind your model.

To learn more, watch this video 👉

#31DaysofML 2/10
@SRobTweets If you're using open-source ML frameworks (#TensorFlow) to build your models, you get the flexibility of moving your workloads across different development & deployment environments. But you need to manage all the infrastructure yourself for training & serving

#31DaysofML 3/10
14 Feb
Day 13 #31DaysofML

βš–οΈ How to deal with imbalanced datasets?βš–οΈ
Most real-world datasets are not perfectly balanced. If 90% of your dataset belongs to one class, & only 10% to the other, how can you prevent your model from predicting the majority class 90% of the time?

🧡 πŸ‘‡
🐱🐱🐱🐱🐱🐱🐱🐱🐱🐢 (90:10)
πŸ’³ πŸ’³ πŸ’³ πŸ’³ πŸ’³ πŸ’³ πŸ’³ πŸ’³ πŸ’³ ⚠️ (90:10)
There can be many reasons for imbalanced data. The first step is to see if it's possible to collect more data. If you're working with all the data that's available, these 👇 techniques can help

#31DaysofML 2/7
Here are 3 techniques for addressing data imbalance. You can use just one of these or all of them together (sketched below):
βš–οΈ Downsampling
βš–οΈ Upsampling
βš–οΈ Weighted classes

#31DaysofML 3/7
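A minimal sketch of all three techniques with NumPy & scikit-learn, on made-up data with a 90:10 split (names and numbers are illustrative):

```python
# Minimal sketch: three ways to handle a 90:10 class imbalance (made-up data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.1).astype(int)  # ~10% minority class

maj_idx = np.where(y == 0)[0]
min_idx = np.where(y == 1)[0]

# Weighted classes: make mistakes on the minority class cost more.
clf = LogisticRegression(class_weight="balanced").fit(X, y)

# Downsampling: keep every minority example, subsample the majority class.
keep = rng.choice(maj_idx, size=len(min_idx), replace=False)
idx = np.concatenate([keep, min_idx])
X_down, y_down = X[idx], y[idx]

# Upsampling: resample the minority class with replacement until it matches.
extra = rng.choice(min_idx, size=len(maj_idx) - len(min_idx), replace=True)
idx = np.concatenate([maj_idx, min_idx, extra])
X_up, y_up = X[idx], y[idx]
```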
11 Feb
Since it's Day 10 of #31DaysofML, it's the perfect day to discuss 1️⃣0️⃣ things that can go wrong with #MachineLearning projects and what you can do about them!

I watched this amazing presentation by @kweinmeister that sums it all up

A 🧡 Image
@kweinmeister 1️⃣ You aren't solving the right problem
❓What's the goal of your ML model?
❓How do you assess if your model is "good" or "bad"?
❓What's your baseline?
πŸ‘‰ Focus on a long-term mission with maximum impact
πŸ‘‰ Ensure that your problem is a good fit for ML

#31DaysofML
@kweinmeister 2️⃣ Jumping into development without a prototype
πŸ‘‰ ML project is an iterative process
πŸ‘‰ Start with simple model & continue to refine it until you've reached your goal
πŸ‘‰ Quick prototype can tell a lot about hidden requirements, implementation challenges, scope, etc

#31DaysofML
10 Feb
Day 9 of #31DaysofML

πŸ’β€β™€οΈ I thought today I would share a tip that has helped me in my #MachineLearning journey
πŸ’‘The best way to learn ML is to pick a problem that you feel excited about & let it guide your learning path. Don't worry about the terms or tools, it's all secondary
Here's an example. A few weeks ago I wanted to live-translate an episode of @GCPPodcast. The first question I asked myself was:
πŸ€” Does any video/audio translation API already exist?
πŸ”Ή If so - I would give that a try
πŸ”Ή If not, I would create it from scratch

#31DaysofML (2/5)
@GCPPodcast Next, I started digging into the Media Translation API, which translates audio & video data.
My point is:
πŸ“Œ You don't always need to create a model
πŸ“Œ Save yourself time & resources by using the models that already exist (if they server your purpose)

#31DaysofML (3/5)
8 Feb
Day 7 of #31DaysofML

⬇️ Reducing Loss ⬇️
Reducing loss is an iterative process of choosing the model parameters that minimize loss
👉 A loss function is how we compute loss
👉 For linear regression, the loss curve is convex

A 🧡 πŸ‘‡ Image
Calculating the loss for every value of w isn't efficient; the most common alternative is gradient descent
👉 Start with any value of w, b (weights & biases)
👉 Keep iterating until the overall loss stops changing or changes extremely slowly
👉 That point is called convergence

#31DaysofML 2/4
As you probably already guessed, a gradient is a vector with:
πŸ‘‰ Direction
πŸ‘‰ Magnitude
Gradient descent algorithms multiply the gradient by a scalar known as the learning rate (or step size) to determine the next point; see the sketch below.

#31DaysofML 3/4
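A minimal sketch of gradient descent for a one-feature linear model, on made-up data (the learning rate and step count are illustrative):

```python
# Minimal sketch: gradient descent for y = w*x + b on made-up data.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)  # true w=3, b=2, plus noise

w, b = 0.0, 0.0
learning_rate = 0.01  # the "step size" scalar

for step in range(1000):
    error = (w * x + b) - y  # prediction - observation
    grad_w = 2 * np.mean(error * x)  # dLoss/dw for MSE loss
    grad_b = 2 * np.mean(error)      # dLoss/db for MSE loss
    w -= learning_rate * grad_w  # move against the gradient
    b -= learning_rate * grad_b

print(f"w={w:.2f}, b={b:.2f}")  # converges near w=3, b=2
```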