📉 Learning Rate 📉 Also known as step size
It's a scalar that is multiplied by the gradient vector (which has a direction & a magnitude) to determine the next point on the loss curve
A 🧵
👉 The smaller the learning rate, the longer it will take to find the lowest loss value
👉 A large learning rate could overshoot the minimum
👉 You want the learning rate to be just right (a Goldilocks learning rate) to reach the convergence value efficiently
Learning rate is one of the "hyperparameters" we tweak in machine learning algorithms, tuning it so we reach the lowest loss value efficiently. There are more hyperparameters, which we'll get to later...
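To make the update rule concrete, here's a minimal sketch in plain Python (the toy loss and the function name are my own, purely for illustration):

```python
def gradient_descent_step(w, grad, learning_rate):
    """One update: move against the gradient, scaled by the learning rate."""
    return w - learning_rate * grad

# Toy loss L(w) = w**2, so the gradient is 2*w and the minimum sits at w = 0.
w = 5.0
for lr in (0.01, 0.1, 1.1):  # too small, about right, too large
    print(f"lr={lr}: w moves from {w} to {gradient_descent_step(w, 2 * w, lr)}")
```

With lr=1.1 the step overshoots the minimum (w jumps from 5.0 to -6.0), while lr=0.01 barely moves: exactly the tradeoff above.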
Answer these questions
❓ What's your team's ML expertise?
❓ How much control/abstraction do you need?
❓ Would you like to handle the infrastructure components?
🧵👇
@SRobTweets created this pyramid to explain the idea.
As you move up the pyramid, less ML expertise is required, and you also don't need to worry as much about the infrastructure behind your model.
@SRobTweets If you're using open-source ML frameworks (#TensorFlow) to build your models, you get the flexibility to move your workloads across different development & deployment environments. But you need to manage all the infrastructure for training & serving yourself
⚖️ How to deal with imbalanced datasets? ⚖️
Most real-world datasets are not perfectly balanced. If 90% of your dataset belongs to one class, & only 10% to the other, how can you prevent your model from predicting the majority class 90% of the time?
🧵👇
🐱🐱🐱🐱🐱🐱🐱🐱🐱🐶 (90:10)
🌳 🌳 🌳 🌳 🌳 🌳 🌳 🌳 🌳 ⚠️ (90:10)
There can be many reasons for imbalanced data. The first step is to see if it's possible to collect more data. If you're already working with all the data that's available, these 👇 techniques can help
Here are 3 techniques for addressing data imbalance (see the sketch after this list). You can use just one of them or all of them together:
✔️ Downsampling
✔️ Upsampling
✔️ Weighted classes
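Here's a minimal NumPy sketch of all three (the toy labels are made up, and the "balanced" weight formula n_samples / (n_classes * n_c) is one common heuristic, not the only option):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
y = np.array([0] * 90 + [1] * 10)            # 90:10 imbalance, like the example above
majority = np.flatnonzero(y == 0)
minority = np.flatnonzero(y == 1)

# 1. Downsampling: keep only as many majority examples as there are minority ones
down_idx = rng.choice(majority, size=minority.size, replace=False)

# 2. Upsampling: resample the minority class (with replacement) up to the majority size
up_idx = rng.choice(minority, size=majority.size, replace=True)

# 3. Weighted classes: keep the data as-is, but weight each class in the loss
n_samples, n_classes = y.size, 2
weights = {c: n_samples / (n_classes * np.sum(y == c)) for c in (0, 1)}
print(weights)  # {0: ~0.56, 1: 5.0} -> errors on the rare class cost ~9x more
```

Many frameworks accept per-class weights directly, e.g. the class_weight argument on scikit-learn estimators or in Keras' Model.fit.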
Since it's Day 10 of #31DaysofML, it's the perfect time to discuss 1️⃣0️⃣ things that can go wrong with #MachineLearning projects, and what you can do about them!
I watched this amazing presentation by @kweinmeister that sums it all up
A 🧵
@kweinmeister 1️⃣ You aren't solving the right problem
❓ What's the goal of your ML model?
❓ How do you assess whether your model is "good" or "bad"?
❓ What's your baseline? (see the sketch below)
👉 Focus on a long-term mission with maximum impact
👉 Ensure that your problem is a good fit for ML
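On that baseline question: the classic sanity check is a majority-class predictor. A tiny sketch (toy labels of my own) showing why raw accuracy can flatter a model that has learned nothing:

```python
import numpy as np

y_test = np.array([0] * 90 + [1] * 10)   # 90:10 class split
baseline = np.zeros_like(y_test)         # always predict the majority class
print(f"baseline accuracy: {np.mean(baseline == y_test):.0%}")  # 90%, yet class 1 is never found
```

If your model can't beat this number, it isn't "good" no matter how high the accuracy looks.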
@kweinmeister 2️⃣ Jumping into development without a prototype
👉 An ML project is an iterative process
👉 Start with a simple model & keep refining it until you've reached your goal
👉 A quick prototype can tell you a lot about hidden requirements, implementation challenges, scope, etc.
🙋‍♀️ I thought today I would share a tip that has helped me in my #MachineLearning journey
💡 The best way to learn ML is to pick a problem you feel excited about & let it guide your learning path. Don't worry about the terms or tools; they're all secondary
Here's an example. A few weeks ago I wanted to live-translate an episode of @GCPPodcast. The first question I asked myself was:
🤔 Does any video/audio translation API already exist?
🔹 If so, I would give that a try
🔹 If not, I would create it from scratch
@GCPPodcast Next, I started digging into the Media Translation API, which translates audio & video data.
My point is:
👉 You don't always need to create a model
👉 Save yourself time & resources by using models that already exist (if they serve your purpose)
⬇️ Reducing Loss ⬇️
An iterative process of choosing model parameters that minimize loss
👉 The loss function is how we compute loss
👉 The loss curve is convex for linear regression
A 🧵👇
Calculating the loss for every possible value of w isn't efficient. The most common alternative is gradient descent:
👉 Start with any value of w, b (weights & bias)
👉 Keep going until the overall loss stops changing, or changes extremely slowly
👉 That point is called convergence
As you probably already guessed, the gradient is a vector with:
👉 Direction
👉 Magnitude
Gradient descent algorithms multiply the gradient by a scalar known as the learning rate (or step size) to determine the next point.
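Tying the thread together, here's a minimal gradient-descent sketch for one-feature linear regression (the toy data, function names, and stopping threshold are my own choices, not a canonical implementation):

```python
import numpy as np

def mse_loss(w, b, x, y):
    """Mean squared error of the linear model y_hat = w*x + b."""
    return np.mean((w * x + b - y) ** 2)

def gradient_descent(x, y, learning_rate=0.1, tol=1e-9, max_steps=10_000):
    w, b = 0.0, 0.0                        # start with any value of w, b
    loss = mse_loss(w, b, x, y)
    for _ in range(max_steps):
        err = w * x + b - y
        grad_w = 2 * np.mean(err * x)      # dL/dw
        grad_b = 2 * np.mean(err)          # dL/db
        w -= learning_rate * grad_w        # next point = current - learning rate * gradient
        b -= learning_rate * grad_b
        new_loss = mse_loss(w, b, x, y)
        if abs(loss - new_loss) < tol:     # loss stopped changing -> convergence
            break
        loss = new_loss
    return w, b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * x + 1                              # gradient descent should recover w≈2, b≈1
print(gradient_descent(x, y))
```

Because the loss curve is convex here, this converges to the global minimum; the learning-rate tradeoffs from the top of the section apply directly.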