🙋‍♀️ I thought today I would share a tip that has helped me in my #MachineLearning journey
💡 The best way to learn ML is to pick a problem you feel excited about & let it guide your learning path. Don't worry about the terms or tools; they're all secondary
Here's an example. A few weeks ago I wanted to live-translate an episode of @GCPPodcast. The first question I asked myself was:
🤔 Does a video/audio translation API already exist?
🔹 If so, I would give that a try
🔹 If not, I would create one from scratch
@GCPPodcast Next, I started digging into the Media Translation API, which translates audio & video data.
My point is:
👉 You don't always need to create a model
👉 Save yourself time & resources by using models that already exist (if they serve your purpose)
@GCPPodcast The Media Translation API served my purpose (quick sketch below 👇):
🔹 Easy to set up
🔹 Real-time translations
🔹 High accuracy
🔹 No need to run separate Speech-to-Text & Translation models
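If you want to try it yourself, here's a rough sketch of a streaming call with the Python client. This is from memory of the v1beta1 `google-cloud-media-translation` library, so treat the exact names & fields as assumptions and check the current docs:

```python
# pip install google-cloud-media-translation
# Sketch only, from memory of the v1beta1 client: verify names against current docs.
from google.cloud import mediatranslation

client = mediatranslation.SpeechTranslationServiceClient()

# Translate streaming English audio into Spanish text.
speech_config = mediatranslation.TranslateSpeechConfig(
    audio_encoding="linear16",
    source_language_code="en-US",
    target_language_code="es-ES",
)
config = mediatranslation.StreamingTranslateSpeechConfig(audio_config=speech_config)

def requests(audio_chunks):
    # The first request carries the config; the rest carry raw audio bytes.
    yield mediatranslation.StreamingTranslateSpeechRequest(streaming_config=config)
    for chunk in audio_chunks:
        yield mediatranslation.StreamingTranslateSpeechRequest(audio_content=chunk)

# audio_chunks would come from the episode's audio (e.g. a mic or file reader):
# for response in client.streaming_translate_speech(requests(audio_chunks)):
#     print(response.result.text_translation_result.translation)
```

You stream audio chunks in and get translated text back in near real time, which is what made it a fit for live-translating a podcast episode.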
@GCPPodcast If you're convinced that no existing model/API serves your needs, then look at creating custom models. Even then, choose easier paths such as AutoML, which lets you upload your data and automates the training for you. More on this later... 👇
Answer these questions:
❓ What's your team's ML expertise?
❓ How much control/abstraction do you need?
❓ Would you like to handle the infrastructure components?
🧵👇
@SRobTweets created this pyramid to explain the idea.
As you move up the pyramid, less ML expertise is required, and you also don't need to worry as much about the infrastructure behind your model.
@SRobTweets If you're using open-source ML frameworks (#TensorFlow) to build the models, you get the flexibility of moving your workloads across different development & deployment environments. But you need to manage all the infrastructure yourself for training & serving
⚖️ How to deal with imbalanced datasets? ⚖️
Most real-world datasets are not perfectly balanced. If 90% of your dataset belongs to one class, & only 10% to the other, how can you prevent your model from predicting the majority class 90% of the time?
🧵👇
🐱🐱🐱🐱🐱🐱🐱🐱🐱🐶 (90:10)
🌳🌳🌳🌳🌳🌳🌳🌳🌳⚠️ (90:10)
There can be many reasons for imbalanced data. The first step is to see if it's possible to collect more data. If you're working with all the data that's available, these 👇 techniques can help
Here are 3 techniques for addressing data imbalance. You can use just one of these or all of them together (sketch after the list):
✔️ Downsampling
✔️ Upsampling
✔️ Weighted classes
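To make these concrete, here's a minimal NumPy sketch of all three on a hypothetical 90:10 label array (mirroring the 🐱/🐶 example above; the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([0] * 90 + [1] * 10)                 # 90 cats 🐱, 10 dogs 🐶
majority = np.where(y == 0)[0]
minority = np.where(y == 1)[0]

# 1️⃣ Downsampling: keep only as many majority examples as minority ones
down_idx = np.concatenate(
    [rng.choice(majority, size=len(minority), replace=False), minority])

# 2️⃣ Upsampling: repeat minority examples until the classes match
up_idx = np.concatenate(
    [majority, rng.choice(minority, size=len(majority), replace=True)])

# 3️⃣ Weighted classes: make each minority mistake cost 9x more in the loss
class_weight = {0: 1.0, 1: len(majority) / len(minority)}    # {0: 1.0, 1: 9.0}

# Index X & y with down_idx/up_idx to build the resampled training set,
# or pass class_weight to your framework (e.g. Keras: model.fit(..., class_weight=class_weight))
```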
Since it's Day 10 of #31DaysofML, it's the perfect time to discuss 1️⃣0️⃣ things that can go wrong with #MachineLearning projects and what you can do about them!
I watched this amazing presentation by @kweinmeister that sums it all up
A 🧵
@kweinmeister 1️⃣ You aren't solving the right problem
❓ What's the goal of your ML model?
❓ How do you assess if your model is "good" or "bad"?
❓ What's your baseline?
👉 Focus on a long-term mission with maximum impact
👉 Ensure that your problem is a good fit for ML
@kweinmeister 2️⃣ Jumping into development without a prototype
👉 An ML project is an iterative process
👉 Start with a simple model & keep refining it until you've reached your goal
👉 A quick prototype can tell you a lot about hidden requirements, implementation challenges, scope, etc.
⬇️ Reducing Loss ⬇️
An iterative process of choosing model parameters that minimize loss
👉 The loss function is how we compute loss
👉 The loss curve is convex for linear regression (toy example below 👇)
A 🧵👇
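For intuition, here's a toy sketch (my own made-up data, squared loss): scan candidate weights w and the loss traces a single bowl. That's the convexity.

```python
import numpy as np

# Toy data: y = 3x + noise (the "true" weight is 3)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 3 * x + rng.normal(0, 0.1, 50)

def loss(w):
    # Squared loss for the model y_hat = w * x (bias dropped to keep it 1-D)
    return np.mean((y - w * x) ** 2)

# Brute-force scan over w: the printed losses trace a single bowl (convex!)
for w in np.linspace(0, 6, 7):
    print(f"w={w:.1f}  loss={loss(w):.3f}")
```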
Calculating the loss for every value of w isn't efficient. The most common alternative is gradient descent (sketch below 👇):
👉 Start with any values of w, b (weights & biases)
👉 Repeatedly adjust them in the direction that lowers the loss
👉 Keep going until the overall loss stops changing or changes very slowly
👉 That point is called convergence
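And here's that loop as a minimal sketch on the same kind of toy data (the learning rate & stopping threshold are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 3 * x + 2 + rng.normal(0, 0.1, 50)    # true weight 3, true bias 2

w, b, lr = 0.0, 0.0, 0.1                  # start anywhere; lr is the learning rate
prev_loss = float("inf")
while True:
    y_hat = w * x + b
    loss = np.mean((y - y_hat) ** 2)
    if abs(prev_loss - loss) < 1e-9:      # convergence: loss has stopped changing
        break
    prev_loss = loss
    # Gradients of the squared loss with respect to w and b
    grad_w = -2 * np.mean((y - y_hat) * x)
    grad_b = -2 * np.mean(y - y_hat)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"converged: w={w:.2f}, b={b:.2f}")  # close to the true 3 and 2
```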
As you probably already guessed, a gradient is a vector with:
👉 Direction
👉 Magnitude
Gradient descent algorithms multiply the gradient by a scalar known as the learning rate (or step size) to determine the next point.
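In code, that next-point step is a single line. Note the minus sign: we move against the gradient, i.e. downhill (the values here are hypothetical):

```python
learning_rate = 0.01          # the scalar step size (hypothetical value)
w, gradient = 5.0, 2.0        # hypothetical current weight & its gradient
w_next = w - learning_rate * gradient   # step AGAINST the gradient, downhill
print(w_next)                 # 4.98
```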
In supervised learning, training a model means learning good values for the weights & bias from labeled examples. In doing so, it attempts to find a model that minimizes loss. This process is called empirical risk minimization
A 🧵
2/4 👉 Loss indicates how bad the model's prediction was on a single example
👉 Loss = 0 if the model's prediction is perfect; otherwise it's greater
👉 The goal of training is to find weights & biases that have low loss, on average, across the dataset (tiny example below 👇)
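Concretely, with squared error as the loss: a perfect prediction gives 0, anything else is positive, and training minimizes the average across the dataset (the "empirical risk"). A tiny sketch with made-up numbers:

```python
import numpy as np

def squared_loss(y_true, y_pred):
    return (y_true - y_pred) ** 2          # 0 only when the prediction is perfect

y_true = np.array([3.0, -1.0, 2.0])        # made-up labels
y_pred = np.array([3.0,  0.0, 2.5])        # made-up predictions
per_example = squared_loss(y_true, y_pred) # [0.  1.  0.25]
print(per_example.mean())                  # 0.4166... = average loss = empirical risk
```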