Since it is Day 10 of #31DaysofML, it's the perfect time to discuss 1️⃣0️⃣ things that can go wrong with #MachineLearning projects and what you can do about them!
I watched this amazing presentation by @kweinmeister that sums it all up
A 🧵
@kweinmeister 1️⃣ You aren't solving the right problem
❓What's the goal of your ML model?
❓How do you assess if your model is "good" or "bad"?
❓What's your baseline?
👉 Focus on a long-term mission with maximum impact
👉 Ensure that your problem is a good fit for ML
@kweinmeister 2️⃣ Jumping into development without a prototype
👉 ML project is an iterative process
👉 Start with a simple model & continue to refine it until you've reached your goal
👉 A quick prototype can tell you a lot about hidden requirements, implementation challenges, scope, etc.
@kweinmeister 3️⃣ Model training can take a long time
👉 When your team is trying to rapidly iterate on new ideas and techniques, training time can slow down projects and get in the way of innovation.
👉 Use GPUs or TPUs to speed up training
👉 Use serverless training methods
@kweinmeister 4️⃣ You have an imbalanced dataset
❓ How can you balance accuracy across each class?
👉 Weighting each class
👉 Oversampling and undersampling
👉 Generating synthetic data
Here's a tutorial to check out: tensorflow.org/tutorials/stru…
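Class weighting can be sketched in a few lines of plain Python. The helper below is hypothetical, but it mirrors the common "balanced" heuristic (weight = total / (n_classes × class_count)), so rarer classes contribute more to the loss:

```python
from collections import Counter

def class_weights(labels):
    """Weight each class inversely to its frequency (hypothetical helper)."""
    counts = Counter(labels)
    total, n_classes = len(labels), len(counts)
    # A perfectly balanced dataset gives every class weight 1.0
    return {c: total / (n_classes * n) for c, n in counts.items()}

# 90:10 imbalance — the minority class gets a 5x weight
labels = ["majority"] * 90 + ["minority"] * 10
weights = class_weights(labels)  # {'majority': 0.55..., 'minority': 5.0}
```

Frameworks like Keras accept a similar mapping (keyed by class index) via the `class_weight` argument to `model.fit`.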
@kweinmeister 5️⃣ Your model's accuracy is not good enough
👉 Include more varied training data
👉 Feature engineering
👉 Try different model architectures, hyper-parameter tuning & ensembles
👉 Remove features that may be causing overfitting, start with a smaller model & add features slowly
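As a toy sketch of the hyper-parameter tuning idea, here's a minimal grid search in plain Python; `evaluate` is a hypothetical stand-in for a real train-and-validate step:

```python
from itertools import product

def evaluate(lr, depth):
    """Stand-in for training + validation; scores peak at lr=0.1, depth=3."""
    return -(lr - 0.1) ** 2 - (depth - 3) ** 2

grid = {"lr": [0.01, 0.1, 1.0], "depth": [1, 3, 5]}

# Try every combination and keep the best-scoring one
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda params: evaluate(**params),
)
# best == {'lr': 0.1, 'depth': 3}
```

Real tuners (random search, Bayesian optimization) follow the same shape: propose parameters, score them, keep the best.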
@kweinmeister 6️⃣ Your model doesn't serve all of your users well
"Fairness is the process of understanding bias introduced by your data & ensuring your model provides equitable predictions across all demographic groups"
- @SRobTweets read more 👉 bit.ly/2Ly9wMM
@kweinmeister @SRobTweets 7️⃣ It's unclear how your model works
👉 How will you be able to explain how your model makes predictions?
👉 Under what conditions does the model perform best and most consistently?
👉 Does it have blind spots? If so, where?
Read about Explainable AI 👉 bit.ly/3aFJdwx
@kweinmeister @SRobTweets 8️⃣ You could accidentally push a bad model into production
👉 Create an ML pipeline that runs the same steps in the same environment every time, with a trigger/schedule
👉 Include steps: data validation, model evaluation, conditional deployment
👉 Track pipeline runs & artifacts
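The gated-pipeline idea can be sketched like this; every step here is a hypothetical stub — the point is the control flow that blocks a bad model from reaching production:

```python
def validate(data):
    """Data validation step (stub): reject empty input."""
    return len(data) > 0

def train(data):
    """Training step (stub)."""
    return {"trained_on": len(data)}

def evaluate(model):
    """Model evaluation step (stub): pretend we measured accuracy."""
    return 0.95

def deploy(model):
    """Deployment step (stub)."""
    print("deploying", model)

def run_pipeline(data, baseline=0.90):
    # Conditional deployment: a model ships only if the data passed
    # validation AND it beats the current baseline metric.
    if not validate(data):
        return "rejected: data validation failed"
    model = train(data)
    if evaluate(model) < baseline:
        return "rejected: below baseline"
    deploy(model)
    return "deployed"
```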
@kweinmeister @SRobTweets 1️⃣0️⃣ Model inference isn't scaling well in production
👉 Choose the right deployment tools that can scale
👉 Serve online endpoints for low-latency predictions, or predictions on massive batches of data
👉 Scale automatically based on your traffic
Answer these questions
❓ What's your team's ML expertise?
❓ How much control/abstraction do you need?
❓ Would you like to handle the infrastructure components?
🧵 👇
@SRobTweets created this pyramid to explain the idea.
As you move up the pyramid, less ML expertise is required, and you also don’t need to worry as much about the infrastructure behind your model.
@SRobTweets If you’re using open-source ML frameworks (#TensorFlow) to build the models, you get the flexibility of moving your workloads across different development & deployment environments. But you need to manage all the infrastructure yourself for training & serving
⚖️ How to deal with imbalanced datasets?⚖️
Most real-world datasets are not perfectly balanced. If 90% of your dataset belongs to one class, & only 10% to the other, how can you prevent your model from predicting the majority class 90% of the time?
🧵 👇
🐱🐱🐱🐱🐱🐱🐱🐱🐱🐶 (90:10)
💳 💳 💳 💳 💳 💳 💳 💳 💳 ⚠️ (90:10)
There can be many reasons for imbalanced data. The first step is to see if it's possible to collect more data. If you're already working with all the data that's available, these 👇 techniques can help
Here are 3 techniques for addressing data imbalance. You can use just one of these or all of them together:
⚖️ Downsampling
⚖️ Upsampling
⚖️ Weighted classes
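The first two techniques can be sketched with the standard library alone, using the 90:10 cat/dog split above:

```python
import random

random.seed(0)  # reproducible sketch
majority = ["🐱"] * 90
minority = ["🐶"] * 10

# ⚖️ Downsampling: throw away majority examples until the classes match
downsampled = random.sample(majority, len(minority)) + minority

# ⚖️ Upsampling: repeat minority examples (sampling with replacement)
upsampled = majority + random.choices(minority, k=len(majority))
```

Downsampling discards information; upsampling keeps it but risks overfitting to the repeated minority examples — which is why the third option, weighted classes, is often tried first.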
💁‍♀️ I thought today I would share a tip that has helped me in my #MachineLearning journey
💡The best way to learn ML is to pick a problem that you feel excited about & let it guide your learning path. Don't worry about the terms or tools, it's all secondary
Here's an example. A few weeks ago I wanted to live-translate an episode of @GCPPodcast. The first question I asked myself was:
🤔 Does any video/audio translation API already exist?
🔹 If so - I would give that a try
🔹 If not, I would create it from scratch
@GCPPodcast Next, I started digging into the Media Translation API which would translate audio & video data.
My point is:
📌 You don't always need to create a model
📌 Save yourself time & resources by using the models that already exist (if they serve your purpose)
⬇️ Reducing Loss ⬇️
An iterative process of choosing model parameters that minimize loss
👉 The loss function is how we compute loss
👉 The loss function curve is convex for linear regression
A 🧵 👇
Calculating loss for every possible value of w isn't efficient: the most common approach is called gradient descent
👉 Start with any value of w, b (weights & biases)
👉 Keep going until overall loss stops changing or changes slowly
👉 That point is called convergence
As you probably already guessed, gradient is a vector with:
👉 Direction
👉 Magnitude
Gradient descent algorithms multiply the gradient by a scalar known as the learning rate (or step size) to determine the next point.
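The steps above can be put together into a minimal gradient-descent loop for a one-feature linear model; the toy data here is assumed to be generated by w=2, b=1:

```python
# Toy data generated by y = 2x + 1
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0   # start with any value of w, b
lr = 0.05         # learning rate (step size)
n = len(xs)

for _ in range(2000):
    # Gradient of the MSE loss with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    # Step in the direction opposite the gradient, scaled by lr
    w -= lr * grad_w
    b -= lr * grad_b

# At convergence, w and b sit close to 2 and 1
```

Because the MSE loss for linear regression is convex, this loop converges to the single global minimum regardless of the starting values of w and b (as long as the learning rate is small enough).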
In supervised learning, training a model means learning good values for the weights & biases from labeled examples. In doing so, it attempts to find a model that minimizes loss. This process is called empirical risk minimization
A 🧵
2/4 📌 Loss indicates how bad the model's prediction was on an example
📌 Loss = 0 if the model's prediction is perfect; otherwise it's greater.
📌 Goal of training is to find weights & biases that have low loss, on average, across dataset
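This definition can be made concrete with mean squared error, one common loss function for regression:

```python
def mse(predictions, labels):
    """Mean squared error: 0 for a perfect model, greater otherwise."""
    return sum((p - y) ** 2 for p, y in zip(predictions, labels)) / len(labels)

perfect = mse([3.0, 5.0], [3.0, 5.0])     # 0.0 -> predictions match labels
off_by_one = mse([2.0, 5.0], [3.0, 5.0])  # (1 + 0) / 2 = 0.5
```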