⚖️ How to deal with imbalanced datasets? ⚖️
Most real-world datasets are not perfectly balanced. If 90% of your dataset belongs to one class and only 10% to the other, how can you prevent your model from simply predicting the majority class 90% of the time?
🧵 👇
🐱🐱🐱🐱🐱🐱🐱🐱🐱🐶 (90:10)
💳 💳 💳 💳 💳 💳 💳 💳 💳 ⚠️ (90:10)
There can be many reasons for imbalanced data. The first step is to see whether it's possible to collect more data. If you're already working with all the data that's available, these 👇 techniques can help
Here are 3 techniques for addressing data imbalance. You can use just one of these or all of them together:
⚖️ Downsampling
⚖️ Upsampling
⚖️ Weighted classes
📌 Downsampling 📌
This technique removes a random subset of the majority class from your data
Original dataset: 🐱🐱🐱🐱🐱🐱🐱🐱🐱🐱 🐶
Downsampling: 🐱🐱🐱🐱 🐶
Note: this requires starting with a dataset large enough that dropping examples doesn't throw away meaningful information
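Here's a minimal downsampling sketch in Python using pandas and scikit-learn's `resample`; the DataFrame layout, the `label` column name, and the toy cat/dog data are assumptions for illustration:

```python
import pandas as pd
from sklearn.utils import resample

def downsample(df: pd.DataFrame, label_col: str = "label") -> pd.DataFrame:
    # Split into majority and minority classes based on label frequency.
    counts = df[label_col].value_counts()
    majority = df[df[label_col] == counts.index[0]]
    minority = df[df[label_col] == counts.index[-1]]

    # Randomly drop majority-class rows until both classes are the same size.
    majority_down = resample(
        majority,
        replace=False,            # sample without replacement
        n_samples=len(minority),  # match the minority class size
        random_state=42,          # reproducibility
    )
    # Recombine and shuffle.
    return pd.concat([majority_down, minority]).sample(frac=1, random_state=42)

# Toy 90:10 dataset: 90 "cat" rows and 10 "dog" rows.
df = pd.DataFrame({"feature": range(100), "label": ["cat"] * 90 + ["dog"] * 10})
print(downsample(df)["label"].value_counts())  # cat 10, dog 10
```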
📌 Upsampling 📌
In this technique you generate new examples for your minority class that live in a similar feature space to the existing ones
Original dataset: ⚽️⚽️⚽️⚽️⚽️⚽️⚽️ 🏀
Upsampling: ⚽️⚽️⚽️⚽️⚽️⚽️⚽️🏀🏀🏀🏀🏀
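One way to do this is SMOTE from the imbalanced-learn library, which synthesizes new minority-class examples by interpolating between existing ones in feature space; the random toy data below is just an assumption to make the sketch runnable:

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Toy 90:10 dataset (stand-in for your real features and labels).
X = np.random.rand(100, 4)
y = np.array([0] * 90 + [1] * 10)

# Generate synthetic minority-class examples until the classes are balanced.
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)

print(np.bincount(y))            # [90 10] before
print(np.bincount(y_resampled))  # [90 90] after upsampling
```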
📌 Weighted classes 📌
In this technique you tell your model to give more weight to specific examples in your dataset
Original dataset: 🍎🍎🍎🍎🍎🍎🍎 🍋
Pay more attention to 🍋 minority class
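In Keras (TensorFlow), this is the `class_weight` argument to `model.fit`; the tiny model, the toy 90:10 data, and the 1:9 weights below are assumptions for illustration:

```python
import numpy as np
import tensorflow as tf

# Toy 90:10 dataset (stand-in for your real features and labels).
X_train = np.random.rand(100, 4).astype("float32")
y_train = np.array([0] * 90 + [1] * 10, dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Scale the loss so each minority-class (🍋) example counts ~9x as much
# as a majority-class (🍎) example.
class_weight = {0: 1.0, 1: 9.0}
model.fit(X_train, y_train, epochs=5, class_weight=class_weight, verbose=0)
```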
Answer these questions
❓ What's your team's ML expertise?
❓ How much control/abstraction do you need?
❓ Would you like to handle the infrastructure components?
🧵 👇
@SRobTweets created this pyramid to explain the idea.
As you move up the pyramid, less ML expertise is required, and you also don’t need to worry as much about the infrastructure behind your model.
@SRobTweets If you’re using open-source ML frameworks (#TensorFlow) to build your models, you get the flexibility of moving your workloads across different development & deployment environments. But you need to manage all the infrastructure yourself for training & serving
📌 Quantity & quality of your data dictate how accurate your model is
📌The outcome of this step is usually a table with some values (features)
📌 If you want to use pre-collected data, get it from sources such as Kaggle or BigQuery Public Datasets
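As a hedged sketch, here's one way to pull a BigQuery public dataset into a DataFrame with the official Python client; the sample table is just one of the public datasets, and the snippet assumes your GCP credentials are already configured:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP credentials/project

query = """
    SELECT *
    FROM `bigquery-public-data.samples.natality`
    LIMIT 1000
"""

# Run the query and load the result into a pandas DataFrame
# (requires the pandas and db-dtypes packages).
df = client.query(query).to_dataframe()
print(df.shape)
```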