Let's talk about a common problem in ML - imbalanced data ⚖️
Imagine we want to detect all pixels belonging to a traffic light from a self-driving car's camera. We train a model that achieves 99.88% accuracy. Pretty cool, right?
Actually, this model is useless ❌
Let me explain 👇
The problem is the data is severely imbalanced - the ratio between traffic light pixels and background pixels is 800:1.
If we don't take any measures, our model will learn to classify every pixel as background, giving us 99.88% accuracy. But it's useless!
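To see where that number comes from, here is the arithmetic behind the 800:1 ratio:

```python
# A model that always predicts "background" is right 800 out of every 801 pixels
background, traffic_light = 800, 1
accuracy = background / (background + traffic_light)
print(f"{accuracy:.2%}")  # 99.88%
```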
What can we do? 👇
Let me tell you about 4 ways of dealing with imbalanced data:
▪️ Choose the right evaluation metric
▪️ Undersampling your dataset
▪️ Oversampling your dataset
▪️ Adapting the loss
Let's dive in 👇
1️⃣ Evaluation metrics
Looking at the overall accuracy is a very bad idea when dealing with imbalanced data. There are other measures that are much better suited:
▪️ Precision
▪️ Recall
▪️ F1 score
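Here is a minimal sketch of how you could compute these metrics with scikit-learn (the labels below are made up just for illustration):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# 1 = traffic light pixel, 0 = background (toy labels for illustration)
y_true = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]  # the model misses 2 of the 3 positives

print("Accuracy: ", accuracy_score(y_true, y_pred))   # 0.8 - looks fine
print("Precision:", precision_score(y_true, y_pred))  # 1.0 - no false positives
print("Recall:   ", recall_score(y_true, y_pred))     # 0.33 - most positives missed
print("F1 score: ", f1_score(y_true, y_pred))         # 0.5 - exposes the problem
```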
2️⃣ Undersampling
The idea is to throw away samples of the overrepresented classes.
One way to do this is to randomly throw away samples. Ideally, however, we only want to throw away redundant samples that look very similar to ones we keep.
Here is a strategy to achieve that 👇
Clever Undersampling
▪️ Compute image features for each sample using a pre-trained CNN
▪️ Cluster images by visual appearance using k-means, DBSCAN etc.
▪️ Remove similar samples from the clusters (check out for example the Near-Miss or the Tomek Links strategies)
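A rough sketch of that pipeline, assuming a recent torchvision and scikit-learn (the images, the number of clusters, and the keep-one-per-cluster rule are placeholders to adapt to your data):

```python
import numpy as np
import torch
from sklearn.cluster import KMeans
from torchvision import models

# Placeholder batch of background crops - replace with your real images
images = torch.randn(256, 3, 224, 224)

# 1. Pre-trained CNN as a feature extractor (classification head removed)
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

# 2. Compute one feature vector per image
with torch.no_grad():
    features = backbone(images).numpy()

# 3. Cluster images by visual appearance
kmeans = KMeans(n_clusters=50, random_state=0).fit(features)

# 4. Keep only the sample closest to each cluster center, drop the rest
distances = np.linalg.norm(features[:, None] - kmeans.cluster_centers_[None], axis=2)
kept_indices = np.unique(distances.argmin(axis=0))
```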
👇
3️⃣ Oversampling
The idea here is to generate new samples for the underrepresented classes. The easiest way to do this is, of course, to simply repeat existing samples. However, this doesn't give us any new information.
Some better strategies 👇
Data Augmentation
Create new samples by modifying the existing ones. You can apply many different transformations, for example: flips, rotations, crops, scaling, blurring, or color changes.
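With torchvision, such a pipeline could look like this (the exact transformations and parameters depend on your data, so treat this as a sketch):

```python
from PIL import Image
from torchvision import transforms

# Placeholder image - replace with a real crop of the underrepresented class
image = Image.new("RGB", (256, 256))

# Random transformations applied on the fly during training
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

augmented_image = augment(image)  # a new, slightly different sample
```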
Combining Samples
The idea is to create new samples by combining two existing ones.
This technique is more common when working with tabular data, but it can be used for images as well. For that, we can combine the images in feature space and reconstruct them using an autoencoder.
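For tabular data, a well-known implementation of this idea is SMOTE. A minimal sketch using the imbalanced-learn package on a synthetic dataset:

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Toy imbalanced dataset: roughly 1% positive samples
X, y = make_classification(
    n_samples=10_000, n_features=10, weights=[0.99, 0.01], random_state=0
)
print("Before:", Counter(y))

# SMOTE creates new minority samples by interpolating between existing ones
X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X, y)
print("After: ", Counter(y_resampled))
```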
Synthetic Data
Another option is to generate synthetic data to add to our dataset. This can be done either using a GAN or using a realistic simulation to render new images.
There are even companies that specialize in this, like paralleldomain.com (not affiliated)
👇
4️⃣ Adapting the loss function
Finally, an easy way to deal with the imbalance is directly in the loss function. We can give samples of the underrepresented class more weight, so they contribute more to the loss.
Here is an example of how to do it in code 👇
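A minimal sketch in PyTorch, using the weight argument of CrossEntropyLoss (the 800:1 weighting just mirrors the ratio from above and should be tuned for your data):

```python
import torch
import torch.nn as nn

# Give the rare traffic light class much more weight than the background class
class_weights = torch.tensor([1.0, 800.0])  # [background, traffic light]
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

# Dummy per-pixel predictions and labels: [batch, classes, H, W] and [batch, H, W]
logits = torch.randn(4, 2, 64, 64)
targets = torch.randint(0, 2, (4, 64, 64))

loss = loss_fn(logits, targets)  # misclassified traffic light pixels now cost far more
```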
So, let's recap the main ideas when dealing with imbalanced data:
▪️ Make sure you are using the right evaluation metric
▪️ Use undersampling and oversampling techniques to improve your dataset
▪️ Use class weights in your loss function
I regularly write threads like this to help people get started with Machine Learning.
If you are interested in seeing more, follow me @haltakov.
How can I prove to you that I know a secret, without revealing any information about the secret itself?
This is called a zero-knowledge proof and it is a super interesting area of cryptography! But how does it work?
Thread 🧵
Let's start with an example
Peggie and Victor travel between cities A and B. There are two paths - a long path and a short path. The problem is that there is a gate on the short path for which you need a password.
Peggie knows the password, but Victor doesn't.
👇
Victor wants to buy the password from Peggie so he can use the short path.
But what if Victor pays Peggie and it turns out she lied and doesn't actually know the password? How can Peggie prove to Victor that she knows the password, without actually revealing it?
Rescue Toadz looks like a regular NFT collection at first - you can mint a toad and you get an NFT in your wallet.
100% of the mint fee is directly sent to @Unchainfund - an organization that provides humanitarian aid to Ukraine and that has already raised $9M!
👇
The process is completely trustless and automatic! All the logic is coded in the smart contract which cannot be changed and which everybody can inspect.
You trust the code, not us! We have no way to steal the funds even if we wanted (we don't 😀).
Principal Component Analysis is a commonly used method for dimensionality reduction.
It's a good example of how fairly complex math can have an intuitive explanation and be easy to use in practice.
Let's start with the applications of PCA 👇
Dimensionality Reduction
This is one of the common uses of PCA in machine learning.
Imagine you want to predict house prices. You get a large table of many houses and different features for them like size, number of rooms, location, age, etc.
Some features seem correlated 👇
Correlated features
For example, the size of the house is correlated with the number of rooms. Bigger houses tend to have more rooms.
Another example could be the age and the year the house was built - they give us pretty much the same information.
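To make this concrete, here is a small scikit-learn sketch with made-up house data in which size and number of rooms are strongly correlated:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Made-up data: size (m²) and number of rooms are strongly correlated
rng = np.random.default_rng(0)
size = rng.uniform(50, 250, 500)
rooms = size / 40 + rng.normal(0, 0.5, 500)
X = np.column_stack([size, rooms])

# Standardize, then project onto the principal components
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_scaled)

# The first component captures almost all the variance,
# so we can keep 1 dimension instead of 2 with little information loss
print(pca.explained_variance_ratio_)  # roughly [0.97, 0.03]
```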
For regression problems you can use one of several loss functions:
▪️ MSE
▪️ MAE
▪️ Huber loss
But which one is best? When should you prefer one instead of the other?
Thread 🧵
Let's first quickly recap what each of the loss functions does. After that, we can compare them and see the differences based on some examples.
👇
Mean Square Error (MSE)
For every sample, MSE takes the difference between the ground truth and the model's prediction and computes its square. Then, the average over all samples is computed.
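In code, MSE is just a couple of lines (a NumPy sketch):

```python
import numpy as np

def mse(y_true, y_pred):
    # Square the per-sample errors, then average them
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(mse([3.0, -0.5, 2.0], [2.5, 0.0, 2.0]))  # 0.1666...
```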