And if we sample only π¦ we won't detect π₯ π€·ββοΈ
π
Data Cleaning π§Ή
Now we need to clean all corrupted and irrelevant samples. We need to remove:
βͺοΈ Overexposed or underexposed images
βͺοΈ Images in irrelevant situations
βͺοΈ Faulty images
Leaving them in the dataset will hurt our model's performance!
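To make this concrete, here is a minimal screening sketch that flags over- or underexposed images by mean brightness (the 0.05/0.95 thresholds are illustrative assumptions, not fixed rules):

```python
import numpy as np
from PIL import Image

def is_badly_exposed(path, low=0.05, high=0.95):
    # Convert to grayscale and scale pixel values to [0, 1]
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    mean_brightness = img.mean()
    # Flag images that are almost entirely dark or almost entirely bright
    return mean_brightness < low or mean_brightness > high

# Keep only images that pass the check
# clean_paths = [p for p in paths if not is_badly_exposed(p)]
```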
π
Preprocess Data βοΈ
Most ML models like their data nicely normalized and properly scaled. Bad normalization can also lead to worse performance (I have a nice story for another time...)
βͺοΈ Crop and resize all images
βͺοΈ Normalize all values (usually 0 mean and 1 std. dev.)
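A minimal sketch of this step, assuming a 224x224 target size and using the commonly cited ImageNet channel statistics as placeholders (in practice, compute them over your own dataset):

```python
import numpy as np
from PIL import Image

# Placeholder per-channel statistics (ImageNet values) - use your dataset's own
DATASET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
DATASET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path, size=(224, 224)):
    # Resize to a fixed input size (add cropping as needed)
    img = Image.open(path).convert("RGB").resize(size)
    x = np.asarray(img, dtype=np.float32) / 255.0
    # Normalize to roughly 0 mean and 1 std. dev. per channel
    return (x - DATASET_MEAN) / DATASET_STD
```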
π
Label Data π·οΈ
Manual labeling is expensive. Try to be clever and automate as much as possible:
βͺοΈ Generate labels from the input data
βͺοΈ Use slow, but accurate algorithms offline
βͺοΈ Pre-label data during collection
βͺοΈ Develop good labeling tools
βͺοΈ Use synthetic data?
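A small sketch combining a few of these ideas: pre-label with an existing model and only send low-confidence samples to human annotators (`model.predict` and the 0.9 threshold are hypothetical):

```python
def prelabel(samples, model, threshold=0.9):
    auto_labeled, needs_human = [], []
    for sample in samples:
        label, confidence = model.predict(sample)  # hypothetical model API
        if confidence >= threshold:
            auto_labeled.append((sample, label))   # accept, but spot-check later
        else:
            needs_human.append((sample, label))    # human corrects the pre-label
    return auto_labeled, needs_human
```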
π
Label Correction β
You will always have errors in the labels - humans make mistakes. Review and iterate!
βͺοΈ Spot checks to find systematic problems
βͺοΈ Improve labeling guidelines and tools
βͺοΈ Review test results and fix labels
βͺοΈ Label samples multiple times
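For the last point, a sketch of resolving multiple labels per sample by majority vote, flagging low-agreement samples for review (the 0.7 agreement threshold is an assumption):

```python
from collections import Counter

def resolve_labels(labels_per_sample, min_agreement=0.7):
    resolved, needs_review = {}, []
    for sample_id, labels in labels_per_sample.items():
        label, votes = Counter(labels).most_common(1)[0]
        if votes / len(labels) < min_agreement:
            needs_review.append(sample_id)  # annotators disagree - re-check
        resolved[sample_id] = label
    return resolved, needs_review

# resolve_labels({"img_001": ["red", "red", "green"]})
# -> ({"img_001": "red"}, ["img_001"])  # 2/3 agreement < 0.7, flagged
```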
π
The danger of label errors π§βπ«
A recent study by MIT found that 10 of the most popular public datasets had 3.4% label errors on average (ImageNet had 5.8%).
These errors even led authors to choose the wrong (and more complex) model as their best one!
Train Model 🏋️
This is the part that is usually covered by ML courses. Now is the time to try out different features, network architectures, fine-tune hyperparameters, etc.
But we are not done yet... 👇
Iterative Process π
In most real-world applications, the bottleneck is not the model itself, but the data. Once we have a first model, we need to review where it fails and go back to:
βͺοΈ Collecting and labeling more data
βͺοΈ Correcting labels
βͺοΈ Balancing the data
π
Deploy Model π’
Deploying the model in production poses some additional constraints - typically things like inference speed, memory footprint, and compute or power cost.
We have to find a good trade-off between these factors and accuracy.
Now we are done, right? No...π
Monitoring π₯οΈ
The performance of the model will start degrading over time because the world keeps changing:
βͺοΈ Concept drift - the real-world distribution changes
βͺοΈ Data drift - the properties of the data change
We need to detect this, retrain, and deploy again.
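One simple way to detect data drift, sketched below: compare the distribution of an input feature (or of the model's output scores) between a reference window and recent production data with a two-sample Kolmogorov-Smirnov test (the 0.05 significance level is a common but arbitrary choice):

```python
from scipy.stats import ks_2samp

def drift_detected(reference_values, live_values, alpha=0.05):
    # Null hypothesis: both samples come from the same distribution
    statistic, p_value = ks_2samp(reference_values, live_values)
    return p_value < alpha  # distributions differ -> investigate, maybe retrain
```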
Example π
Drift β‘οΈ
We now have a trained model to recognize π¦, but people keep inventing new variants - see what some creative people in Munich came up with π
We need a way to detect that we have a problem, collect data, label, and retrain our model.
π
Summary π
This is what a typical ML pipeline for real-world applications looks like. Please remember this:
βͺοΈ Curating a good dataset is the most important thing
βͺοΈ Dataset curation is an iterative process
βͺοΈ Monitoring is critical to ensure good performance over time
This week I'm reposting some of my best threads from the past months, so I can focus on creating my machine learning course.
Next week I'm back with some new content on machine learning and web3, so make sure you follow me @haltakov.
β’ β’ β’
Let's talk about a common problem in ML - imbalanced data βοΈ
Imagine we want to detect all pixels belonging to a traffic light in a self-driving car's camera image. We train a model that reaches 99.88% accuracy. Pretty cool, right?
Actually, this model is useless β
Let me explain π
The problem is that the data is severely imbalanced - the ratio between background pixels and traffic light pixels is about 800:1.
If we don't take any measures, our model will learn to classify every pixel as background. A constant "background" prediction is right 800 out of 801 times - that's exactly the 99.88% accuracy. But the model is useless!
What can we do? π
Let me tell you about 4 ways of dealing with imbalanced data:
βͺοΈ Choose the right evaluation metric
βͺοΈ Undersampling your dataset
βͺοΈ Oversampling your dataset
βͺοΈ Adapting the loss
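To make two of these concrete, here is a rough sketch using a class-weighted loss (PyTorch) and simple random undersampling; the 800x weight and the 1:1 target ratio are illustrative assumptions:

```python
import numpy as np
import torch
import torch.nn as nn

# Adapting the loss: weight the rare class ~800x to offset the 800:1 imbalance
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 800.0]))

# Undersampling: keep all rare-class samples, drop most background samples
def undersample(X, y, ratio=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)  # rare class (traffic light pixels)
    neg = rng.choice(np.flatnonzero(y == 0),
                     size=int(len(pos) * ratio), replace=False)
    idx = rng.permutation(np.concatenate([pos, neg]))
    return X[idx], y[idx]
```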
The creator and lead dev of the popular NFT exchange Hic Et Nunc on the Tezos blockchain decided to shut down the project. He pulled the plug on the whole website and the official Twitter account.
Yet, the damage is not fatal π
How come?
β NFTs are fine - they are stored on the blockchain
β NFT metadata is fine - stored on IPFS
β Exchange backend code is fine - it is in an immutable smart contract
β The website is back online - it is open-source, so a clone was deployed by the community fast
π
Of course, this is a dramatic event, and the quick recovery was only possible because of the immense effort of the community. But it was possible, and it took basically 1 day.
Imagine the damage that the creator and lead dev could do if they wanted to destroy a Web 2.0 company!
How I made $3000 in 3 weeks selling AI-generated art 💰
Last week I showed you how you can use VQGAN+CLIP to generate interesting images based on text prompts.
Now, I'll tell you how I sold some of these as NFTs for more than $3000 in less than 3 weeks.
Let's go π
Background
I've been interested in NFTs for 2 months now and one collection I find interesting is @cryptoadzNFT. What's special about it is that the creator @supergremplin published all of the art in the public domain. This spurred the creation of many derivative projects.
π
The Idea π‘
My idea was to use VQGAN+CLIP to create interesting versions of the CrypToadz. So, I started experimenting with my own toad #6741.
I took the original NFT image as a start and experimented a lot with different text prompts. The results were very promising!
In their latest paper, they introduce so-called verifiers. The generative model samples 100 candidate solutions, and the verifier selects the one with the highest chance of being factually correct.
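In pseudocode, that best-of-n idea might look like the sketch below; `generate` and `verifier_score` are hypothetical stand-ins for the paper's models, not a real API:

```python
def best_of_n(prompt, generate, verifier_score, n=100):
    # Sample n candidate solutions, then keep the one the verifier trusts most
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=verifier_score)
```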