It sucks if your ML model can't achieve good performance, but it is even worse if you don't know it!
Sometimes you follow all the best practices and your experiments show your model performing very well, but it fails when deployed.
A thread about Sampling Bias 👇
There is a lot of information about rules you need to follow when evaluating your machine learning model:
▪️ Balance your dataset
▪️ Use the right metric
▪️ Use high-quality labels
▪️ Split your training and test data
▪️ Perform cross-validation
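For the last two points, a minimal sketch with scikit-learn might look like this (the dataset and model are just placeholders for a generic classification task):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

# Placeholder data standing in for your real, labeled dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold out a test set that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(random_state=42)

# 5-fold cross-validation on the training split only.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print("CV accuracy:", cv_scores.mean())

# Final check on the held-out test set.
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```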
But this may not be enough 👇
A common problem when evaluating an ML model is sampling bias.
This means that your dataset contains more samples from some parts of the underlying distribution than from others.
Some examples 👇
You are training a computer vision model for traffic light detection.
You have many samples of vertical traffic lights, but you don't include any horizontal traffic lights.
👇
You are training a natural language processing model.
You have samples from many people, but none from speakers of a particular dialect.
👇
You are training a model to predict housing prices.
You have data from the market for the last 10 years, but you don't have any data after the COVID-19 pandemic started.
👇
What is the problem?
The biggest problem is that you usually don't know you have a problem while evaluating your model.
Your data is correctly split for training and testing, but neither dataset contains the problematic examples.
👇
This means that you will be able to reach high accuracy on your test dataset and even in cross-validation.
However, when you encounter the underrepresented or missing samples in production, your model will fail.
So, what can we do about that? 👇
Know your domain
You need to know your domain very well. You need to understand where the data comes from, how it is collected, and what variations exist in the underlying distribution.
You need to think of ways to get samples that cover all of these cases.
👇
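One way to put this into practice is to audit how well your collected samples cover the variations you care about. A rough sketch with pandas, assuming you store metadata (orientation, country, time of day, ...) next to each sample; the column names here are made up:

```python
import pandas as pd

# Hypothetical metadata table: one row per collected sample.
samples = pd.DataFrame({
    "orientation": ["vertical", "vertical", "vertical", "horizontal"],
    "country":     ["DE", "DE", "US", "US"],
    "time_of_day": ["day", "day", "night", "day"],
})

# Count samples per attribute to spot underrepresented slices.
for column in samples.columns:
    print(samples[column].value_counts(), "\n")

# Cross-tabulate attributes to find combinations with zero coverage,
# e.g. horizontal traffic lights at night.
print(pd.crosstab(samples["orientation"], samples["time_of_day"]))
```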
Be prepared to detect problems in production
Even with the best preparation, there will be cases you didn't think about. Accept that!
Make sure you have good monitoring of your model to detect when it fails and inspect these cases.
👇
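What "good monitoring" looks like depends on your stack, but a minimal version is to log every prediction together with its confidence and flag low-confidence inputs for later inspection. A sketch under those assumptions (the threshold and logger setup are placeholders):

```python
import logging

import numpy as np

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-monitor")

CONFIDENCE_THRESHOLD = 0.7  # placeholder value, tune for your use case


def predict_and_monitor(model, features):
    """Run the model and flag uncertain predictions for inspection."""
    probabilities = model.predict_proba([features])[0]
    prediction = int(np.argmax(probabilities))
    confidence = float(np.max(probabilities))

    logger.info("prediction=%s confidence=%.3f", prediction, confidence)
    if confidence < CONFIDENCE_THRESHOLD:
        # In a real system, persist these inputs so they can be labeled later.
        logger.warning("Low-confidence input flagged for review: %s", features)
    return prediction
```

Low confidence is only a proxy, of course; models can also be confidently wrong, so comparing predictions against ground truth collected after the fact is just as important.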
Be prepared to retrain your model
You need to be prepared to iterate. Make sure you can collect additional data from the problematic situations, retrain your model, and deploy it again.
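The retraining step itself can be simple: fold the newly collected and labeled samples into the training set, fit again, and re-evaluate before redeploying. A sketch, assuming the flagged production samples have already been labeled:

```python
import numpy as np


def retrain(model, X_train, y_train, X_flagged, y_flagged):
    """Add newly labeled production samples to the training set and refit."""
    X_updated = np.concatenate([X_train, X_flagged])
    y_updated = np.concatenate([y_train, y_flagged])
    model.fit(X_updated, y_updated)
    return model, X_updated, y_updated

# Before redeploying, re-evaluate on a test set that now also includes
# examples of the previously missing cases.
```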
If you liked this thread, you will also like the other things I post. I regularly tweet about similar topics.
The Internet is already decentralized, why do we need web3? 🤔
This is a common critique of web3. However, decentralization on its own is not always enough - sometimes we need to agree on a set of facts.
Blockchains give us a consensus mechanism for that!
Thread 🧵
1/12
The Internet is built of servers that communicate using open protocols like HTTP, SMTP, and WebRTC. Everybody can set up a server and participate. It is decentralized!
However, if two servers distribute contradicting information, how do you know which one is right?
2/12
This is what blockchains give us: a way for decentralized parties to agree on one set of facts. They offer a consensus mechanism!
Imagine the blockchain as a global public database that anybody can read and nobody can falsify - every transaction/change needs to be signed.
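A toy hash chain illustrates one half of that claim (real blockchains add digital signatures and a consensus protocol on top of this; the snippet below is just an illustration):

```python
import hashlib
import json


def block_hash(block):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Each block commits to the hash of the block before it.
chain = []
previous_hash = "0" * 64
for transaction in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    block = {"tx": transaction, "prev": previous_hash}
    chain.append(block)
    previous_hash = block_hash(block)

# Tampering with an old transaction changes its hash, so it no longer
# matches the "prev" field recorded in every later block.
chain[0]["tx"] = "Alice pays Bob 500"
print(block_hash(chain[0]) == chain[1]["prev"])  # False
```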
While there is a lot of hype around web3, NFTs, and decentralized apps (dApps), there is also a lot of criticism. Today, I'll focus on the critique that web3 is actually too centralized.
Let's try to have an honest discussion 👇
These are the main arguments I see regularly. Please add more in the comments.
1️⃣ The Internet is already decentralized
2️⃣ It is inefficient
3️⃣ Everything can be implemented better using a centralized approach
4️⃣ Important services are centralized
👇
I was inspired to write this in part after reading this great article by @moxie pointing out some of the problems with the current state of web3. If you've been living under a rock for the last few weeks, make sure you check it out:
Things are getting more and more interesting for AI-generated images! 🎨
GLIDE is a new model by @OpenAI that can generate images guided by a text prompt. It is based on a diffusion model instead of the more widely used GAN models.
Some details 👇
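Very roughly, a diffusion model starts from pure noise and repeatedly denoises it, and the text prompt steers every denoising step. A conceptual sketch (the noise predictor below is a zero-filled stand-in for GLIDE's text-conditioned network, and the schedule values are illustrative, not the ones from the paper):

```python
import numpy as np

T = 1000                              # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, T)    # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)


def predict_noise(x, t, prompt):
    """Stand-in for the learned, text-conditioned noise predictor."""
    return np.zeros_like(x)  # a real model returns its noise estimate here


def sample(prompt, shape=(64, 64, 3)):
    x = np.random.randn(*shape)              # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = predict_noise(x, t, prompt)    # conditioned on the text prompt
        # DDPM-style update: remove the predicted noise, step towards an image.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * np.random.randn(*shape)
    return x

image = sample("a corgi wearing a red bow tie")
```

In the paper, the conditioning is strengthened with guidance techniques such as classifier-free guidance, which trade some diversity for images that match the prompt more closely.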
GLIDE also has the ability to perform inpainting, which allows for some interesting use cases.