Is your machine learning model performing well? What about in 6 months? 🤔
If you are wondering why I'm asking this, you need to learn about concept drift and data drift.
Let me explain this to you using two real-world examples.
Thread 👇
Imagine you are developing a model for a self-driving car to detect other vehicles at night.
Well, this is not too difficult, since vehicles have two red tail lights and it is easy to get a lot of data. Your model works great!
But then... 👇
Car companies decide to experiment with red horizontal bars instead of two individual lights.
Now your model fails to detect these cars because it has never seen this kind of tail light.
Your model is suffering from concept drift.
👇
Concept drift happens when the objects you are trying to model change over time.
In the case above, cars changed and you now need to adapt your model.
Another example 👇
You are now dealing with the detection of traffic signs. Again, at night things are pretty easy, because signs reflect the light from the car's headlights and are clearly visible in the image. Easy!
And again something happens... 👇
New cars start getting more powerful laser high beams, and suddenly the signs reflect so much light that they are overexposed in the image.
The problem now is data drift.
👇
Data drift happens when the object you are modeling stays the same, but the environment changes in a way that affects how the object is perceived.
In this case, the signs are exactly the same, but they appear different because of the lighting.
👇
Since we talked about sampling bias last week, we can connect it to drift: once the underlying distribution of your data changes, even a sample that was representative in the beginning isn't anymore.
While there are methods like online learning to keep improving the model, and others to detect the drift, usually the solution is simply to retrain your model.
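To make this concrete, here is a minimal drift-detection sketch (a toy example with made-up numbers and an arbitrary threshold, not a production recipe): compare a feature's distribution in the training data against live data with a two-sample Kolmogorov-Smirnov test.

```python
# Minimal data drift check: compare a feature's training distribution
# against live data with a two-sample Kolmogorov-Smirnov test.
# Numbers and threshold below are made up for illustration.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, live, alpha=0.01):
    """Flag drift if the live distribution likely differs from the reference."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.normal(0.3, 0.05, 10_000)  # e.g. sign brightness at training time
live = rng.normal(0.8, 0.05, 1_000)        # overexposed signs after laser high beams
if detect_drift(reference, live):
    print("Drift detected - time to collect new data and retrain!")
```

In practice you would run a check like this per feature, and on the model's output distribution as well.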
This is something you need to be prepared for in your pipeline!
👇
Follow me @haltakov for more intuitive explanations of machine learning and web3 topics.
This depends on your application and how much error you can tolerate. You first need to monitor the model and detect that it is getting worse. Then collect data and retrain.
You can also keep the data collection ongoing and retrain periodically.
ROC curves plot the True Positive Rate (also known as Sensitivity or Recall) against the False Positive Rate. So, if you have an imbalanced dataset, the ROC curve will not tell you if your classifier completely ignores the underrepresented class.
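A quick toy sketch of why this matters (my own made-up numbers, using scikit-learn): with only 1% positives, ROC AUC can look great while the precision-recall view exposes how badly the minority class is actually served.

```python
# Toy example: with 1% positives, ROC AUC looks great while average
# precision (area under the precision-recall curve) stays low.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(100), np.zeros(10_000)])  # 1% positives
y_score = np.concatenate([rng.normal(0.7, 0.15, 100),      # positives
                          rng.normal(0.4, 0.15, 10_000)])  # negatives

print("ROC AUC:", roc_auc_score(y_true, y_score))           # ~0.92, looks great
print("PR AUC: ", average_precision_score(y_true, y_score)) # much lower
```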
Math is not very important when you are using a machine learning method to solve your problem.
Everybody who disagrees should study the 92-page appendix of the Self-Normalizing Networks (SNN) paper before using torch.nn.SELU.
And the core idea of SNNs is actually simple 👇
SNNs use an activation function called Scaled Exponential Linear Unit (SELU) that is pretty simple to define.
It has the advantage that the activations converge to zero mean and unit variance, which allows training of deeper networks and employing strong regularization.
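For reference, here is the SELU definition as a minimal NumPy sketch; the two constants are the fixed values derived in the paper (the same ones PyTorch and TensorFlow use):

```python
import numpy as np

# Fixed constants derived in the SNN paper (same values as in PyTorch/TF)
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # Scaled ELU: linear for x > 0, scaled exponential for x <= 0
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))
```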
👇
There are implementations both in PyTorch (torch.nn.SELU) and TensorFlow (tf.keras.activations.selu).
You need to be careful to use the correct initialization (LeCun normal) and dropout variant (AlphaDropout), but this is well documented.
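A minimal PyTorch sketch of what that looks like in practice (layer sizes are arbitrary examples). LeCun-normal initialization is expressed here via kaiming_normal_ with a linear gain, which gives the same std of 1/sqrt(fan_in):

```python
import torch.nn as nn

# Minimal SNN-style block; layer sizes are arbitrary examples.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.SELU(),
    nn.AlphaDropout(p=0.05),  # regular Dropout would break self-normalization
    nn.Linear(256, 10),
)

# LeCun-normal init: kaiming_normal_ with gain 1 gives std = 1/sqrt(fan_in)
for module in model.modules():
    if isinstance(module, nn.Linear):
        nn.init.kaiming_normal_(module.weight, mode="fan_in", nonlinearity="linear")
        nn.init.zeros_(module.bias)
```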
This is a special edition BMW 8 Series painted by the famous artist Jeff Koons. A limited edition of 99 with a price of $350K - about $200K more than the regular M850i.
If you think about it, you'll see many similarities with NFTs.
👇
Artificially scarce
BMW can surely produce (mint 😉) more than 99 cars with this paint. The collection size is limited artificially in order to make it more exclusive.
Same as most NFT collections - they create artificial scarcity.
👇
Its price comes from the story
The $200K premium for the "paint" is purely motivated by the story around this car - it is exclusive, it is created by a famous artist, it is a BMW Art Car.
It is not faster, more reliable, or more economical. You are paying for the story.
It sucks if your ML model can't achieve good performance, but it is even worse if you don't know it!
Sometimes you follow all the best practices and your experiments show your model performing very well, but it fails when deployed.
A thread about Sampling Bias 👇
There is a lot of information about rules you need to follow when evaluating your machine learning model:
▪️ Balance your dataset
▪️ Use the right metric
▪️ Use high-quality labels
▪️ Split your training and test data
▪️ Perform cross-validation
But this may not be enough 👇
A common problem when evaluating an ML model is sampling bias.
This means that your dataset contains more samples from some parts of the underlying distribution than from others.
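Here is a toy sketch of the effect (my own made-up numbers, echoing the tail-light example above): if your sample over-represents one part of the distribution, even a simple statistic comes out wrong.

```python
# Toy sampling bias example: the population is half classic-tail-light
# cars, half new bar-light cars, but the dataset is 95% classic cars.
import numpy as np

rng = np.random.default_rng(42)
classic = rng.normal(0.7, 0.1, 100_000)  # some feature of classic cars
bar_led = rng.normal(0.4, 0.1, 100_000)  # same feature for bar-light cars

population = np.concatenate([classic, bar_led])                   # true 50/50 mix
biased_sample = np.concatenate([classic[:9_500], bar_led[:500]])  # 95/5 mix

print("population mean:", population.mean())     # ~0.55
print("biased estimate:", biased_sample.mean())  # ~0.69 - skewed!
```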
The Internet is already decentralized, why do we need web3? 🤔
This is a common critique of web3. However, decentralization on its own is not always enough - sometimes we need to agree on a set of facts.
Blockchains give us a consensus mechanism for that!
Thread 🧵
1/12
The Internet is built of servers that communicate using open protocols like HTTP, SMTP, and WebRTC. Anybody can set up a server and participate. It is decentralized!
However, if two servers distribute contradicting information, how do you know which one is right?
2/12
This is what blockchains give us: a way for decentralized parties to agree on one set of facts. They offer a consensus mechanism!
Imagine the blockchain as a global public database that anybody can read and nobody can falsify - every transaction/change needs to be signed.
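A toy sketch of the "nobody can falsify" part (just the hash-linking idea - no signatures, consensus, or mining, so not a real blockchain):

```python
# Toy hash chain: every block commits to the previous block's hash, so
# changing any past entry breaks all later links. Not a real blockchain
# (no signatures, consensus, or mining) - just the core linking idea.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis placeholder
for tx in ["Alice->Bob: 5", "Bob->Carol: 2"]:
    block = {"prev": prev, "tx": tx}
    prev = block_hash(block)
    chain.append(block)

# Tampering with the first transaction is immediately detectable:
chain[0]["tx"] = "Alice->Bob: 500"
assert block_hash(chain[0]) != chain[1]["prev"]
```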