Machine Learning in the Real World 🧠 🤖

ML for real-world applications is much more than designing fancy networks and fine-tuning parameters.

In fact, you will spend most of your time curating a good dataset.

Let's go through the process together 👇

#RepostFriday
Collect Data 💽

We need to represent the real world as accurately as possible. If some situations are underrepresented, we are introducing Sampling Bias.

Sampling Bias is nasty because we'll have high test accuracy, but our model will perform badly when deployed.
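
One cheap guard against this is to audit how often each condition appears in the collected data. A minimal sketch in Python (the metadata fields are hypothetical):

from collections import Counter

# Hypothetical metadata: one record per collected image
samples = [
    {"file": "img_001.jpg", "weather": "sunny", "time": "day"},
    {"file": "img_002.jpg", "weather": "rain", "time": "night"},
    # ... thousands more
]

# Report the share of each condition in the dataset
for field in ("weather", "time"):
    counts = Counter(s[field] for s in samples)
    total = sum(counts.values())
    print(field, {k: f"{v / total:.1%}" for k, v in counts.items()})

# If "night" or "rain" barely shows up, you are looking at Sampling Bias.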

👇
Traffic Lights 🚦

Let's build a model to recognize traffic lights for a self-driving car. We need to collect data for different:

โ–ช๏ธ Lighting conditions
โ–ช๏ธ Weather conditions
โ–ช๏ธ Distances and viewpoints
โ–ช๏ธ Strange variants

And if we sample only 🚦 we won't detect 🚥 🤷‍♂️

👇
Data Cleaning 🧹

Now we need to clean all corrupted and irrelevant samples. We need to remove:

โ–ช๏ธ Overexposed or underexposed images
โ–ช๏ธ Images in irrelevant situations
โ–ช๏ธ Faulty images

Leaving them in the dataset will hurt our model's performance!
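
As an illustration, over- and underexposed images can be flagged automatically by their mean brightness. A minimal sketch (the thresholds are made up and need tuning for your camera):

import numpy as np
from PIL import Image

image_paths = ["img_001.jpg", "img_002.jpg"]  # your collected files

def is_badly_exposed(path, low=30, high=225):
    # Mean brightness of the grayscale image, on a 0-255 scale
    brightness = np.asarray(Image.open(path).convert("L")).mean()
    return brightness < low or brightness > high

clean_paths = [p for p in image_paths if not is_badly_exposed(p)]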

👇
Preprocess Data ⚙️

Most ML models like their data nicely normalized and properly scaled. Bad normalization can also lead to worse performance (I have a nice story for another time...)

โ–ช๏ธ Crop and resize all images
โ–ช๏ธ Normalize all values (usually 0 mean and 1 std. dev.)

👇
Label Data 🏷️

Manual labeling is expensive. Try to be clever and automate as much as possible:

โ–ช๏ธ Generate labels from the input data
โ–ช๏ธ Use slow, but accurate algorithms offline
โ–ช๏ธ Pre-label data during collection
โ–ช๏ธ Develop good labeling tools
โ–ช๏ธ Use synthetic data?

👇
Label Correction ❌

You will always have errors in the labels - humans make mistakes. Review and iterate!

โ–ช๏ธ Spot checks to find systematic problems
โ–ช๏ธ Improve labeling guidelines and tools
โ–ช๏ธ Review test results and fix labels
โ–ช๏ธ Label samples multiple times

👇
The danger of label errors 🧑‍🏫

A recent study by MIT found that 10 of the most popular public datasets had 3.4% label errors on average (ImageNet had 5.8%).

This even led the authors to choose the wrong (and more complex) model as their best one!

arxiv.org/abs/2103.14749

👇
Balance Dataset ⚖️

Dealing with imbalanced data can be tricky...

Let's classify the color of the 🚦 - we can get 97% accuracy just by learning to recognize 🟢 and 🔴, because 🟡 is severely underrepresented.
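
One common fix is to weight the loss by inverse class frequency, so ignoring 🟡 becomes expensive. A sketch with scikit-learn (made-up label counts):

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical label distribution: yellow is rare
y = np.array(["green"] * 500 + ["red"] * 470 + ["yellow"] * 30)

weights = compute_class_weight(class_weight="balanced", classes=np.unique(y), y=y)
print(dict(zip(np.unique(y), weights.round(2))))
# yellow gets a ~11x larger weight, so the model can't just ignore it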

I have a separate thread on this topic:


👇
Train and Evaluate Model 💪📏

This is the part that is usually covered by ML courses. Now is the time to try out different features and network architectures, fine-tune parameters, etc.
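
As a bare-bones starting point, a baseline with a proper split and per-class metrics (stand-in data; plain accuracy alone would hide the rare class):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in data; in practice these are your curated features and labels
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# A stratified split keeps the rare class represented in the test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))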

But we are not done yet... 👇
Iterative Process 🔄

In most real-world applications the bottleneck is not the model itself, but the data. Once we have a first model, we need to review where it has problems and go back to:

โ–ช๏ธ Collecting and labeling more data
โ–ช๏ธ Correcting labels
โ–ช๏ธ Balancing the data

👇
Deploy Model 🚢

Deploying the model in production poses some additional constraints:

โ–ช๏ธ Speed
โ–ช๏ธ Cost
โ–ช๏ธ Stability
โ–ช๏ธ Privacy
โ–ช๏ธ Hardware availability and integration

We have to find a good trade-off between these factors and accuracy.
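
For the speed constraint, for example, a typical step is exporting the network to an optimized runtime. A sketch with PyTorch and ONNX (placeholder network and input shape):

import torch
import torchvision

# Placeholder; in practice this is your trained traffic-light model
model = torchvision.models.resnet18(weights=None).eval()

dummy = torch.rand(1, 3, 224, 224)  # example input with the deployed shape
torch.onnx.export(model, dummy, "traffic_lights.onnx", opset_version=17)
# The .onnx file can then run on ONNX Runtime or TensorRT on the target hardware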

Now we are done, right? No... 👇
Monitoring 🖥️

The performance of the model will start degrading over time because the world keeps changing:

โ–ช๏ธ Concept drift - the real-world distribution changes
โ–ช๏ธ Data drift - the properties of the data change

We need to detect this, retrain, and deploy again.
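
A simple way to catch data drift is to compare the live input distribution against the training distribution, e.g. with a two-sample KS test (stand-in numbers, arbitrary alert threshold):

import numpy as np
from scipy.stats import ks_2samp

# Stand-ins: one feature (e.g. image brightness) at training time vs. in production
train_feature = np.random.normal(120, 20, size=10_000)
live_feature = np.random.normal(135, 20, size=1_000)  # the world shifted

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # distributions differ -> alert, collect data, retrain
    print(f"Drift detected (KS statistic {stat:.3f})")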

Example 👇
Drift ➡️

We now have a trained model to recognize 🚦, but people keep inventing new variants - see what some creative people in Munich came up with 😄

We need a way to detect that we have a problem, collect data, label, and retrain our model.

👇
Summary 🏁

This is what a typical ML pipeline for real-world applications looks like. Please remember this:

โ–ช๏ธ Curating a good dataset is the most important thing
โ–ช๏ธ Dataset curation is an iterative process
โ–ช๏ธ Monitoring is critical to ensure good performance over time Image
Every Friday I repost one of my old threads so more people get the chance to see them. During the rest of the week, I post new content on machine learning and web3.

If you are interested in seeing more, follow me @haltakov.
