Machine Learning in the Real World 🧠 πŸ€–

ML for real-world applications is much more than designing fancy networks and fine-tuning parameters.

In fact, you will spend most of your time curating a good dataset.

Let's go through the steps of the process together πŸ‘‡
Collect Data πŸ’½

The dataset needs to represent the real world as accurately as possible. If some situations are underrepresented, we introduce Sampling Bias.

Sampling Bias is nasty because we'll have high test accuracy, but our model will perform badly when deployed.

πŸ‘‡
Traffic Lights 🚦

Let's build a model to recognize traffic lights for a self-driving car. We need to collect data for different:

β–ͺ️ Lighting conditions
β–ͺ️ Weather conditions
β–ͺ️ Distances and viewpoints
β–ͺ️ Strange variants

And if we sample only 🚦 we won't detect πŸš₯ πŸ€·β€β™‚οΈ
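One way to catch underrepresented conditions early is a simple coverage report over the collection metadata. A minimal sketch — the `lighting`/`weather` metadata schema is assumed here for illustration:

```python
from collections import Counter

def coverage_report(metadata):
    """Count samples per (lighting, weather) combination so that
    underrepresented conditions show up before training starts.

    metadata: list of dicts with 'lighting' and 'weather' keys
    (an assumed schema for this sketch).
    """
    return Counter((m["lighting"], m["weather"]) for m in metadata)
```

Any bucket with a suspiciously low count is a candidate for targeted data collection.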

πŸ‘‡
Data Cleaning 🧹

Now we need to clean out all corrupted and irrelevant samples. That means removing:

β–ͺ️ Overexposed or underexposed images
β–ͺ️ Images in irrelevant situations
β–ͺ️ Faulty images

Leaving them in the dataset will hurt our model's performance!
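A crude brightness filter is often enough as a first pass for the exposure check. A sketch, assuming grayscale uint8 images; the thresholds are illustrative and should be tuned per dataset:

```python
import numpy as np

def exposure_ok(image, low=30, high=225):
    """Flag images whose mean brightness suggests under- or overexposure.

    image: 2D uint8 grayscale array; `low`/`high` are illustrative
    thresholds, not universal constants.
    """
    mean = image.astype(np.float64).mean()
    return bool(low <= mean <= high)

# Synthetic examples: a nearly black image and a mid-gray one
dark = np.full((64, 64), 5, dtype=np.uint8)
normal = np.full((64, 64), 128, dtype=np.uint8)
print(exposure_ok(dark))    # False -> filtered out as underexposed
print(exposure_ok(normal))  # True
```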

πŸ‘‡
Preprocess Data βš™οΈ

Most ML models like their data nicely normalized and properly scaled. Bad normalization can also lead to worse performance (I have a nice story for another time...)

β–ͺ️ Crop and resize all images
β–ͺ️ Normalize all values (usually 0 mean and 1 std. dev.)
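The normalization step can be sketched like this (per-channel statistics; in practice you would compute the mean and std on the training set once and reuse them at inference time):

```python
import numpy as np

def normalize(images):
    """Scale pixel values to zero mean and unit std per channel.

    images: array of shape (N, H, W, C) with values in [0, 255].
    """
    x = images.astype(np.float64) / 255.0
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    std = x.std(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / (std + 1e-8)
```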

πŸ‘‡
Label Data 🏷️

Manual labeling is expensive. Try to be clever and automate as much as possible:

β–ͺ️ Generate labels from the input data
β–ͺ️ Use slow, but accurate algorithms offline
β–ͺ️ Pre-label data during collection
β–ͺ️ Develop good labeling tools
β–ͺ️ Use synthetic data?
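As an illustration of pre-labeling, here is a toy color heuristic that proposes a label for a traffic-light crop, so an annotator only has to confirm or correct it instead of labeling from scratch. The thresholds are made up for this sketch:

```python
import numpy as np

def prelabel_color(crop):
    """Heuristic label proposal for a traffic-light crop.

    crop: (H, W, 3) uint8 RGB array. The ratio thresholds below are
    illustrative; a real pre-labeler would be calibrated on data.
    """
    r, g, b = crop.reshape(-1, 3).mean(axis=0)
    if g > r * 1.2:      # green channel dominates
        return "green"
    if g > r * 0.5:      # red and green both strong -> yellowish
        return "yellow"
    return "red"
</n>```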

πŸ‘‡
Label Correction ❌

You will always have errors in the labels - humans make mistakes. Review and iterate!

β–ͺ️ Spot checks to find systematic problems
β–ͺ️ Improve labeling guidelines and tools
β–ͺ️ Review test results and fix labels
β–ͺ️ Label samples multiple times
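Labeling samples multiple times lets us merge by majority vote and flag disagreements for human review. A minimal sketch:

```python
from collections import Counter

def consolidate(labels_per_sample):
    """Merge repeated labelings: majority vote per sample, and flag
    samples for review whenever annotators disagree.

    labels_per_sample: dict mapping sample_id -> list of labels
    from different annotators.
    """
    merged, needs_review = {}, []
    for sample_id, labels in labels_per_sample.items():
        (label, count), = Counter(labels).most_common(1)
        merged[sample_id] = label
        if count < len(labels):  # any disagreement -> human review
            needs_review.append(sample_id)
    return merged, needs_review
```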

πŸ‘‡
The danger of label errors πŸ§‘β€πŸ«

A recent study by MIT found that 10 of the most popular public datasets had 3.4% label errors on average (ImageNet had 5.8%).

This even led authors to choose the wrong (and more complex) model as their best one!

arxiv.org/abs/2103.14749

πŸ‘‡
Balance Dataset βš–οΈ

Dealing with imbalanced data can be tricky...

Let's classify the color of the 🚦 - we can reach 97% accuracy just by learning to recognize 🟒 and πŸ”΄, because 🟑 is severely underrepresented.
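A quick simulation shows why plain accuracy is misleading here - a baseline that never predicts 🟑 still scores very well (synthetic data with illustrative class ratios):

```python
import numpy as np

# Synthetic ground truth: yellow is rare (~3% of samples)
rng = np.random.default_rng(0)
y = rng.choice(["red", "green", "yellow"], size=1000, p=[0.50, 0.47, 0.03])

# A "model" that never predicts yellow: correct on red/green,
# falls back to red on every yellow sample
pred = np.where(y == "yellow", "red", y)

accuracy = (pred == y).mean()                             # ~0.97
yellow_recall = (pred[y == "yellow"] == "yellow").mean()  # 0.0
print(f"accuracy={accuracy:.2f}, yellow recall={yellow_recall:.2f}")
```

High overall accuracy, zero recall on the rare class - which is why balanced metrics, resampling, or class weights matter.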

I have a separate thread on this topic:


πŸ‘‡
Train and Evaluate Model πŸ’ͺπŸ“

This is the part that is usually covered by ML courses. Now is the time to try out different features, network architectures, fine-tune parameters etc.

But we are not done yet... πŸ‘‡
Iterative Process πŸ”„

In most real-world applications the bottleneck is not the model itself, but the data. After having a first model, we need to review where it has problems and go back to:

β–ͺ️ Collecting and labeling more data
β–ͺ️ Correcting labels
β–ͺ️ Balancing the data

πŸ‘‡
Deploy Model 🚒

Deploying the model in production poses some additional constraints:

β–ͺ️ Speed
β–ͺ️ Cost
β–ͺ️ Stability
β–ͺ️ Privacy
β–ͺ️ Hardware availability and integration

We have to find a good trade-off between these factors and accuracy.

Now we are done, right? No...πŸ‘‡
Monitoring πŸ–₯️

The performance of the model will start degrading over time because the world keeps changing:

β–ͺ️ Concept drift - the relationship between the inputs and the target changes
β–ͺ️ Data drift - the distribution of the input data changes

We need to detect this, retrain, and deploy again.
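One common drift signal is the Population Stability Index (PSI) between the training-time distribution of a feature (or model score) and its live distribution. A sketch, using the frequently cited rule of thumb that PSI above roughly 0.2 warrants a closer look:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (training-time)
    sample and a live sample of the same feature or model score."""
    # Quantile bins from the reference sample, opened at the ends
    # so out-of-range live values are still counted
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))
```

Run periodically on live data; a rising PSI is a trigger to collect fresh data, relabel, and retrain.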

Example πŸ‘‡
Drift ➑️

We now have a trained model to recognize 🚦, but people keep inventing new variants - see what some creative people in Munich came up with πŸ˜„

We need a way to detect that we have a problem, collect data, label, and retrain our model.

πŸ‘‡
Summary 🏁

This is what a typical ML pipeline for real-world applications looks like. Please remember this:

β–ͺ️ Curating a good dataset is the most important thing
β–ͺ️ Dataset curation is an iterative process
β–ͺ️ Monitoring is critical to ensure good performance over time
This week I'm reposting some of my best threads from the past months, so I can focus on creating my machine learning course.

Next week I'm back with some new content on machine learning and web3, so make sure you follow me @haltakov.
