Santiago
Oct 2, 2020 · 19 tweets · 3 min read
You might have finished the engine, but there's still a lot of work to put the entire car together.

A Machine Learning model is just a small piece of the equation.

A lot more needs to happen. Let's talk about that.

🧵👇
For simplicity's sake, let's imagine a model that takes a picture of an animal and classifies it among 100 different species.

▫️Input: pre-processed pixels of the image.
▫️Output: a score for each one of the 100 species.

The final answer is the species with the highest score.

👇
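To make that last step concrete, here's a minimal sketch, assuming the model returns one score per class and we keep a parallel list of species names (both are placeholders here):

```python
import numpy as np

# Hypothetical list of the 100 species, in the same order as the model's output scores.
SPECIES = [f"species_{i}" for i in range(100)]

def predict_species(scores: np.ndarray) -> str:
    """Return the species with the highest score."""
    return SPECIES[int(np.argmax(scores))]

# Example: scores produced by the model for one image.
print(predict_species(np.random.rand(100)))
```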
There's a lot of work involved in creating a model like this. There's even more work involved in preparing the data to train it.

But it doesn't stop there.

The model is just the start, the core, the engine of what will become a fully-fledged car.

👇
Unfortunately, many companies are hyper-focused on creating these models and forget that productizing them is not just a checkbox in the process.

Industry reports suggest that ~90% of Data Science projects never make it to production!

I'm not surprised.

👇
Our model predicting species is now ready!

— "Good job, everyone!"

— "Oh, wait. Now what? How do we use this thing?"

Let's take our model into production step by step.

👇
First, we need to wrap the model with code that:

1. Pre-processes the input image
2. Translates the output into an appropriate answer

I call this the "extended model." Complexity varies depending on your needs.

👇
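Here's a hedged sketch of what that extended model could look like for the species classifier (the resizing, pixel scaling, and `model.predict` interface are assumptions, not a prescription):

```python
import numpy as np
from PIL import Image

class ExtendedModel:
    """Wraps the raw model with pre-processing and output translation."""

    def __init__(self, model, species):
        self.model = model        # hypothetical: any object with a .predict(batch) method
        self.species = species    # the 100 species names, in output order

    def preprocess(self, image: Image.Image) -> np.ndarray:
        # Hypothetical pre-processing: resize and scale pixels to [0, 1].
        image = image.convert("RGB").resize((224, 224))
        return np.asarray(image, dtype="float32") / 255.0

    def predict(self, image: Image.Image) -> str:
        batch = np.expand_dims(self.preprocess(image), axis=0)
        scores = self.model.predict(batch)[0]
        return self.species[int(np.argmax(scores))]
```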
Frequently, processing a single image at a time is not enough, and you need to process batches of pictures (you know, to speed things up a bit).

Doing this requires a non-trivial amount of work.

👇
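A sketch of the idea on top of the extended model above: pre-process several images at once and score them in a single forward pass (batch assembly only; queueing and timeouts are left out):

```python
import numpy as np

def predict_batch(extended_model, images):
    """Score a list of images with one model call instead of one call per image."""
    batch = np.stack([extended_model.preprocess(image) for image in images])
    scores = extended_model.model.predict(batch)   # shape: (len(images), 100)
    return [extended_model.species[int(np.argmax(s))] for s in scores]
```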
Now we need to expose the functionality of the extended model.

Usually, you can do this by creating a wrapper API (REST or RPC) and having client applications use it to communicate with the model.

Loading the model in memory brings some other exciting challenges.

👇
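As an example, a minimal REST wrapper could look like this with FastAPI (the `load_extended_model` helper and the upload contract are assumptions):

```python
import io

from fastapi import FastAPI, UploadFile
from PIL import Image

app = FastAPI()

# Load the model once, at startup, so it stays in memory across requests.
extended_model = load_extended_model("models/species/v1")  # hypothetical helper

@app.post("/predict")
async def predict(file: UploadFile):
    image = Image.open(io.BytesIO(await file.read()))
    return {"species": extended_model.predict(image)}
```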
Of course, we can't trust what comes into that API, so we need to validate its input:

▫️What's the format of the image we are getting?
▫️What happens if it doesn't exist?
▫️Does it have the expected resolution?
▫️Is it base64? URL?
▫️...

👇
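A sketch of a few of those checks, assuming the image arrives base64-encoded inside a JSON payload (just one of the possible contracts, with made-up limits):

```python
import base64
import io

from PIL import Image

MIN_WIDTH, MIN_HEIGHT = 64, 64  # hypothetical minimum resolution

def validate_image(payload: dict) -> Image.Image:
    """Decode and validate an incoming image, raising ValueError on bad input."""
    if "image" not in payload:
        raise ValueError("Missing 'image' field.")
    try:
        raw = base64.b64decode(payload["image"], validate=True)
        image = Image.open(io.BytesIO(raw))
        image.verify()                        # raises if the data isn't a valid image
        image = Image.open(io.BytesIO(raw))   # re-open; verify() leaves the file unusable
    except Exception as exc:
        raise ValueError("Payload is not a valid base64-encoded image.") from exc
    if image.width < MIN_WIDTH or image.height < MIN_HEIGHT:
        raise ValueError("Image resolution is too low.")
    return image
```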
Now that our API is ready, we need to host it. Maybe with a cloud provider. Several things to worry about here:

▫️Package API and model in a container
▫️Where do we deploy it?
▫️How do we deploy it?
▫️How do we take advantage of hardware acceleration (GPUs)?

👇
Also:

▫️How long do we have to return an answer?
▫️How many requests per second can we handle?
▫️Do we need automatic scaling?
▫️What are the criteria to scale in and out?
▫️How can we tell when a model is down?
▫️How do we log what happens?

👇
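As a small taste of the last two questions, here's a hedged sketch of logging per-request latency on top of the FastAPI wrapper above:

```python
import logging
import time

from fastapi import Request

logger = logging.getLogger("inference")

@app.middleware("http")
async def log_latency(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    elapsed_ms = (time.perf_counter() - start) * 1000
    logger.info("%s %s -> %d in %.1f ms",
                request.method, request.url.path, response.status_code, elapsed_ms)
    return response
```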
Let's say we made it.

At this point, we have a frozen, stuck-in-time version of our model deployed.

But we aren't done yet. Far from it!

By now, there's probably a newer version of the model ready to go.

How do we deploy that version? Do we need to start again?

👇
And of course, you don't want to just snap the new version of the model in and pray that quality doesn't go down, right?

You want old and new side by side. Then migrate traffic over gradually.

This requires more work.

👇
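One simple way to do that gradual migration is weighted routing between the two versions. A sketch (the two model objects and the traffic split are assumptions; real setups often do this at the load balancer instead):

```python
import random

CANARY_TRAFFIC = 0.05  # hypothetical: start by sending 5% of requests to the new version

def route_prediction(image, model_v1, model_v2):
    """Send a small, configurable slice of traffic to the new model."""
    model = model_v2 if random.random() < CANARY_TRAFFIC else model_v1
    return model.predict(image)
```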
Creating the pipeline that handles taking new models and hosting them in production takes a lot of planning and effort.

And you are probably thinking, "That's MLOps!"

Yes, it is! But giving it a name doesn't make it less complicated.

👇
And there's more.

As we collect more and more data, we need to train new versions of our model.

We can't expect our people to do this manually. We need to automate the training pipeline.

A whole lot more work!

👇
Some questions:

1. Where's the data coming from?
2. How should it be split?
3. How much data should be used to retrain?
4. How will the training scripts run?
5. What metrics do we need?
6. How do we evaluate the quality of the model?

None of these have simple "Yes" or "No" answers.

👇
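As a tiny illustration of questions 2 and 6, here's a hedged sketch of splitting the data and gating a new model on a quality bar (scikit-learn style; the threshold and split are assumptions):

```python
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MIN_ACCURACY = 0.90  # hypothetical bar a retrained model must clear before shipping

def retrain_and_evaluate(model, images, labels):
    """Split the data, retrain the model, and decide whether it's good enough to ship."""
    X_train, X_test, y_train, y_test = train_test_split(
        images, labels, test_size=0.2, stratify=labels, random_state=42
    )
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    return model, accuracy, accuracy >= MIN_ACCURACY
```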
At this point, are we done yet?

Well, not quite 😞

We need to worry about monitoring our model. How is it performing?

That pesky "concept drift" ensures that the quality of our results will rapidly decay. We need to be on top of it!

👇
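A minimal sketch of one proxy for drift: compare the distribution of recent prediction confidences against a reference window (the statistical test, threshold, and windows are assumptions, not a full monitoring setup):

```python
import numpy as np
from scipy.stats import ks_2samp

DRIFT_PVALUE = 0.01  # hypothetical significance level for flagging drift

def confidences_have_drifted(reference_scores, recent_scores) -> bool:
    """Flag drift when the distribution of top prediction scores changes significantly."""
    _, p_value = ks_2samp(reference_scores, recent_scores)
    return p_value < DRIFT_PVALUE

# Example with synthetic confidence scores from two time windows.
reference = np.random.beta(8, 2, size=1_000)   # e.g., last month
recent = np.random.beta(4, 4, size=1_000)      # e.g., this week
print(confidences_have_drifted(reference, recent))
```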
And there's even more.

Here are some must-haves for well-rounded, safe production systems that I haven't covered yet:

▫️Ethics
▫️Data capturing and storage
▫️Data quality
▫️Integrating human feedback

👇
Here is the bottom line:

Creating a model with predictive capacity is just a small part of a much bigger equation.

There aren't a lot of companies that understand the entire picture. This opens up a lot of opportunities.

Opportunities for you and me.
