You might have finished the engine, but there's still a lot of work to put the entire car together.
A Machine Learning model is just a small piece of the equation.
A lot more needs to happen. Let's talk about that.
🧵👇
For simplicity's sake, let's imagine a model that takes a picture of an animal and classifies it among 100 different species.
▫️Input: pre-processed pixels of the image.
▫️Output: a score for each one of the 100 species.
The final answer is the species with the highest score.
👇
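As a rough sketch of that output contract (species names hypothetical), the model hands back one score per class, and we simply take the argmax:

```python
# Hypothetical label list -- imagine 100 of these.
SPECIES = ["cat", "dog", "otter"]

def pick_species(scores: list[float]) -> str:
    """Return the species with the highest score (the argmax)."""
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    return SPECIES[best_index]
```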
There's a lot of work involved in creating a model like this. There's even more work involved in preparing the data to train it.
But it doesn't stop there.
The model is just the start, the core, the engine of what will become a fully-fledged car.
👇
Unfortunately, many companies are hyper-focused on creating these models and forget that productizing them is not just a checkbox in the process.
Reports point out that ~90% of Data Science projects never make it to production!
I'm not surprised.
👇
Our model predicting species is now ready!
— "Good job, everyone!"
— "Oh, wait. Now what? How do we use this thing?"
Let's take our model into production step by step.
👇
First, we need to wrap the model with code that:
1. Pre-processes the input image
2. Translates the output into an appropriate answer
I call this the "extended model." Complexity varies depending on your needs.
👇
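A minimal sketch of what I mean by "extended model" (the names here are made up; `raw_model` stands in for whatever your framework gives you):

```python
def preprocess(image_bytes: bytes) -> list[float]:
    # Hypothetical stand-in: decode, resize, and normalize pixels.
    return [b / 255.0 for b in image_bytes]

def postprocess(scores: list[float], labels: list[str]) -> dict:
    # Turn raw scores into an answer the client can use.
    best = max(range(len(scores)), key=lambda i: scores[i])
    return {"label": labels[best], "score": scores[best]}

class ExtendedModel:
    """Wraps the raw model with pre- and post-processing."""

    def __init__(self, raw_model, labels):
        self.raw_model = raw_model
        self.labels = labels

    def predict(self, image_bytes: bytes) -> dict:
        pixels = preprocess(image_bytes)
        scores = self.raw_model(pixels)
        return postprocess(scores, self.labels)
```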
Frequently, processing a single image at a time is not enough, and you need to process batches of pictures (you know, to speed things up a bit.)
Doing this requires a non-trivial amount of work.
👇
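The core of that work is grouping requests so the model runs once per batch instead of once per image. A simplified sketch (real batching also has to deal with timeouts and partial batches):

```python
def make_batches(items: list, batch_size: int) -> list[list]:
    """Cut a list of inputs into fixed-size batches."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

def predict_batched(model, images: list, batch_size: int = 32) -> list:
    """One model call per batch, not per image."""
    results = []
    for batch in make_batches(images, batch_size):
        results.extend(model(batch))
    return results
```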
Now we need to expose the functionality of the extended model.
Usually, you can do this by creating a wrapper API (REST or RPC) and having client applications use it to communicate with the model.
Loading the model in memory brings some other exciting challenges.
👇
Of course, we can't trust what comes into that API, so we need to validate its input:
▫️What's the format of the image we are getting?
▫️What happens if it doesn't exist?
▫️Does it have the expected resolution?
▫️Is it base64? URL?
▫️...
👇
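A sketch of what that validation layer might look like (the size limit and the two allowed formats are illustrative choices, not requirements):

```python
MAX_BYTES = 5 * 1024 * 1024  # hypothetical 5 MB limit

# Identify the format from the file's leading "magic bytes".
ALLOWED_FORMATS = {b"\xff\xd8": "jpeg", b"\x89P": "png"}

def validate_image(data) -> str:
    """Reject bad input before it ever reaches the model."""
    if not data:
        raise ValueError("no image provided")
    if len(data) > MAX_BYTES:
        raise ValueError("image too large")
    fmt = ALLOWED_FORMATS.get(data[:2])
    if fmt is None:
        raise ValueError("unsupported image format")
    return fmt
```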
Now that our API is ready, we need to host it. Maybe with a cloud provider. Several things to worry about here:
▫️Package API and model in a container
▫️Where do we deploy it?
▫️How do we deploy it?
▫️How do we take advantage of acceleration?
👇
Also:
▫️How long do we have to return an answer?
▫️How many requests per second can we handle?
▫️Do we need automatic scaling?
▫️What are the criteria to scale in and out?
▫️How can we tell when a model is down?
▫️How do we log what happens?
👇
Let's say we made it.
At this point, we have a frozen, stuck-in-time version of our model deployed.
But we aren't done yet. Far from it!
By now, there's probably a newer version of the model ready to go.
How do we deploy that version? Do we need to start again?
👇
And of course, it would be ideal if you didn't just swap the new version of the model in and pray that quality doesn't go down, right?
You want old and new side by side. Then migrate traffic over gradually.
This requires more work.
👇
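At its simplest, that traffic migration is a routing decision per request. A hypothetical canary router: hash each request ID into a bucket, and send a configurable share of buckets to the new model (real systems usually do this at the load balancer, but the idea is the same):

```python
import hashlib

def route(request_id: str, new_model_share: float) -> str:
    """Send roughly `new_model_share` of traffic to the new model.

    Hashing keeps the decision stable: the same request ID always
    lands on the same model version.
    """
    digest = hashlib.md5(request_id.encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") / 65535  # value in [0, 1]
    return "new" if bucket <= new_model_share else "old"
```

To migrate gradually, you just raise `new_model_share` over time (0.05 → 0.25 → 1.0) while watching quality metrics.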
Creating the pipeline that handles taking new models and hosting them in production takes a lot of planning and effort.
And you are probably thinking, "That's MLOps!"
Yes, it is! But giving it a name doesn't make it less complicated.
👇
And there's more.
As we collect more and more data, we need to train new versions of our model.
We can't expect our people to do this manually. We need to automate the training pipeline.
A whole lot more work!
👇
Some questions:
1. Where's the data coming from?
2. How should it be split?
3. How much data should be used to retrain?
4. How will the training scripts run?
5. What metrics do we need?
6. How to evaluate the quality of the model?
These aren't simply "Yes" or "No" answers.
👇
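Take question 2 as an example. Even "how should it be split?" hides decisions: shuffle or not, what fractions, how to keep the split reproducible between pipeline runs. A minimal sketch (the 80/10/10 fractions and fixed seed are assumptions, not recommendations):

```python
import random

def split_dataset(examples: list, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle once (seeded for reproducibility), then cut into
    train / validation / test partitions."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```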
At this point, are we done yet?
Well, not quite 😞
We need to worry about monitoring our model. How is it performing?
That pesky "concept drift" ensures that the quality of our results will rapidly decay. We need to be on top of it!
👇
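One simple way to stay on top of it: compare the model's recent accuracy (from whatever labeled feedback you can collect) against a baseline, and raise a flag when it drops past a tolerance. A hypothetical sketch, with window size and tolerance as illustrative parameters:

```python
from collections import deque

class DriftMonitor:
    """Flags drift when rolling accuracy falls below baseline - tolerance."""

    def __init__(self, baseline_accuracy: float, window: int = 100,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # keeps only the last `window` outcomes

    def record(self, correct: bool) -> None:
        self.recent.append(1 if correct else 0)

    def drifting(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data to judge yet
        accuracy = sum(self.recent) / len(self.recent)
        return accuracy < self.baseline - self.tolerance
```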
And there's even more.
Here are some must-haves for well-rounded, safe production systems that I haven't covered yet:
▫️Ethics
▫️Data capturing and storage
▫️Data quality
▫️Integrating human feedback
👇
Here is the bottom line:
Creating a model with predictive capacity is just a small part of a much bigger equation.
There aren't a lot of companies that understand the entire picture. This opens up a lot of opportunities.
Opportunities for you and me.