The best real-life Machine Learning program out there:
"I have seen hundreds of courses; this is the best material and depth of knowledge I've seen."
That's what a professional Software Engineer finishing my program said during class. This is the real deal.
I teach a hard-core live class. It's the best program to learn about building production Machine Learning systems.
But it's not a $9.99 online course. It's not about videos or a bunch of tutorials you can read.
This program is different.
It's 14 hours of live sessions where you interact with me, like in any other classroom. It's tough, with 30 quizzes and 30 coding assignments.
Online courses can't compete with that.
I'll teach you pragmatic Machine Learning for Engineers. This is the type of knowledge every company wants to have.
The program's next iteration (Cohort #8) starts on November 6th. The following one (Cohort #9) starts on December 4th.
It will be different from any other class you've ever taken. It will be tough. It will be fun. It's the closest thing to sitting in a classroom.
And for the first time, the next iteration includes an additional 9 hours of pre-recorded materials to help you as much as possible!
You'll learn about Machine Learning in the real world. You'll learn to train, tune, evaluate, register, deploy, and monitor models. You'll learn how to build a system that continually learns and how to test it in production.
You'll get unlimited access to me and the entire community. I'll help you through the course, answer your questions, and help with your code.
You get lifetime access to all past and future sessions. You get free access to every course I've created. You get access to recordings, job offers, and many people doing the job you want to do.
No monthly payments. Ever.
The link to join is in the attached image and in the following tweet.
The link to join the program: ml.school
The cost to join is $385.
November and December are the last two iterations remaining at that price. The cost will go up starting in January 2024.
Today, there are around 800 professionals in the community.
Live sessions and recordings:
Sessions are live, and I recommend that every student attend if they can.
But we also record every session, and you get access to the recordings. You can watch them whenever you want.
We also hold two office hours. They are optional but a lot of fun!
There is a considerable risk in starting to build with Large Language Models.
Prompt lock-in is a big issue, and I'm afraid many people will find out about it the hard way.
Many of your prompts have no cross-compatibility: if you change models, those prompts will stop working.
Here are two examples:
First, an application where an LLM generates marketing copy for a site. Here, you expect open-ended responses. A prompt like that will work across different models with little or no modification. Use cases like this have high prompt portability.
Second, an LLM that interprets and classifies a customer request. This use case requires terse and structured responses. These prompts are model-dependent and have low portability.
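To make the contrast concrete, here is a hypothetical pair of prompts. The wording, labels, and use cases are mine, only meant to illustrate the two scenarios above:

```python
# Hypothetical prompts illustrating high vs. low portability.

# High portability: open-ended marketing copy. Most chat models can take
# this prompt verbatim and return something usable.
marketing_prompt = (
    "Write three short, upbeat taglines for a website that sells "
    "handmade ceramic mugs."
)

# Low portability: the response must be a single, exact label. The phrasing
# and formatting instructions a model actually respects differ between
# GPT-3.5, GPT-4, and Llama 2, so this prompt usually needs per-model tweaks.
classification_prompt = (
    "Classify the customer request below into exactly one of: "
    "REFUND, SHIPPING, COMPLAINT, OTHER.\n"
    "Respond with the label only, no extra words.\n\n"
    "Request: {request}"
)
```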
Here is what makes matters worse:
The more complex the responses, the more time you spend writing prompts, and the less portable those prompts become. In other words, the more you invest, the more you lock your implementation into one specific model.
What's the solution?
First, be careful how much you invest in writing prompts for a model that could stop working any day. Having to migrate to a different model will come at a steep cost.
Second, it's too early to understand how these models will evolve. Don't outsource too much to a Large Language Model. The more you do, the more significant the risk.
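One way to limit the blast radius (a minimal sketch under my own assumptions, not something the thread prescribes) is to treat prompts as per-model assets behind a tiny lookup layer, so a migration only touches the templates and their tests:

```python
# Minimal sketch: prompts live in a per-model registry, so switching models
# means swapping templates instead of hunting through application code.
# Model names, task names, and templates below are illustrative only.

PROMPTS = {
    "gpt-4": {
        "classify": (
            "Classify the request into one of REFUND, SHIPPING, COMPLAINT, "
            "OTHER. Reply with the label only.\n\n{request}"
        ),
    },
    "llama-2": {
        "classify": (
            "You are a strict classifier. Valid labels: REFUND, SHIPPING, "
            "COMPLAINT, OTHER.\nAnswer with a single label and nothing else."
            "\n\nRequest: {request}"
        ),
    },
}


def build_prompt(model: str, task: str, **variables) -> str:
    """Look up the template for (model, task) and fill in its variables."""
    return PROMPTS[model][task].format(**variables)


# Application code never hard-codes a prompt. Migrating to a new model means
# adding one entry to PROMPTS and re-running the prompt tests.
prompt = build_prompt("gpt-4", "classify", request="Where is my order?")
```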
If you are using an LLM as part of a product, how are you protecting against this?
The biggest issue is not whether the model has the capacity to answer a prompt.
The problem is the variability of that answer. For example, this is an issue when you require a strictly formatted response.
You can solve a problem using GPT-3.5, GPT-4, and Llama 2. But, in many cases, you'll need different prompts for every one of these models.
That's the issue.
This is a huge problem for the maintainability of a system.
For example, a few models from OpenAI will be deprecated in January 2024. Many applications relying on those models will have to update their prompts.
As we invest more time writing these prompts, this becomes an even bigger issue.
Fun fact: not many companies are currently investing in evaluation metrics and automatic testing for their prompts.
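As a rough idea of what that testing could look like (a sketch under my own assumptions; `complete` is a stand-in for whatever function calls your model and returns its raw text):

```python
# Sketch of an automatic prompt test: run a prompt over a small set of
# inputs and measure how often the model respects the required format.

ALLOWED_LABELS = {"REFUND", "SHIPPING", "COMPLAINT", "OTHER"}


def is_valid_label(raw_response: str) -> bool:
    """True if the model answered with exactly one allowed label."""
    return raw_response.strip().upper() in ALLOWED_LABELS


def prompt_pass_rate(complete, prompt_template: str, requests: list[str]) -> float:
    """Fraction of test requests for which the response is well-formed.

    `complete` wraps whatever API you call (GPT-3.5, GPT-4, Llama 2, ...)
    and returns the raw text of the response. Run this against every model
    you support before and after changing a prompt or a model version.
    """
    passed = sum(
        is_valid_label(complete(prompt_template.format(request=r)))
        for r in requests
    )
    return passed / len(requests)
```

A pass rate below 1.0 for a model you claim to support is the signal that a prompt change or a model deprecation just broke something.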
It took a while, but I made $600,000 on Upwork alone. The last time I used the platform, I got paid $200/hr.
I started by building web applications. At some point, I started focusing on Machine Learning systems.
While on Upwork, I learned how to find jobs and get hired. I became a Top Rated Plus freelancer with 100% Job Success.
I've never met anyone with a closing rate higher than mine. I sent 79 proposals and closed 19 of them. If you don't think a 24% closing rate is high, you don't know Upwork.
A few months ago, I recorded a 1-hour video with everything I know about Upwork:
• How to structure your profile so clients can't ignore you.
• How to find the projects that everyone else misses.
• How to get hired, regardless of how many people apply.
• How to structure your proposals and cover letter.
I've been selling this course for $40, but today, I'm running an experiment:
The next 100 people who buy the course can do it for 50% off.
That's $20!
$20 to learn how to crack one of the most profitable online marketplaces for freelancers. I'm biased, but it sounds like a steal to me.
And I'll go one step further:
If you take my course and don't find it valuable, let me know, and I'll refund you. No questions asked.
Here is the link with the discount:
Remember: Only 100 copies will go for $20. After that, the course goes back to $40.
Whenever I post about this, people ask me to prove I'm not lying about my $600,000 earnings. It's a fair ask, so here is my Upwork profile:
You'll need to log into the platform to see my profile.
Hope I can help you break free from the rat race!
Somebody asked a valid question in the replies:
Why would I sell this for $20 when I'm increasing competition for myself on the platform?
There are two reasons:
First, I'm not planning to use the platform anymore. I'm done with freelancing and selling my time for money. You could say, "I'm retired."
Second, freelancing is not a zero-sum game. More capable freelancers will lead to more work for everyone else, not less.
Here is a simple way to think about it:
I focused on taking Machine Learning models to production using Amazon SageMaker.
I engaged with many clients who wanted to work with me but were too early. They needed 6–12 months to focus on other areas before being ready.
I would have benefited from more data engineers and data scientists helping these companies become ready for me. More freelancers would have been good for my business!
Here is a free, 1-week cohort that will teach you how to build AI products using OpenAI.
It starts on August 14, and you can apply right now!
Here are the details you want to know:
This cohort will teach you how to use OpenAI's API and ChatGPT to build an application from scratch.
It's completely free.
You can apply here: corise.com/go/building-ai…
This will be a hands-on, technical course, and you should be familiar with Python to attend.
Some of the things you'll learn:
• Introduction to Large Language Models
• Introduction to the OpenAI ecosystem of models
• Prompt design techniques
• Overcoming hallucinations
• Overcoming context length limitations
• Preventing prompt injections
Check the following post for more information about why we misinterpreted the original study:
@sayashk and @random_walker did an excellent job breaking down the original findings and ran the experiment that shows that GPT-4 was never good at… aisnakeoil.com/p/is-gpt-4-get…
OpenAI is extending the lifespan of the March version of GPT-4.
This is good! Anyone relying on that version will have more time to upgrade to the newest June version.
They are doing this "because of developer feedback."