Santiago
I teach hard-core Machine Learning at https://t.co/THCAAZcBMu. YouTube: https://t.co/pROi08OZYJ
Jul 12 10 tweets 4 min read
A common fallacy:

If it's raining, the sidewalk is wet. But if the sidewalk is wet, is it raining?

Reversing the implication is called "affirming the consequent." We fall for it all the time.

But surprisingly, it's not entirely wrong!

Let's explain it using Bayes' theorem:

1/10

This explanation is courtesy of @TivadarDanka. He allowed me to republish it.

He is writing a book about the mathematics of Machine Learning. It's the best book I've read:



Nobody explains complex ideas like he does.

2/10

tivadardanka.com/books/mathemat…
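Before moving on, here is the gist of the Bayes argument in plain numbers (the values below are toy numbers I made up for illustration):

P(rain | wet) = P(wet | rain) · P(rain) / P(wet)

Suppose P(rain) = 0.1, P(wet | rain) = 1, and P(wet) = 0.3 (sprinklers and spills also wet sidewalks). Then:

P(rain | wet) = (1 · 0.1) / 0.3 ≈ 0.33

A wet sidewalk doesn't prove it rained, but it raises the probability of rain from 10% to 33%. Affirming the consequent is invalid as a proof, yet the observation is still evidence. That's why it's "not entirely wrong."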
Jun 12 6 tweets 2 min read
Some of the skills you need to start building AI applications:

• Python and SQL
• Transformer and diffusion models
• LLMs and fine-tuning
• Retrieval Augmented Generation
• Vector databases

Here is one of the most comprehensive programs you'll find online:

"Generative AI for Software Developers" is a 4-month online course.

It's a 5 to 10-hour weekly commitment, but you can dedicate as much time as you want to finish early.

Here is the link to the program:

I also have a PDF with the syllabus:

bit.ly/4aNOJdy
Jun 10 15 tweets 5 min read
There's a stunning, simple explanation behind matrix multiplication.

This is the first time it clicked in my brain, and it will be the best thing you read all week.

Here is a breakdown of the most crucial idea behind modern machine learning:

1/15

This explanation is courtesy of @TivadarDanka. He allowed me to republish it.

3 years ago, he started writing a book about the mathematics of Machine Learning.

It's the best book you'll ever read:



Nobody explains complex ideas like he does.

2/15

tivadardanka.com/books/mathemat…
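Before moving on, here is one standard way that "simple explanation" is usually presented, as a quick NumPy sketch (my own toy example, not the thread's): the product Ax is a linear combination of the columns of A, weighted by the entries of x.

# Matrix-vector multiplication seen as a combination of columns.
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([10, 100])

direct = A @ x  # the usual rows-times-vector definition

# The same result, built as x[0] * (column 0) + x[1] * (column 1).
as_columns = x[0] * A[:, 0] + x[1] * A[:, 1]

print(np.array_equal(direct, as_columns))  # True: both are [210, 430, 560]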
May 28 4 tweets 1 min read
This assistant has 169 lines of code:

• Gemini Flash
• OpenAI Whisper
• OpenAI TTS API
• OpenCV

GPT-4o is slower than Flash, more expensive, chatty, and very stubborn (it doesn't like to stick to my prompts).

Next week, I'll post a step-by-step video on how to build this.

The first request takes longer (warming up), but things work faster from that point.

A few opportunities to improve this:

1. Stream answers from the model (instead of waiting for the full answer).

2. Add the ability to interrupt the assistant.

3. Run Whisper on a GPU.
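Before the video drops, here is a rough sketch of how these four pieces might fit together. Everything below (model names, file handling, flow) is my assumption based on the component list, not the actual 169-line implementation:

# Hypothetical wiring: OpenCV grabs a frame, Whisper transcribes the
# question, Gemini Flash answers, and OpenAI TTS speaks the reply.
import os

import cv2
import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini = genai.GenerativeModel("gemini-1.5-flash")
oai = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe(audio_path):
    # Whisper: recorded question -> text.
    with open(audio_path, "rb") as f:
        return oai.audio.transcriptions.create(model="whisper-1", file=f).text

def answer(question, frame_path):
    # Gemini Flash: answer the question grounded in the webcam frame.
    frame = genai.upload_file(frame_path)
    return gemini.generate_content([frame, question]).text

def speak(text):
    # OpenAI TTS: answer text -> audio file.
    audio = oai.audio.speech.create(model="tts-1", voice="alloy", input=text)
    audio.write_to_file("reply.mp3")

# One round trip: capture a frame, then answer a pre-recorded question about it.
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()
if ok:
    cv2.imwrite("frame.jpg", frame)
    speak(answer(transcribe("question.wav"), "frame.jpg"))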
May 25 4 tweets 2 min read
I'm so sorry for anyone who bought the Rabbit r1.

It's not just that the product is non-functional (as we learned from all the reviews); the real problem is that the whole thing seems to be a lie.

None of what they pitched exists or functions the way they said.

They sold the world on a Large Action Model (LAM), an intelligent AI model that would understand applications and execute the actions requested by the user.

In reality, they are using Playwright, a web automation tool.

No AI. Just dumb, click-around, hard-coded scripts.
Mar 31 10 tweets 4 min read
What a week, huh?

1. Mojo 🔥 went open-source
2. Claude 3 beats GPT-4
3. $100B supercomputer from MSFT and OpenAI
4. Andrew Ng and Harrison Chase discussed AI Agents
5. Karpathy talked about the future of AI
...

And more.

Here is everything that will keep you up at night:

Mojo 🔥, the programming language that turns Python into a beast, went open-source.

This is a huge step and great news for the Python and AI communities!

With Mojo 🔥, you can write Python code or scale all the way down to the metal. It's fast!

modular.com/blog/the-next-…
Mar 13 14 tweets 4 min read
The batch size is one of the most important parameters when training neural networks.

Here is everything you need to know about the batch size:

1 of 14

I trained two neural networks.

Same architecture, loss, optimizer, learning rate, momentum, epochs, and training data. Almost everything is the same.

Here is a plot of their losses.

Can you guess what the only difference is?

2 of 14
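While you guess, here is a sketch of how you could reproduce the experiment yourself (PyTorch, my own toy data and architecture; the thread doesn't share its code). The DataLoader's batch_size is the only thing that changes between the two runs.

# Train the same model twice; only the batch size differs.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

X, y = torch.randn(1024, 10), torch.randn(1024, 1)

def train(batch_size):
    torch.manual_seed(0)  # identical initialization for both runs
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    losses = []
    for _ in range(20):  # same number of epochs
        for xb, yb in loader:
            loss = nn.functional.mse_loss(model(xb), yb)
            opt.zero_grad()
            loss.backward()
            opt.step()
        losses.append(loss.item())  # last batch loss of the epoch
    return losses

small_batch = train(batch_size=8)
large_batch = train(batch_size=256)  # the only difference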
Jan 5 5 tweets 2 min read
I had an amazing machine learning professor.

The first thing I learned from him was how to interpret learning curves. (Probably one of the best skills I built and refined over the years.)

Let me show you 4 pictures and you'll see how this process flows:

1/5

I trained a neural network. A simple one.

I plotted the model's training loss. As you can see, it's too high.

This network is underfitting. It's not learning.

I need to make the model larger.

2/5
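The mechanics are simple to reproduce. A minimal sketch (my own made-up numbers, not the professor's example) of plotting the loss curve you'd read this diagnosis from:

# Plot the per-epoch training loss; a curve that flattens out while
# still high is the underfitting signature described above.
import matplotlib.pyplot as plt

train_loss = [2.1, 1.9, 1.85, 1.84, 1.84, 1.83]  # hypothetical values

plt.plot(range(1, len(train_loss) + 1), train_loss, label="training loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()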
Dec 21, 2023 4 tweets 2 min read
AI will be one of the most crucial skills for the next 20 years.

If I were starting today, I'd learn these:

• Python
• LLMs
• Retrieval Augmented Generation (RAG)

Here are 40+ free lessons and practical projects on building advanced RAG applications for production:

1/4
This is one of the most comprehensive courses you'll find. It covers all of LangChain and LlamaIndex.

And it's 100% FREE!

@activeloopai, @towards_AI, and @intel Disruptor collaborated with @llama_index to develop it.

Here is the link:

2/4

learn.activeloop.ai/courses/rag
Oct 25, 2023 8 tweets 4 min read
The best real-life Machine Learning program out there:

"I have seen hundreds of courses; this is the best material and depth of knowledge I've seen."

That's what a professional Software Engineer finishing my program said during class. This is the real deal.

I teach a hard-core live class. It's the best program to learn about building production Machine Learning systems.

But it's not a $9.99 online course. It's not about videos or a bunch of tutorials you can read.

This program is different.

It's 14 hours of live sessions where you interact with me, like in any other classroom. It's tough, with 30 quizzes and 30 coding assignments.

Online courses can't compete with that.

I'll teach you pragmatic Machine Learning for Engineers. This is the type of knowledge every company wants to have.

The program's next iteration (Cohort #8) starts on November 6th. The following (Cohort #9) on December 4th.

It will be different from any other class you've ever taken. It will be tough. It will be fun. It's the closest thing to sitting in a classroom.

And for the first time, the next iteration includes an additional 9 hours of pre-recorded materials to help you as much as possible!

You'll learn about Machine Learning in the real world. You'll learn to train, tune, evaluate, register, deploy, and monitor models. You'll learn how to build a system that continually learns and how to test it in production.

You'll get unlimited access to me and the entire community. I'll help you through the course, answer your questions, and help with your code.

You get lifetime access to all past and future sessions. You get access to every course I've created for free. You get access to recordings, job offers, and many people doing the job you want to do.

No monthly payments. Ever.

The link to join is in the attached image and in the following tweet.
The link to join the program:
The cost to join is $385.

November and December are the last two iterations remaining at that price. The cost will go up starting in January 2024.

Today, there are around 800 professionals in the community.

ml.school
Oct 2, 2023 8 tweets 3 min read
AI is changing how we build software.

A few weeks ago, I talked about using AI for code reviews. Many dismissed the idea, saying AI can't help beyond trivial suggestions.

You are wrong.

Here are a few examples of what you can do with @CodiumAI's open-source pull request agent:

Here, the agent generated the description of a pull request.

It looks at every commit and file involved and summarizes what's happening automatically.

You can do this by using the "/describe" command.
Sep 21, 2023 5 tweets 2 min read
There is considerable risk in starting to build with Large Language Models.

Prompt lock-in is a big issue, and I'm afraid many people will find out about it the hard way.

There's no cross-compatibility for many of your prompts. If you change your model, your prompts will stop working.

Here are two examples:

First, an application where an LLM generates marketing copy for a site. Here, you expect open-ended responses. A prompt like that will work across different models with little or no modification. Use cases like this have high prompt portability.

Second, an LLM that interprets and classifies a customer request. This use case requires terse and structured responses. These prompts are model-dependent and have low portability.
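To make the second case concrete, here is a sketch of what model-specific prompts for the same strict task can look like (both prompts below are hypothetical, written only to show the divergence):

# The same strict classification task, prompted differently per model.
PROMPTS = {
    "gpt-4": (
        "Classify the customer request into one of: BILLING, SHIPPING, OTHER.\n"
        "Respond with the label only, no punctuation.\n\nRequest: {request}"
    ),
    "llama-2": (
        "[INST] You are a classifier. Output exactly one word: BILLING, "
        "SHIPPING, or OTHER. [/INST]\nRequest: {request}\nLabel:"
    ),
}

def build_prompt(model, request):
    # Switching models means rewriting prompts: low portability.
    return PROMPTS[model].format(request=request)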

Here is what makes things worse:

The more complex the responses, the more time you need to spend writing prompts, and the less portable they are. In other words, the more you invest, the more you lock your implementation into one specific model.

What's the solution?

First, be careful how much you invest in writing prompts for a model that could stop working any day. Having to migrate to a different model will come at a steep cost.

Second, it's too early to understand how these models will evolve. Don't outsource too much to a Large Language Model. The more you do, the more significant the risk.

If you are using an LLM as part of a product, how are you protecting against this?

The biggest issue is not whether the model has the capacity to answer a prompt.

The problem is the variability of that answer. For example, this is an issue when you require a strictly formatted response.

You can solve a problem using GPT-3.5, GPT-4, and Llama 2. But, in many cases, you'll need different prompts for every one of these models.

That's the issue.

Sep 13, 2023 4 tweets 3 min read
I started freelancing at $8/hour.

It took a while, but I made $600,000 on Upwork alone. The last time I used the platform, I got paid $200/hr.

I started by building web applications. At some point, I started focusing on Machine Learning systems.

While on Upwork, I learned how to find jobs and get hired. I became a Top Rated Plus freelancer with 100% Job Success.

I've never met anyone with a closing rate higher than mine. I sent 79 proposals and closed 19 of them. If you don't think a 24% closing rate is high, you don't know Upwork.

A few months ago, I recorded a 1-hour video with everything I know about Upwork:

• How to structure your profile so clients can't ignore you.

• How to find the projects that everyone else misses.

• How to get hired, regardless of how many people apply.

• How to structure your proposals and cover letter.

I've been selling this course for $40, but today, I'm running an experiment:

The next 100 people who buy the course can do it for 50% off.

That's $20!

$20 to learn how to crack one of the most profitable online marketplaces for freelancers. I'm biased, but it sounds like a steal to me.

And I'll go one step further:

If you take my course and don't find it valuable, let me know, and I'll refund you. No questions asked.

Here is the link with the discount:



Remember: Only 100 copies will go for $20. After that, the course goes back to $40.

Whenever I post about this, people ask me to prove I'm not lying about my $600,000 earnings. It's a fair ask, so here is my Upwork profile:



You'll need to log into the platform first to see my profile.

Hope I can help you break free from the rat race!
A screenshot of my Upwork profile.

Somebody asks a valid question in the replies:

Why would I sell this for $20 when I'm increasing competition for myself on the platform?

There are two reasons:

First, I'm not planning to use the platform anymore. I'm done with freelancing and selling my time for money. You could say, "I'm retired."

Second, freelancing is not a zero-sum game. More capable freelancers will lead to more work for everyone else, not less.

Here is a simple way to think about it:

I focused on taking Machine Learning models to production using Amazon SageMaker.

I engaged with many clients who wanted to work with me but were too early. They needed 6 to 12 months to focus on other areas before being ready.

I would have benefited from more data engineers and data scientists helping these companies become ready for me. More freelancers would have been good for my business!
Jul 24, 2023 4 tweets 2 min read
Nothing beats FREE education!

Here is a free, 1-week cohort that will teach you how to build AI products using OpenAI.

It starts on August 14, and you can apply right now!

Here are the details you want to know:

This cohort will teach you how to use OpenAI's API and ChatGPT to build an application from scratch.

It's completely free.

You can apply here: corise.com/go/building-ai…

This will be a hands-on, technical course, and you should be familiar with Python to attend.
Jul 20, 2023 4 tweets 2 min read
Yes, GPT-4 seems to be getting worse.

But now we have new information. And well, it's complicated.

Yesterday, I posted about a study showing that GPT-4's success rate at deciding whether a number is prime went from 97.6% in March to 2.4% in June.

The report also showed how the…
Is GPT-4 getting worse? Check the following post for more information about the reason we misinterpreted the original study:



@sayashk and @random_walker did an excellent job breaking down the original findings and ran the experiment that shows that GPT-4 was never good at…

aisnakeoil.com/p/is-gpt-4-get…
Jul 19, 2023 9 tweets 3 min read
GPT-4 is getting worse over time, not better.

Many people have reported a significant degradation in the quality of the model's responses, but so far, it was all anecdotal.

But now we know.

At least one study shows how the June version of GPT-4 is objectively worse than…
Accuracy comparison between the March and June versions of GPT-4 on determining whether a number is prime: in March, GPT-4 solved 97.6% of the problems accurately; in June, only 2.4%.

Here is the original paper:



And you can reproduce the results using this Google Colab:

arxiv.org/pdf/2307.09009…

colab.research.google.com/github/lchen00…
Jul 15, 2023 4 tweets 2 min read
Photography will never be the same.

In 10 minutes, you can turn your photo gallery into unlimited, amazing pictures. For free!

How much imagination do you have?

Follow these steps to generate your photos:

1. Find a few photos of you. The more, the merrier.
2. Go to tryleap.ai and get an API key.
3. Run the code in the notebook below (upload your photos first).

Here is the code:

colab.research.google.com/drive/1v45UprB…
Jul 11, 2023 8 tweets 3 min read
How do you think companies are training their Large Language Models? Where do you think the data come from?

Web scraping.

This is one of the most valuable skills you can learn.

Here is how it works and how you can learn it for free:

The Web Data Masterclass is a collection of videos about web data and how to collect it:



You'll find tutorials and how-tos from leading data scientists and engineers like @MariyaSha888, @ykilcher, @Jeffdelaney23, and @kunalstwt.

And it's free!

brdta.com/web-data-maste…
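If you've never done it, here is what the basic loop looks like (a minimal sketch using an assumed example URL; the masterclass goes much deeper):

# Fetch a page and pull out its headlines.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com", timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
for heading in soup.find_all("h1"):  # every top-level headline on the page
    print(heading.get_text(strip=True))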
Jul 9, 2023 4 tweets 2 min read
This is the unfortunate state of AI shitfluencing.

People with nothing to add to the conversation and zero originality pump content like this to farm followers.

I'm sad for everyone who believes them.

I normally leave these people alone, but I'm not going to put up with lies and exaggerations that prey on people who don't know better.

I've had more than a few conversations with students who want to quit or never learn programming because they read that AI killed the practice.…
Jul 7, 2023 4 tweets 2 min read
You can now fine-tune an LLM without writing a single line of code!

A breakthrough in the open-source LLM space that can increase the speed of AI development and adoption by an order of magnitude.

Let me start from the beginning:

A Large Language Model comes out of the factory…
Here is a link to an article with a step-by-step demo of fine-tuning a model without writing any code: blog.monsterapi.ai/no-code-fine-t…

Thanks to @monsterapis for partnering with me on this thread.
Jul 5, 2023 7 tweets 2 min read
Another deep learning breakthrough:

Deep TDA, a new algorithm using self-supervised learning, overcomes the limitations of traditional dimensionality reduction algorithms.

t-SNE and UMAP have long been the favorites. Deep TDA might change that forever.

Here are the details:

Dimensionality reduction algorithms like t-SNE and UMAP have been around for a long time and are essential for analyzing complex data.

Specifically, t-SNE is one of the most popular algorithms I've seen used in the industry.

Hinton and van der Maaten developed it in 2008.
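For anyone who hasn't used it, here is what running t-SNE looks like in practice (a minimal scikit-learn sketch on a standard dataset; Deep TDA itself isn't shown here):

# Project the 64-dimensional digits dataset down to 2 dimensions.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
embedding = TSNE(n_components=2, random_state=0).fit_transform(X)
print(embedding.shape)  # (1797, 2): one 2-D point per digit image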