Imagine your favorite creator on Twitter starts offering the following:
1. A weekly newsletter
2. Deep dives into your favorite topics
3. A look behind the scenes
4. Live discussion invitations
5. Unfiltered exclusive content
$4.99/mo
Would you subscribe?
@AlejandroPiad and @yudivian I know what your vote would be, but let's watch these results and see what the broader community thinks.
In my experience 1,000 answers is usually enough to capture the overall sentiment of my audience.
Who hasn’t voted yet?
8 minutes left!
Here are the best 10 machine learning threads I posted in February.
They go all the way from beginner-friendly content to deeper dives into specific machine learning concepts and techniques.
I'd love to hear which one is your favorite!
🧵👇
Having to pick only 10 threads is painful. I always struggle to decide what should stay out of the list.
This, however, is a great incentive when I'm writing the content: I have to compete against myself to make sure what I write ends up being part of the list!
[2 / 13]
[Thread 1]
An explanation of three of the most important metrics we use: accuracy, precision, and recall.
More specifically, this thread shows what happens when we focus on the wrong metric using an imbalanced classification problem.
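To see why focusing on the wrong metric is dangerous, here is a minimal sketch (the numbers are hypothetical, not from the original thread): a model that always predicts the majority class on an imbalanced dataset gets high accuracy while being useless.

```python
# Hypothetical imbalanced problem: 100 samples, only 5 positives.
y_true = [1] * 5 + [0] * 95
y_pred = [0] * 100  # a "model" that always predicts negative

# Count true positives, false positives, and false negatives.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0

print(accuracy, precision, recall)  # 0.95 0.0 0.0
```

Accuracy looks great at 95%, but precision and recall reveal that the model never finds a single positive case.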
For the first time yesterday, I set up a project using a Development Container in Visual Studio Code and it immediately hit me:
✨ This is the way going forward! 🤯
If you haven't used this yet, here are some thoughts.
👇
The basic idea: you can run your entire development environment inside a container.
Every time you open your project, @code prepares and runs your container.
[2 / 7]
There are several advantages to this:
First of all, your entire team will run exactly the same environment, regardless of their preferred operating system, folder structure, existing libraries, etc.
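To make this concrete, here is a minimal `.devcontainer/devcontainer.json` sketch; the project name, Python version, and extension list are illustrative assumptions, not taken from the original thread.

```json
{
  "name": "ml-project",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "postCreateCommand": "pip install -r requirements.txt"
}
```

With this file in place, VS Code offers to reopen the project inside the container, and everyone on the team gets the same Python version, libraries, and editor extensions.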
Imagine you have a ton of data, but most of it isn't labeled. Even worse: labeling is very expensive. 😑
How can we get past this problem?
Let's talk about a different—and pretty cool—way to train a machine learning model.
☕️👇
Let's say we want to classify videos in terms of maturity level. We have millions of them, but only a few have labels.
Labeling a video takes a long time (you have to watch it in full!). We also don't know how many videos we need to build a good model.
[2 / 9]
In a traditional supervised approach, we don't have a choice: we need to spend the time and come up with a large dataset of labeled videos to train our model.
But this isn't always an option.
In some cases, this may be the end of the project. 😟
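One common way around this is self-training with pseudo-labels: train on the few labeled examples, let the model label the unlabeled data it is confident about, and retrain. Here is a toy sketch of that idea on 1-D data; the threshold classifier, the confidence cutoff of 2.0, and all the data points are assumptions for illustration only.

```python
def fit_threshold(points, labels):
    """Fit a 1-D classifier: threshold at the midpoint of the class means."""
    xs0 = [x for x, y in zip(points, labels) if y == 0]
    xs1 = [x for x, y in zip(points, labels) if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def predict(threshold, x):
    return 0 if x < threshold else 1

def confidence(threshold, x):
    return abs(x - threshold)  # distance from the boundary as a proxy

# A few labeled examples and several unlabeled ones (illustrative data).
labeled_x = [0.0, 1.0, 9.0, 10.0]
labeled_y = [0, 0, 1, 1]
unlabeled = [0.5, 1.5, 8.5, 9.5, 5.2]

for _ in range(3):  # a few self-training rounds
    t = fit_threshold(labeled_x, labeled_y)
    # Pseudo-label only the points the model is confident about.
    confident = [x for x in unlabeled if confidence(t, x) > 2.0]
    labeled_x += confident
    labeled_y += [predict(t, x) for x in confident]
    unlabeled = [x for x in unlabeled if x not in confident]

print(fit_threshold(labeled_x, labeled_y))  # 5.0
```

Note how the ambiguous point near the boundary (5.2) is never pseudo-labeled: the model only grows its training set with examples it is confident about, which is what keeps this approach from reinforcing its own mistakes.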