How to think about precision and recall:

Precision: What is the percentage of positive predictions that are actually positive?

Recall: What is the percentage of actual positives that were predicted correctly?
The fewer false positives, the higher the precision, and vice versa.

The fewer false negatives, the higher the recall, and vice versa.
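As a concrete sketch (plain Python, made-up counts), both metrics come straight from the confusion-matrix counts:

```python
# Hypothetical confusion-matrix counts for a binary classifier
tp = 80  # true positives: positives predicted as positive
fp = 10  # false positives: negatives predicted as positive
fn = 20  # false negatives: positives predicted as negative

precision = tp / (tp + fp)  # what share of positive predictions are right?
recall = tp / (tp + fn)     # what share of actual positives were found?

print(round(precision, 2), round(recall, 2))  # 0.89 0.8
```

Note how the 10 false positives only dent precision, while the 20 false negatives only dent recall.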
How do you increase precision? Reduce false positives.

It can depend on the problem, but generally, that might mean fixing the labels of negative samples that are being predicted as positive, or adding more negative samples to the training data.
How do you increase recall? Reduce false negatives.

Fix the labels of positive samples that are being classified as negative when they are not, or add more positive samples to the training data.
What happens when you increase precision? You will likely hurt recall.

There is a tradeoff between them. Increasing one can reduce the other.
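A toy sketch (plain Python, made-up scores and labels) of one common way this tradeoff shows up: moving the decision threshold of a scoring classifier.

```python
# Toy scores from a hypothetical classifier, with the true labels
scores = [0.1, 0.3, 0.45, 0.55, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,    1,   1,   1]

def precision_recall(threshold):
    """Precision and recall when predicting positive for score >= threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))      # true positives
    fp = sum(p and not y for p, y in zip(preds, labels))  # false positives
    fn = sum(not p and y for p, y in zip(preds, labels))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall(0.4))  # (0.8, 1.0) -> high recall, lower precision
print(precision_recall(0.7))  # (1.0, 0.5) -> high precision, lower recall
```

Raising the threshold makes the classifier more conservative: fewer false positives (precision up) but more missed positives (recall down).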
What does it mean when the precision of your classifier is 1?

False positives are 0.

Your classifier is smart about not classifying negative samples as positives.
What about recall being 1?

False negatives are 0.

Your classifier is smart about not classifying positive samples as negatives.

What if precision and recall are both 1? You have a perfect classifier. This is the ideal!
What is a better way to assess the performance of the classifier without playing a balancing battle between precision and recall?

Combine them: take their harmonic mean. If either precision or recall is low, the resulting mean will be low too.

This harmonic mean is called the F1 score, and it is a reliable metric to use when dealing with imbalanced datasets.
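A tiny sketch (plain Python, illustrative numbers) of the harmonic mean, showing how a single low value drags the F1 score down:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall; dominated by the smaller value
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.9, 0.9))  # ≈ 0.9  -> both high, F1 high
print(f1_score(0.9, 0.1))  # ≈ 0.18 -> low recall drags F1 down
```

Compare with the plain average: (0.9 + 0.1) / 2 = 0.5 would hide the weak recall, while the harmonic mean exposes it.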
If your dataset is balanced (positive samples are roughly equal to negative samples in the training set), ordinary accuracy is enough.

Where to go from here?

Follow @Jeande_d. You won't regret it!

Thank you!


