How to think about precision and recall:

Precision: What is the percentage of positive predictions that are actually positive?

Recall: What is the percentage of actual positives that were predicted correctly?
The fewer false positives, the higher the precision, and vice versa.

The fewer false negatives, the higher the recall, and vice versa.
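To make the two definitions concrete, here is a minimal sketch in plain Python; the confusion-matrix counts are made-up values for illustration:

```python
# Made-up counts from a hypothetical confusion matrix.
tp = 80  # true positives: positives predicted as positive
fp = 20  # false positives: negatives predicted as positive
fn = 40  # false negatives: positives predicted as negative

# Precision: share of positive predictions that are actually positive.
precision = tp / (tp + fp)  # 80 / 100 = 0.80

# Recall: share of actual positives that were predicted as positive.
recall = tp / (tp + fn)     # 80 / 120 ≈ 0.67

print(f"precision={precision:.2f}, recall={recall:.2f}")
```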
How do you increase precision? Reduce false positives.

It can depend on the problem, but generally it means fixing the labels of negative samples that are being predicted as positive, or adding more such negative samples to the training data.
How do you increase recall? Reduce false negatives.

Fix the labels of positive samples that are being misclassified as negative, or add more positive samples to the training data.
What happens when I increase precision? I will hurt recall.

There is a tradeoff between them: improving one tends to reduce the other. For example, raising the decision threshold usually increases precision but lowers recall.
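One way to see the tradeoff is to sweep the decision threshold over the model's predicted scores. A quick sketch using scikit-learn's precision_recall_curve, with made-up labels and scores:

```python
from sklearn.metrics import precision_recall_curve

# Made-up ground-truth labels and predicted probabilities.
y_true = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.55, 0.3, 0.7, 0.6]

precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

# Higher thresholds tend to raise precision and lower recall.
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```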
What does it mean when the precision of your classifier is 1?

False positives are 0.

Your classifier is smart about not classifying negative samples as positives.
What about recall being 1?

False negatives are 0.

Your classifier is smart about not classifying positive samples as negatives.

What if the precision and recall are both 1? You have a perfect classifier. This is ideal!
Is there a better way to measure the classifier's performance without playing a balancing battle between precision and recall?
Combine them. Take their harmonic mean. If either precision or recall is low, the resulting mean will be low too.

This harmonic mean is called the F1 score, and it is a reliable metric to use when dealing with imbalanced datasets.
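Concretely, F1 = 2 × (precision × recall) / (precision + recall). A tiny sketch, reusing the hypothetical values from the first example (scikit-learn's f1_score computes the same thing directly from labels):

```python
precision, recall = 0.80, 0.67  # hypothetical values from the first sketch

# Harmonic mean: if either metric is low, the F1 score is dragged down with it.
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # ≈ 0.73
```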
If your dataset is balanced (positive and negative samples occur in roughly equal numbers in the training set), ordinary accuracy is usually enough.
Thanks for reading.

If you found the thread helpful, share it with your friends on Twitter. It is certainly the best way to support me.

Follow @Jeande_d for more machine learning content.

