Why is it so hard to train neural networks?

Neural networks are hard to train. The deeper they get, the more likely they are to suffer from unstable gradients.

A thread 🧵🧵
Gradients can either explode or vanish, and neither is good for training our network.
The vanishing gradient problem makes the network take very long to train (learning becomes very slow), while the exploding gradient problem makes the gradients so large that the weight updates swing wildly and training becomes unstable.
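To see the intuition, here is a rough NumPy sketch (an illustration only, not an exact backprop computation): the gradient reaching the early layers is roughly a product of one factor per layer, so many factors below 1 shrink it toward zero, and many factors above 1 blow it up.

```python
import numpy as np

# Rough illustration: during backpropagation the gradient signal is roughly
# a product of one factor per layer. Mostly-small factors -> vanishing;
# mostly-large factors -> exploding.
depth = 50
rng = np.random.default_rng(0)

small_factors = rng.uniform(0.1, 0.9, size=depth)  # e.g. saturated activations
large_factors = rng.uniform(1.1, 1.9, size=depth)  # e.g. poorly scaled weights

print("vanishing-like product:", np.prod(small_factors))  # effectively zero
print("exploding-like product:", np.prod(large_factors))  # huge
```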
Although those problems are nearly inevitable, the choice of activation function can reduce their effects.

Using ReLU activation in the first layers can help avoid vanishing gradients.
That is also why we do not like to see sigmoid activations in the first layers of the network: they saturate easily and can cause the gradients to vanish quickly.
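A quick way to see why: the sigmoid's derivative never exceeds 0.25, so each sigmoid layer can only shrink the backpropagated signal, while ReLU passes a gradient of exactly 1 for positive inputs. A small sketch of the comparison:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-10, 10, 1001)
sig_grad = sigmoid(x) * (1.0 - sigmoid(x))  # derivative of sigmoid
relu_grad = (x > 0).astype(float)           # derivative of ReLU (1 for x > 0)

print("max sigmoid gradient:", sig_grad.max())  # ~0.25
print("max ReLU gradient:", relu_grad.max())    # 1.0

# Chaining 20 sigmoid layers multiplies the signal by at most 0.25 per layer:
print("upper bound after 20 sigmoid layers:", 0.25 ** 20)  # ~9e-13
```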
Careful weight initialization can also help, but ReLU is by far the simpler fix.
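As a minimal, hypothetical sketch of what that combination might look like in Keras (ReLU hidden layers paired with He initialization, one common pairing):

```python
import tensorflow as tf

# Sketch only: ReLU activations plus He initialization, which scales the
# initial weights to keep gradient magnitudes roughly stable across layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", kernel_initializer="he_normal"),
    tf.keras.layers.Dense(64, activation="relu", kernel_initializer="he_normal"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # sigmoid only at the output
])
```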
This short thread only gives a high-level understanding of the issue. If you would like to learn more, you can read this Stats StackExchange discussion:

stats.stackexchange.com/questions/2627…
Or read chapter 5 of the book Neural Networks and Deep Learning by Michael Nielsen:

neuralnetworksanddeeplearning.com/chap5.html#the…
Thanks for reading.

If you found the thread helpful, you can retweet or share it with your friends.

Follow @Jeande_d for more machine learning content.

More from @Jeande_d

1 Nov
How to think about precision and recall:

Precision: What is the percentage of positive predictions that are actually positive?

Recall: What is the percentage of actual positives that were predicted correctly?
The fewer false positives, the higher the precision. Vice-versa.

The fewer false negatives, the higher the recall. Vice-versa.
How do you increase precision? Reduce false positives.

It can depend on the problem, but generally that might mean fixing the labels of the negative samples that are being predicted as positive, or adding more of them to the training data.
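For a concrete (made-up) example, precision and recall can be computed straight from predictions, e.g. with scikit-learn:

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical labels: 1 = positive, 0 = negative
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Precision: of the predictions that are positive, how many are actually positive?
print("precision:", precision_score(y_true, y_pred))  # 3/4 = 0.75

# Recall: of the actual positives, how many were predicted positive?
print("recall:", recall_score(y_true, y_pred))         # 3/4 = 0.75
```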
28 Oct
All in just one repository:

◆ Data visualization with Matplotlib & Seaborn
◆ Data preprocessing with Pandas
◆ Classical machine learning with Scikit-Learn: from linear models, trees, and ensemble models to PCA
◆ Neural networks with TensorFlow & Keras: ConvNets, RNNs, BERT, etc.
You can get all of the above here:

github.com/Nyandwi/machin…
View the notebooks easily here:

nbviewer.org/github/Nyandwi…
26 Oct
The following are 5 main types of machine learning systems based on the level of supervision involved in the training process:

◆ Supervised learning
◆ Unsupervised learning
◆ Semi-supervised learning
◆ Self-supervised learning
◆ Reinforcement learning

Let's talk about them... 🧵
1. Supervised learning

This is the most common type of machine learning. Most ML problems that we encounter fall into this category.

As the name implies, a supervised learning algorithm is trained with input data along with some form of guidance that we can call labels.
In other words, a supervised learning algorithm maps the input data (or X in many textbooks) to output labels (y).

Labels are also known as targets, and they act as a description of the input data.
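A minimal sketch of that X → y mapping, using synthetic data and scikit-learn (assumed here purely for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic supervised-learning data: X holds the inputs, y holds the labels.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# The algorithm learns a mapping from X to y, using the labels as guidance.
model = LogisticRegression().fit(X, y)
print(model.predict(X[:5]), y[:5])  # predicted labels vs. actual labels
```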
24 Oct
My summary of the week on Twitter ML:

◆ 3 threads explaining complex concepts
◆ 2 on practical learning resources, and
◆ 1 piece of good news

🧵🧵
EXPLAINED CONCEPTS/IDEAS

#1 @fchollet on the nature of generalization in deep learning, clearly explaining interpolation and the manifold hypothesis.

A long thread that is worth reading

#2 @svpino on what you didn't know about machine learning pipelines.

18 Oct
Kaggle's 2021 State of Data Science and Machine Learning survey was released a few days ago.

If you didn't see it, here are some important takeaways 🧵
Top 5 IDEs

1. Jupyter Notebook
2. Visual Studio Code
3. JupyterLab
4. PyCharm
5. RStudio
ML Algorithms Usage: Top 10

1. Linear/logistic regression
2. Decision trees/random forests
3. Gradient boosting machines (XGBoost, LightGBM)
5. Convnets
6. Bayesian approaches
7. Dense neural networks (MLPs)
8. Recurrent neural networks (RNNs)
9. Transformers (BERT, GPT-3)
10. GANs
17 Oct
Sources of errors in building traditional programs:

◆ Wrong syntax
◆ Inefficient code
Sources of errors in machine learning:

◆ Solving the wrong problem
◆ Using the wrong evaluation metric
◆ Not being aware of skewed data
◆ Inconsistent data preprocessing functions
More sources of errors in ML:

◆ Putting more emphasis on models than on data
◆ Data leakage (a common example is sketched below)
◆ Training on the test data
◆ Model and data drift
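For the leakage item above, one common pattern is fitting preprocessing on the full dataset before splitting. A minimal sketch of the safer alternative, assuming scikit-learn and made-up data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical data for illustration.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)

# Split first, then fit preprocessing on the training split only.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fitting the scaler on all of X before splitting would leak test-set
# statistics into training; fitting on X_train alone avoids that.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```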