A quick, non-technical explanation of Dropout.

(As easy as I could make it.)

πŸ§΅πŸ‘‡
Remember those two kids from school who sat together and copied from each other during exams?

They aced every test but were hardly brilliant, remember?

Eventually, the teacher had to seat them apart. That was the only way to force them to learn.

πŸ‘‡
The same happens with neural networks.

Sometimes, a few hidden units create associations that, over time, provide most of the predictive power, forcing the network to ignore the rest.

This is called co-adaptation, and it prevents networks from generalizing appropriately.

πŸ‘‡
We can solve this problem the same way teachers do: by breaking the associations that prevent the network from learning.

This is what Dropout is for.

During training, Dropout randomly removes some of the units. This forces the network to learn in a balanced way.
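
For example, here's a minimal sketch of how Dropout fits into a model. (The thread doesn't show code at this point, so the framework, layer sizes, and rates below are just assumptions using Keras.)

import tensorflow as tf

# A small, hypothetical classifier. Each Dropout layer randomly zeroes out
# a fraction of the previous layer's outputs, but only while training.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # drop 50% of these units on each batch
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),   # lighter rates are common too
    tf.keras.layers.Dense(10, activation="softmax"),
])

Keras turns Dropout off automatically at inference time, so predictions use every unit.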

πŸ‘‡
Units may or may not be present during a round of training.

Now every unit is on its own and can't rely on other units to do its work. Each one has to work harder by itself.

Dropout works very well, and it's one of the main mechanisms to reduce overfitting.

πŸ‘‡
Here is an example of how Dropout works.

In this case, we are dropping 50% of all the units.

Notice how the result shows the dropped units set to zero, while the remaining units are scaled up (to account for the missing ones).
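
The original screenshot isn't included in this unroll, but here is a rough equivalent sketch (assuming TensorFlow; the input values and seed are made up):

import tensorflow as tf

tf.random.set_seed(42)

# One example with 10 units, all set to 1.0, to make the effect easy to see.
inputs = tf.ones((1, 10))

# rate=0.5: during training, roughly half the units are zeroed out and the
# survivors are scaled by 1 / (1 - 0.5) = 2 to keep the expected sum the same.
dropout = tf.keras.layers.Dropout(rate=0.5)
outputs = dropout(inputs, training=True)

print(outputs.numpy())
# Something like: [[2. 0. 2. 2. 0. 0. 2. 0. 2. 2.]]

That 1 / (1 - rate) scaling is why the surviving units show up larger than the originals.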

πŸ‘‡
Finally, here is an excellent, more technical introduction to Dropout from @TeachTheMachine:

machinelearningmastery.com/dropout-for-re…

β€’ β€’ β€’

Missing some Tweet in this thread? You can try to force a refresh
γ€€

Keep Current with Santiago πŸŽƒ

Santiago πŸŽƒ Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

