Levi
Jun 16 · 7 tweets · 2 min read
A Feedforward Neural Network (FNN) is one of the simplest neural network architectures.

Let's see how it works.

1/7
Complex Neural Networks may contain loops and feedback connections.

These allow information to flow back into the network, enabling it to retain and utilize information from earlier steps.

But an FNN works differently 🔽

2/7
Feedforward Neural Network (FNN) is way simpler.

In this network, information moves only forward from the input nodes.

This NN does not contain any loops or feedback connections.

The output of one layer serves as the input for the next layer.

3/7
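The "output of one layer serves as the input for the next" idea can be sketched in a few lines of NumPy. This is a minimal illustration, not a training-ready model: the layer sizes, weights, and ReLU activation are all made-up choices for the example.

```python
import numpy as np

def relu(x):
    # A common activation function: max(0, x) element-wise
    return np.maximum(0, x)

# Hypothetical sizes for illustration: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

x = np.array([1.0, 2.0, 3.0])   # one input sample
h = relu(x @ W1 + b1)           # layer 1: information moves forward only...
y = h @ W2 + b2                 # ...and layer 1's output is layer 2's input
print(y.shape)
```

Notice there is no path from `y` or `h` back into an earlier layer — that is exactly the "no loops, no feedback" property.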
What are the advantages?

- FNNs are easy to understand since their structure is straightforward.

- Scalability: the number of layers can be increased or decreased.

- The simple structure makes them easier to interpret.

- They are widely used, so examples and tooling are easy to find.

4/7
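The scalability point above can be made concrete: if each layer is just a weight/bias pair, depth becomes the length of a list. A hypothetical sketch (all sizes and names invented for illustration):

```python
import numpy as np

def forward(x, layers):
    # Information flows strictly forward: each layer feeds the next.
    for W, b in layers:
        x = np.maximum(0, x @ W + b)  # linear step + ReLU activation
    return x

rng = np.random.default_rng(1)
sizes = [3, 8, 8, 2]  # add or remove entries to scale the network up or down
layers = [(rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

out = forward(np.ones(3), layers)
print(out.shape)
```

Changing `sizes` to `[3, 8, 8, 8, 2]` adds a layer without touching any other code — that is the scalability advantage in practice.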
What are the disadvantages?

- They struggle to capture relationships between inputs, since every input is handled independently.

- Since they do not contain loops and feedback, they are not the best for modeling sequences.

5/7
That's it for today.

I hope you've found this thread helpful.

Like/Retweet the first tweet below for support and follow @levikul09 for more Data Science threads.

Thanks 😉

6/7
You should also join our newsletter, DSBoost.

We share:

• Interviews

• Podcast notes

• Learning resources

• Interesting collections of content

dsboost.substack.com

7/7
