Feature Scaling is one of the most important transformations you need to apply to your data before feeding it to your machine learning model.
🧵thread 👇
- Feature scaling refers to the concept of normalizing the range of input features so that they have similar scales.
- Most machine learning algorithms do not work well when the input numerical attributes have very different scales.
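To get a feel for why this matters, here is a toy illustration (the numbers and feature names are made up): with unscaled features, any distance-based algorithm is dominated by whichever feature has the largest range.

```python
# Toy illustration (made-up values): without scaling, Euclidean distance
# is driven almost entirely by the feature with the largest range.
import numpy as np

a = np.array([30_000.0, 25.0])  # [salary, age] for person A
b = np.array([32_000.0, 60.0])  # [salary, age] for person B

# The 35-year age gap contributes almost nothing next to the salary gap.
print(np.linalg.norm(a - b))    # ~2000.3
```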
1⃣Min-Max Scaling (Normalization)
- In this method, values are shifted and rescaled so that they end up ranging from 0 to 1 (a different target range can be chosen if needed).
- Min-max scaling is very sensitive to outliers: a single extreme value squeezes all the other values into a narrow band.
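A minimal sketch using scikit-learn's MinMaxScaler (my own toy data, purely illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[30_000.0, 25.0],
              [48_000.0, 40.0],
              [120_000.0, 33.0]])   # [salary, age] on very different scales

scaler = MinMaxScaler()             # default feature_range=(0, 1)
X_scaled = scaler.fit_transform(X)  # each column now lies in [0, 1]
print(X_scaled)
```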
2⃣Standardization (Z-Score Normalization)
- Centers and rescales the data so that each feature has a mean of zero and a variance of one.
- Standardization is much less affected by outliers than min-max scaling, but it does not remove their influence on the model.
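A quick sketch with scikit-learn's StandardScaler (same toy data as above, illustrative only):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[30_000.0, 25.0],
              [48_000.0, 40.0],
              [120_000.0, 33.0]])

scaler = StandardScaler()
X_std = scaler.fit_transform(X)
print(X_std.mean(axis=0))  # ~[0, 0]  each feature now has zero mean
print(X_std.std(axis=0))   # ~[1, 1]  and unit variance
```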
3⃣Quantile Transformation
- Transforms features using their quantile information so that they follow a normal distribution.
- It uses non-linear transformations to spread out the most frequent values and reduce the impact of outliers.
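A sketch using scikit-learn's QuantileTransformer to map a skewed feature to a roughly normal one (synthetic data; the n_quantiles value is an arbitrary choice for this example):

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(0)
X = rng.exponential(scale=2.0, size=(1_000, 1))   # long-tailed, skewed feature

qt = QuantileTransformer(output_distribution="normal",
                         n_quantiles=100, random_state=0)
X_gauss = qt.fit_transform(X)                     # roughly standard normal now
print(X_gauss.mean(), X_gauss.std())
```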
For more content about machine learning and computer science, please follow me @ammar_yasser92.

More from @ammaryh92

21 Jun
Data Science learning roadmap
If you are looking to get into machine learning, I would like to make your life easier by listing some of my favorite resources in this thread 🧵👇
#MachineLearning
1⃣Python is the most common language for data science (even though there are other great languages like R and C++).
This playlist by @JustinMitchel is a great way to start.
youtube.com/playlist?list=…
2⃣SQL
This free @udacity course is amazing.
udacity.com/course/sql-for…
21 Jun
Bias/Variance analysis is one of the most important concepts in machine learning as it allows you to measure and improve the performance of your model.
🧵In this thread, I will discuss the basic idea behind it, so let's dive right in.
#MachineLearning #ArtificialIntelligence
1⃣Bias (Underfitting)
What is bias?
In simple terms, bias is the difference between the model predictions and the actual values.
So if a model has a high bias, it means it has underfitted the training data. In other words, the model has a high training set error.
What causes a high bias?
Bias, or underfitting, mainly occurs when the model is too simple to understand the underlying structure of the training data, and so it misses the relevant relations between the input features and target outputs.
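A rough sketch of what high bias looks like in practice (synthetic data, my own made-up setup): a straight-line model fit to clearly non-linear data keeps a large training error no matter how well it is tuned.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)  # non-linear relationship

model = LinearRegression().fit(X, y)                # too simple to capture sin(x)
train_mse = mean_squared_error(y, model.predict(X))
print(train_mse)  # far above the noise level -> high training error -> underfitting
```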