World Traveler, Sr. SDE, Researcher, Cornell University, ACM, Competitive Programmer, Google's WTM, Goldman Sachs 10K Women, Coursera Instructor, IITB, Grace Hopper, 53 countries
Dec 16, 2023 • 23 tweets • 5 min read
✅Attention Mechanism in Transformers - Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ The attention mechanism calculates attention scores between all pairs of tokens in a sequence. These scores are then used to compute weighted representations of each token based on its relationship with the other tokens in the sequence.
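To make this concrete, here is a minimal NumPy sketch of scaled dot-product self-attention; the toy shapes and random inputs are illustrative assumptions, not the thread's own code.

```python
import numpy as np

def self_attention(Q, K, V):
    """Scaled dot-product attention: score all token pairs,
    then return a weighted mix of value vectors per token."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted representations

# toy sequence: 4 tokens, each an 8-dimensional embedding
x = np.random.randn(4, 8)
print(self_attention(x, x, x).shape)                # (4, 8)
```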
Nov 13, 2023 • 18 tweets • 6 min read
✅Regularization is a technique used in ML to prevent overfitting and improve the generalization of a model - Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Regularization is a technique in machine learning used to prevent overfitting by adding a penalty term to the model's loss function. The penalty discourages overly complex models and promotes simpler ones, improving generalization to new, unseen data.
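A minimal sketch of the effect using scikit-learn's Ridge (L2 penalty); the synthetic data is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))               # few samples, many features
y = X[:, 0] + 0.1 * rng.normal(size=50)     # only feature 0 truly matters

plain = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)         # alpha scales the L2 penalty

print(np.abs(plain.coef_).sum())            # larger: weight spread over noise
print(np.abs(ridge.coef_).sum())            # smaller: coefficients shrunk
```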
Nov 9, 2023 • 25 tweets • 8 min read
✅XGBoost is a powerful and efficient gradient boosting library designed for ML tasks, specifically for supervised learning problems - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ XGBoost is an ensemble learning method that combines multiple decision trees into a strong predictive model. It builds decision trees sequentially, where each tree corrects the errors of the previous ones. XGBoost optimizes a differentiable loss function to minimize prediction errors.
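A minimal usage sketch on synthetic data; the dataset and hyperparameter values are illustrative assumptions.

```python
# assumes: pip install xgboost scikit-learn
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# trees are added sequentially; each corrects the errors of the ensemble so far
model = XGBClassifier(n_estimators=200, learning_rate=0.1, max_depth=4)
model.fit(X_tr, y_tr)
print(accuracy_score(y_te, model.predict(X_te)))
```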
Nov 9, 2023 • 25 tweets • 6 min read
✅Gradient Boosting is a powerful machine learning technique used for both regression and classification tasks - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Gradient Boosting is an ensemble learning method that combines the predictions of multiple weak learners (often decision trees) to create a stronger and more accurate predictive model.
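A minimal sketch with scikit-learn's implementation; the synthetic regression data is an illustrative assumption.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# each shallow tree (weak learner) fits the residual errors left by the ensemble
gbr = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=3)
gbr.fit(X_tr, y_tr)
print(gbr.score(X_te, y_te))                # R^2 on held-out data
```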
Nov 6, 2023 • 25 tweets • 7 min read
✅Cross-validation in ML is particularly useful for estimating how well a model will perform on unseen data - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Cross-validation involves splitting the dataset into multiple subsets and using different parts of the data for training and testing at each iteration. The primary goal of cross-validation is to obtain a more robust and unbiased estimate of a model's performance.
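A minimal 5-fold sketch in scikit-learn; the iris dataset and logistic-regression model are illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# each of the 5 folds takes one turn as the test set; the rest trains the model
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores, scores.mean())                # 5 estimates and their average
```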
Oct 22, 2023 • 39 tweets • 11 min read
✅Feature selection and Feature scaling are crucial Feature Engineering steps - Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Feature selection is the process of choosing a subset of the most relevant features (variables or columns) from your dataset. It involves excluding less informative or redundant features to improve model performance and reduce computational complexity.
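A minimal sketch of both steps in scikit-learn; the dataset and k=10 cutoff are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)          # 569 samples, 30 features

# selection: keep the 10 features most associated with the target (ANOVA F-test)
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)

# scaling: transform each kept feature to zero mean and unit variance
X_scaled = StandardScaler().fit_transform(X_sel)
print(X.shape, X_scaled.shape)                      # (569, 30) (569, 10)
```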
Oct 21, 2023 • 25 tweets • 8 min read
✅Feature Engineering is a critical aspect of ML that involves creating, selecting, and transforming features to improve model performance - Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Feature engineering is the process of creating new features or modifying existing ones to improve the performance of machine learning models. It involves selecting, transforming, and creating features from the raw data to make it more suitable for model training.
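A small pandas sketch of deriving new features from raw columns; the data frame and column names are hypothetical.

```python
import numpy as np
import pandas as pd

# hypothetical raw data: a timestamp column and a skewed numeric column
df = pd.DataFrame({
    "signup_time": pd.to_datetime(["2023-01-05 09:12", "2023-06-20 23:45"]),
    "income": [35_000, 250_000],
})

df["signup_hour"] = df["signup_time"].dt.hour                  # derived feature
df["is_night"] = (df["signup_hour"] >= 22) | (df["signup_hour"] < 6)
df["log_income"] = np.log1p(df["income"])                      # tame the skew
print(df)
```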
Oct 15, 2023 • 45 tweets • 10 min read
✅ Important Machine Learning Algorithms, with implementations and workings, in one post - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Linear regression - A simple model that establishes a linear relationship between the input features and the target variable. The basic idea behind linear regression is to find a line or a hyperplane (in the case of multiple linear regression) that best fits the data.
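A minimal NumPy sketch of fitting that best-fit line by least squares; the toy data (slope 3, intercept 2) is an illustrative assumption.

```python
import numpy as np

# toy data: y is roughly a line (slope 3, intercept 2) plus noise
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(size=100)

# add a bias column and solve the least-squares problem for the best-fit line
Xb = np.c_[np.ones(len(X)), X]
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(w)                                    # ~[2.0, 3.0]: intercept, slope
```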
Oct 12, 2023 • 24 tweets • 7 min read
✅Optimizers are a critical component of training machine learning models - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ An optimizer is a mathematical algorithm that is used to adjust the parameters of a machine learning model during training to minimize a specific objective function, typically the loss function.
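The simplest optimizer is plain gradient descent; here is a minimal sketch on a one-parameter toy objective (the function and learning rate are illustrative assumptions).

```python
# plain gradient descent minimizing f(w) = (w - 3)^2
def grad(w):
    return 2 * (w - 3)          # derivative of the objective

w, lr = 0.0, 0.1                # initial parameter, learning rate
for _ in range(100):
    w -= lr * grad(w)           # step against the gradient
print(w)                        # converges to ~3.0, the minimizer
```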
Oct 11, 2023 • 27 tweets • 8 min read
✅Kernel PCA is an extension of traditional Principal Component Analysis (PCA) that allows for nonlinear dimensionality reduction - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ While PCA is effective for linear data, it may not capture complex, nonlinear relationships in the data. Kernel PCA addresses this limitation by mapping the data into a higher-dimensional feature space using a kernel function, where it can capture nonlinear patterns.
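A minimal sketch contrasting the two on data where linear PCA fails; the concentric-circles dataset and gamma value are illustrative assumptions.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# concentric circles: no straight line (linear projection) separates them
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear = PCA(n_components=2).fit_transform(X)
rbf = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)
# after the RBF kernel map, the leading component alone separates the two rings
```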
Oct 9, 2023 • 25 tweets • 8 min read
✅Singular Value Decomposition (SVD) provides valuable insights into the structure of data and matrices - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Singular Value Decomposition (SVD) is a linear algebra technique used in ML for data analysis, dimensionality reduction, and matrix factorization. SVD breaks down a matrix into three simpler matrices, providing insights into the structure of the original data.
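A minimal NumPy sketch of the factorization and a low-rank approximation; the random 6x4 matrix is an illustrative assumption.

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(6, 4))
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# A factors into three simpler matrices: U (6x4), diag(S) (4x4), Vt (4x4)
print(np.allclose(A, U @ np.diag(S) @ Vt))     # True: exact reconstruction

# low-rank approximation: keep only the two largest singular values
A2 = U[:, :2] @ np.diag(S[:2]) @ Vt[:2]
```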
Oct 8, 2023 • 25 tweets • 8 min read
✅ Principal Component Analysis (PCA) is an important technique in ML - Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ PCA is a linear transformation method that aims to reduce the dimensionality of a dataset while preserving as much of the original variance as possible. It does this by identifying a new set of uncorrelated variables, called principal components, that are linear combinations of the original features.
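A minimal scikit-learn sketch; the iris dataset and 2-component choice are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)            # 150 samples, 4 features

pca = PCA(n_components=2)
X2 = pca.fit_transform(X)                    # project onto 2 principal components
print(X2.shape)                              # (150, 2)
print(pca.explained_variance_ratio_)         # variance each component preserves
```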
Sep 28, 2023 • 63 tweets • 16 min read
✅ Training and Evaluation in ML - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Training is the process of teaching a machine learning model to make predictions or decisions by learning patterns and relationships from a labeled dataset. During training, the model adjusts its internal parameters based on the input data and associated ground truth labels.
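A tiny fit/predict sketch of that loop; the XOR toy data and decision-tree model are illustrative assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]      # labeled inputs
y = [0, 1, 1, 0]                          # ground-truth labels (XOR)

clf = DecisionTreeClassifier().fit(X, y)  # training: adjust internal parameters
print(clf.predict([[1, 0]]))              # prediction on an input: [1]
```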
Sep 27, 2023 • 25 tweets • 7 min read
✅ Transfer learning and Fine-tuning are powerful techniques enabling the reuse of pre-trained models for new tasks - Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Transfer learning is a technique where a pre-trained model, which has learned to perform a specific task on a large dataset, is adapted for a different but related task.
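A minimal PyTorch sketch of the adaptation step; the ResNet-18 backbone and the hypothetical 5-class target task are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models

# load a model pre-trained on ImageNet (torchvision >= 0.13 weights API)
model = models.resnet18(weights="IMAGENET1K_V1")

# freeze the pre-trained feature extractor
for p in model.parameters():
    p.requires_grad = False

# replace the head for a hypothetical new 5-class task; only it gets trained
model.fc = nn.Linear(model.fc.in_features, 5)
```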
Sep 25, 2023 • 25 tweets • 8 min read
✅Gradients and Initialization are fundamental for successfully optimizing models - Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Gradients refer to the partial derivatives of the loss function with respect to the model's parameters. They indicate how the loss would change if each parameter were adjusted slightly.
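A minimal PyTorch autograd sketch of that partial derivative; the one-parameter loss and values are illustrative assumptions.

```python
import torch

# loss = (w * x - y)^2 for a single parameter w
w = torch.tensor(2.0, requires_grad=True)
x, y = torch.tensor(3.0), torch.tensor(9.0)

loss = (w * x - y) ** 2
loss.backward()                 # autograd computes d(loss)/dw

print(w.grad)                   # -18.0: how the loss changes if w moves slightly
```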
Sep 24, 2023 • 27 tweets • 7 min read
✅How to Choose the Right Model in ML - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Choosing the right model in ML involves selecting an appropriate algorithm that is best suited for a given problem or task. It is a crucial step in the ML workflow, and it requires careful consideration of various factors to ensure optimal model performance.
Sep 23, 2023 • 41 tweets • 10 min read
✅Understanding the Bias-Variance trade-off is crucial for model selection, hyperparameter tuning, and preventing underfitting & overfitting - Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Bias (Underfitting): Bias represents the error introduced by overly simplistic assumptions in the learning algorithm. A model with high bias pays little attention to the training data and tends to underfit, meaning it cannot capture the underlying patterns in the data.
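A minimal sketch of the trade-off by sweeping model complexity; the sine-wave data and polynomial degrees are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=60)

for degree in (1, 4, 15):        # underfit, balanced, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    print(degree, cross_val_score(model, X, y, cv=5).mean().round(3))
```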
Sep 20, 2023 • 25 tweets • 8 min read
✅Hyperparameter tuning is a critical step in machine learning to optimize model performance - Explained in simple terms.
A quick thread 🧵👇🏻
#MachineLearning #DataScientist #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Hyperparameter tuning is like finding the best settings for a special machine that does tasks like coloring pictures or making cookies. You try different combinations of settings to make the machine work its best, just like adjusting ingredients for the tastiest cookies.
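In code, "trying combinations of settings" is exactly what a grid search does; this sketch uses an illustrative dataset, model, and parameter grid.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# try every combination of "settings" and keep the best, judged by 5-fold CV
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```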
Sep 19, 2023 • 34 tweets • 9 min read
✅Measuring performance in ML is essential to assess the quality and effectiveness of your models - Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #DataScientist #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Performance measurement is the process of quantitatively evaluating how well a trained ML model performs on a given task or dataset. It involves using specific metrics and techniques to assess the model's ability to make accurate predictions or decisions.
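A minimal sketch of common classification metrics; the label and prediction vectors are hypothetical.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]       # hypothetical labels and predictions
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))    # 0.75
print("precision:", precision_score(y_true, y_pred))   # 0.75
print("recall   :", recall_score(y_true, y_pred))      # 0.75
print("f1       :", f1_score(y_true, y_pred))          # 0.75
```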
Sep 14, 2023 • 26 tweets • 6 min read
✅Dimensionality Reduction is a crucial technique in Machine Learning- Explained in simple terms.
A quick thread 👇🏻🧵
#MachineLearning #DataScientist #Coding #100DaysofCode #deeplearning #DataScience
PC: ResearchGate
1/ Dimensionality reduction is like a smart machine that simplifies your big box of colorful building blocks by keeping the important ones and removing the less important ones. This makes it easier to create the picture without losing its essence.
Aug 22, 2023 • 25 tweets • 7 min read
✅Optimizers and Regularizers, Batch normalization, Dropout - Explained in simple terms with implementation details (code & techniques)
A quick thread 👇🏻🧵
#MachineLearning #DataScientist #Coding #100DaysofCode #hubofml #deeplearning #DataScience
PC: ResearchGate
1/ Optimizers: Imagine teaching a computer how to learn from examples. Optimizers are like smart guides that help the computer figure out how to adjust its "thinking knobs" to get better at solving problems. They help the computer learn in small steps.
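Since this thread also covers batch normalization and dropout, here is a minimal PyTorch sketch of wiring both into a network; the layer sizes are illustrative assumptions.

```python
import torch.nn as nn

# a small network using batch normalization and dropout
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),    # normalize activations across the batch
    nn.ReLU(),
    nn.Dropout(p=0.5),     # randomly zero half the units while training
    nn.Linear(64, 2),
)

model.train()   # dropout active, batch-norm uses batch statistics
model.eval()    # both switch to inference behavior
```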