⚫️ #BackPropagation is an algorithm to train neural networks. It is the method of fine-tuning the weights of a neural network based on the error rate obtained in the previous epoch (i.e., iteration)
A Complete 🧵
✅ Backpropagation is an algorithm for supervised learning of artificial neural networks using #gradientdescent
Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights using the chain rule
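The chain-rule step above can be sketched for a single sigmoid neuron with a squared-error loss. All names (w, b, lr, the training example) are made up for illustration; a real network repeats the same backward pass layer by layer.

```python
# Minimal backpropagation sketch: one sigmoid neuron, squared-error loss.
# Hypothetical values throughout; real networks have many layers of this.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.5, 1.0       # a single training example
w, b, lr = 0.2, 0.0, 0.5   # weight, bias, learning rate

for epoch in range(100):
    # Forward pass
    z = w * x + b
    y = sigmoid(z)
    error = 0.5 * (y - target) ** 2

    # Backward pass: chain rule  dE/dw = dE/dy * dy/dz * dz/dw
    dE_dy = y - target
    dy_dz = y * (1.0 - y)          # derivative of the sigmoid
    dE_dw = dE_dy * dy_dz * x
    dE_db = dE_dy * dy_dz

    # Gradient-descent update using the error from this epoch
    w -= lr * dE_dw
    b -= lr * dE_db

print(error)  # shrinks as the epochs go by
```

Each epoch uses the error from the forward pass to adjust the weights, which is exactly the "fine-tuning based on error rate obtained in the previous epoch" described above.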
⚫️ #Padding is simply a process of adding layers of zeros to our input images
⚫️ #Stride describes the step size of the kernel when you slide a filter over an input image
A Complete 🧵
⚫️ Padding is simply a process of adding layers of zeros to our input images.
The purpose of padding is to preserve the original size of an image when applying a #convolutional filter, and to enable the filter to perform full convolutions on the edge pixels
⚫️ So to prevent this shrinking -
We will be using padding of size 2 (i.e., original image (5) → feature map (3)).
It is also known as zero padding because we are padding it with 0s
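The padding and stride ideas above boil down to the standard output-size formula out = (n + 2p − k) // s + 1. Here is a small sketch (function names are illustrative) showing the formula and zero padding in plain Python:

```python
# Sketch of the convolution output-size formula and zero padding.
# n = input size, k = kernel size, p = padding per side, s = stride.

def out_size(n, k, p=0, s=1):
    return (n + 2 * p - k) // s + 1

# A 5x5 image convolved with a 3x3 kernel, no padding -> 3x3 feature map
print(out_size(5, 3, p=0, s=1))   # 3
# With one layer of zero padding per side, the 5x5 size is preserved
print(out_size(5, 3, p=1, s=1))   # 5

def zero_pad(img, p):
    """Surround a 2D list with p layers of zeros."""
    n = len(img[0])
    padded = [[0] * (n + 2 * p) for _ in range(p)]
    for row in img:
        padded.append([0] * p + row + [0] * p)
    padded += [[0] * (n + 2 * p) for _ in range(p)]
    return padded

img = [[1] * 5 for _ in range(5)]
print(len(zero_pad(img, 1)))      # 7: a 3x3 conv over 7x7 yields 5x5
```

A larger stride shrinks the feature map further: out_size(5, 3, p=1, s=2) gives 3.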
⚫️ Topic - Machine Learning (ML) vs Deep Learning (DL)
🔹 Deep learning is a sub-category of #Machinelearning focused on structuring a learning process for computers in which they can recognize patterns and make decisions, much like humans do
A Complete 🧵
DL is essentially a type of sophisticated, multi-layered filter:
You input raw, unorganized data at the top, and it traverses the various layers of the neural network, getting refined and analyzed at each level. Eventually, what emerges at the bottom is a coherent, structured piece of information
Input layer - This input can be the pixels of an image or a range of time-series data
Hidden layers - Contain the weights, which are learned while the neural network is trained
Output layer - The final layer gives you a prediction for the input you fed into your network
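The three layers above can be sketched as a tiny forward pass in plain Python. The sizes and random weights are purely illustrative; in practice the weights would be learned during training:

```python
# Sketch of a forward pass: input layer -> hidden layer -> output layer.
# Layer sizes and weights are made-up for illustration only.
import random

random.seed(0)

def dense(inputs, weights, biases):
    """One fully connected layer: out[j] = sum_i inputs[i] * weights[i][j] + biases[j]."""
    return [sum(x * w_row[j] for x, w_row in zip(inputs, weights)) + biases[j]
            for j in range(len(biases))]

def relu(v):
    return [max(0.0, x) for x in v]

# Input layer: e.g. four pixel intensities
x = [0.1, 0.8, 0.3, 0.5]

# Hidden layer (4 -> 3): these weights would be learned during training
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 3

# Output layer (3 -> 1): the prediction
W2 = [[random.uniform(-1, 1)] for _ in range(3)]
b2 = [0.0]

hidden = relu(dense(x, W1, b1))
prediction = dense(hidden, W2, b2)[0]
print(prediction)
```

Each layer refines the representation of the input, which is the "multi-layered filter" picture described above.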
⚫️ Topic - XGBoost Algorithm in Machine Learning 💰
🔹 #XGBoost's efficient handling of missing values is one of its core advantages, allowing it to work with real-world #data without considerable pre-processing
A Complete Thread 🧵
- Optimization & Improvement
The process by which an ML #algorithm is tuned to improve its performance. This includes adjusting parameters such as the learning rate, tree depth, and regularization strength to achieve the best model for a given data set
✅ XGBoost for Regression
⚫️ Most commonly used #hyperparameters:
n_estimators
max_depth
eta
subsample
colsample_bytree
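To see what n_estimators and eta actually control, here is a hedged pure-Python sketch of boosted regression using depth-1 decision stumps in place of XGBoost's trees. XGBoost itself adds regularization, subsampling, column sampling, and many optimizations; the data and function names here are made up for illustration:

```python
# Illustrative boosting loop: n_estimators rounds, each fitting a stump
# to the current residuals, scaled by the learning rate eta.
# This is NOT XGBoost, only a sketch of the roles of those two knobs.

def fit_stump(X, residuals):
    """Best single-split stump minimizing squared error on 1-D inputs."""
    best = None
    for split in sorted(set(X)):
        left = [r for x, r in zip(X, residuals) if x <= split]
        right = [r for x, r in zip(X, residuals) if x > split]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, split, lv, rv)
    _, split, lv, rv = best
    return lambda x: lv if x <= split else rv

def boost(X, y, n_estimators=50, eta=0.3):
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(n_estimators):
        residuals = [t - p for t, p in zip(y, pred)]   # what is still unexplained
        stump = fit_stump(X, residuals)
        stumps.append(stump)
        pred = [p + eta * stump(x) for p, x in zip(pred, X)]
    return lambda x: sum(eta * s(x) for s in stumps)

X = [1.0, 2.0, 3.0, 4.0, 5.0]      # toy data, invented for the example
y = [1.2, 1.9, 3.1, 3.9, 5.2]
model = boost(X, y, n_estimators=50, eta=0.3)
mse = sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)
print(mse)  # small: residuals shrink each boosting round
```

In real use you would reach for the library itself, e.g. `xgboost.XGBRegressor(n_estimators=..., max_depth=..., learning_rate=..., subsample=..., colsample_bytree=...)`, where max_depth, subsample, and colsample_bytree control tree size and row/column sampling per tree.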