Sachin Kumar
May 12 · 12 tweets · 4 min read
Day 59 of #100DayswithMachinelearning

Topic - Mini-Batch Gradient Descent

A Thread 🧵
Mini-batch gradient descent is a variation of the gradient descent optimization algorithm used in ML & DL.

It is designed to address the limitations of two other variants: batch gradient descent (BGD) and stochastic gradient descent (SGD).
In BGD, the entire training dataset is used to compute the gradient of the cost function at each iteration.

This approach gives an exact gradient and, for convex cost functions, converges to the global minimum, but it can be computationally expensive, especially for large datasets.
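To make the BGD step concrete, here is a minimal sketch (not from the thread; X, y, w and the learning rate lr are illustrative placeholders) of one full-batch update for linear regression with a mean-squared-error loss:

```python
import numpy as np

def batch_gradient_step(w, X, y, lr=0.01):
    """One BGD update: the gradient is averaged over the ENTIRE dataset."""
    residual = X @ w - y              # prediction error for all N examples
    grad = X.T @ residual / len(y)    # average gradient of the MSE loss
    return w - lr * grad              # step against the gradient
```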
On the other hand, SGD randomly selects a single training example at each iteration and computes the gradient based on that example alone.

SGD is computationally efficient but can exhibit high variance in the gradient estimate, which can lead to slow convergence and noisy updates.
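For contrast, a minimal SGD sketch under the same assumptions (illustrative names, squared-error loss): each update uses a single randomly chosen example, so it is cheap per step but noisy:

```python
import numpy as np

def sgd_epoch(w, X, y, lr=0.01):
    """One SGD pass: a separate parameter update for each shuffled example."""
    for i in np.random.permutation(len(y)):
        xi, yi = X[i], y[i]
        grad = (xi @ w - yi) * xi   # gradient of the squared error on ONE example
        w = w - lr * grad
    return w
```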
Mini-batch gradient descent combines the best of both worlds by using a small subset, or mini-batch, of the training data at each iteration.

Instead of using the entire dataset (as in BGD) or just a single example (as in SGD), MBGD computes the gradient from a mini-batch of training examples.
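Putting the two together, a minimal mini-batch loop could look like the sketch below; batch_size, epochs and lr are illustrative hyperparameters, not values from the thread:

```python
import numpy as np

def minibatch_gd(w, X, y, lr=0.01, batch_size=32, epochs=10):
    """MBGD: shuffle, slice into mini-batches, average the gradient per batch."""
    n = len(y)
    for _ in range(epochs):
        idx = np.random.permutation(n)             # fresh shuffle every epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]  # indices of the current mini-batch
            Xb, yb = X[batch], y[batch]
            grad = Xb.T @ (Xb @ w - yb) / len(batch)
            w = w - lr * grad                      # one update per mini-batch
    return w
```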
The mini-batch size is typically chosen as a compromise between computational efficiency and variance reduction.

Common mini-batch sizes fall in the range of 10 to 1,000, depending on the size of the dataset and the available computational resources.
The main advantages of mini-batch gradient descent are:

- Efficiency: By using mini-batches, it allows for parallelization of computations, which can significantly speed up training, especially on hardware accelerators like GPUs.
- Variance reduction: Compared to stochastic gradient descent, mini-batch gradient descent provides a more stable and less noisy estimate of the gradient, resulting in smoother updates and faster convergence.
- Generalization: Mini-batch gradient descent strikes a balance between the slow, full-dataset updates of batch gradient descent and the noisy updates of stochastic gradient descent, often leading to better generalization performance.
However, MBGD also introduces a new hyperparameter: the mini-batch size.

Selecting an appropriate mini-batch size is a trade-off between computational efficiency & convergence speed:

a larger mini-batch reduces the noise in the gradient estimate but also increases the computational cost of each update.
Mini-batch gradient descent is widely used as the optimization algorithm of choice for training deep neural networks & other large-scale ML models, offering a good balance between computational efficiency & convergence properties.

@CodingNinjasOff Blog Link -
codingninjas.com/codestudio/lib…
🔹If this thread was helpful to you

1. Follow me @Sachintukumar
for daily content

2. Connect with me on Linkedin: linkedin.com/in/sachintukum…

3. RT the tweet below to share it with your friends

More from @Sachintukumar

May 13
🔸CONCAT_WS() in SQL { Very Helpful }

A Thread 🧵
The CONCAT_WS() function in SQL is used to concatenate multiple strings into a single string, with a specified separator between each string.

"WS" stands for "with separator." The function is commonly used to build strings that contain multiple values, such as a comma-separated list.
The syntax for CONCAT_WS() is as follows:

🔸CONCAT_WS(separator, string1, string2, ..., stringN)
Read 6 tweets
May 11
Day 58 of #100DayswithMachineLearning

Topic - Stochastic Gradient Descent ( SGD )

A Thread 🧵
SGD is an optimization algorithm often used in machine learning applications to find the model parameters that correspond to the best fit between predicted and actual outputs. It’s an inexact but powerful technique.
A saddle point, or minimax point, is a point on the surface of the graph of a function where the slopes (derivatives) in orthogonal directions are all zero (a critical point), but which is not a local extremum of the function.

A saddle point on the graph of z = x² − y² (a hyperbolic paraboloid).
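A quick check, using standard calculus (not spelled out in the preview), of why the origin of that surface is a saddle rather than an extremum:

For z = x² − y²: ∂z/∂x = 2x and ∂z/∂y = −2y, both zero at (0, 0), so the origin is a critical point.
But z(t, 0) = t² > 0 and z(0, t) = −t² < 0 for any t ≠ 0: the surface curves up along x and down along y, so the origin is neither a local minimum nor a local maximum.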
Read 10 tweets
May 10
Day 57 of #100dayswithMachinelearning

Topic - Batch Gradient Descent (BGD)

A Thread 🧵
Batch gradient descent (BGD) is an optimization algorithm commonly used in ML & optimization problems to minimize a cost function or maximize an objective function.

It is a type of gradient descent that updates the model parameters using the average gradient over the entire training dataset at each iteration.
Here's how the BGD algorithm works:

1) Initialize the model parameters: Start by initializing the model parameters, such as weights and biases, with random values.
Read 14 tweets
Apr 30
Day 47 of #100dayswithmachinelearning

Topic - Principal Component Analysis (PCA) Part 1
In statistics, PCA is the technique of analyzing all the dimensions of a dataset & reducing them as far as possible while preserving as much of the original information (variance) as possible.

It lets you explore and visualize multi-dimensional data (in 2D or 3D) using the principal component method of factor analysis.
Step-by-step explanation of Principal Component Analysis (see the sketch after this list):

1. STANDARDIZATION
2. COVARIANCE MATRIX COMPUTATION
3. FEATURE VECTOR (the eigenvectors of the covariance matrix)
4. RECAST THE DATA ALONG THE PRINCIPAL COMPONENT AXES
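A compact NumPy sketch of those four steps (illustrative only; the function name, variable names and the two-component default are assumptions, not from the thread):

```python
import numpy as np

def pca_project(X, n_components=2):
    """PCA by hand: standardize, covariance, principal axes, projection."""
    # 1) STANDARDIZATION: zero mean, unit variance per feature
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # 2) COVARIANCE MATRIX COMPUTATION
    cov = np.cov(Z, rowvar=False)
    # 3) FEATURE VECTOR: eigenvectors sorted by decreasing eigenvalue
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:n_components]]
    # 4) RECAST THE DATA ALONG THE PRINCIPAL COMPONENT AXES
    return Z @ components
```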
Read 6 tweets
Apr 29
Hello Folks 👨‍💻

If you are someone who is learning SQL, then this list can be helpful to you.

SQL - END-TO-END Learning Resources and Guide 👇 ( Must Read )
1. SQL for Data Science

🔗lnkd.in/dw4aAC-q

2. Databases and SQL for Data Science with Python

🔗lnkd.in/d2psKJd9
3. Scripting with Python and SQL for Data Engineering

🔗lnkd.in/dD3cxWAJ

4. Introduction to Structured Query Language (SQL)

🔗lnkd.in/dvB6eA9m
Read 6 tweets
