Avi Chawla · Jul 21
4 stages of training LLMs from scratch, clearly explained (with visuals):
Today, we are covering the 4 stages of building LLMs from scratch to make them applicable for real-world use cases.

We'll cover:
- Pre-training
- Instruction fine-tuning
- Preference fine-tuning
- Reasoning fine-tuning

The visual summarizes these techniques.

Let's dive in!
0️⃣ Randomly initialized LLM

At this point, the model knows nothing.

You ask it “What is an LLM?” and get gibberish like “try peter hand and hello 448Sn”.

It hasn’t seen any data yet and possesses just random weights.

Check this 👇
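To make this concrete, here's a tiny sketch (PyTorch; the vocabulary, shapes, and sampled output are made up for illustration) of why a randomly initialized model babbles: with random weights, sampling just picks arbitrary tokens.

```python
import torch

# Toy, randomly initialized "language model": an untrained projection to a tiny vocab.
torch.manual_seed(0)
vocab = ["try", "peter", "hand", "and", "hello", "448Sn", "what", "is", "an", "LLM"]
embed_dim = 16
lm_head = torch.nn.Linear(embed_dim, len(vocab))   # random weights, never trained

# Sample 5 tokens: the output distribution carries no knowledge, so we get gibberish.
hidden = torch.randn(5, embed_dim)                 # stand-in for hidden states
tokens = torch.distributions.Categorical(logits=lm_head(hidden)).sample()
print(" ".join(vocab[int(t)] for t in tokens))     # e.g. "hand hello peter and 448Sn"
```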
1️⃣ Pre-training

This stage teaches the LLM the basics of language by training it on massive corpora to predict the next token. This way, it absorbs grammar, world facts, etc.

But it’s not good at conversation because when prompted, it just continues the text.

Check this 👇
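For the curious, here's a minimal sketch of the pre-training objective in PyTorch (the model below is a placeholder; a real LLM has Transformer blocks in between): shift the sequence by one token and minimize cross-entropy on the next token.

```python
import torch
import torch.nn.functional as F

vocab_size, embed_dim = 50_000, 512

# Placeholder model: anything that maps token ids -> next-token logits works here.
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, embed_dim),
    torch.nn.Linear(embed_dim, vocab_size),        # real LLMs put Transformer layers here
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

tokens = torch.randint(0, vocab_size, (8, 128))    # dummy batch of token ids from raw text

# Next-token prediction: logits at position t are scored against the token at t+1.
logits = model(tokens[:, :-1])                     # (batch, seq-1, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```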
2️⃣ Instruction fine-tuning

To make it conversational, we do Instruction Fine-tuning by training on instruction-response pairs. This helps it learn how to follow prompts and format replies.

Now it can:
- Answer questions
- Summarize content
- Write code, etc.

Check this 👇
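Under the hood, instruction fine-tuning is still next-token prediction, just on (instruction, response) pairs. A common detail is masking the prompt tokens so the loss is computed only on the response. A rough sketch (all ids and shapes are dummies):

```python
import torch
import torch.nn.functional as F

IGNORE = -100  # F.cross_entropy skips positions labeled -100

def build_example(prompt_ids, response_ids):
    """Concatenate prompt + response; supervise only the response tokens."""
    input_ids = torch.cat([prompt_ids, response_ids])
    labels = torch.cat([torch.full_like(prompt_ids, IGNORE), response_ids])
    return input_ids, labels

prompt_ids = torch.tensor([11, 42, 7])          # e.g. "Summarize: <text>"
response_ids = torch.tensor([99, 23, 5, 2])     # e.g. "The text says ..."
input_ids, labels = build_example(prompt_ids, response_ids)

# In practice, logits come from the pre-trained model; random here just to show the loss call.
logits = torch.randn(len(input_ids), 50_000)
loss = F.cross_entropy(logits[:-1], labels[1:], ignore_index=IGNORE)
```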
At this point, we have likely:

- Exhausted the raw text available on the internet for pre-training.
- Exhausted the budget for human-labeled instruction-response data.

So what can we do to further improve the model?

We enter into the territory of Reinforcement Learning (RL).

Let's learn next 👇
3️⃣ Preference fine-tuning (PFT)

You must have seen this screen on ChatGPT where it asks: Which response do you prefer?

That's not just feedback; it's valuable human preference data.

OpenAI uses this data to fine-tune its models via preference fine-tuning.

Check this 👇
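Each of those clicks becomes one preference record, roughly shaped like this (field names and text are purely illustrative):

```python
preference_record = {
    "prompt": "What is an LLM?",
    "chosen": "An LLM is a large language model trained on massive text corpora ...",
    "rejected": "LLM stands for Lifelong Learning Machine, a type of robot ...",
}
```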
In PFT:

The user chooses between two responses, which produces human preference data.

A reward model is then trained to predict human preferences, and the LLM is updated using RL.

Check this 👇
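A minimal sketch of the reward-model step (PyTorch; the pairwise Bradley-Terry-style loss is the standard choice, everything else is a placeholder): the reward model should score the chosen response above the rejected one.

```python
import torch
import torch.nn.functional as F

# Placeholder reward model: in practice, the LLM backbone with a scalar head on top.
reward_model = torch.nn.Linear(512, 1)

def pairwise_loss(emb_chosen, emb_rejected):
    """Bradley-Terry-style loss: push reward(chosen) above reward(rejected)."""
    r_chosen = reward_model(emb_chosen)
    r_rejected = reward_model(emb_rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Dummy embeddings standing in for the two responses to the same prompt.
loss = pairwise_loss(torch.randn(4, 512), torch.randn(4, 512))
loss.backward()
```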
The above process is called RLHF (Reinforcement Learning from Human Feedback), and the algorithm typically used to update the model weights is PPO (Proximal Policy Optimization).

It teaches the LLM to align with human preferences even when there's no single "correct" answer.
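For reference, here's a rough sketch of the PPO update used in RLHF (token-level details, batching, and the value/critic model are omitted; this only shows the clipped objective plus a KL penalty toward the frozen reference model):

```python
import torch

def ppo_loss(logp_new, logp_old, logp_ref, advantages, clip_eps=0.2, kl_coef=0.1):
    """Clipped PPO objective with a KL penalty toward the reference (pre-RL) policy."""
    ratio = torch.exp(logp_new - logp_old)                 # importance ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()
    kl_penalty = (logp_new - logp_ref).mean()              # crude KL estimate
    return policy_loss + kl_coef * kl_penalty

# Dummy log-probs and reward-model-derived advantages for 16 sampled responses.
loss = ppo_loss(torch.randn(16), torch.randn(16), torch.randn(16), torch.randn(16))
```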

But we can improve the LLM even more.

Let's learn next👇
4️⃣ Reasoning fine-tuning

In reasoning tasks (maths, logic, etc.), there's usually just one correct response and a defined series of steps to obtain the answer.

So we don’t need human preferences, and we can use correctness as the signal.

This is called reasoning fine-tuning👇
Steps:

- The model generates an answer to a prompt.
- The answer is compared to the known correct answer.
- Based on the correctness, we assign a reward.

This is called Reinforcement Learning with Verifiable Rewards (RLVR).

GRPO (Group Relative Policy Optimization) by DeepSeek is a popular technique here.

Check this👇
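A minimal sketch of the verifiable-reward idea with GRPO-style advantages (the checker and numbers are placeholders; GRPO normalizes rewards within a group of sampled answers instead of using a learned value model):

```python
import torch

def verifiable_reward(model_answer: str, correct_answer: str) -> float:
    """The reward comes from a checker, not from human preference."""
    return 1.0 if model_answer.strip() == correct_answer.strip() else 0.0

# Suppose the model sampled a group of 4 answers for the same math prompt.
answers = ["42", "41", "42", "7"]
rewards = torch.tensor([verifiable_reward(a, "42") for a in answers])

# GRPO-style advantage: normalize rewards within the group, then update the policy
# with a PPO-like clipped objective using these advantages.
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
print(advantages)   # correct answers get positive advantage, wrong ones negative
```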
Those were the 4 stages of training an LLM from scratch.

- Start with a randomly initialized model.
- Pre-train it on large-scale corpora.
- Use instruction fine-tuning to make it follow commands.
- Use preference & reasoning fine-tuning to sharpen responses.

Check this 👇
If you found it insightful, reshare it with your network.

Find me → @_avichawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.
