Ask a freshly initialized LLM “What is an LLM?” and you'll get gibberish like “try peter hand and hello 448Sn”.
It hasn't seen any data yet; its weights are just random numbers.
Check this 👇
1️⃣ Pre-training
This stage teaches the LLM the basics of language by training it on massive corpora to predict the next token. This way, it absorbs grammar, world facts, etc.
But it’s not good at conversation because when prompted, it just continues the text.
Check this 👇
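To make "predict the next token" concrete, here's a minimal sketch of the pre-training objective, assuming a model that maps token ids to next-token logits (the `model` interface here is a placeholder, not any specific codebase):

```python
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    # token_ids: (batch, seq_len) integer ids from the tokenizer
    inputs = token_ids[:, :-1]    # the model sees everything up to position t
    targets = token_ids[:, 1:]    # ...and must predict the token at position t+1
    logits = model(inputs)        # (batch, seq_len - 1, vocab_size), assumed interface
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )

# Pre-training is just this, repeated over massive corpora:
#   loss = next_token_loss(model, batch); loss.backward(); optimizer.step()
```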
2️⃣ Instruction fine-tuning
To make it conversational, we do Instruction Fine-tuning by training on instruction-response pairs. This helps it learn how to follow prompts and format replies.
Now it can:
- Answer questions
- Summarize content
- Write code, etc.
Check this 👇
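Concretely, each pair is rendered with the model's chat template and trained with the same next-token loss (often masking the prompt tokens so only the response is penalized). A small sketch with Hugging Face tokenizers; the model name is just an example:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # example model

pair = {
    "instruction": "Summarize this paragraph in one sentence.",
    "response": "The paragraph argues that smaller, well-tuned models can match larger ones.",
}

# Render the instruction-response pair with the model's chat template,
# then fine-tune with the usual next-token loss on the rendered text.
text = tokenizer.apply_chat_template(
    [
        {"role": "user", "content": pair["instruction"]},
        {"role": "assistant", "content": pair["response"]},
    ],
    tokenize=False,
)
print(text)
```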
At this point, we have likely:
- Exhausted the raw text of the entire internet archive.
- Exhausted the budget for human-labeled instruction-response data.
So what can we do to further improve the model?
We enter into the territory of Reinforcement Learning (RL).
Let's learn next 👇
3️⃣ Preference fine-tuning (PFT)
You've probably seen the screen in ChatGPT that asks: “Which response do you prefer?”
That's not just feedback; it's valuable human preference data.
OpenAI uses this data to fine-tune their models via preference fine-tuning.
Check this 👇
In PFT:
The user chooses between two responses, producing human preference data.
A reward model is then trained to predict human preferences, and the LLM is updated with RL to maximize the predicted reward.
Check this 👇
The above process is called RLHF (Reinforcement Learning from Human Feedback), and the algorithm typically used to update the model weights is PPO (Proximal Policy Optimization).
It teaches the LLM to align with humans even when there’s no "correct" answer.
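The reward model at the heart of this is usually trained on the chosen/rejected pairs with a pairwise (Bradley-Terry style) loss. A minimal sketch, assuming a `reward_model` that maps a tokenized response to one scalar score per sequence:

```python
import torch.nn.functional as F

def preference_loss(reward_model, chosen_ids, rejected_ids):
    # Assumed interface: reward_model maps token ids -> one scalar score per sequence
    r_chosen = reward_model(chosen_ids)       # shape: (batch,)
    r_rejected = reward_model(rejected_ids)   # shape: (batch,)
    # Bradley-Terry: push the chosen response's score above the rejected one's
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# The trained reward model then scores the LLM's outputs during PPO, and the policy
# is updated to maximize that reward while a KL penalty keeps it close to the
# instruction-tuned model.
```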
But we can improve the LLM even more.
Let's learn next👇
4️⃣ Reasoning fine-tuning
In reasoning tasks (maths, logic, etc.), there's usually just one correct response and a defined series of steps to obtain the answer.
So we don’t need human preferences, and we can use correctness as the signal.
This is called reasoning fine-tuning👇
Steps:
- The model generates an answer to a prompt.
- The answer is compared to the known correct answer.
- Based on the correctness, we assign a reward.
This is called Reinforcement Learning with Verifiable Rewards (RLVR).
GRPO (Group Relative Policy Optimization) by DeepSeek is a popular algorithm for it.
Check this👇
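A "verifiable reward" can be as simple as an exact match on the final answer. A toy sketch, assuming the model is prompted to end its output with "Answer: <value>":

```python
import re

def math_reward(completion: str, ground_truth: str) -> float:
    """Return 1.0 if the model's final answer matches the known answer, else 0.0."""
    match = re.search(r"Answer:\s*(-?\d+(?:\.\d+)?)", completion)
    if match is None:
        return 0.0
    return 1.0 if match.group(1) == ground_truth.strip() else 0.0

print(math_reward("... so 12 * 7 = 84. Answer: 84", "84"))       # 1.0
print(math_reward("I think it's around 90. Answer: 90", "84"))   # 0.0
```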
Those were the 4 stages of training an LLM from scratch.
- Start with a randomly initialized model.
- Pre-train it on large-scale corpora.
- Use instruction fine-tuning to make it follow commands.
- Use preference & reasoning fine-tuning to sharpen responses.
Check this 👇
That's a wrap!
If you found it insightful, reshare with your network.
Find me → @akshay_pachaar ✔️
For more insights and tutorials on LLMs, AI Agents, and Machine Learning!
dLLM is a Python library that unifies the training & evaluation of diffusion language models.
You can also use it to turn ANY autoregressive LM into a diffusion LM with minimal compute.
100% open-source.
Here's why this matters:
Traditional autoregressive models generate text left-to-right, one token at a time. Diffusion models work differently: they refine the entire sequence iteratively, giving you better control over generation quality and more flexible editing capabilities.
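For intuition, here's a toy sketch of masked-diffusion-style decoding: start from a fully masked sequence and, over a few steps, commit the positions the model is most confident about. This is a conceptual illustration only, not dLLM's actual API, and the `model`/`tokenizer` interfaces are assumptions:

```python
import torch

def diffusion_decode(model, tokenizer, length=32, steps=8, mask_id=0):
    # Conceptual sketch only: real diffusion LMs (and dLLM) differ in the details.
    ids = torch.full((1, length), mask_id)            # start from a fully masked sequence
    for _ in range(steps):
        logits = model(ids)                            # (1, length, vocab), assumed interface
        probs, preds = logits.softmax(-1).max(-1)      # confidence + best token per position
        still_masked = ids == mask_id
        if not still_masked.any():
            break
        # Unmask roughly half of the remaining masked positions, most confident first
        k = max(1, int(still_masked.sum()) // 2)
        conf = probs.masked_fill(~still_masked, -1.0)
        top = conf.topk(k, dim=-1).indices
        ids[0, top[0]] = preds[0, top[0]]
    return tokenizer.decode(ids[0].tolist())
```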
You're in a Research Scientist interview at Google.
Interviewer: We have a base LLM that's terrible at maths. How would you turn it into a maths & reasoning powerhouse?
You: I'll get some problems labeled and fine-tune the model.
Interview over.
Here's what you missed:
When outputs are verifiable, labels become optional.
Maths, code, and logic can be automatically checked and validated.
Let's use this fact to build a reasoning model without manual labelling.
We'll use:
- @UnslothAI for parameter-efficient finetuning.
- @HuggingFace TRL to apply GRPO.
Let's go! 🚀
What is GRPO?
Group Relative Policy Optimization is a reinforcement learning method that fine-tunes LLMs for math and reasoning tasks using deterministic, programmatic reward functions, eliminating the need for manually labeled data.
Here's a brief overview of GRPO before we jump into code:
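And here's a rough sketch of what the TRL side can look like: a deterministic correctness reward plus `GRPOTrainer`. The dataset and model names are examples, and exact arguments may differ across TRL versions, so treat this as a starting point rather than the thread's exact code:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Reward: 1.0 if the completion contains the known final answer, else 0.0.
# Extra dataset columns (here, "answer") are forwarded to the reward function.
def correctness_reward(completions, answer, **kwargs):
    rewards = []
    for completion, ans in zip(completions, answer):
        target = ans.split("####")[-1].strip()   # GSM8K answers end with "#### <number>"
        rewards.append(1.0 if target in completion else 0.0)
    return rewards

dataset = load_dataset("openai/gsm8k", "main", split="train")   # example dataset
dataset = dataset.rename_column("question", "prompt")           # GRPOTrainer expects a "prompt" column

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",                         # example base model
    reward_funcs=correctness_reward,
    args=GRPOConfig(output_dir="grpo-math"),
    train_dataset=dataset,
)
trainer.train()
```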
NOBODY wants to send their data to Google or OpenAI.
Yet here we are, shipping proprietary code, customer information, and sensitive business logic to closed-source APIs we don't control.
While everyone's chasing the latest closed-source releases, open-source models are quietly becoming the practical choice for many production systems.
Here's what everyone is missing:
Open-source models are catching up fast, and they bring something the big labs can't: privacy, speed, and control.
I built a playground to test this myself. Used CometML's Opik to evaluate models on real code generation tasks - testing correctness, readability, and best practices against actual GitHub repos.
Here's what surprised me:
OSS models like MiniMax-M2 and Kimi K2 performed on par with the likes of Gemini 3 and Claude Sonnet 4.5 on most tasks.
In practice, MiniMax-M2 turned out to be the winner: it's roughly twice as fast and about 12x cheaper than models like Sonnet 4.5.
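For reference, a stripped-down version of that kind of side-by-side comparison could look like the sketch below; the endpoint, API key, and model identifiers are placeholders, not the actual playground code:

```python
from openai import OpenAI

# Placeholder endpoint/key: any OpenAI-compatible server (vLLM, a hosted API, etc.) works here.
client = OpenAI(base_url="https://your-oss-endpoint/v1", api_key="YOUR_KEY")

PROMPT = "Write a Python function that parses a CSV row into a dict. Include type hints and a docstring."

def generate(model_name: str) -> str:
    response = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return response.choices[0].message.content

# Collect outputs side by side, then score them for correctness, readability,
# and best practices with your evaluation framework (the thread used CometML's Opik).
for model in ["MiniMax-M2", "claude-sonnet-4.5"]:   # placeholder model identifiers
    print(f"=== {model} ===\n{generate(model)}\n")
```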
Well, this isn't just about saving money.
When your model is smaller and faster, you can deploy it in places closed-source APIs can't reach:
↳ Real-time applications that need sub-second responses
↳ Edge devices where latency kills user experience
↳ On-premise systems where data never leaves your infrastructure
MiniMax-M2 runs with only 10B activated parameters. That efficiency means lower latency, higher throughput, and the ability to handle interactive agents without breaking the bank.
The intelligence-to-cost ratio here changes what's possible.
You're not choosing between quality and affordability anymore. You're not sacrificing privacy for performance. The gap is closing, and in many cases, it's already closed.
If you're building anything that needs to be fast, private, or deployed at scale, it's worth taking a look at what's now available.
MiniMax-M2 is 100% open-source, free for developers right now. I have shared the link to their GitHub repo in the next tweet.
You will also find the code for the playground and evaluations I've done.
Claude Skills might be the biggest upgrade to AI agents so far!
Some say it's even bigger than MCP.
I've been testing skills for the past 3-4 days, and they're solving a problem most people don't talk about: agents just keep forgetting everything.
In this video, I'll share everything I've learned so far.
It covers:
> The core idea (skills as SOPs for agents)
> Anatomy of a skill
> Skills vs. MCP vs. Projects vs. Subagents
> Building your own skill
> Hands-on example
Skills are an early sign of continual learning, and they could change how we work with agents forever!
Here's everything you need to know:
Skills vs. Projects vs. Subagents:
If you found it insightful, reshare with your network.
Find me → @akshay_pachaar ✔️
For more insights and tutorials on LLMs, AI Agents, and Machine Learning!