Akshay πŸš€
Aug 14 β€’ 13 tweets β€’ 4 min read
How LLMs work, clearly explained:
Before diving into LLMs, we must understand conditional probability.

Let's consider a population of 14 individuals:

- Some of them like Tennis 🎾
- Some like Football ⚽️
- A few like both 🎾 ⚽️
- And a few like neither

Here's how it looks πŸ‘‡
So what is Conditional probability ⁉️

It's a measure of the probability of an event given that another event has occurred.

If the events are A and B, we denote this as P(A|B).

This reads as "probability of A given B"

Check this illustration πŸ‘‡
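For completeness, the standard definition, where P(A ∩ B) is the probability that A and B both occur:

P(A|B) = P(A ∩ B) / P(B)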
For instance, if we're predicting whether it will rain today (event A), knowing that it's cloudy (event B) might impact our prediction.

As it's more likely to rain when it's cloudy, we'd say the conditional probability P(A|B) is high.

That's conditional probability for you! πŸŽ‰
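Here's a minimal sketch of the same calculation in Python (the counts below are illustrative assumptions, not read off the image above):

# Illustrative counts for a population of 14 (assumed numbers, not from the image)
total    = 14
tennis   = 7   # like Tennis 🎾
football = 6   # like Football ⚽️
both     = 3   # like both

p_football = football / total   # P(B)
p_both     = both / total       # P(A ∩ B)

# P(A|B) = P(A ∩ B) / P(B): probability of liking Tennis given they like Football
p_tennis_given_football = p_both / p_football
print(round(p_tennis_given_football, 2))   # 0.5

Knowing that someone likes Football changes the probability that they also like Tennis β€” that's all conditioning is.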
Now, how does this apply to LLMs like GPT-4❓

These models are tasked with predicting the next word in a sequence.

This is a question of conditional probability: given the words that have come before, what is the most likely next word?
To predict the next word, the model calculates the conditional probability for each possible next word, given the previous words (context).

The word with the highest conditional probability is chosen as the prediction.
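Here's a tiny sketch of that step (the vocabulary and the scores are made up purely for illustration):

import numpy as np

# Hypothetical context: "It is cloudy, so it will likely ..."
vocab  = ["rain", "shine", "snow", "dance"]
logits = np.array([3.2, 1.1, 0.4, -2.0])   # made-up scores from the model

# Softmax turns the scores into P(next word | context)
probs = np.exp(logits) / np.exp(logits).sum()

for word, p in zip(vocab, probs):
    print(f"P({word} | context) = {p:.3f}")

# Greedy decoding: pick the word with the highest conditional probability
print("prediction:", vocab[int(np.argmax(probs))])   # -> rain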
The LLM learns a high-dimensional probability distribution over sequences of words.

And the parameters of this distribution are the trained weights!

The training, or rather pre-training, is self-supervised: the labels (the next words) come from the raw text itself, so no manual annotation is needed.

I'll talk about the different training steps next time!

Check this πŸ‘‡
But there's a problem❗️

If we always pick the word with the highest probability, we end up with repetitive outputs, making LLMs almost useless and stifling their creativity.

This is where temperature comes into the picture.

Check this before we understand more about it...πŸ‘‡

However, a high temperature value produces gibberish.

Let's understand what's going on...πŸ‘‡
So, instead of always selecting the best token (for simplicity, let's think of tokens as words), LLMs "sample" the prediction.

That means even if β€œToken 1” has the highest score, it may not be chosen, since we are sampling.
Now, temperature introduces the following tweak in the softmax function, which, in turn, influences the sampling process:
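In plain terms, each logit z_i is divided by the temperature T before the softmax is applied:

P(token_i) = exp(z_i / T) / Ξ£_j exp(z_j / T)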
Let's take a code example!

At low temperature, probabilities concentrate around the most likely token, resulting in nearly greedy generation.

At high temperature, probabilities become more uniform, producing highly random and stochastic outputs.

Check this out πŸ‘‡
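Here's a minimal sketch of the idea (a toy vocabulary with made-up logits; only the temperature changes between runs):

import numpy as np

rng = np.random.default_rng(0)

vocab  = ["rain", "shine", "snow", "dance"]
logits = np.array([3.2, 1.1, 0.4, -2.0])   # made-up scores for illustration

def sample_words(temperature, n=10):
    # Temperature-scaled softmax, then sample n tokens from the distribution
    scaled = logits / temperature
    probs  = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    picks  = rng.choice(len(vocab), size=n, p=probs)
    return [vocab[i] for i in picks]

print("T = 0.1:", sample_words(0.1))   # almost always "rain" (near-greedy)
print("T = 1.0:", sample_words(1.0))   # mostly "rain", with some variety
print("T = 5.0:", sample_words(5.0))   # close to uniform -> random-looking output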
That's a wrap!

Hopefully, this guide has demystified some of the magic behind LLMs.

And, if you enjoyed this breakdown:

Find me β†’ @akshay_pachaar βœ”οΈ
For more insights and tutorials on AI and Machine Learning.

