Avi Chawla
Jul 26 · 9 tweets · 3 min read
5 levels of Agentic AI systems, clearly explained (with visuals):
Agentic AI systems don't just generate text; they can make decisions, call functions, and even run autonomous workflows.

This thread explains 5 levels of AI agency, starting from simple responders and building up to fully autonomous agents.

Let's dive in!
1️⃣ Basic responder

- A human guides the entire flow.
- The LLM is just a generic responder: it receives an input and produces an output, with little control over the program flow.

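A minimal sketch of this level, with the model call stubbed out (a real system would call an LLM API here): the human-written code fixes the entire flow, and the model only maps input to output.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. an API request)."""
    return f"Answer to: {prompt}"

def basic_responder(user_input: str) -> str:
    # The flow is entirely human-defined: one call, no decisions
    # made by the model about what happens next.
    return llm(user_input)

print(basic_responder("What is an LLM?"))
```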
2️⃣ Router pattern

- A human defines the paths/functions that exist in the flow.
- The LLM makes a basic decision about which function or path to take.

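Here's a sketch of the router pattern with a stubbed routing decision (a real router would send the query plus path descriptions to the model): the human defines the paths, and the model's only job is to pick one.

```python
def route_with_llm(query: str) -> str:
    """Stand-in for asking the LLM: 'which path should handle this?'"""
    return "search" if "latest" in query else "answer"

def search_path(query: str) -> str:
    return f"searching: {query}"

def answer_path(query: str) -> str:
    return f"answering: {query}"

# Human-defined paths; the LLM only chooses among them.
PATHS = {"search": search_path, "answer": answer_path}

def router(query: str) -> str:
    choice = route_with_llm(query)  # the model's one decision
    return PATHS[choice](query)
```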
3️⃣ Tool calling

- A human defines a set of tools the LLM can access to complete a task.
- The LLM decides when to use them and what arguments to pass.

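A sketch of tool calling with the model's decision stubbed as a structured dict (real LLM APIs return tool calls in a similar shape): the human registers the tools, and the model picks the tool and fills in its arguments.

```python
def get_weather(city: str) -> str:
    """An example tool the human makes available to the model."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def llm_decide(query: str) -> dict:
    """Stand-in for the LLM choosing a tool and its arguments."""
    if "weather" in query:
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"tool": None, "answer": "No tool needed."}

def run(query: str) -> str:
    decision = llm_decide(query)
    if decision["tool"]:
        fn = TOOLS[decision["tool"]]
        return fn(**decision["args"])  # model-chosen arguments
    return decision["answer"]
```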
4️⃣ Multi-agent pattern

A manager agent coordinates multiple sub-agents and decides the next steps iteratively.

- A human lays out the hierarchy between agents, their roles, tools, etc.
- The LLM controls execution flow, deciding what to do next.

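A toy version of the multi-agent pattern, with both the sub-agents and the manager's decision stubbed: the human lays out the hierarchy (which sub-agents exist, what they do), and the manager iteratively decides who acts next until it declares the task done.

```python
def research_agent(task: str, notes: list) -> list:
    return notes + [f"research: facts about {task}"]

def writer_agent(task: str, notes: list) -> list:
    return notes + [f"draft: summary of {task}"]

# Human-defined team; the manager controls the execution flow.
SUB_AGENTS = {"research": research_agent, "write": writer_agent}

def manager_decide(notes: list) -> str:
    """Stand-in for the manager LLM picking the next step."""
    if not any(n.startswith("research") for n in notes):
        return "research"
    if not any(n.startswith("draft") for n in notes):
        return "write"
    return "done"

def run_team(task: str) -> list:
    notes = []
    while (step := manager_decide(notes)) != "done":
        notes = SUB_AGENTS[step](task, notes)
    return notes
```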
5️⃣ Autonomous pattern

The most advanced pattern: the LLM generates and executes new code independently, effectively acting as an autonomous AI developer.

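A bare-bones sketch of the autonomous pattern, with code generation stubbed: the model writes fresh Python source and the harness executes it. Running model-generated code is risky; production systems sandbox it rather than calling `exec` directly as done here.

```python
def llm_write_code(task: str) -> str:
    """Stand-in for the LLM generating Python source for a task."""
    return "def solve(x):\n    return x * 2\n"

def autonomous_step(task: str, arg):
    source = llm_write_code(task)
    namespace = {}
    exec(source, namespace)  # run the freshly generated code (unsandboxed!)
    return namespace["solve"](arg)
```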
To recall:

1) Basic responder only generates text.
2) Router pattern decides which path to take.
3) Tool calling picks & runs tools.
4) Multi-agent pattern manages several agents.
5) Autonomous pattern works fully independently.

That's a wrap!

If you found it insightful, reshare it with your network.

Find me → @_avichawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.


