Akshay 🚀
Mar 13 · 11 tweets · 3 min read
Model Context Protocol (MCP), clearly explained:
MCP is like a USB-C port for your AI applications.

Just as USB-C offers a standardized way to connect devices to various accessories, MCP standardizes how your AI apps connect to different data sources and tools.

Let's dive in! 🚀
At its core, MCP follows a client-server architecture where a host application can connect to multiple servers.

Key components include:

- Host
- Client
- Server

Here's an overview before we dig deep 👇
The Host and Client:

Host: An AI app (e.g., Claude Desktop, Cursor) that provides the environment for AI interactions, accesses tools and data, and runs the MCP client.

MCP Client: Operates within the host to enable communication with MCP servers.

Next up, MCP server...👇
The Server

A server exposes specific capabilities and provides access to data.

3 key capabilities:

- Tools: Enable LLMs to perform actions through your server
- Resources: Expose data and content from your servers to LLMs
- Prompts: Create reusable prompt templates and workflows
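To make the three capability types concrete, here's a toy sketch in plain Python. It only models the idea — real servers are built with an SDK such as the official MCP Python SDK, and `get_forecast`, the resource URI, and the prompt name here are all hypothetical:

```python
import json

class WeatherServer:
    """Toy model of an MCP server and its three capability types."""

    def __init__(self):
        self.tools = {}      # actions the LLM can invoke
        self.resources = {}  # data and content exposed to the LLM
        self.prompts = {}    # reusable prompt templates

    def tool(self, name):
        """Decorator that registers a function as a callable tool."""
        def register(fn):
            self.tools[name] = fn
            return fn
        return register

    def list_capabilities(self):
        """Roughly what a server advertises during initialization."""
        return {
            "tools": sorted(self.tools),
            "resources": sorted(self.resources),
            "prompts": sorted(self.prompts),
        }

server = WeatherServer()

@server.tool("get_forecast")
def get_forecast(city: str) -> str:
    # Stub in place of a real weather API call.
    return f"Forecast for {city}: sunny"

server.resources["weather://docs"] = "API documentation for the weather service"
server.prompts["daily_summary"] = "Summarize today's weather for {city}."

print(json.dumps(server.list_capabilities()))
```

Calling `server.tools["get_forecast"]("Paris")` runs the registered function — in essence what happens when an LLM invokes a tool through a real server.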
The Client-Server Communication

Understanding client-server communication is essential for building your own MCP clients and servers.

Let's begin with this illustration and then break it down step by step... 👇
1️⃣ & 2️⃣: Capability exchange

The client sends an `initialize` request to learn the server's capabilities.

The server responds with its capability details.

For example, a weather API server advertises `tools` for calling its API endpoints, reusable `prompts`, and its API documentation as a `resource`.
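Under the hood these messages are JSON-RPC 2.0. A rough sketch of the handshake — field values such as the protocol version and server name are illustrative, not normative:

```python
import json

# Client -> server: initialize request.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
        "capabilities": {},
    },
}

# Server -> client: response carrying the server's capabilities.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {
        "protocolVersion": "2024-11-05",
        "serverInfo": {"name": "weather-server", "version": "0.1.0"},
        # Advertises which capability types this server supports.
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
    },
}

print(json.dumps(initialize_response["result"]["capabilities"]))
```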
3️⃣ Notification

The client then acknowledges the successful connection with a notification, and further message exchange continues.
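In JSON-RPC terms, the acknowledgment is a notification — a message without an `id`, so no response is expected. After that, normal request/response traffic such as tool calls can begin. The method names below follow the MCP message shapes, but `get_forecast` and its arguments are hypothetical:

```python
import json

# Client -> server: acknowledgment that initialization succeeded.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",  # no "id": fire-and-forget
}

# Subsequent traffic: the client asks the server to run a tool.
tools_call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_forecast", "arguments": {"city": "Paris"}},
}

print(json.dumps(tools_call_request))
```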

Before we wrap, one more key detail...👇
Unlike traditional APIs, the MCP client-server communication is two-way.

Sampling, if needed, allows servers to leverage clients' AI capabilities (LLM completions or generations) without requiring API keys.

Meanwhile, clients maintain control over model access and permissions.
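Sketching what sampling looks like on the wire: the server sends the request, and the client, which owns the model access, returns the completion. The `sampling/createMessage` method name comes from the MCP spec; the message text and token limit here are made up:

```python
# Server -> client: ask the host's LLM for a completion.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize this forecast."},
            }
        ],
        "maxTokens": 100,
    },
}

# The client stays in control: it can review the request, choose the
# model, and return the completion -- the server never needs an API key.
```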
I hope this clarifies what MCP does.

In the future, I'll explore creating custom MCP servers and building hands-on demos around them.

Over to you! What is your take on MCP and its future?
That's a wrap!

If you enjoyed this breakdown:

Follow me → @akshay_pachaar ✔️

Every day, I share insights and tutorials on LLMs, AI Agents, RAGs, and Machine Learning!

