MCP is like a USB-C port for your AI applications.
Just as USB-C offers a standardized way to connect devices to various accessories, MCP standardizes how your AI apps connect to different data sources and tools.
Let's dive in! 🚀
At its core, MCP follows a client-server architecture where a host application can connect to multiple servers.
Key components include:
- Host
- Client
- Server
Here's an overview before we dig deep 👇
The Host and Client:
Host: An AI app (Claude Desktop, Cursor) that provides an environment for AI interactions, accesses tools and data, and runs the MCP Client.
MCP Client: Operates within the host to enable communication with MCP servers.
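Here's roughly what a client looks like in code, a minimal sketch assuming the official `mcp` Python SDK and a placeholder local server script (`my_server.py` is hypothetical):

```python
# Minimal MCP client sketch: spawn a local server over stdio, do the
# handshake, and list its tools. Assumes the official `mcp` Python SDK.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical server command; point this at your actual MCP server.
server_params = StdioServerParameters(command="python", args=["my_server.py"])

async def main() -> None:
    # Launch the server process and open a stdio transport to it.
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize handshake (capability exchange).
            await session.initialize()
            # Ask the server what tools it exposes.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

asyncio.run(main())
```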
Next up, MCP server...👇
The Server
A server exposes specific capabilities and provides access to data.
3 key capabilities (sketched in code below 👇):
- Tools: Enable LLMs to perform actions through your server
- Resources: Expose data and content from your servers to LLMs
- Prompts: Create reusable prompt templates and workflows
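To make that concrete, here's a minimal Weather server exposing all three, a sketch assuming the official `mcp` Python SDK's FastMCP helper (the tool body is a stub, not a real API call):

```python
# Minimal MCP server sketch with one tool, one resource, and one prompt.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")  # hypothetical Weather server

@mcp.tool()
def get_forecast(city: str) -> str:
    """Tool: an action the LLM can invoke through the server."""
    return f"Forecast for {city}: sunny, 24°C"  # stubbed response

@mcp.resource("weather://docs/api")
def api_docs() -> str:
    """Resource: read-only data/content the server exposes to the LLM."""
    return "GET /forecast?city=<name> returns a JSON forecast."

@mcp.prompt()
def weather_report(city: str) -> str:
    """Prompt: a reusable template the client can surface to users."""
    return f"Write a short, friendly weather report for {city}."

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```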
The Client-Server Communication
Understanding client-server communication is essential for building your own MCP clients and servers.
Let's break it down step by step... 👇
1️⃣ & 2️⃣: Capability exchange
The client sends an initialize request to learn the server's capabilities.
The server responds with its capability details.
e.g., a Weather API server provides `tools` to call API endpoints, `prompts`, and its API documentation as a `resource`.
3️⃣ Notification
The client then acknowledges the successful connection, and further message exchanges continue.
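On the wire, this handshake is three JSON-RPC 2.0 messages. A rough sketch, with illustrative field values (check the current MCP spec for the exact protocol version string and capability flags):

```python
# 1️⃣ Client -> Server: initialize request
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",          # spec revision (illustrative)
        "capabilities": {"sampling": {}},          # what the client supports
        "clientInfo": {"name": "my-client", "version": "0.1.0"},
    },
}

# 2️⃣ Server -> Client: capability details
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "weather", "version": "0.1.0"},
    },
}

# 3️⃣ Client -> Server: acknowledgement; normal traffic can begin
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}
```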
Before we wrap, one more key detail...👇
Unlike traditional APIs, MCP client-server communication is two-way.
Sampling, if needed, allows servers to leverage the client's AI capabilities (LLM completions or generations) without requiring their own API keys, while the client maintains control over model access and permissions.
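For intuition, here's roughly what a sampling request looks like, a hedged sketch of the server-to-client `sampling/createMessage` call (treat the exact field shape as an assumption and verify against the spec):

```python
# Sent FROM the server TO the client -- the reverse of a normal tool call.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user",
             "content": {"type": "text", "text": "Summarize this forecast."}}
        ],
        "maxTokens": 200,
    },
}
# The client, which owns the model and the API keys, runs the completion
# (optionally with human approval) and returns the generated message.
```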
I hope this clarifies what MCP does.
In the future, I'll explore creating custom MCP servers and building hands-on demos around them.
Over to you! What is your take on MCP and its future?
That's a wrap!
If you enjoyed this breakdown:
Follow me → @akshay_pachaar ✔️
Every day, I share insights and tutorials on LLMs, AI Agents, RAGs, and Machine Learning!
Let's build a Multimodal RAG app over complex webpages using DeepSeek's Janus-Pro (running locally):
We're building a multimodal RAG app with a SOTA tech stack.
We'll use:
- ColiVara's SOTA document understanding and retrieval to index webpages.
- @firecrawl_dev for reliable scraping.
- @huggingface transformers to locally run DeepSeek Janus.
Let's build it!
Here's an overview of our app (with a code sketch below):
1-2) Generate a PDF of webpage screenshots with Firecrawl.
3) Index it on ColiVara for SOTA retrieval.
4-5) Query the ColiVara client to retrieve context.
6-7) Use DeepSeek Janus Pro as the LLM to generate a response.
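Here's the pipeline as a high-level sketch. The helper functions are hypothetical placeholders for the real Firecrawl, ColiVara, and Janus calls; check each library's docs for the actual client APIs:

```python
# High-level pipeline sketch -- all helpers below are hypothetical stubs.

def scrape_to_pdf(url: str) -> str:
    """1-2) Capture full-page screenshots of the webpage (e.g. via Firecrawl)
    and bundle them into a single PDF. Returns the PDF path."""
    ...

def index_pdf(pdf_path: str, collection: str) -> None:
    """3) Upload the PDF to ColiVara so its pages are indexed for
    screenshot-level (vision-based) retrieval."""
    ...

def retrieve(query: str, collection: str, top_k: int = 3) -> list:
    """4-5) Query the ColiVara client and get back the most relevant pages."""
    ...

def generate_answer(query: str, page_images: list) -> str:
    """6-7) Feed the query plus retrieved page images to DeepSeek Janus-Pro
    (running locally via Hugging Face transformers) and return the answer."""
    ...

# Putting it together:
pdf = scrape_to_pdf("https://example.com/docs")
index_pdf(pdf, collection="webpages")
pages = retrieve("How do I configure the API?", collection="webpages")
print(generate_answer("How do I configure the API?", pages))
```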
Before we dive in, here's a quick demo of our agentic workflow!
Tech stack:
- @Llama_Index workflows for orchestration
- @Linkup_platform for deep web search
- @Cometml's Opik to trace and monitor
- @Qdrant_engine to self-host vectorDB
Let's go! 🚀
Here's an overview of what the app does (skeleton code below 👇):
- First, search the docs with the user query
- Evaluate whether the retrieved context is relevant using an LLM
- Keep only the relevant context
- Do a web search if needed
- Aggregate the context & generate a response
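Here's a skeleton of that flow, a sketch assuming LlamaIndex's Workflow API, with stubs where the real Qdrant retrieval, LLM relevance judge, and Linkup web search would go:

```python
# Corrective-RAG style workflow skeleton (stubs in place of real calls).
from llama_index.core.workflow import (
    Event, StartEvent, StopEvent, Workflow, step,
)

class RetrievedEvent(Event):
    query: str
    context: list[str]

class FilteredEvent(Event):
    query: str
    context: list[str]

class CorrectiveRAG(Workflow):
    @step
    async def retrieve(self, ev: StartEvent) -> RetrievedEvent:
        # Search the indexed docs (e.g. a Qdrant collection) with the user query.
        docs = ["...retrieved chunks..."]  # placeholder
        return RetrievedEvent(query=ev.query, context=docs)

    @step
    async def grade(self, ev: RetrievedEvent) -> FilteredEvent:
        # Ask an LLM to judge each chunk's relevance; keep only relevant ones.
        relevant = [c for c in ev.context if c]  # placeholder filter
        if not relevant:
            # Fall back to web search (e.g. via Linkup) when nothing is relevant.
            relevant = ["...web search results..."]
        return FilteredEvent(query=ev.query, context=relevant)

    @step
    async def synthesize(self, ev: FilteredEvent) -> StopEvent:
        # Aggregate the kept context and generate the final answer with the LLM.
        answer = f"Answer to '{ev.query}' using {len(ev.context)} chunks."
        return StopEvent(result=answer)

# Usage (inside an async context):
#   result = await CorrectiveRAG(timeout=60).run(query="What is MCP?")
```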