Simplifying LLMs, AI Agents, RAGs and Machine Learning for you! • Co-founder @dailydoseofds_ • BITS Pilani • 3 Patents • ex-AI Engineer @ LightningAI
Jul 12 • 5 tweets • 2 min read
A Crash Course on Building AI Agents!
Here's what it covers:
- What is an AI agent
- Connecting agents to tools
- Overview of MCP
- Replacing tools with MCP servers
- Setting up observability and tracing
All with 100% open-source tools!
This course builds agents based on the following definition:
An AI agent uses an LLM as its brain, has memory to retain context, and can take real-world actions through tools, like browsing the web, running code, and more.
In short, it thinks, remembers, and acts.
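A minimal sketch of that definition in plain Python. The `call_llm` helper and the reply format are hypothetical stand-ins for whatever model and schema you use:

```python
# A minimal agent loop matching the definition above: the LLM "thinks",
# memory retains context, and tools let it act.
# `call_llm` and the reply dict shape are illustrative assumptions.

def run_agent(call_llm, tools: dict, user_goal: str, max_steps: int = 5):
    memory = [{"role": "user", "content": user_goal}]      # retained context
    for _ in range(max_steps):
        reply = call_llm(memory)                           # think
        memory.append({"role": "assistant", "content": str(reply)})
        if reply.get("tool"):                              # act via a tool
            result = tools[reply["tool"]](**reply.get("args", {}))
            memory.append({"role": "tool", "content": str(result)})
        else:
            return reply["content"]                        # final answer
```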
Jul 11 • 12 tweets • 4 min read
MCP is on fire.
AI agents can now talk to real-world tools and apps, and actually get stuff done.
This changes everything.
Here are 10 amazing examples:
1️⃣ WhatsApp MCP
Exchange images, videos, and voice notes on WhatsApp!
Pair it with the ElevenLabs MCP server for AI-powered transcription & audio messages with 3,000+ voices.
Check this out👇
Jul 10 • 15 tweets • 5 min read
90% of Python programmers don't know these 11 ways to declare type hints:
Type hints are incredibly valuable for improving code quality and maintainability.
Today, I'll walk you through these 11 must-know ways to declare type hints in just two minutes.
Let's begin! 🚀
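To give a quick taste, here are a few common declaration styles from the standard library (not necessarily the exact 11 the thread covers):

```python
from typing import Optional, Union, Callable, Literal, TypedDict

def scale(values: list[float], factor: float = 1.0) -> list[float]:
    return [v * factor for v in values]

MaybeInt = Optional[int]            # same as Union[int, None] or `int | None`
Handler = Callable[[str], bool]     # a function taking a str, returning a bool
Mode = Literal["train", "eval"]     # only these two string values are allowed

class Point(TypedDict):             # dict with typed keys
    x: float
    y: float
```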
Jul 7 • 9 tweets • 3 min read
Temperature in LLMs, clearly explained (with code):
Let's prompt OpenAI GPT-3.5 twice with a low temperature value.
It produces identical responses from the LLM.
Check the response below👇
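If you want to reproduce this yourself, here's a minimal sketch with the official `openai` Python client. The model name and prompt are placeholders, not the exact ones from the thread:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, temperature: float) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

# Low temperature: the two responses should match (or be nearly identical).
print(ask("Name one planet.", temperature=0))
print(ask("Name one planet.", temperature=0))
```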
Jul 3 • 10 tweets • 4 min read
7 MCP projects for AI Engineers (with video tutorials):
1️⃣ MCP meets Ollama
An MCP client is a component in an AI app (like Cursor) that establishes connections to external tools.
Learn how to build it 100% locally.
Full walkthrough:
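For a rough idea of what an MCP client does, here's a minimal sketch using the official `mcp` Python SDK over stdio. The server command is a placeholder, and the local LLM wiring from the thread is left out:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder: launch whatever MCP server you want to connect to.
server = StdioServerParameters(command="python", args=["my_mcp_server.py"])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()   # discover the server's tools
            print([t.name for t in tools.tools])

asyncio.run(main())
```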
Jun 28 • 9 tweets • 5 min read
I have tested 100+ MCP servers in the last 3 months!
Here are 6 must-use MCP servers for all developers (open-source):
1️⃣ Graphiti MCP server
Agents forget everything after each task.
Graphiti MCP server lets Agents build and query temporally-aware knowledge graphs, which act as an Agent's memory!
Check this👇
Jun 25 • 12 tweets • 4 min read
Let's generate our own LLM fine-tuning dataset (100% local):
Before we begin, here's what we're doing today!
We'll cover:
- What is instruction fine-tuning?
- Why is it important for LLMs?
Finally, we'll create our own instruction fine-tuning dataset.
Let's dive in!
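For reference, an instruction fine-tuning record typically looks like this (Alpaca-style fields; the thread's exact schema may differ):

```python
import json

# One instruction-tuning record: instruction, optional input, expected output.
record = {
    "instruction": "Summarize the following paragraph in one sentence.",
    "input": "KV caching stores past key/value tensors during decoding ...",
    "output": "KV caching reuses previously computed attention states to speed up generation.",
}

# Datasets are usually stored as JSON Lines: one record per line.
with open("instructions.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```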
Jun 22 • 11 tweets • 4 min read
Let's build a real-time Voice RAG Agent, step-by-step:
Before we begin, here's a quick demo of what we're building
Tech stack:
- @Cartesia_AI for SOTA text-to-speech
- @AssemblyAI for speech-to-text
- @LlamaIndex to power RAG
- @livekit for orchestration
Let's go! 🚀
Jun 21 • 11 tweets • 4 min read
Let's build an MCP-powered audio analysis toolkit:
Before we dive in, here's a demo of what we're building!
Tech stack:
- @AssemblyAI for transcription and audio analysis.
- Claude Desktop as the MCP host.
- @streamlit for the UI
Let's build it!
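As a taste of the transcription piece (separate from the MCP wiring the thread builds), here's a minimal sketch with the `assemblyai` Python SDK. The API key and file name are placeholders:

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_ASSEMBLYAI_KEY"        # placeholder

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("meeting.mp3")  # local file or URL

print(transcript.text)
```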
Jun 19 • 4 tweets • 2 min read
AI agents can finally talk to your frontend!
The AG-UI Protocol bridges the critical gap between AI agents and frontend apps, making human-agent collaboration seamless.
MCP: Agents to tools
A2A: Agents to agents
AG-UI: Agents to users
100% open-source.
Here's the official GitHub repo for @CopilotKit's AG-UI:
Model Context Protocol (MCP), clearly explained:
MCP is like a USB-C port for your AI applications.
Just as USB-C offers a standardized way to connect devices to various accessories, MCP standardizes how your AI apps connect to different data sources and tools.
Let's dive in! 🚀
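To make the "standardized port" idea concrete, here's a minimal MCP server exposing a single tool, sketched with the `FastMCP` helper from the official Python SDK (server name and tool are illustrative):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

Any MCP-compatible host (Cursor, Claude Desktop, etc.) can now discover and call this tool without custom glue code.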
Jun 11 • 11 tweets • 3 min read
Object-oriented programming in Python, clearly explained:
We break it down to 6 important concepts:
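A tiny sketch touching a few of the usual suspects (classes, encapsulation, inheritance, polymorphism); the thread's exact 6 concepts may be grouped differently:

```python
class Animal:                       # class & constructor
    def __init__(self, name: str):
        self._name = name           # encapsulation by convention (_ prefix)

    def speak(self) -> str:         # method
        return f"{self._name} makes a sound."

class Dog(Animal):                  # inheritance
    def speak(self) -> str:         # polymorphism via overriding
        return f"{self._name} barks."

print(Dog("Rex").speak())           # Rex barks.
```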
Self-attention in LLMs, clearly explained:
Before we start a quick primer on tokenization!
Raw text → Tokenization → Embedding → Model
Embedding is a meaningful representation of each token (roughly a word) as a vector of numbers.
This embedding is what we provide as an input to our language models.
Check this👇
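Here's a toy single-head self-attention sketch in NumPy over already-embedded tokens. It skips the learned projection matrices a real layer would apply and just shows the core softmax(QKᵀ/√d)·V step:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# 4 tokens, each already embedded as an 8-dim vector (toy numbers).
X = np.random.randn(4, 8)

# Use the embeddings directly as queries, keys, and values for simplicity
# (a real layer would apply learned projections W_q, W_k, W_v).
Q, K, V = X, X, X
scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity between tokens
weights = softmax(scores, axis=-1)        # attention weights, rows sum to 1
output = weights @ V                      # each token mixes in the others
print(output.shape)                       # (4, 8)
```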
Jun 3 • 12 tweets • 4 min read
Let's build an MCP-powered Agentic RAG (100% local):
Below, we have an MCP-powered Agentic RAG that searches a vector database and falls back to web search if needed.
To build this, we'll use:
- @firecrawl_dev search endpoint for web search.
- @qdrant_engine as the vector DB.
- @cursor_ai as the MCP client.
Let's build it!
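The core routing logic is simple. In this sketch, `search_vector_db` and `web_search` are hypothetical callables standing in for the Qdrant query and the Firecrawl search endpoint:

```python
from typing import Callable

def retrieve(
    query: str,
    search_vector_db: Callable[[str], list[tuple[str, float]]],  # Qdrant wrapper (hypothetical)
    web_search: Callable[[str], list[str]],                      # Firecrawl wrapper (hypothetical)
    min_hits: int = 1,
    min_score: float = 0.5,
) -> list[str]:
    hits = search_vector_db(query)                        # [(chunk, score), ...]
    good = [text for text, score in hits if score >= min_score]
    # Fall back to web search only when the vector DB has nothing useful.
    return good if len(good) >= min_hits else web_search(query)
```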
Jun 1 • 10 tweets • 4 min read
Function calling & MCP for LLMs, clearly explained (with visuals):
Before MCP became popular, AI workflows relied on traditional Function Calling for tool access. Now, MCP is standardizing it for Agents/LLMs.
The visual below explains how Function Calling and MCP work under the hood.
Let's learn more!
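For reference, here's what traditional function calling looks like with the `openai` client. The tool name, schema, and model are made up for illustration:

```python
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                 # illustrative tool
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,
)

# If the model decided to call the tool, the structured call shows up here:
print(resp.choices[0].message.tool_calls)
```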
May 30 • 12 tweets • 4 min read
Let's build an MCP server that connects to 200+ data sources (100% local):
Before we dive in, here's a quick demo of what we're building!
Tech stack:
- @MindsDB to power our unified MCP server
- @cursor_ai as the MCP host
- @Docker to self-host the server
Let's go! 🚀
May 29 • 11 tweets • 4 min read
KV caching in LLMs, clearly explained (with visuals):
KV caching is a technique used to speed up LLM inference.
Before understanding the internal details, look at the inference speed difference in the video:
- with KV caching → 9 seconds
- without KV caching → 42 seconds (~5x slower)
Let's dive in!
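You can see the effect yourself with a rough timing sketch in Hugging Face `transformers`. The model is a small placeholder and exact numbers will differ from the thread's video:

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # small placeholder model; any causal LM works
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("KV caching speeds up decoding because", return_tensors="pt")

def timed_generate(use_cache: bool) -> float:
    start = time.perf_counter()
    with torch.no_grad():
        model.generate(**inputs, max_new_tokens=100, use_cache=use_cache)
    return time.perf_counter() - start

print("with KV cache   :", timed_generate(True))
print("without KV cache:", timed_generate(False))
```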
May 27 • 14 tweets • 5 min read
Let's build an MCP-powered financial analyst (100% local):
Before we dive in, here's a quick demo of what we're building!
Tech stack:
- @crewAIInc for multi-agent orchestration
- @Ollama to locally serve DeepSeek-R1 LLM
- @cursor_ai as the MCP host
Let's go! 🚀
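A minimal sketch of one CrewAI agent wired to a local Ollama model. The model string, base URL, and task wording are assumptions, not the thread's exact setup:

```python
from crewai import Agent, Task, Crew, LLM

# Assumed local setup: Ollama serving deepseek-r1 on its default port.
llm = LLM(model="ollama/deepseek-r1", base_url="http://localhost:11434")

analyst = Agent(
    role="Financial analyst",
    goal="Summarize recent performance of a given stock.",
    backstory="You turn raw market data into short, clear insights.",
    llm=llm,
)

task = Task(
    description="Write a 3-bullet summary of how {ticker} performed this week.",
    expected_output="Three concise bullet points.",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[task])
print(crew.kickoff(inputs={"ticker": "AAPL"}))
```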
May 20 • 9 tweets • 3 min read
5 levels of Agentic AI systems, clearly explained (with visuals):
Agentic AI systems don't just generate text; they can make decisions, call functions, and even run autonomous workflows.
The visual explains 5 levels of AI agency—from simple responders to fully autonomous agents.