Akshay 🚀
Simplifying LLMs, AI Agents, RAGs and Machine Learning for you! • Co-founder @dailydoseofds_ • BITS Pilani • 3 Patents • ex-AI Engineer @ LightningAI
Jul 3 10 tweets 4 min read
7 MCP projects for AI Engineers (with video tutorials): 1️⃣ MCP meets Ollama

An MCP client is the component inside an AI app (like Cursor) that connects to MCP servers, which expose external tools.

Learn how to build it 100% locally.

Full walkthrough:
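
Here's a rough sketch of what that client can look like, using the official mcp Python SDK and the ollama package. The server.py path, model tag, and prompt are placeholder assumptions, not the exact code from the video.

```python
# Minimal sketch: an MCP client that exposes a local server's tools to Ollama.
# Assumes a local MCP server script at ./server.py and a pulled llama3.2 model.
import asyncio
import ollama
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the server's tools and convert them to Ollama's schema.
            tools = (await session.list_tools()).tools
            ollama_tools = [{
                "type": "function",
                "function": {
                    "name": t.name,
                    "description": t.description,
                    "parameters": t.inputSchema,
                },
            } for t in tools]
            response = ollama.chat(
                model="llama3.2",
                messages=[{"role": "user", "content": "What's in data.csv?"}],
                tools=ollama_tools,
            )
            # If the model picked a tool, execute it through the MCP session.
            for call in response.message.tool_calls or []:
                result = await session.call_tool(call.function.name,
                                                 arguments=call.function.arguments)
                print(result.content)

asyncio.run(main())
```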
Jun 28 9 tweets 5 min read
I have tested 100+ MCP servers in the last 3 months!

Here are 6 must-use MCP servers for all developers (open-source): 1️⃣ Graphiti MCP server

Agents forget everything after each task.

Graphiti MCP server lets Agents build and query temporally-aware knowledge graphs, which act as an Agent's memory!

Check this👇
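
For a feel of what the server wraps, here's a heavily hedged sketch based on graphiti-core's quickstart (the underlying library). It assumes a local Neo4j instance and an LLM key configured for entity extraction; all names and credentials are placeholders.

```python
# Hedged sketch: store an "episode" in the graph, then search it later as memory.
import asyncio
from datetime import datetime, timezone
from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

async def main():
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
    await graphiti.build_indices_and_constraints()
    await graphiti.add_episode(
        name="support_chat_01",
        episode_body="The user prefers weekly email summaries.",
        source=EpisodeType.text,
        source_description="chat log",
        reference_time=datetime.now(timezone.utc),
    )
    # Later, the agent recalls this as memory.
    results = await graphiti.search("What does the user prefer?")
    print([edge.fact for edge in results])

asyncio.run(main())
```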
Jun 25 12 tweets 4 min read
Let's generate our own LLM fine-tuning dataset (100% local): Before we begin, here's what we're doing today!

We'll cover:
- What is instruction fine-tuning?
- Why is it important for LLMs?

Finally, we'll create our own instruction fine-tuning dataset.

Let's dive in!
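
As a taste of the end result, here's a minimal, hedged sketch that generates instruction/response pairs with a local Ollama model and writes them as JSONL. The model tag, topics, and record schema are illustrative assumptions.

```python
# Rough sketch: generate instruction/response pairs locally with Ollama
# and save them as JSONL for fine-tuning.
import json
import ollama

topics = ["Python decorators", "SQL joins", "Docker volumes"]  # placeholder topics
with open("instruct_dataset.jsonl", "w") as f:
    for topic in topics:
        instruction = f"Explain {topic} with a short example."
        answer = ollama.chat(
            model="llama3.2",
            messages=[{"role": "user", "content": instruction}],
        ).message.content
        record = {"instruction": instruction, "input": "", "output": answer}
        f.write(json.dumps(record) + "\n")
```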
Jun 22 11 tweets 4 min read
Let's build a real-time Voice RAG Agent, step-by-step: Before we begin, here's a quick demo of what we're building

Tech stack:

- @Cartesia_AI for SOTA text-to-speech
- @AssemblyAI for speech-to-text
- @LlamaIndex to power RAG
- @livekit for orchestration

Let's go! 🚀
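
To keep it concrete, here's just the RAG piece as a minimal LlamaIndex sketch; the docs/ folder and query are placeholders, and the voice layer (AssemblyAI STT, Cartesia TTS, LiveKit transport) wraps around this. It assumes an LLM and embedding model are configured (LlamaIndex defaults to OpenAI).

```python
# Sketch of the retrieval core: index local documents, then answer queries.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("docs/").load_data()   # your knowledge base
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# The agent transcribes the user's speech, answers from the index,
# then speaks the response back over LiveKit.
print(query_engine.query("What does the refund policy say?"))
```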
Jun 21 11 tweets 4 min read
Let's build an MCP-powered audio analysis toolkit: Before we dive in, here's a demo of what we're building!

Tech stack:
- @AssemblyAI for transcription and audio analysis.
- Claude Desktop as the MCP host.
- @streamlit for the UI

Let's build it!
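
Here's a minimal sketch of the server side: one AssemblyAI transcription tool exposed over MCP with the official SDK's FastMCP helper. The API key, server name, and tool are illustrative assumptions.

```python
# Minimal MCP server exposing an AssemblyAI transcription tool
# that Claude Desktop (the MCP host) can call.
import assemblyai as aai
from mcp.server.fastmcp import FastMCP

aai.settings.api_key = "YOUR_ASSEMBLYAI_KEY"   # placeholder
mcp = FastMCP("audio-toolkit")

@mcp.tool()
def transcribe(file_path: str) -> str:
    """Transcribe a local audio file and return the text."""
    transcript = aai.Transcriber().transcribe(file_path)
    return transcript.text

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; register this command in Claude Desktop
```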
Jun 19 4 tweets 2 min read
AI agents can finally talk to your frontend!

The AG-UI Protocol bridges the critical gap between AI agents and frontend apps, making human-agent collaboration seamless.

MCP: Agents to tools
A2A: Agents to agents
AG-UI: Agents to users

100% open-source. Here's the official GitHub repo for @CopilotKit's AG-UI:

(don't forget to star 🌟) github.com/ag-ui-protocol…
Jun 16 6 tweets 2 min read
Top 4 open-source LLM finetuning libraries!

From single-GPU “click-to-tune” notebooks to trillion-param clusters, these four libraries cover every LLM finetuning scenario.

Understand which one to use, & when...👇 1️⃣ Unsloth

Unsloth makes fine-tuning easy and fast, turning a mid-range GPU into a powerhouse with a simple Colab or Kaggle notebook.

Perfect for hackers and small teams on 12–24 GB GPUs who need quick LoRA experiments without DeepSpeed configs or clusters.

Check this out👇
github.com/unslothai/unsl…
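
A hedged sketch of what a quick Unsloth LoRA setup looks like on a single GPU; the model name, rank, and target modules here are placeholder choices, not a prescribed recipe.

```python
# Load a 4-bit base model and attach LoRA adapters with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # placeholder model
    max_seq_length=2048,
    load_in_4bit=True,          # fits mid-range GPUs
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                       # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# From here, train with trl's SFTTrainer on your instruction dataset.
```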
Jun 15 14 tweets 4 min read
12 powerful tools for your AI Agents!

Here's a breakdown of what each does...👇 1️⃣ FileReadTool

This tool instantly pulls data from the local file system.

Read more👇
docs.crewai.com/tools/file-doc…
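
A minimal sketch of wiring it into an agent (the role/goal text and file path are placeholders):

```python
# Give a CrewAI agent read access to a local file via FileReadTool.
from crewai import Agent
from crewai_tools import FileReadTool

file_tool = FileReadTool(file_path="reports/q2_sales.csv")  # placeholder path

analyst = Agent(
    role="Data Analyst",
    goal="Summarize the quarterly sales report",
    backstory="You turn raw CSV files into short business summaries.",
    tools=[file_tool],
)
```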
Jun 13 11 tweets 3 min read
Model Context Protocol (MCP), clearly explained: MCP is like a USB-C port for your AI applications.

Just as USB-C offers a standardized way to connect devices to various accessories, MCP standardizes how your AI apps connect to different data sources and tools.

Let's dive in! 🚀
Jun 11 11 tweets 3 min read
Object-oriented programming in Python, clearly explained: We break it down into 6 important concepts:

- Object 🚘
- Class 🏗️
- Inheritance 🧬
- Encapsulation 🔐
- Abstraction 🎭
- Polymorphism 🌀

Let's take them one-by-one... 🚀
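
Here's a compact example that touches all six ideas in one place:

```python
from abc import ABC, abstractmethod

class Vehicle(ABC):                      # Class + Abstraction
    def __init__(self, brand):
        self._brand = brand              # Encapsulation: internal attribute

    @abstractmethod
    def drive(self):                     # abstract interface, no implementation
        ...

class Car(Vehicle):                      # Inheritance
    def drive(self):
        return f"{self._brand} car drives on roads"

class Boat(Vehicle):
    def drive(self):
        return f"{self._brand} boat sails on water"

for v in (Car("Tesla"), Boat("Yamaha")): # Objects
    print(v.drive())                     # Polymorphism: same call, different behavior
```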
Jun 5 9 tweets 3 min read
Self-attention in LLMs, clearly explained: Before we start, a quick primer on tokenization!

Raw text → Tokenization → Embedding → Model

An embedding is a meaningful representation of each token (roughly a word) as a bunch of numbers.

This embedding is what we provide as an input to our language models.

Check this👇
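
To make the next steps concrete, here's a tiny NumPy sketch of scaled dot-product self-attention over 3 tokens with random weights, purely for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

X = np.random.randn(3, 4)                 # token embeddings (seq_len x d_model)
Wq, Wk, Wv = (np.random.randn(4, 4) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product
weights = softmax(scores)                 # each row sums to 1
output = weights @ V                      # context-aware token representations
print(output.shape)                       # (3, 4)
```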
Jun 3 12 tweets 4 min read
Let's build an MCP-powered Agentic RAG (100% local): Below, we have an MCP-powered Agentic RAG that searches a vector database and falls back to web search if needed.

To build this, we'll use:
- @firecrawl_dev search endpoint for web search.
- @qdrant_engine as the vector DB.
- @cursor_ai as the MCP client.

Let's build it!
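
Here's a hedged sketch of the core fallback logic such a server can expose. It assumes the docs were ingested with qdrant-client's fastembed helper (client.add); the relevance threshold and Firecrawl search call are illustrative assumptions.

```python
# Query the Qdrant collection first, fall back to Firecrawl web search.
from qdrant_client import QdrantClient
from firecrawl import FirecrawlApp

qdrant = QdrantClient("localhost", port=6333)
firecrawl = FirecrawlApp(api_key="YOUR_FIRECRAWL_KEY")   # placeholder

def retrieve(query: str) -> str:
    hits = qdrant.query(collection_name="docs", query_text=query, limit=3)
    if hits and hits[0].score > 0.5:        # relevance threshold is an assumption
        return "\n".join(h.document for h in hits)
    return str(firecrawl.search(query))     # fall back to the web
```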
Jun 1 10 tweets 4 min read
Function calling & MCP for LLMs, clearly explained (with visuals): Before MCPs became popular, AI workflows relied on traditional Function Calling for tool access. Now, MCP is standardizing it for Agents/LLMs.

The visual below explains how Function Calling and MCP work under the hood.

Let's learn more!
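
For the Function Calling side, here's a minimal sketch with a local model via the ollama package (recent versions accept a plain Python function as a tool). The weather tool and model tag are placeholders.

```python
# The app owns the tool; the model only decides when to call it.
import ollama

def get_weather(city: str) -> str:
    """Return the current weather for a city (stand-in for a real API call)."""
    return f"22°C and sunny in {city}"

messages = [{"role": "user", "content": "What's the weather in Pune?"}]
response = ollama.chat(model="llama3.2", messages=messages, tools=[get_weather])

# Execute whatever the model asked for, then hand the result back.
for call in response.message.tool_calls or []:
    result = get_weather(**call.function.arguments)
    messages += [response.message,
                 {"role": "tool", "content": result, "name": call.function.name}]

final = ollama.chat(model="llama3.2", messages=messages)
print(final.message.content)
```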
May 30 12 tweets 4 min read
Let's build an MCP server that connects to 200+ data sources (100% local): Before we dive in, here's a quick demo of what we're building!

Tech stack:

- @MindsDB to power our unified MCP server
- @cursor_ai as the MCP host
- @Docker to self-host the server

Let's go! 🚀
May 29 11 tweets 4 min read
KV caching in LLMs, clearly explained (with visuals): KV caching is a technique used to speed up LLM inference.

Before understanding the internal details, look at the inference speed difference in the video:

- with KV caching → 9 seconds
- without KV caching → 42 seconds (~5x slower)

Let's dive in!
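
You can reproduce the effect yourself with Hugging Face transformers by toggling use_cache in generate(); a small model like GPT-2 is used here just for illustration, so the absolute numbers will differ from the video.

```python
# Time generation with and without the KV cache.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("KV caching speeds up decoding because", return_tensors="pt")

for use_cache in (True, False):
    start = time.time()
    with torch.no_grad():
        model.generate(**inputs, max_new_tokens=200, use_cache=use_cache)
    print(f"use_cache={use_cache}: {time.time() - start:.1f}s")
```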
May 27 14 tweets 5 min read
Let's build an MCP-powered financial analyst (100% local): Before we dive in, here's a quick demo of what we're building!

Tech stack:

- @crewAIInc for multi-agent orchestration
- @Ollama to locally serve DeepSeek-R1 LLM
- @cursor_ai as the MCP host

Let's go! 🚀
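
Here's a hedged sketch of the agent setup: a CrewAI agent backed by a DeepSeek-R1 model served locally through Ollama. The model tag, role text, and task are placeholders, not the thread's exact code.

```python
from crewai import Agent, Task, Crew, LLM

local_llm = LLM(model="ollama/deepseek-r1", base_url="http://localhost:11434")

analyst = Agent(
    role="Financial Analyst",
    goal="Analyze a stock's recent performance and summarize key trends",
    backstory="A cautious analyst who explains findings in plain language.",
    llm=local_llm,
)
task = Task(
    description="Analyze Tesla's last quarter and list 3 takeaways.",
    expected_output="A short bullet-point summary.",
    agent=analyst,
)
print(Crew(agents=[analyst], tasks=[task]).kickoff())
```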
May 20 9 tweets 3 min read
5 levels of Agentic AI systems, clearly explained (with visuals): Agentic AI systems don't just generate text; they can make decisions, call functions, and even run autonomous workflows.

The visual explains 5 levels of AI agency—from simple responders to fully autonomous agents.

Let's dive in to learn more about them.
May 17 11 tweets 5 min read
9 MCP, LLM, and AI Agent cheat sheets for AI engineers (with visuals): 1️⃣ Model Context Protocol

MCP is like a USB-C port for your AI applications.

Just as USB-C standardizes device connections, MCP standardizes AI app connections to data sources and tools.

Here's my detailed thread about it👇
May 16 14 tweets 4 min read
Let's build an MCP-powered synthetic data generator (100% local): Today, we're building an MCP server that every data scientist will love to have.

Tech stack:

- @cursor_ai as the MCP host
- @datacebo's SDV to generate realistic tabular synthetic data

Let's go! 🚀
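
Here's a rough sketch of the SDV piece such a tool wraps: fit a synthesizer on a real table, then sample synthetic rows. The CSV path and row count are placeholders.

```python
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer

real_data = pd.read_csv("customers.csv")         # placeholder dataset

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real_data)        # infer column types

synth = GaussianCopulaSynthesizer(metadata)
synth.fit(real_data)
synthetic_data = synth.sample(num_rows=500)      # statistically similar, not real
print(synthetic_data.head())
```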
May 15 14 tweets 5 min read
Let's build a multi-agent book writer, powered by Qwen3 (100% local): Today, we are building an Agentic workflow that writes a 20k-word book from a 3-5 word book title.

Tech stack:
- @firecrawl_dev for web scraping.
- @crewAIInc for orchestration.
- @ollama to serve Qwen 3 locally.
- @LightningAI for development and hosting

Let's go! 🚀
May 9 7 tweets 3 min read
Traditional RAG vs. Agentic RAG, clearly explained (with visuals): Traditional RAG has many issues:

- It retrieves once and generates once. If the context isn't enough, it cannot dynamically search for more info.

- It cannot reason through complex queries.

- The system can't modify its strategy based on the problem.