Covers:
- Notations & general concepts
- Linear regression
- Generalised linear models
- Gaussian discriminant analysis
- Tree-based & ensemble methods
Check this👇
May 7 • 9 tweets • 3 min read
5 Agentic AI design patterns, clearly explained (with visuals):
Agentic behaviors allow LLMs to refine their output by incorporating self-evaluation, planning, and collaboration!
The visual depicts the 5 most popular design patterns for building AI Agents.
Let's understand them below!
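To make one of them concrete, here's a tiny sketch of the reflection pattern (draft → critique → revise). The model name, prompts, and `chat` helper are illustrative assumptions, not the thread's exact code:
```python
# Minimal sketch of the "reflection" agentic pattern:
# draft -> self-critique -> revise. Model and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

draft = chat("Write a short summary of how transformers work.")
critique = chat(f"Critique this summary for errors and omissions:\n\n{draft}")
final = chat(
    f"Rewrite the summary, addressing this critique:\n\n"
    f"Summary:\n{draft}\n\nCritique:\n{critique}"
)
print(final)
```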
May 6 • 12 tweets • 4 min read
Let's generate our own LLM fine-tuning dataset (100% local):
Before we begin, here's what we're doing today!
We'll cover:
- What is instruction fine-tuning?
- Why is it important for LLMs?
Finally, we'll create our own instruction fine-tuning dataset.
Let's dive in!
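To give a rough idea of the generation step, here's a minimal sketch that uses a local model via the ollama Python client to produce instruction/response pairs as JSONL. The model tag, topics, and schema are assumptions, not the thread's exact recipe:
```python
# Rough sketch of generating an instruction dataset with a local LLM.
# Model tag and prompt template are assumptions.
import json
import ollama

topics = ["pandas groupby", "Python decorators", "SQL joins"]

with open("instruct_dataset.jsonl", "w") as f:
    for topic in topics:
        instruction = f"Explain {topic} with a short example."
        reply = ollama.chat(
            model="llama3.1",  # assumed local model
            messages=[{"role": "user", "content": instruction}],
        )
        record = {"instruction": instruction,
                  "output": reply["message"]["content"]}
        f.write(json.dumps(record) + "\n")
```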
May 4 • 11 tweets • 3 min read
Let's fine-tune Qwen 3 (100% locally):
Before we begin, here's what we'll be doing.
We'll fine-tune our private and locally running Qwen 3.
To do this, we'll use:
- @UnslothAI for efficient fine-tuning.
- @huggingface transformers to run it locally.
Let's begin!
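For a feel of what that looks like, here's a minimal LoRA fine-tuning sketch in the usual Unsloth style. The checkpoint name, dataset, and hyperparameters are assumptions (and kwargs like dataset_text_field can vary by trl version), not the thread's exact notebook:
```python
# Minimal Unsloth-style LoRA fine-tuning sketch; names and settings are assumed.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-8B",  # assumed checkpoint id
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("json", data_files="instruct_dataset.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="output",  # adjust to your dataset / trl version
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="qwen3-lora",
    ),
)
trainer.train()
```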
May 2 • 10 tweets • 3 min read
Tool calling in LLMs, clearly explained (with code):
When generating text, the LLM may need to invoke external tools or APIs to perform tasks beyond its built-in capabilities.
This is known as tool calling, and it turns the AI into more of a coordinator.
Let's dive in!
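Here's the gist in code, using the OpenAI chat completions API. The weather tool and model name are illustrative assumptions:
```python
# Gist of tool calling: the model returns which tool to call and with
# what arguments; our code (the "coordinator") actually runs it.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```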
May 1 • 14 tweets • 4 min read
Let's compare Qwen 3 and DeepSeek-R1 on RAG (100% local):
Today, we're building a Streamlit app to compare Alibaba's latest Qwen 3 against DeepSeek-R1 using RAG.
Here's our tool stack:
- @Llama_Index workflows for orchestration.
- @Cometml Opik for evaluation.
- @ollama to serve both LLMs locally.
Let's begin!
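Just to show the serving piece, here's a sketch that sends the same RAG-style prompt to both models through the ollama Python client. The model tags are assumptions, and the real app adds @Llama_Index workflows and Opik on top:
```python
# Querying both LLMs locally via Ollama with the same RAG prompt.
# Model tags are assumptions; retrieval, orchestration, and evaluation
# are omitted here.
import ollama

question = "What does the uploaded report say about Q3 revenue?"
context = "...retrieved chunks from the vector store..."  # placeholder
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

for tag in ["qwen3", "deepseek-r1"]:  # assumed Ollama model tags
    reply = ollama.chat(model=tag, messages=[{"role": "user", "content": prompt}])
    print(f"--- {tag} ---\n{reply['message']['content']}\n")
```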
Apr 29 • 12 tweets • 4 min read
Let's build our own MCP-powered ChatGPT app:
Before diving in, here's what we are doing!
We'll add 8 MCPs to Claude to get ChatGPT-like capabilities:
- @GetzepAI for memory
- @firecrawl for scraping
- @Stagehanddev for browser access
- @trychroma for vector DB
- CLI, GitHub, Jupyter, and Python MCPs.
See this👇
Apr 26 • 8 tweets • 4 min read
I've been coding in Python for 9 years now.
If I were to start over today, here's a complete roadmap:
1️⃣ Python bootcamp by @freeCodeCamp
A 4-hour Python bootcamp with over 46M views! It covers:
- Installing Python
- Setting up an IDE
- Basic Syntax
- Variables & Datatypes
- Looping in Python
- Exception handling
- Modules & pip
- Mini hands-on projects
Check this out👇
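For a quick taste of those topics, here's a tiny illustrative snippet that combines loops, exception handling, and a pip-installed module:
```python
# Loops, exception handling, and a third-party module in one glance.
import requests  # installed with: pip install requests

urls = ["https://example.com", "https://not-a-real-site.invalid"]

for url in urls:                      # looping
    try:                              # exception handling
        status = requests.get(url, timeout=5).status_code
        print(f"{url} -> {status}")
    except requests.RequestException as err:
        print(f"{url} failed: {err}")
```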
Apr 25 • 15 tweets • 5 min read
90% of Python programmers don't know these 11 ways to declare type hints:
Type hints are incredibly valuable for improving code quality and maintainability.
Today, I'll walk you through these 11 ways to declare type hints in just two minutes.
Let's begin! 🚀
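Here's a flavor of a few of them (illustrative, not the full list of 11):
```python
# A handful of common ways to declare type hints (Python 3.10+ for the | syntax).
from typing import Optional, Union, Callable, TypedDict

def greet(name: str, excited: bool = False) -> str:   # basic annotations
    return f"Hello, {name}{'!' if excited else '.'}"

scores: dict[str, list[int]] = {"alice": [90, 85]}     # built-in generics
maybe_id: Optional[int] = None                         # may be None
num: Union[int, float] = 3.14                          # one of several types
price: int | float = 10                                # PEP 604 union syntax
on_done: Callable[[str], None] = print                 # callables

class User(TypedDict):                                 # typed dictionaries
    name: str
    age: int
```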
Apr 22 • 9 tweets • 3 min read
Temperature in LLMs, clearly explained (with code):
Let's prompt OpenAI GPT-3.5 with a low temperature value twice.
It produces identical responses from the LLM.
Check the response below👇
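A sketch of the experiment, assuming the standard OpenAI Python client and a GPT-3.5 model name:
```python
# Same prompt, low temperature, called twice -> (near-)identical output.
from openai import OpenAI

client = OpenAI()

def ask(temp: float) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=temp,
        messages=[{"role": "user", "content": "Give me a tagline for a coffee shop."}],
    )
    return resp.choices[0].message.content

print(ask(0.0))   # low temperature: (near-)deterministic, repeats itself
print(ask(0.0))
print(ask(1.5))   # high temperature: much more varied output
```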
Apr 20 • 10 tweets • 3 min read
Function calling & MCP for LLMs, clearly explained (with visuals):
Before MCPs became popular, AI workflows relied on traditional Function Calling for tool access. Now, MCP is standardizing it for Agents/LLMs.
The visual below explains how Function Calling and MCP work under the hood.
Let's learn more!
Apr 19 • 12 tweets • 4 min read
10 AWESOME (and must-use) libraries for Python programmers:
1️⃣ Cleanlab
You're missing out on a lot if you haven't started using Cleanlab yet!
Cleanlab helps you clean data and labels by automatically detecting issues in any dataset.
Check this👇
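A minimal sketch of the typical Cleanlab workflow: hand it your labels plus out-of-sample predicted probabilities and it flags likely label errors. The toy data and model below are stand-ins, not the thread's example:
```python
# Detect likely label errors with cleanlab's find_label_issues.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.datasets import make_classification
from cleanlab.filter import find_label_issues

X, y = make_classification(n_samples=500, n_classes=3,
                           n_informative=5, random_state=0)
y_noisy = y.copy()
y_noisy[:20] = (y_noisy[:20] + 1) % 3          # inject some wrong labels

# Out-of-sample predicted probabilities via cross-validation.
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=1000), X, y_noisy, cv=5, method="predict_proba"
)

issues = find_label_issues(labels=y_noisy, pred_probs=pred_probs,
                           return_indices_ranked_by="self_confidence")
print(f"Flagged {len(issues)} potentially mislabeled rows:", issues[:10])
```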
Apr 14 • 7 tweets • 2 min read
5 open-source MCP servers that will give superpowers to your AI Agents:
1️⃣ Stagehand MCP Server
This MCP server (by @browserbasehq) provides web automation capabilities to AI Agents.
Combining Stagehand with Claude delivers an OpenAI Operator alternative that’s more controlled and reliable.
Fully open-source!
Apr 13 • 12 tweets • 6 min read
10 MCP, AI Agents, and RAG projects for AI Engineers (with code):
1️⃣ MCP-powered Agentic RAG
In this project, you'll learn how to create an MCP-powered Agentic RAG that searches a vector database and falls back to web search if needed.
Let's build an MCP-powered Agentic RAG (100% local):
To build this, we'll use:
- Bright Data to scrape the web at scale.
- @qdrant_engine as the vector DB.
- @cursor_ai as the MCP client.
Let's build it!
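The core routing idea looks roughly like this. The two helpers are stubs standing in for the Qdrant and Bright Data MCP tools, and the threshold is an assumption:
```python
# Answer from the vector DB when retrieval looks confident,
# otherwise fall back to web search.
RELEVANCE_THRESHOLD = 0.75  # assumed cut-off

def search_qdrant(query: str, top_k: int = 3) -> list[tuple[str, float]]:
    # stub: the real app queries a Qdrant collection here
    return [("Qdrant is an open-source vector database.", 0.82)]

def web_search(query: str) -> str:
    # stub: the real app calls the web-scraping MCP tool here
    return f"(web results for: {query})"

def retrieve_context(query: str) -> str:
    hits = search_qdrant(query)
    if hits and hits[0][1] >= RELEVANCE_THRESHOLD:
        return "\n".join(text for text, _ in hits)   # local knowledge is enough
    return web_search(query)                          # otherwise hit the web

print(retrieve_context("What is Qdrant?"))
```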
Apr 7 • 11 tweets • 3 min read
Let's build a RAG app with Meta's latest Llama 4:
Meta just released multilingual and multimodal open-source LLMs.
Today, we're building a RAG app powered by @Meta's Llama 4.
Tech stack:
- @Llama_Index for orchestration.
- @CerebrasSystems for blazing-fast Llama 4 inference.
- @Streamlit for the UI.
Let's build it!
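A compact sketch of what that stack can look like with LlamaIndex. The Cerebras model id and file paths are assumptions, and the Streamlit UI is omitted:
```python
# Compact LlamaIndex RAG sketch; model id and paths are assumptions.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.cerebras import Cerebras  # pip install llama-index-llms-cerebras

Settings.llm = Cerebras(model="llama-4-scout-17b-16e-instruct")  # assumed model id
# Note: embeddings default to OpenAI unless you set Settings.embed_model.

docs = SimpleDirectoryReader("data").load_data()   # your documents folder
index = VectorStoreIndex.from_documents(docs)
query_engine = index.as_query_engine()

print(query_engine.query("Summarize the key points of these documents."))
```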
Mar 28 • 11 tweets • 5 min read
9 RAG, LLM, and AI Agent cheat sheets for AI engineers (with visuals):
1️⃣ Transformer vs. Mixture of Experts in LLMs
Mixture of Experts (MoE) is a popular architecture that replaces the Transformer's dense feed-forward layers with multiple "expert" networks and routes each token to only a few of them.
The visual below explains how it differs from a standard Transformer.
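As a code-level intuition to go with the visual, here's a toy MoE layer: a router picks the top-k expert MLPs per token instead of one dense feed-forward block. Dimensions and top_k are arbitrary, and real MoE layers add load-balancing tricks:
```python
# Toy Mixture-of-Experts layer with top-k routing (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=4, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                        # x: (tokens, d_model)
        scores = F.softmax(self.router(x), dim=-1)
        weights, idx = scores.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):              # only top-k experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

print(MoELayer()(torch.randn(10, 64)).shape)     # torch.Size([10, 64])
```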
5 MCP servers that will give superpowers to your AI Agents:
Integrating a tool/API with Agents demands:
- reading docs
- writing code
- updating the code, etc.
To simplify this, platforms now offer MCP servers. Developers can plug them in to let Agents use their APIs instantly.
Below, let's look at 5 incredibly powerful MCP servers.
Mar 25 • 14 tweets • 5 min read
Let's build an MCP server (100% locally):
Before diving in, here's what we'll be doing:
- Understand MCP with a simple analogy.
- Build a local MCP server and interact with it via @cursor_ai IDE.
- Integrate @firecrawl_dev's MCP server and interact with its tools (shown in the video).
Let's dive in 🚀!
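For a sense of scale, a local MCP server can be as small as this, in the style of the official Python SDK's FastMCP quickstart (the add tool is just an illustrative example):
```python
# server.py — a tiny local MCP server exposing one tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()   # serves over stdio so an MCP client (e.g. Cursor) can connect
```
You'd then point your MCP client (e.g. Cursor's MCP settings) at this script.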
Mar 21 • 9 tweets • 3 min read
5 levels of Agentic AI systems, clearly explained (with visuals):
Agentic AI systems don't just generate text; they can make decisions, call functions, and even run autonomous workflows.
The visual explains 5 levels of AI agency—from simple responders to fully autonomous agents.
Let's dive in to learn more about them.
Mar 15 • 9 tweets • 3 min read
Let's fine-tune DeepMind's latest Gemma 3 (100% locally):
Before we begin, here's what we'll be doing.
We'll fine-tune our private and locally running Gemma 3.
To do this, we'll use:
- @UnslothAI for efficient fine-tuning.
- @ollama to run it locally.
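And the usual hand-off to Ollama afterwards looks roughly like this, continuing from a fine-tuning run like the Qwen 3 sketch above (so model and tokenizer already exist); paths and quantization are assumptions:
```python
# Export the fine-tuned model to GGUF with Unsloth's helper, then register
# it with Ollama. Names, paths, and quantization are illustrative.
model.save_pretrained_gguf("gemma3-finetuned", tokenizer,
                           quantization_method="q4_k_m")
# Then, in a shell (use the actual .gguf filename the export produces):
#   echo 'FROM ./gemma3-finetuned/model.gguf' > Modelfile
#   ollama create gemma3-finetuned -f Modelfile
#   ollama run gemma3-finetuned
```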