Let's build an MCP-powered multi-agent deep researcher (100% local):
Before we dive in, here's a quick demo of what we're building!
Tech stack:
- @Linkup_platform for deep web research
- @crewAIInc for multi-agent orchestration
- @Ollama to locally serve DeepSeek
- @cursor_ai as MCP host
Let's go!
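As a rough sketch of the Ollama piece: a locally served model exposes an HTTP API on port 11434, and the helper below posts a prompt to its `/api/generate` endpoint. The model tag `deepseek-r1` is an assumption here — use whichever tag you actually pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming generation request for Ollama's /api/generate."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local_model(prompt: str, model: str = "deepseek-r1") -> str:
    """Send the prompt to the locally served model and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the Ollama daemon running, `ask_local_model("hello")` returns the model's reply as a string.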
System overview:
- User submits a query
- Web search agent runs deep web search via Linkup
- Research analyst verifies and deduplicates results
- Technical writer crafts a coherent response with citations
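The flow above can be sketched as three plain-Python stages. This mirrors the agent roles only — it is not the actual crewAI API, and `fake_search` is a stand-in for the Linkup deep-search call:

```python
def web_search(query, search_fn):
    """Web search agent: run a deep search and return raw results."""
    return search_fn(query)

def analyze(results):
    """Research analyst: verify and deduplicate results, keeping source order."""
    seen, verified = set(), []
    for r in results:
        if r["url"] not in seen and r.get("snippet"):  # crude 'verification'
            seen.add(r["url"])
            verified.append(r)
    return verified

def write_report(query, results):
    """Technical writer: compose a coherent response with numbered citations."""
    lines = [f"Answer to: {query}"]
    for i, r in enumerate(results, 1):
        lines.append(f"{r['snippet']} [{i}]")
    lines += [f"[{i}] {r['url']}" for i, r in enumerate(results, 1)]
    return "\n".join(lines)

def fake_search(query):
    """Stand-in for the Linkup call; note the duplicate hit."""
    return [
        {"url": "https://a.example", "snippet": "Fact A."},
        {"url": "https://a.example", "snippet": "Fact A."},
        {"url": "https://b.example", "snippet": "Fact B."},
    ]

query = "what is MCP?"
report = write_report(query, analyze(web_search(query, fake_search)))
```

The analyst stage drops the duplicate, so each fact appears once in the final report with its citation.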
Traditional RAG vs. Graph RAG, clearly explained (with visuals):
Top-k retrieval in RAG rarely works.
Imagine you want to summarize a biography where each chapter details a specific accomplishment of an individual.
Traditional RAG struggles here because it retrieves only the top-k chunks, while the summary needs context from the entire document.
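To make the failure concrete, here is a toy top-k retriever (the chapters and the word-overlap scoring are invented for illustration). Whichever chunks score best, only k of them ever reach the LLM — with k=2, half the biography is silently dropped:

```python
def score(query: str, chunk: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

chapters = [
    "Chapter 1: she founded a research lab",
    "Chapter 2: she proved a landmark theorem",
    "Chapter 3: she mentored a generation of students",
    "Chapter 4: she won the field's top prize",
]

query = "what did she accomplish"
# Top-k retrieval keeps only the k best-scoring chunks.
top_k = sorted(chapters, key=lambda c: score(query, c), reverse=True)[:2]
# Only 2 of 4 chapters reach the LLM; the summary misses the rest.
```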
Graph RAG solves this by:
- Building a graph with entities and relationships from docs.
- Traversing the graph for context retrieval.
- Sending the entire context to the LLM for a response.
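A minimal sketch of those three steps, with a hand-built entity graph and a breadth-first traversal standing in for a real extraction pipeline (entities and relations here are made up):

```python
from collections import deque

# Step 1: a toy graph extracted from docs, entity -> [(relation, entity), ...].
graph = {
    "Ada": [("founded", "Lab"), ("proved", "Theorem"), ("won", "Prize")],
    "Lab": [("trains", "Students")],
    "Theorem": [],
    "Prize": [],
    "Students": [],
}

def traverse(start: str) -> list[str]:
    """Step 2: BFS from the query entity, collecting every relationship as a fact."""
    facts, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        for relation, target in graph.get(node, []):
            facts.append(f"{node} {relation} {target}")
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return facts

# Step 3: the entire traversed context goes to the LLM in one prompt.
context = "\n".join(traverse("Ada"))
prompt = f"Context:\n{context}\n\nQuestion: summarize Ada's accomplishments"
```

Unlike top-k retrieval, the traversal reaches second-hop facts ("Lab trains Students") that no single chunk-similarity lookup would surface.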
The visual shows how it's different from naive RAG: