Let's build a mini-ChatGPT that's powered by DeepSeek-R1 (100% local):
Here's a mini-ChatGPT app that runs locally on your computer. You can chat with it just like you would with ChatGPT.
We use:
- @DeepSeek_AI R1 as the LLM
- @Ollama to locally serve R1
- @chainlit_io for the UI
Let's build it!
We begin with the import statements and define the start_chat method.
It is invoked as soon as a new chat session starts.
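The thread's actual code is in the screenshots, but here's a minimal sketch of the idea: in Chainlit, a function decorated with `@cl.on_chat_start` runs once per new session. Below, a plain stub function (names are illustrative) seeds the per-session interaction history the same way:

```python
# Simplified sketch: in the real app this function carries the
# @cl.on_chat_start decorator and stores the history via cl.user_session.
def start_chat(session: dict) -> None:
    # Each session keeps its own list of {"role", "content"} messages.
    session["interaction"] = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]

session = {}
start_chat(session)  # invoked as soon as a new chat session starts
```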
Next, we define another method which will be invoked to generate a response from the LLM:
• The user inputs a prompt.
• We add it to the interaction history.
• We generate a response from the LLM.
• We store the LLM response in the interaction history.
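The four steps above can be sketched as follows, with the model call stubbed out. In the real app, the stub would be replaced by a call to the locally served model through the Ollama Python client (something like `ollama.chat(model="deepseek-r1", messages=history)`):

```python
def fake_llm(messages):
    # Stand-in for DeepSeek-R1 served via Ollama.
    last = messages[-1]["content"]
    return {"message": {"role": "assistant", "content": f"Echo: {last}"}}

def generate_response(history, prompt, llm=fake_llm):
    history.append({"role": "user", "content": prompt})  # add prompt to history
    reply = llm(history)["message"]                      # generate a response
    history.append(reply)                                # store it in history
    return reply["content"]

history = []
answer = generate_response(history, "Hello!")  # -> "Echo: Hello!"
```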
Finally, we define the main method and run the app as follows:
Done!
This launches our 100% locally running mini-ChatGPT that is powered by DeepSeek-R1.
That's a wrap!
If you enjoyed this tutorial:
Find me → @_avichawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAG.
• • •
Decorators in Python, clearly explained (with code):
Decorators are one of the most powerful features of Python!
However, understanding them can be a bit overwhelming!
Today, let's understand how decorators work!
Before we jump into decorators, we must understand that functions in Python are "first-class" objects!
This means, just like integers or strings, a function can be:
- passed around as an argument
- used in expressions
- returned as values from other functions
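A tiny example of all three ideas at once. Because functions can be passed in and returned, one function can wrap another, and that's all a decorator is:

```python
def shout(func):
    # Takes a function as an argument and returns a new function.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout                      # equivalent to: greet = shout(greet)
def greet(name):
    return f"hello, {name}"
```

Calling `greet("ada")` now runs the wrapper and returns the upper-cased result.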
Traditional RAG vs. Graph RAG, clearly explained (with visuals):
Traditional RAG has a key limitation: it can only reason over the chunks it retrieves.
Imagine you want to summarize a biography, and each chapter of the document covers a specific accomplishment of a person (P).
This is difficult with traditional RAG since it only retrieves the top-k relevant chunks, but this task needs full context.
Graph RAG solves this.
The following visual depicts how it differs from naive RAG.
The core idea is to:
- Create a graph (entities & relationships) from documents.
- Traverse the graph during retrieval to fetch context.
- Pass the context to the LLM to get a response.
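Here's a deliberately minimal, stdlib-only sketch of that idea (real Graph RAG systems use an LLM to extract entities and relationships, and a proper graph store; the facts and names below are illustrative):

```python
from collections import defaultdict

# Adjacency list: entity -> list of (relation, other_entity) edges.
graph = defaultdict(list)

def add_fact(entity, relation, other):
    graph[entity].append((relation, other))

# "Indexing": facts extracted from each chapter of the biography of P.
add_fact("P", "founded", "Company X")
add_fact("P", "won", "Award Y")

def retrieve_context(entity):
    # "Retrieval": traverse the graph from the queried entity and collect
    # every connected fact, so the LLM sees the full picture, not top-k chunks.
    return [f"{entity} {rel} {other}" for rel, other in graph[entity]]

context = retrieve_context("P")  # passed to the LLM to generate the summary
```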
Let's build a Multimodal RAG with DeepSeek's latest Janus-Pro (100% local):
The video depicts a multimodal RAG running locally on your computer.
We use:
- Colpali to understand and embed docs using vision capabilities.
- @qdrant_engine as the vector database.
- @deepseek_ai's latest Janus-Pro multimodal LLM to generate a response.
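Before the real build, here's how the pieces fit, with every model stubbed out: in the actual app, Colpali produces the page embeddings, Qdrant stores and searches them, and Janus-Pro generates the answer. This sketch just demonstrates the retrieval step with toy 2-D vectors:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "Indexing": one (stub) embedding per PDF page.
index = {
    "page-1 (architecture diagram)": [1.0, 0.0],
    "page-2 (results table)": [0.0, 1.0],
}

def retrieve(query_vec, k=1):
    # Rank pages by similarity to the (stub) query embedding.
    ranked = sorted(index, key=lambda p: cosine(query_vec, index[p]), reverse=True)
    return ranked[:k]

top = retrieve([0.9, 0.1])  # the retrieved pages go to the multimodal LLM
```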
Let's build it!
0) Data
I used this complex multimodal PDF here.
It has several complex diagrams, text within visualizations, and tables—perfect for multimodal RAG.