Akshay 🚀
Feb 15 · 10 tweets · 3 min read
Let's fine-tune DeepSeek-R1 (distilled Llama) 100% locally:
Before we begin, here’s what we’ll be doing:

We’ll fine-tune a private, locally running DeepSeek-R1 model (a distilled Llama variant).

Tech stack:

- @UnslothAI for efficient fine-tuning.
- @Ollama to run it locally.

Let’s go! 🚀
1️⃣ Load the model

We begin by loading the DeepSeek-R1 distilled Llama-8B model and its tokenizer using Unsloth:
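
The original code was shared as an image; here's a minimal sketch of what this step typically looks like, assuming Unsloth's DeepSeek-R1-Distill-Llama-8B checkpoint and 4-bit loading:

```python
from unsloth import FastLanguageModel

# Load the DeepSeek-R1 distilled Llama-8B checkpoint in 4-bit,
# which keeps the memory footprint small enough for a single GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Llama-8B",
    max_seq_length=2048,  # assumed context length for training
    load_in_4bit=True,    # quantized loading for efficiency
)
```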
2️⃣ Define LoRA Config

Fine-tuning all of the model's weights would be far too expensive, so we use a parameter-efficient technique like LoRA, which trains only small low-rank adapter matrices on top of the frozen base model.

In this code, we set up Unsloth's PEFT wrapper by specifying:

- The model
- The LoRA rank (r)
- The modules to fine-tune
- A few more parameters (see the sketch below)
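
A sketch of a typical Unsloth LoRA setup; the rank, alpha, and target-module list below are common defaults, not necessarily the exact values from the original screenshot:

```python
# Wrap the base model with LoRA adapters; only these small
# low-rank matrices are trained, not the full model weights.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                # LoRA rank: adapter capacity (assumed)
    target_modules=[     # attention and MLP projection layers
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",  # cuts VRAM use during training
    random_state=3407,
)
```

A higher rank gives the adapters more capacity but increases the number of trainable parameters.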
3️⃣ Prepare dataset

Next, we use the Alpaca dataset to prepare a multi-turn conversation dataset.

The conversation_extension parameter defines the number of user messages in a single conversation.
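
A sketch of this step, assuming the vicgalle/alpaca-gpt4 dataset and Unsloth's to_sharegpt, standardize_sharegpt, and apply_chat_template helpers; the prompt template and conversation_extension value are illustrative:

```python
from datasets import load_dataset
from unsloth import to_sharegpt, standardize_sharegpt, apply_chat_template

# Load an Alpaca-style instruction dataset from the Hugging Face Hub.
dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")

# Merge the instruction/input columns into one prompt, then extend
# each sample into a multi-turn conversation; conversation_extension
# sets how many user messages end up in a single conversation.
dataset = to_sharegpt(
    dataset,
    merged_prompt="{instruction}[[\nYour input is:\n{input}]]",
    output_column_name="output",
    conversation_extension=3,  # assumed value
)
dataset = standardize_sharegpt(dataset)

# Render each conversation into a single "text" field for training.
chat_template = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{INPUT}

### Response:
{OUTPUT}"""

dataset = apply_chat_template(dataset, tokenizer=tokenizer, chat_template=chat_template)
```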
4️⃣ Define Trainer

Here, we create a Trainer object, specifying the training configuration: the model, the tokenizer, the learning rate, batch size, and more.

Check this out👇
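
A minimal sketch using TRL's SFTTrainer, as Unsloth's notebooks do; every hyperparameter below is an assumed demo value:

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # field created by apply_chat_template
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,  # effective batch size of 8
        warmup_steps=5,
        max_steps=60,                   # short demo run
        learning_rate=2e-4,
        logging_steps=1,                # log the loss at every step
        optim="adamw_8bit",             # memory-efficient optimizer
        output_dir="outputs",
    ),
)
```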
5️⃣ Train

With that done, we initiate training. A steadily decreasing loss tells us the model is fitting the new data well.

Check this code👇
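
A minimal sketch of the training call:

```python
# Kick off fine-tuning; the loss is printed at every logging step.
stats = trainer.train()

# Summary metrics (runtime, final loss, etc.) after training ends.
print(stats.metrics)
```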
6️⃣ Export to Ollama

Finally, we export the model to Ollama as follows.
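
A sketch of the export, assuming Unsloth's GGUF saving helper with q4_k_m quantization; the Ollama model name is arbitrary:

```python
# Merge the LoRA adapters into the base weights and save them in
# GGUF format so Ollama's llama.cpp backend can serve the model.
# Unsloth also writes a ready-to-use Modelfile into the output dir.
model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")

# Then register the model with Ollama from a shell:
#   ollama create deepseek-finetuned -f ./model/Modelfile
```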

Done!
We have fine-tuned DeepSeek-R1 (distilled Llama).

Now we can interact with it like any other model running on Ollama using:

- The CLI
- Ollama's Python package
- Ollama's LlamaIndex integration, etc. (quick example below)
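
For instance, a minimal sketch with Ollama's Python package; "deepseek-finetuned" is the hypothetical name from the export step:

```python
import ollama

# Chat with the fine-tuned model through the local Ollama server.
response = ollama.chat(
    model="deepseek-finetuned",  # hypothetical name from `ollama create`
    messages=[{"role": "user", "content": "Explain LoRA in one sentence."}],
)
print(response["message"]["content"])
```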
That's a wrap!

And if you enjoyed this breakdown:

Find me → @akshay_pachaar ✔️

Every day, I share insights and tutorials on AI and machine learning.


