- Creating virtual environments with uv is ~80x faster than `python -m venv`.
- Package installation is 4–12x faster without caching, and ~100x faster with a warm cache.
Today, let's understand how to use uv for Python package management.
Let's dive in!
uv is a Rust-based Python package manager built to be fast and reliable.
It replaces not just pip but also pip-tools, virtualenv, pipx, poetry, and pyenv, all with a single standalone binary.
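As a rough mapping (a sketch, not exhaustive; these are real uv subcommands, but the exact equivalents depend on your workflow):

```bash
uv venv                       # replaces: python -m venv / virtualenv
uv pip install requests       # replaces: pip install requests
uv pip compile pyproject.toml -o requirements.txt   # replaces: pip-compile (pip-tools)
uvx ruff check .              # replaces: pipx run ruff
uv add requests               # replaces: poetry add requests
uv python install 3.12        # replaces: pyenv install 3.12
```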
Here's a uv cheatsheet for Python devs👇
Let's look at the code next!
1️⃣ Create a new project
To set up a new Python project, run: `uv init project-name`.
This creates a directory structure with a pyproject.toml file, a sample script, and a README.
Check this 👇
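For example, here's roughly what you get (the exact files can vary a bit between uv versions, and `demo-project` is just a placeholder name):

```bash
uv init demo-project
# demo-project/
# ├── .python-version   # pinned Python version for the project
# ├── README.md
# ├── main.py            # sample script
# └── pyproject.toml     # project metadata + dependencies
```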
2️⃣ Initialize an env.
Although uv creates a project's virtual environment automatically when needed, you can also create one explicitly with the `uv venv` command.
Activate it as follows:
- macOS/Linux: `source .venv/bin/activate`
- Windows: `.venv\Scripts\activate`
Check this 👇
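Putting it together as a quick sketch (`.venv` is uv's default location):

```bash
uv venv                                    # creates .venv/ in the project
source .venv/bin/activate                  # macOS/Linux
# .venv\Scripts\activate                   # Windows
python -c "import sys; print(sys.prefix)"  # should point inside .venv
```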
3️⃣ Install packages
Next, you can add dependencies with the `uv add <library-name>` command.
When you add a package, uv updates pyproject.toml, resolves the full dependency tree, and writes the result to a lockfile (uv.lock).
Check this 👇
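For instance, adding `requests` (just an example package) updates both files:

```bash
uv add requests
# pyproject.toml gains something like:
#   [project]
#   dependencies = ["requests>=2.32.0"]
# ...and uv.lock pins the exact versions of requests and everything it pulls in.
```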
4️⃣ Execute a script
To run a script, use the `uv run script.py` command.
If a dependency declared in pyproject.toml isn't installed in your environment yet, uv installs it automatically before running the script.
Check this 👇
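A minimal sketch, assuming `requests` was added in the previous step and the script is called main.py:

```bash
cat > main.py << 'EOF'
import requests
print(requests.get("https://example.com").status_code)
EOF

uv run main.py   # uv installs any missing declared deps, then runs the script
```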
5️⃣ Reproduce an env.
Finally, uv gives you fully reproducible installs.
Say you cloned a project that uses uv. Run `uv sync`, and uv recreates the project's environment exactly as specified by its lockfile.
This works across operating systems, and even if the project was created with a different Python version.
Check this 👇
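A typical flow after cloning (the repo URL and script name below are placeholders):

```bash
git clone https://github.com/someuser/some-uv-project.git
cd some-uv-project
uv sync          # recreates the env exactly from pyproject.toml + uv.lock
uv run main.py   # run the project inside that environment
```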
And that is how you can start using uv.
Note: When you push your project to GitHub, do NOT add the uv.lock file to your .gitignore. Committing the lockfile is what lets uv reproduce the exact environment when others use your project.
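As a rough sketch, the git hygiene for a uv project looks like this: ignore the environment, commit the lockfile.

```bash
# keep the environment and caches out of git
echo ".venv/" >> .gitignore
echo "__pycache__/" >> .gitignore
# note: uv.lock is deliberately NOT added here; commit it
```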
Here is the cheatsheet again for your reference 👇
If you found it insightful, reshare with your network.
Find me → @akshay_pachaar ✔️
For more insights and tutorials on LLMs, AI Agents, and Machine Learning!
Let's build a hybrid agentic workflow that combines Text2SQL and RAG:
Before we dive in, here's a quick demo of what we're building!
Tech stack:
- @Llama_Index for orchestration
- @Milvusio to self-host a vectorDB
- @CleanlabAI to validate the response
- @OpenRouterAI to access the latest Qwen3
Let's go! 🚀
Here's how our app works:
- The LLM processes the query and selects a tool
- Converts the query into the right format (text or SQL)
- Executes the tool and fetches the output
- Generates a response with enriched context
- Validates the response using Cleanlab's Codex
I have been fine-tuning LLMs for more than 2 years now!
Here are the top 5 LLM fine-tuning techniques, explained with visuals:
Traditional full fine-tuning is impractical for LLMs: billions of parameters mean hundreds of gigabytes of weights to update and store.
Since that kind of compute isn't accessible to everyone, parameter-efficient fine-tuning (PEFT) came into existence.
Today, we’ll cover the top 5 PEFT techniques, step by step.
Some background!
LLM weights are matrices of numbers that get adjusted during fine-tuning.
Most PEFT techniques involve finding a low-rank adaptation of these matrices: much smaller matrices that can still represent the information stored in the original.
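To make that concrete, here is a sketch in LoRA-style notation (the symbols below are the standard ones, not from the original post): instead of updating a weight matrix directly, we learn a small low-rank update and add it to the frozen weights.

```latex
W' = W + \Delta W \approx W + B A,
\quad W \in \mathbb{R}^{d \times k},\;
B \in \mathbb{R}^{d \times r},\;
A \in \mathbb{R}^{r \times k},\;
r \ll \min(d, k)
```

Only B and A are trained, so the number of trainable parameters drops from d·k to r·(d + k).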
Let's build a "Chat with your Code" RAG app using Qwen3-Coder:
Before we begin, take a look at what we're about to create!
Tech stack:
- @Llama_Index for orchestration
- @Milvusio to self-host a vectorDB
- @CleanlabAI Codex to validate the response
- @OpenRouterAI to access @Alibaba_Qwen's Qwen3-Coder
Let's go! 🚀
The architecture diagram presented below illustrates some of the key components & how they interact with each other!
It will be followed by detailed descriptions & code for each component: