- Creating virtual environments with uv is ~80x faster than `python -m venv`.
- Package installation is 4–12x faster without caching, and ~100x faster with caching.
Today, let's understand how to use uv for Python package management.
Let's dive in!
uv is a Rust-based Python package manager built to be fast and reliable.
It replaces not just pip but also pip-tools, virtualenv, pipx, poetry, and pyenv, all with a single standalone binary.
Here's a uv cheatsheet for Python devs👇
Let's look at the code next!
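If you don't have uv installed yet, it ships as a single binary. Two common ways to get it (the curl route is the official installer):

```bash
# Official standalone installer (macOS/Linux)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Or install it with pip
pip install uv
```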
1️⃣ Create a new project
To set up a new Python project, run `uv init project-name`.
This creates a directory structure, a pyproject.toml file, a sample script, and a README.
Check this 👇
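Here's roughly what that looks like (the project name is just an example, and the exact layout varies slightly across uv versions):

```bash
uv init demo-app
cd demo-app

# Typical generated layout:
# demo-app/
# ├── .python-version
# ├── README.md
# ├── main.py
# └── pyproject.toml
```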
2️⃣ Initialize an environment
Although uv automatically initializes a virtual environment in a project, you can explicitly create one with the `uv venv` command.
Activate it as follows:
- macOS/Linux: source .venv/bin/activate
- Windows: .venv\Scripts\activate
Check this 👇
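In practice:

```bash
# Create a virtual environment in .venv
uv venv

# Activate it (macOS/Linux)
source .venv/bin/activate

# Activate it (Windows)
# .venv\Scripts\activate
```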
3️⃣ Install packages
Next, you can add dependencies using the `uv add <library-name>` command.
When you add packages, uv updates pyproject.toml and resolves the full dependency tree, generating a lockfile (uv.lock).
Check this 👇
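For example (requests and pytest are just sample packages):

```bash
# Add a runtime dependency; pyproject.toml and uv.lock are updated automatically
uv add requests

# Add a dev-only dependency
uv add --dev pytest

# Remove a dependency you no longer need
uv remove requests
```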
4️⃣ Execute a script
To run a script, use the `uv run script.py` command.
If a package used in the script isn't installed in your environment yet, uv installs it when you run the script, provided the dependency is declared in pyproject.toml.
Check this 👇
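For example, using the sample script that `uv init` generates:

```bash
# uv checks the environment against pyproject.toml/uv.lock before executing
uv run main.py
```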
5️⃣ Reproduce an environment
Finally, uv gives fully reproducible installs, thanks to its lockfile.
Say you cloned a project that uses uv. You can run `uv sync` to precisely match the project's environment.
This works across operating systems, and even if the project pinned a different Python version, uv fetches it as needed.
Check this 👇
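A sketch of the flow (the repo URL is hypothetical):

```bash
git clone https://github.com/example/uv-managed-project.git
cd uv-managed-project

# Installs the pinned Python (if missing) plus all locked dependencies
uv sync
```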
And that is how you can start using uv.
Note: When you push your project to GitHub, DO NOT add the uv.lock file to your .gitignore. Committing the lockfile is what lets uv reproduce the environment when others use your project.
Here is the cheatsheet again for your reference 👇
If you found it insightful, reshare with your network.
Find me → @akshay_pachaar ✔️ for more insights and tutorials on LLMs, AI Agents, and Machine Learning!
Let's compare GPT-5 and Claude Opus 4.1 for code generation:
Today, we're building a CodeArena, where you can compare any two code-gen models side-by-side.
Tech stack:
- @LiteLLM for orchestration
- @Cometml's Opik to build the eval pipeline
- @OpenRouterAI to access cutting-edge models
- @LightningAI for hosting CodeArena
Let's go!🚀
Here's the workflow:
- Choose models for code generation comparison
- Import a GitHub repository and offer it as context to LLMs
- Use context + query to generate code from both models
- Evaluate generated code using Opik's G-Eval
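Here's a minimal sketch of that loop, assuming an OPENROUTER_API_KEY in the environment; the model slugs, prompt, and criteria are illustrative, not the exact CodeArena code:

```python
from litellm import completion
from opik.evaluation.metrics import GEval

# Illustrative OpenRouter slugs; swap in any two code-gen models
MODELS = ["openrouter/openai/gpt-5", "openrouter/anthropic/claude-opus-4.1"]

repo_context = "<files imported from the GitHub repo>"  # placeholder
query = "Write a function that parses a CSV file into a list of dicts."

# Generate code from both models with the same context + query
responses = {}
for model in MODELS:
    resp = completion(
        model=model,
        messages=[
            {"role": "system", "content": f"Repository context:\n{repo_context}"},
            {"role": "user", "content": query},
        ],
    )
    responses[model] = resp.choices[0].message.content

# Judge each response with Opik's G-Eval (an LLM-as-a-judge metric)
metric = GEval(
    task_introduction="You are judging code generated for a user query.",
    evaluation_criteria="Correctness, readability, and use of the repo context.",
)
for model, output in responses.items():
    result = metric.score(output=output)
    print(model, result.value, result.reason)
```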
Let's compare OpenAI gpt-oss and Qwen-3 on maths & reasoning:
Before we dive in, here's a quick demo of what we're building!
Tech stack:
- @LiteLLM for orchestration
- @Cometml's Opik to build the eval pipeline (open-source)
- @OpenRouterAI to access the models
You'll also learn about G-Eval & building custom eval metrics.
Let's go! 🚀
Here's the workflow:
- User submits query
- Both models generate reasoning tokens along with the final response
- Query, response and reasoning logic are sent for evaluation
- Detailed evaluation is conducted using Opik's G-Eval across four metrics
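For the custom-metric part, Opik lets you subclass its base metric. A toy sketch (the metric name and its heuristic are made up for illustration; the real pipeline uses G-Eval with LLM judges):

```python
from opik.evaluation.metrics import base_metric, score_result

class ReasoningDepth(base_metric.BaseMetric):
    """Toy custom metric: rewards answers backed by non-trivial reasoning."""

    def __init__(self, name: str = "reasoning_depth"):
        self.name = name

    def score(self, output: str, reasoning: str = "", **kwargs) -> score_result.ScoreResult:
        # Illustrative heuristic: more non-empty reasoning lines -> higher score, capped at 1.0
        steps = [line for line in reasoning.splitlines() if line.strip()]
        return score_result.ScoreResult(
            name=self.name,
            value=min(len(steps) / 10, 1.0),
            reason=f"Found {len(steps)} non-empty reasoning lines.",
        )

# Score one model's final answer together with its reasoning trace
metric = ReasoningDepth()
print(metric.score(output="42", reasoning="Step 1: ...\nStep 2: ...").value)
```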
Tech giants use Multimodal RAG every day in production!
- Spotify uses it to answer music queries
- YouTube uses it to turn prompts into tracks
- Amazon Music uses it to create playlists from prompts
Let's learn how to build a Multimodal Agentic RAG (with code):
Today, we'll build a multimodal Agentic RAG that can query documents and audio files using the user's speech.
Tech stack:
- @AssemblyAI for transcription.
- @milvusio as the vector DB.
- @beam_cloud for deployment.
- @crewAIInc Flows for orchestration.
Let's build it!
Here's the workflow:
- User inputs data (audio + docs).
- AssemblyAI transcribes the audio files.
- Transcribed text & docs are embedded in the Milvus vector DB.
- Research Agent retrieves relevant info from the vector DB based on the user's query.
- Response Agent uses it to craft a response.
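A minimal sketch of the ingestion + retrieval path, assuming an ASSEMBLYAI_API_KEY in the environment; `embed()` is a toy stand-in for a real embedding model:

```python
import os
import hashlib
import assemblyai as aai
from pymilvus import MilvusClient

aai.settings.api_key = os.environ["ASSEMBLYAI_API_KEY"]

def embed(text: str) -> list[float]:
    # Toy 384-dim embedding; use a real model (e.g. a sentence-transformer) in practice
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest] * 12

# 1) AssemblyAI transcribes the audio file (file name is illustrative)
transcript = aai.Transcriber().transcribe("meeting.mp3")

# 2) Transcribed text is embedded into Milvus (Milvus Lite, file-backed)
client = MilvusClient("rag.db")
client.create_collection(collection_name="docs", dimension=384)
client.insert(
    collection_name="docs",
    data=[{"id": 0, "vector": embed(transcript.text), "text": transcript.text}],
)

# 3) The Research Agent's retrieval step: nearest chunks for the user's query
hits = client.search(
    collection_name="docs",
    data=[embed("What was decided about the launch date?")],
    limit=3,
    output_fields=["text"],
)
print(hits)
```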
Let's build a (Text2SQL + RAG), hybrid agentic workflow:
Before we dive in, here's a quick demo of what we're building!
Tech stack:
- @Llama_Index for orchestration
- @Milvusio to self-host a vector DB
- @CleanlabAI to validate the response
- @OpenRouterAI to access the latest Qwen3
Let's go! 🚀
Here's how our app works:
- LLM processes the query to select a tool
- Converts the query into the right format (text/SQL)
- Executes the tool and fetches the output
- Generates a response with enriched context
- Validates the response using Cleanlab's Codex
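Here's a hedged sketch of that routing in LlamaIndex (the table name, docs folder, and DB path are illustrative; a recent llama-index plus an LLM API key are assumed, and the Cleanlab Codex validation step is left out for brevity):

```python
from sqlalchemy import create_engine
from llama_index.core import SQLDatabase, VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.query_engine import NLSQLTableQueryEngine, RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool

# Text2SQL tool over an example SQLite database ("cities" is an illustrative table)
sql_db = SQLDatabase(create_engine("sqlite:///app.db"))
sql_tool = QueryEngineTool.from_defaults(
    query_engine=NLSQLTableQueryEngine(sql_database=sql_db, tables=["cities"]),
    description="Answers questions that need SQL over structured tables.",
)

# RAG tool over local, unstructured docs
index = VectorStoreIndex.from_documents(SimpleDirectoryReader("docs").load_data())
rag_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    description="Answers questions from unstructured project documents.",
)

# The LLM selects the right tool per query; the tool runs and the response
# comes back with enriched context, ready for validation downstream
router = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[sql_tool, rag_tool],
)
print(router.query("Which city has the highest population?"))
```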