Akshay 🚀
Aug 3 · 10 tweets · 4 min read
uv in Python, clearly explained (with code):
uv is incredibly fast.

- Creating virtual envs with uv is ~80x faster than python -m venv.
- Package installation is 4–12x faster without caching, and ~100x faster with caching.

Today, let's understand how to use uv for Python package management.

Let's dive in!
uv is a Rust-based Python package manager built to be fast and reliable.

It replaces not just pip but also pip-tools, virtualenv, pipx, poetry, and pyenv, all with a single standalone binary.

Here's a uv cheatsheet for Python devs👇
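The cheatsheet image may not render here; as a rough text equivalent (the tool-to-command mapping below is based on uv's documented commands, not a reproduction of the original graphic):

```shell
# pip / pip-tools     →  uv pip install, uv add, uv lock
# venv / virtualenv   →  uv venv
# pipx                →  uv tool install, uvx
# pyenv               →  uv python install, uv python pin
# poetry              →  uv init, uv add, uv sync, uv build, uv publish
```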

Let's look at the code next!
1️⃣ Create a new project

To set up a new Python project, run: uv init project-name.

This creates a project directory with a pyproject.toml, a sample script, and a README.

Check this 👇
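A minimal sketch, assuming uv is already installed (`my-project` is a placeholder name, and the exact file layout can vary by uv version):

```shell
uv init my-project
cd my-project
ls -a
# Typically creates something like:
# .python-version  main.py  pyproject.toml  README.md
```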
2️⃣ Initialize an env.

Although uv automatically creates a virtual env for a project, you can explicitly create one with the `uv venv` command.

Activate it as follows:
- MacOS/Linux: source .venv/bin/activate
- Windows: .venv\Scripts\activate

Check this 👇
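A sketch of creating and activating the environment (assuming a POSIX shell; the Windows line is shown as a comment):

```shell
uv venv                       # creates .venv in the current directory
source .venv/bin/activate     # macOS/Linux
# .venv\Scripts\activate      # Windows (cmd/PowerShell)
python -c "import sys; print(sys.prefix)"   # prints a path inside .venv
```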
3️⃣ Install packages

Next, you can add dependencies with the `uv add <library-name>` command.

When you add packages, uv updates pyproject.toml and resolves the full dependency tree, generating a uv.lock lockfile.

Check this 👇
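For example (requests and pytest are just illustrative package choices):

```shell
uv add requests        # adds requests to the project dependencies in pyproject.toml
uv add --dev pytest    # adds pytest as a development dependency
uv remove requests     # removing also updates pyproject.toml and uv.lock
```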
4️⃣ Execute a script

To run a script, use the `uv run script.py` command.

If the script uses a package that isn't installed in your environment, uv installs it automatically when you run the script, provided the dependency is declared in pyproject.toml.

Check this 👇
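A quick sketch, assuming the sample main.py that uv init generates (or any script in the project):

```shell
uv run main.py                   # runs the script inside the project environment
uv run python -c "print('ok')"   # uv run can wrap arbitrary commands too
```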
5️⃣ Reproduce an env.

Finally, uv gives fully reproducible installs.

Say you cloned a project that uses uv. Run `uv sync`, and uv recreates the environment exactly as pinned in uv.lock.

This works across operating systems, and even if the project was built with a different Python version, since uv can download and use the required interpreter.

Check this 👇
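A sketch of the clone-and-reproduce flow (the repository URL is hypothetical):

```shell
git clone https://github.com/example/some-uv-project.git
cd some-uv-project
uv sync          # installs the exact versions pinned in uv.lock,
                 # fetching the required Python interpreter if needed
uv run main.py
```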
And that is how you can start using uv.

Note: When you push your project to GitHub, DO NOT add the uv.lock file to your .gitignore. Committing the lockfile lets uv reproduce the environment exactly when others use your project.

Here is the cheatsheet again for your reference 👇
If you found it insightful, reshare with your network.

Find me → @akshay_pachaar ✔️
For more insights and tutorials on LLMs, AI Agents, and Machine Learning!

