Tech with Mak · Nov 17, 2025
Random UUIDs are killing your database performance

You switched from integer IDs (1, 2, 3…) to UUIDs (a1b2-3c4d-…) for security or distributed generation.

Then your database writes get slower, sometimes much slower.

Here’s why:

Index Fragmentation.

Most database indexes are B-Trees (balanced, sorted trees), so where a new key falls in sort order determines where it gets physically written.

1./ 𝐒𝐞𝐪𝐮𝐞𝐧𝐭𝐢𝐚𝐥 𝐈𝐃𝐬

When you insert sequential integers (1, 2, 3), new data always goes to the rightmost leaf page of the index.

Writes are predictable and sequential.

Cache hits are maximized.

Pages fill up almost completely before a new one is started.

This is the speed limit of your database.
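Here's a toy sketch of that "rightmost leaf" behavior (a sorted Python list standing in for index pages, not a real B-Tree): with sequential keys, the insertion point is always the tail.

```python
# Toy sketch, not a real B-Tree: a sorted Python list stands in for index pages.
# With sequential integer keys, the insertion point is always the tail,
# i.e. the "rightmost leaf", so existing entries never have to move.
import bisect

index = []
for key in range(1, 1001):                 # sequential IDs 1, 2, 3, ...
    pos = bisect.bisect_left(index, key)   # where a sorted structure places it
    assert pos == len(index)               # always the rightmost slot
    index.insert(pos, key)

print("every sequential insert landed at the tail")
```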

2./ 𝐑𝐚𝐧𝐝𝐨𝐦 𝐔𝐔𝐈𝐃𝐯4

UUIDv4 values are uniformly random. This means a new insert can land anywhere in the tree structure.

Because the inserts are scattered:

- The database must constantly load random pages from disk to memory (Random I/O).

- Page Splitting => When a target page is full, the database has to split it in half to make room, leaving you with two half-empty pages.

- 'Swiss Cheese' Effect => Your index becomes larger and full of holes, wasting RAM and disk space.

This can degrade write throughput by 20–90% once your index size exceeds your available RAM.
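You can get a feel for the gap on your own machine. A rough benchmark sketch, using SQLite purely for convenience (the table name and row count are made up, numbers will vary, and an in-memory database understates the real effect, which kicks in once the index outgrows RAM and inserts start hitting disk):

```python
# Rough benchmark sketch (SQLite for convenience; your numbers will vary,
# and an in-memory DB understates the pain that comes from random disk I/O).
import sqlite3
import time
import uuid

N = 200_000

def bench(make_key):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE t (id TEXT PRIMARY KEY, payload TEXT)")
    rows = [(make_key(i), "x" * 64) for i in range(N)]  # keys built up front
    start = time.perf_counter()
    db.executemany("INSERT INTO t VALUES (?, ?)", rows)
    db.commit()
    return time.perf_counter() - start

seq = bench(lambda i: f"{i:012d}")        # sorted keys: append-only inserts
rnd = bench(lambda i: str(uuid.uuid4()))  # random keys: scattered inserts
print(f"sequential: {seq:.2f}s   uuid4: {rnd:.2f}s")
```

Building all the keys before starting the timer keeps UUID generation cost out of the measurement, so any gap you see is the index itself doing extra work.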

3./ 𝐔𝐔𝐈𝐃𝐯7

Stop using UUIDv4 for primary keys. Use UUIDv7 (standardized in RFC 9562).

UUIDv7 embeds a timestamp at the start of the ID, making it sortable.

This gives you the best of both worlds:

- Distributed generation => No central counter needed.

- Monotonic inserts => They behave like sequential integers in a B-Tree, eliminating fragmentation.

- Security => Prevents trivial ID enumeration (attackers can't guess that user 101 follows user 100), though note that it does reveal the record's creation time.

You get the utility of UUIDs without the performance penalty.
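A minimal sketch of generating them in Python, assuming a recent interpreter: uuid.uuid7() ships in the standard library from Python 3.14, and the third-party uuid6 package provides a compatible uuid7() on older versions.

```python
# Minimal UUIDv7 sketch. Assumption: Python 3.14+ for uuid.uuid7(), or the
# third-party "uuid6" package (pip install uuid6) on older interpreters.
import time

try:
    from uuid import uuid7            # standard library, Python 3.14+
except ImportError:
    from uuid6 import uuid7           # backport package

ids = []
for _ in range(5):
    ids.append(str(uuid7()))
    time.sleep(0.002)                 # force distinct millisecond timestamps

# The leading timestamp bits mean string sort order matches creation order.
assert ids == sorted(ids)
print("\n".join(ids))
```

If you generate IDs in the database instead, check your engine's docs; native support is arriving (PostgreSQL 18 ships a uuidv7() function, for example).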

