Today, let’s design an S3-like object storage system.
Before we dive into the design, let’s define some terms. 1/11
𝐁𝐮𝐜𝐤𝐞𝐭. A logical container for objects. The bucket name is globally unique. To upload data to S3, we must first create a bucket. 2/11
𝐎𝐛𝐣𝐞𝐜𝐭. An object is an individual piece of data we store in a bucket. It contains object data (also called payload) and metadata. Object data can be any sequence of bytes we want to store. The metadata is a set of name-value pairs that describe the object. 3/11
An S3 object consists of (Figure 1):
🔹 Metadata. It is mutable and contains attributes such as ID, bucket name, object name, etc.
🔹 Object data. It is immutable and contains the actual data. 4/11
In S3, an object resides in a bucket. The path looks like this: /bucket-to-share/script.txt. The bucket only has metadata. The object has metadata and the actual data. 5/11
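To make this concrete, here is a minimal sketch of what the bucket and object metadata entries might look like in the metadata store. The field names (bucket_id, owner_id, checksum, etc.) are illustrative assumptions, not S3’s actual schema.

```python
from dataclasses import dataclass
from uuid import UUID

# Illustrative metadata records; field names are assumptions, not S3's real schema.

@dataclass
class BucketMetadata:
    bucket_id: UUID
    bucket_name: str      # globally unique, e.g. "bucket-to-share"
    owner_id: str
    created_at: float

@dataclass
class ObjectMetadata:
    object_id: UUID       # UUID returned by the data store
    bucket_id: UUID       # which bucket the object belongs to
    object_name: str      # e.g. "script.txt"
    size_bytes: int
    checksum: str
```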
The diagram below (Figure 2) illustrates how file uploading works. In this example, we first create a bucket named “bucket-to-share” and then upload a file named “script.txt” to the bucket. 6/11
1. The client sends an HTTP PUT request to create a bucket named “bucket-to-share.” The request is forwarded to the API service.
2. The API service calls the Identity and Access Management (IAM) service to ensure the user is authorized and has WRITE permission. 7/11
3. The API service calls the metadata store to create an entry with the bucket info in the metadata database. Once the entry is created, a success message is returned to the client. 8/11
4. After the bucket is created, the client sends an HTTP PUT request to create an object named “script.txt”.
5. The API service verifies the user’s identity and ensures the user has WRITE permission on the bucket. 9/11
6. Once validation succeeds, the API service sends the object data in the HTTP PUT payload to the data store. The data store persists the payload as an object and returns the UUID of the object. 10/11
7. The API service calls the metadata store to create a new entry in the metadata database. It contains important metadata such as the object_id (UUID), bucket_id (which bucket the object belongs to), object_name, etc. 11/11
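As a rough illustration of steps 1–7, here is a hedged sketch of how the API service might orchestrate the bucket-creation and object-upload calls. The `iam`, `metadata_store`, and `data_store` clients and their methods are hypothetical stand-ins for the components in Figure 2, not a real API.

```python
import uuid

# Hypothetical component clients standing in for the services in Figure 2.

def create_bucket(iam, metadata_store, user, bucket_name: str) -> dict:
    # Step 2: verify identity and WRITE permission via IAM.
    iam.check_permission(user, action="WRITE")
    # Step 3: persist the bucket info in the metadata database.
    bucket_id = uuid.uuid4()
    metadata_store.insert_bucket({"bucket_id": bucket_id, "bucket_name": bucket_name})
    return {"status": "created", "bucket_id": bucket_id}

def put_object(iam, metadata_store, data_store, user, bucket_name: str,
               object_name: str, payload: bytes) -> dict:
    # Step 5: verify identity and WRITE permission on the bucket.
    iam.check_permission(user, action="WRITE", resource=bucket_name)
    # Step 6: the data store persists the payload and returns the object's UUID.
    object_id = data_store.save(payload)
    # Step 7: record the object metadata (object_id, bucket_id, object_name, ...).
    bucket_id = metadata_store.get_bucket_id(bucket_name)
    metadata_store.insert_object({
        "object_id": object_id,
        "bucket_id": bucket_id,
        "object_name": object_name,
    })
    return {"status": "uploaded", "object_id": object_id}
```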
Model Context Protocol (MCP) is a new system introduced by Anthropic to make AI models more powerful.
It is an open standard (also being run as an open-source project) that allows AI models (like Claude) to connect to databases, APIs, file systems, and other tools without needing custom code for each new integration.
MCP follows a client-server model with 3 key components:
1 - Host: AI applications like Claude that provide the environment for AI interactions so that different tools and data sources can be accessed. The host runs the MCP Client.
2 - Client: Lives inside the host, maintains a dedicated connection to an MCP Server, and relays requests and responses between the host and that server.
3 - Server: Exposes capabilities such as tools, data resources, and prompts that the client can discover and invoke over the protocol.
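To give a flavor of the server side, here is a minimal sketch assuming the official MCP Python SDK (the `mcp` package) and its FastMCP helper; the tool name and server name are illustrative, and the exact module path may differ across SDK versions.

```python
# A minimal MCP server sketch, assuming the official `mcp` Python SDK (FastMCP helper).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers so a connected host (e.g. Claude) can call this as a tool."""
    return a + b

if __name__ == "__main__":
    # Runs the server so an MCP host/client can connect to it.
    mcp.run()
```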
Kubernetes (K8S) is an open-source container orchestration platform originally developed by Google and now maintained by CNCF.
Here’s how developers interact with Kubernetes:
1 - Developers create manifest files describing the application.
2 - Kubernetes takes these manifest files, validates them, and deploys the applications across its cluster of worker nodes.
3 - Kubernetes manages the entire lifecycle of the application.
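For example, a manifest for a simple Deployment might look like the sketch below, expressed here as a Python dict (in practice you would usually write it as YAML and apply it with `kubectl apply -f`). The names, labels, and image are illustrative choices, and the optional client call assumes the official `kubernetes` Python package.

```python
# An illustrative Deployment manifest, expressed as a Python dict
# (normally written as YAML and applied with `kubectl apply -f deployment.yaml`).
deployment_manifest = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},                       # illustrative name
    "spec": {
        "replicas": 3,                                 # desired number of pod copies
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "nginx:1.25",
                     "ports": [{"containerPort": 80}]}
                ]
            },
        },
    },
}

# Optional: submit it programmatically, assuming the official `kubernetes` Python client.
# from kubernetes import client, config, utils
# config.load_kube_config()
# utils.create_from_dict(client.ApiClient(), deployment_manifest)
```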
Kubernetes is made up of two main components:
1 - Control Plane: It is like the brain of Kubernetes and consists of the following parts:
- API Server: It receives all incoming requests from users or CLI.
1 - Collaboration Tools
Software development is a social activity. Learn to use collaboration tools like Jira, Confluence, Slack, MS Teams, Zoom, etc.
2 - Programming Languages
Pick and master one or two programming languages. Choose from options like Java, Python, JavaScript, C#, Go, etc.
3 - API Development
Learn the ins and outs of API Development approaches such as REST, GraphQL, and gRPC.
4 - Web Servers and Hosting
Know about web servers, cloud platforms like AWS, Azure, and GCP, and container orchestration with Kubernetes.
5 - Authentication and Testing
Learn how to secure your applications with authentication techniques such as JWTs, OAuth2, etc. Also, master testing techniques like TDD, E2E Testing, and Performance Testing.
6 - Databases
Learn to work with relational (Postgres, MySQL, and SQLite) and non-relational databases (MongoDB, Cassandra, and Redis).
Twitter has enforced very strict rate limiting. Some people cannot even see their own tweets.
Rate limiting is a very important yet often overlooked topic. Let's use this opportunity to take a look at what it is and the most popular algorithms.
A thread.
#RateLimitExceeded
What is rate limiting? Rate limiting controls the rate at which users or services can access a resource. Here are some examples:
- A user can send no more than 2 messages per second
- One can create a maximum of 10 accounts per day from the same IP address
Fixed Window Counter
The algorithm divides the timeline into fixed-size time windows and assigns a counter to each window. Each request increments the counter. Once the counter reaches the threshold, subsequent requests are blocked until a new time window begins.
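Here is a minimal in-memory sketch of a fixed window counter in Python. The class and method names are illustrative assumptions; a production limiter would typically keep these counters in a shared store like Redis and expire old windows.

```python
import time
from collections import defaultdict

class FixedWindowCounter:
    """Illustrative fixed window rate limiter; names and structure are assumptions."""

    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit                  # max requests allowed per window
        self.window = window_seconds        # window size in seconds
        self.counters = defaultdict(int)    # (key, window index) -> request count

    def allow(self, key: str) -> bool:
        # Identify the current fixed window for this key.
        window_index = int(time.time() // self.window)
        bucket = (key, window_index)
        if self.counters[bucket] >= self.limit:
            return False                    # threshold reached; block until the next window
        self.counters[bucket] += 1
        return True

# Example: at most 2 messages per second per user (mirrors the example above).
limiter = FixedWindowCounter(limit=2, window_seconds=1)
print(limiter.allow("user-123"))  # True
print(limiter.allow("user-123"))  # True
print(limiter.allow("user-123"))  # False within the same one-second window
```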