A deep technical dive into all things Redis, covering Redis topologies, data persistence, and process forking.
Redis (redis.io), short for "REmote DIctionary Server", is an open-source key-value database server.
The most accurate description of Redis is that it's a data structure server. This specific nature of Redis has driven much of its popularity and adoption among developers.
Primarily, Redis is an in-memory database used as a cache in front of another "real" database like MySQL or PostgreSQL to help improve application performance. It leverages the speed of memory and alleviates load off the central application database.
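To make that cache-in-front-of-a-database pattern (often called cache-aside) concrete, here is a minimal in-process sketch. A plain dict stands in for Redis, and `db_query` is a hypothetical stand-in for a slow SQL query; in a real deployment you would swap these for a client such as redis-py and your actual database.

```python
import time

cache = {}                        # stand-in for Redis
database = {"user:1": "Alice"}    # stand-in for MySQL/PostgreSQL

def db_query(key):
    # Simulate a slow round trip to the primary database.
    time.sleep(0.01)
    return database.get(key)

def get_with_cache(key):
    # 1. Try the cache first (a Redis GET in a real setup).
    if key in cache:
        return cache[key]
    # 2. On a miss, fall through to the primary database...
    value = db_query(key)
    # 3. ...and populate the cache so subsequent reads are served
    #    from memory (a Redis SET, usually with a TTL).
    if value is not None:
        cache[key] = value
    return value
```

The first read of a key pays the database cost; every read after that is served from memory, which is how Redis alleviates load on the primary database.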
There are several ways to deploy Redis, and which one you go with depends heavily on scale and use case. For simple deployments, a single node is all you need. For more complicated, mission-critical workloads, you might want Redis Sentinel for high availability.
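As an illustration, a minimal Sentinel configuration might look like the sketch below; the master name, address, and thresholds are placeholder values you would tune for your own deployment.

```conf
# Sentinel listens on its own port, separate from the Redis server it watches.
port 26379

# Monitor a master named "mymaster" at a placeholder address; the final "2"
# is the quorum: how many Sentinels must agree before the master is
# considered objectively down.
sentinel monitor mymaster 127.0.0.1 6379 2

# Consider the master down after 5 seconds without a valid reply.
sentinel down-after-milliseconds mymaster 5000

# Abort a failover attempt that hasn't completed within 60 seconds.
sentinel failover-timeout mymaster 60000

# Number of replicas allowed to resync with the new master in parallel
# after a failover.
sentinel parallel-syncs mymaster 1
```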
Many have thought about what happens when you can't store all your data in memory on one machine. Currently, the maximum RAM available in a single server is around 24 TiB, presently listed online at AWS. Granted, that's a lot, but for some systems that isn't enough. Hence Redis Cluster, which shards data across multiple machines.
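Redis Cluster shards keys across 16,384 hash slots using CRC16, and each node owns a range of slots. The sketch below is a plain-Python illustration of that mapping, including the "hash tag" rule: if a key contains a non-empty `{...}` section, only that substring is hashed, so related keys land in the same slot.

```python
def crc16_xmodem(data: bytes) -> int:
    # Bitwise CRC-16/XMODEM (polynomial 0x1021, initial value 0),
    # the CRC variant Redis Cluster uses for slot assignment.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Honor Redis "hash tags": if the key contains {...} with a
    # non-empty body, only that substring is hashed, so keys like
    # {user:1}:name and {user:1}:email co-locate on one node.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

Because related keys can be forced into the same slot with hash tags, multi-key operations on them remain possible even in a sharded cluster.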
If we are going to use Redis to store any kind of data for safekeeping, it's important to understand how Redis is doing it. There are many use cases where losing the data Redis is storing would not be the end of the world; for others, durability matters, and that's where persistence comes in.
The coolest part of Redis, in my opinion, is how it leverages forking and copy-on-write to facilitate data persistence performantly. When Redis forks, the parent and child processes share memory pages, and the child performs the snapshotting (RDB) process while the parent keeps serving requests.
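The fork-and-snapshot dance can be sketched in a few lines of plain Python. This is a toy analogue of RDB snapshotting, not Redis's actual implementation: the child inherits a copy-on-write view of the parent's memory at fork time, so the file it writes is a consistent point-in-time image even while the parent keeps mutating state. (Requires a Unix-like OS for `os.fork`.)

```python
import json
import os
import tempfile

state = {"counter": 1}  # stand-in for the Redis keyspace
snapshot_path = os.path.join(tempfile.mkdtemp(), "snapshot.json")

pid = os.fork()
if pid == 0:
    # Child: sees the state exactly as it was at fork time and
    # writes the snapshot to disk, like the RDB child process.
    with open(snapshot_path, "w") as f:
        json.dump(state, f)
    os._exit(0)

# Parent: keeps accepting writes; thanks to copy-on-write, these
# mutations never leak into the child's snapshot.
state["counter"] = 2
os.waitpid(pid, 0)

with open(snapshot_path) as f:
    snapshot = json.load(f)
# snapshot["counter"] is 1 (the fork-time value), while the live
# state["counter"] has moved on to 2.
```

The operating system only physically copies the memory pages the parent actually modifies, which is why snapshotting a large dataset this way is cheap in both time and memory.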
Read the full article: Redis Explained (architecturenotes.co/redis/)
I hope you've found this thread helpful.

Follow me @arcnotes for more.

