Rakesh Jain
Sep 25, 2021 • 22 tweets • 5 min read
What is CPU Load Average?

#Linux #DevOps #Compute

A thread 👇
Load averages are the three numbers shown with the uptime and top commands - they look like this:

load average: 0.09, 0.05, 0.01
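If you want to grab these numbers in a script, the kernel also exposes them in /proc/loadavg. A quick look (the output below is illustrative, not from a real box):

uptime
 10:14:32 up 12 days,  3:02,  2 users,  load average: 0.09, 0.05, 0.01

cat /proc/loadavg
0.09 0.05 0.01 1/352 10243

The first three fields of /proc/loadavg are the same three numbers uptime shows; the remaining fields are running/total scheduling entities and the last PID used.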
The three numbers represent averages over progressively longer periods of time (one-, five-, and fifteen-minute averages). Lower numbers are better; higher numbers point to a problem or an overloaded machine.
But, what's the threshold?

What constitutes "good" and "bad" load average values?

When should you be concerned about a load average value, and when should you scramble to fix it ASAP?
First, a little background on what the load average values mean. We'll start out with the simplest case: a machine with one single-core processor.

The traffic analogy:
A single-core CPU is like a single lane of traffic.
Imagine you are a bridge operator. Sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time.
If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they're in for delays.

So, Bridge Operator, what numbering system are you going to use? How about:
0.00 means there's no traffic on the bridge at all. In fact, between 0.00 and 1.00 means there's no backup, and an arriving car will just go right on.

1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow.
Over 1.00 means there's a backup. How much? Well, 2.00 means there are two lanes' worth of cars total -- one lane's worth on the bridge, and one lane's worth waiting. 3.00 means there are three lanes' worth total -- one lane's worth on the bridge, and two lanes' worth waiting.
This is basically what CPU load is. "Cars" are processes either using a slice of CPU time (crossing the bridge) or queued up to use the CPU. Unix refers to this as the run-queue length: the sum of the number of processes that are currently running plus the number that are waiting (queued) to run.
Like the bridge operator, you'd like your cars/processes to never be kept waiting. So, your CPU load should ideally stay below 1.00. Also like the bridge operator, you're still OK with some temporary spikes above 1.00, but when you're consistently above 1.00, you need to worry.
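If you'd like to watch the "cars on and waiting at the bridge" directly rather than the averaged numbers, vmstat reports the run queue in its first column. A quick illustrative sketch (values made up):

vmstat 1 5
 r  b   swpd   free   buff   cache  ...
 2  0      0  81342  21500  213060  ...

The "r" column is the count of runnable processes -- running or waiting for a CPU -- sampled here once per second, five times.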
So you're saying the ideal load is 1.00?

Well, not exactly. The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins will draw a line at 0.70:
The "Need to Look into it" Rule of Thumb: 0.70 If your load average is staying above > 0.70, it's time to investigate before things get worse.
The "Fix this now" Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now. Otherwise, you're going to get woken up in the middle of the night, and it's not going to be fun.
The "Uff, it's 3 AM, WTF?" Rule of Thumb: 5.00. If your load average is above 5.00, you could be in serious trouble: your box is either hanging or slowing way down, and this will (inexplicably) happen at the worst possible time, like the middle of the night. Don't let it get there.
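As a minimal sketch of how you might script these rules of thumb (the 0.70 threshold is just the guideline above; on a multi-core box you'd scale it by the core count, as discussed next):

load5=$(awk '{print $2}' /proc/loadavg)
awk -v l="$load5" 'BEGIN { exit !(l > 0.70) }' && echo "5-min load $load5 is above 0.70 -- time to investigate"

The second field of /proc/loadavg is the five-minute average; awk does the floating-point comparison, since plain shell arithmetic is integer-only.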
What about Multi-processors? My load says 3.00, but things are running fine!

Got a quad-processor system? It's still healthy with a load of 3.00.

On a multi-processor system, the load is relative to the number of processor cores available.
The "100% utilization" mark is 1.00 on a single-core system, 2.00, on a dual-core, 4.00 on a quad-core, etc.
If we go back to the bridge analogy, the "1.00" really means "one lane's worth of traffic". On a one-lane bridge, that means it's filled up. On a two-lane bridge, a load of 1.00 means it's at 50% capacity -- only one lane is full, so there's another whole lane that can be filled.
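A quick sketch of this normalization: divide the load by the number of cores, so the result reads like the single-lane case (1.00 = full bridge) no matter how many lanes you have:

awk -v cores="$(nproc)" '{ printf "1-min load per core: %.2f\n", $1 / cores }' /proc/loadavg

nproc reports the number of processing units available, and $1 is the one-minute average from /proc/loadavg.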
Multicore vs. multiprocessor

The "number of cores = max load" Rule of Thumb: on a multicore system, your load should not exceed the number of cores available.
The "cores is cores" Rule of Thumb: How the cores are spread out over CPUs doesn't matter. Two quad-cores == four dual-cores == eight single-cores. It's all eight cores for these purposes.
Which average should I be observing? One, five, or 15 minutes?

For the numbers we've talked about (1.00 = fix it now, etc.), you should be looking at the five- or 15-minute averages. Frankly, if your box spikes above 1.00 on the one-minute average, you're still fine; it's sustained high values on the longer averages that should worry you.
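To keep an eye on how the averages move over time, something as simple as watch works, and if the sysstat package happens to be installed, sar -q reports the same three averages:

watch -n 5 uptime
sar -q 1 5

(sar -q prints the run-queue size plus ldavg-1, ldavg-5, and ldavg-15 columns; the "1 5" arguments here just take five one-second samples for illustration.)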
So the number of cores is important for interpreting load averages ... how do I know how many cores my system has?

cat /proc/cpuinfo to get info on each processor in your system.

To get just a count, run it through grep and word count:

grep 'model name' /proc/cpuinfo | wc -l
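A couple of alternatives that usually give the same answer (note that on hyper-threaded systems all of these count logical CPUs, which may be double the physical cores):

nproc
lscpu | grep '^CPU(s):'

nproc prints the number of processing units available, and lscpu summarizes the full topology (sockets, cores per socket, threads per core) if you want the complete picture.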
