Load averages are the three numbers shown with the uptime and top commands - they look like this:
load average: 0.09, 0.05, 0.01
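For example, running uptime prints something like this (the exact format varies a bit between systems, and these numbers are just illustrative):

$ uptime
 10:14:32 up 12 days,  3:05,  2 users,  load average: 0.09, 0.05, 0.01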
The three numbers represent averages over progressively longer periods of time (one-, five-, and fifteen-minute averages), and lower numbers are better. Higher numbers represent a problem or an overloaded machine.
But, what's the threshold?
What constitutes "good" and "bad" load average values?
When should you be concerned over a load average value, and when should you scramble to fix it ASAP?
First, a little background on what the load average values mean. We'll start out with the simplest case: a machine with one single-core processor.
The traffic analogy:
A single-core CPU is like a single lane of traffic.
Imagine you are a bridge operator. Sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time.
If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they're in for delays.
So, Bridge Operator, what numbering system are you going to use? How about:
0.00 means there's no traffic on the bridge at all. In fact, anything between 0.00 and 1.00 means there's no backup, and an arriving car will just drive right across.
1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow.
over 1.00 means there's a backup. How much? Well, 2.00 means that there are two lanes' worth of cars total -- one lane's worth on the bridge, and one lane's worth waiting. 3.00 means there are three lanes' worth total -- one lane's worth on the bridge, and two lanes' worth waiting.
This is basically what CPU load is. "Cars" are processes either using a slice of CPU time (crossing the bridge) or queued up to use the CPU. Unix refers to this as the run-queue length: the sum of the number of processes that are currently running plus the number that are waiting (queued) to run.

Like the bridge operator, you'd like your cars/processes to never be waiting. So your CPU load should ideally stay below 1.00. Also like the bridge operator, you're still OK if you get some temporary spikes above 1.00. But when you're consistently above 1.00, you need to worry.
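On Linux, you can read the raw numbers (plus a snapshot of the run queue) straight from /proc/loadavg; the sample values below are illustrative:

$ cat /proc/loadavg
0.09 0.05 0.01 1/234 5678

The first three fields are the one-, five-, and fifteen-minute load averages. The fourth is (roughly) currently-running processes over the total number of processes -- a glimpse of the run queue -- and the last is the most recently assigned process ID.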
So you're saying the ideal load is 1.00?
Well, not exactly. The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins will draw a line at 0.70:
The "Need to Look into it" Rule of Thumb: 0.70 If your load average is staying above > 0.70, it's time to investigate before things get worse.
The "Fix this now" Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now. Otherwise, you're going to get woken up in the middle of the night, and it's not going to be fun.
The "Uff, it's 3 AM, WTF?" Rule of Thumb: 5.00. If your load average is above 5.00, you could be in serious trouble: your box is either hanging or slowing way down, and this will (inexplicably) happen at the worst possible time, like the middle of the night. Don't let it get there.
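If you'd rather have something watching this for you, here's a minimal sketch of a check you could run from cron. It encodes the rules of thumb above against the five-minute average, and it assumes a single-core machine (on a bigger box, divide by core count first, as discussed below):

#!/bin/sh
# Minimal sketch: compare the 5-minute load average against the
# rule-of-thumb thresholds above. Assumes a single-core machine.
load=$(awk '{print $2}' /proc/loadavg)  # field 2 = 5-minute average
awk -v l="$load" 'BEGIN {
  if      (l >= 5.00) print "3 AM WTF: load " l
  else if (l >= 1.00) print "Fix this now: load " l
  else if (l >= 0.70) print "Need to look into it: load " l
  else                print "OK: load " l
}'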
What about Multi-processors? My load says 3.00, but things are running fine!
Got a quad-processor system? It's still healthy with a load of 3.00.
On a multi-processor system, the load is relative to the number of processor cores available.
The "100% utilization" mark is 1.00 on a single-core system, 2.00, on a dual-core, 4.00 on a quad-core, etc.
If we go back to the bridge analogy, the "1.00" really means "one lane's worth of traffic". On a one-lane bridge, that means it's filled up. On a two-lane bridge, a load of 1.00 means it's at 50% capacity -- only one lane is full, so there's another whole lane that can be filled.
Multicore vs. multiprocessor
The "number of cores = max load" Rule of Thumb: on a multicore system, your load should not exceed the number of cores available.
The "cores is cores" Rule of Thumb: How the cores are spread out over CPUs doesn't matter. Two quad-cores == four dual-cores == eight single-cores. It's all eight cores for these purposes.
Which average should I be observing? One, five, or 15 minutes?
For the numbers we've talked about (1.00 = fix it now, etc.), you should be looking at the five- or fifteen-minute averages. Frankly, if your box spikes above 1.00 on the one-minute average, you're still fine.
So # of cores is important to interpreting load averages ... how do I know how many cores my system has?
Run cat /proc/cpuinfo to get info on each processor in your system.
To get just a count, run it through grep and word count:
grep 'model name' /proc/cpuinfo | wc -l
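On most modern Linux systems, nproc (from GNU coreutils) usually gives you the same count directly:

nproc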