Understanding Linux CPU Load - when should you be worried?

You might be familiar with Linux load averages already. Load averages are the three numbers shown with the uptime and top commands - they look like this:

load average: 0.09, 0.05, 0.01

Most people have an inkling of what the load averages mean: the three numbers represent averages over progressively longer periods of time (one, five, and fifteen-minute averages), and lower numbers are better. Higher numbers represent a problem or an overloaded machine. But what's the threshold? What constitutes "good" and "bad" load average values? When should you be concerned about a load average value, and when should you scramble to fix it ASAP?

First, a little background on what the load average values mean. We'll start out with the simplest case: a machine with one single-core processor.

The traffic analogy

A single-core CPU is like a single lane of traffic. Imagine you are a bridge operator ... sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time. If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they're in for delays.

So, Bridge Operator, what numbering system are you going to use? How about:

  • 0.00 means there's no traffic on the bridge at all. In fact, anything between 0.00 and 1.00 means there's no backup, and an arriving car will just go right on.
  • 1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow down.
  • over 1.00 means there's backup. How much? Well, 2.00 means that there are two lanes' worth of cars total -- one lane's worth on the bridge, and one lane's worth waiting. 3.00 means there are three lanes' worth total -- one lane's worth on the bridge, and two lanes' worth waiting. Etc.

[Bridge illustrations: load of 1.00 (lane exactly full), load of 0.50 (lane half full), load of 1.70 (lane full, with cars backed up waiting)]


This is basically what CPU load is. "Cars" are processes using a slice of CPU time ("crossing the bridge") or queued up to use the CPU. Unix refers to this as the run-queue length: the sum of the number of processes that are currently running plus the number that are waiting (queued) to run.
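
The kernel exposes these same numbers, along with a snapshot of the run queue, in /proc/loadavg. Here's a quick look, with illustrative output (your numbers will of course differ):

~ $ cat /proc/loadavg
0.09 0.05 0.01 1/234 11206

The first three fields are the one, five, and fifteen-minute averages; the fourth is the number of currently runnable processes/threads over the total number that exist; the last is the most recently assigned PID.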

Like the bridge operator, you'd like your cars/processes to never be waiting. So, your CPU load should ideally stay below 1.00. Also like the bridge operator, you are still ok if you get some temporary spikes above 1.00 ... but when you're consistently above 1.00, you need to worry.

So you're saying the ideal load is 1.00?

Well, not exactly. The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins will draw a line at 0.70:

  • The "Need to Look into it" Rule of Thumb: 0.70 If your load average is staying above > 0.70, it's time to investigate before things get worse.

  • The "Fix this now" Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now. Otherwise, you're going to get woken up in the middle of the night, and it's not going to be fun.

  • The "Arrgh, it's 3AM WTF?" Rule of Thumb: 5.0. If your load average is above 5.00, you could be in serious trouble, your box is either hanging or slowing way down, and this will (inexplicably) happen in the worst possible time like in the middle of the night or when you're presenting at a conference. Don't let it get there.

What about Multi-processors? My load says 3.00, but things are running fine!

Got a quad-processor system? It's still healthy with a load of 3.00.

On a multi-processor system, the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.

If we go back to the bridge analogy, "1.00" really means "one lane's worth of traffic". On a one-lane bridge, that means it's filled up. On a two-lane bridge, a load of 1.00 means it's at 50% capacity -- only one lane is full, so there's another whole lane that can be filled.

Same with CPUs: a load of 1.00 is 100% CPU utilization on a single-core box. On a dual-core box, a load of 2.00 is 100% CPU utilization.
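
In other words, the number to watch is the load divided by the core count. One way to compute that on a Linux box (a quick sketch; nproc prints the number of available cores):

~ $ awk -v cores="$(nproc)" '{ printf "load per core: %.2f\n", $1 / cores }' /proc/loadavg

A result at or above 1.00 means you're at (or past) 100% CPU utilization, no matter how many cores you have.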

Multicore vs. multiprocessor

While we're on the topic, let's talk about multicore vs. multiprocessor. For performance purposes, is a machine with a single dual-core processor basically equivalent to a machine with two processors with one core each? Yes. Roughly. There are lots of subtleties here concerning amount of cache, frequency of process hand-offs between processors, etc. Despite those finer points, for the purposes of sizing up the CPU load value, the total number of cores is what matters, regardless of how many physical processors those cores are spread across.

Which leads us to two new Rules of Thumb:

  • The "number of cores = max load" Rule of Thumb: on a multicore system, your load should not exceed the number of cores available.

  • The "cores is cores" Rule of Thumb: How the cores are spread out over CPUs doesn't matter. Two quad-cores == four dual-cores == eight single-cores. It's all eight cores for these purposes.

Bringing It Home

Let's take a look at the load averages output from uptime:

~ $ uptime
23:05 up 14 days, 6:08, 7 users, load averages: 0.65 0.42 0.36

This is on a dual-core CPU, so we've got lots of headroom. I won't even think about it until load gets and stays above 1.7 or so.

Now, what about those three numbers? 0.65 is the average over the last minute, 0.42 is the average over the last five minutes, and 0.36 is the average over the last 15 minutes. Which brings us to the question:

Which average should I be observing? One, five, or 15 minute?

For the numbers we've talked about (1.00 = fix it now, etc.), you should be looking at the five- or fifteen-minute averages. Frankly, if your box spikes above 1.00 on the one-minute average, you're still fine. It's when the fifteen-minute average goes north of 1.00 and stays there that you need to snap to. (Obviously, as we've learned, adjust these numbers to the number of processor cores your system has.)
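
To turn that into a command, pull out just the 15-minute figure (the third field of /proc/loadavg) and compare it against your core count -- again a sketch along the lines of the earlier one-liners:

~ $ awk -v cores="$(nproc)" '{ printf "15-min load: %s (%.0f%% of %d cores)\n", $3, $3 / cores * 100, cores }' /proc/loadavg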

So # of cores is important to interpreting load averages ... how do I know how many cores my system has?

Run cat /proc/cpuinfo to get info on each processor in your system. (Note: it's not available on OS X; Google for alternatives.) To get just a count, run it through grep and word count: grep 'model name' /proc/cpuinfo | wc -l
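
A couple of shortcuts that print just the count on most Linux systems (grep -c counts matching lines itself, so the pipe through wc isn't needed):

~ $ nproc
~ $ grep -c 'model name' /proc/cpuinfo

Both count logical processors as the kernel sees them, so hyperthreaded siblings are included.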

Monitoring Linux CPU Load with Scout

Scout provides two ways to monitor CPU load. Our original server load plugin and Jesse Newland's Load-Per-Processor plugin both report the CPU load and alert you when the load peaks and/or is trending in the wrong direction:

[Screenshot: load alert]

More Reading

More on Linux Performance

Subscribe to our RSS feed or follow us on Twitter for more insight on Linux performance.

Comments

  1. mjablonski said 3 days later:

    Very well explained, thank you very much for sharing this great article!!!

  2. Luke Hartman said 3 days later:

    Indeed. I never was quite sure what the #s meant, except that lower is better. Thanks for the clarification.

  3. John Liptak said 3 days later:

    I would add a paragraph about CPU-bound applications. For example, if you have a ray tracing app that will use all of the CPU you can give it, your load average will always be > 1.0. It's not a problem if you planned it that way.

  4. John Walker said 3 days later:

    Nice explanation. Now can you explain vmstat and how to understand server io ;)

  5. feelinglucky said 3 days later:

    nice article, this is the Chinese translated version: http://www.gracecode.com/archives/2973/

  6. Gimi said 4 days later:

    Great writes! Learned a lot from it. Goooood work!

  7. A real linux admin said 4 days later:

    You forgot to mention that any process in IOWAIT is considered on the CPU. So you could have one process spending most of its time waiting on a slow NFS share that could drive your load average up to 1.0, even though your CPU is virtually idle.

  8. Colin said 4 days later:

    typo guy: eight sign-cores is single?

    Feel free to remove after you fix

  9. skinp said 4 days later:

    Great info, thanks. I actually always thought the RAM was also taken into account for the load average.

  10. Derek (Scout) said 4 days later:

    Thanks Colin. Fixed.

  11. Buttson said 4 days later:

    grep -c will give you a count without needing to pipe through wc.

  12. Stephen said 4 days later:

    In addition to the CPU-bound processes, you've not mentioned nice. If you have processes that are very low priority (high nice value), they may not interfere with interactive use of the machine. For example, the machine I'm typing on is quite responsive. The load average is a little over 4. It's a dual CPU. 4 of the processes are niced as far as they can go. These can be ignored. A little over 4 minus 4 is a little over zero. So, there's no idle time, but the machine does what I want it to do, when I want it to do it.

    And, I have 4 processes running because I want to make use of both CPUs. But the code is written to either run single-threaded, or use 4 threads, cooperating. Well, that's the breaks.

  13. chkno said 4 days later:

    The above is accurate for CPU-bound loads, but there are other resources in play. For example, if you have a one-CPU, one-disk system with a load average of 15, do not just run out and buy a 16-core, one-disk system. If all those processes are waiting on disk I/O, you would be much better off with a one-CPU, 16-disk system (or an SSD).

  14. jet said 4 days later:

    Well done

  15. Chris said 4 days later:

    While I understood the basics of the load rule (over .7, watch out), I never truly got how it scaled to multi-core processors or how it broke down into multiple-minute time frames. Thanks!

    One question though: the Pent4 had hyperthreading, which "faked" a second core and could help, or seriously hinder, performance depending on the code. It's come back now with the i7/Xeon derivatives (not sure about the new i5's). How does this affect the values to be wary of? Or does it? Since it's a "virtual" core, in a sense it can help, but too much or the wrong kind of work can starve your main core and cause serious problems.

    Any thoughts?

  16. oldschool said 4 days later:

    One slight disagreement… in large-scale processing applications, a load of .7 is considered wasteful. While a web server may need available headroom for unexpected traffic spikes, a server dedicated to processing data should be worked as hard as possible. Consider the difference between the plumbing to and from your house (and neighborhood) versus an oil pipeline to Alaska. You expect the plumbing for sporadically used water or sewage to and from your house to handle unexpected usage patterns. The pipeline from Alaska should be running at full capacity at all times to avoid costly downtime. Large grid computing makes use of this model by allowing your servers to have the additional capacity for spikes, while consuming the idle CPU cycles when possible.