Understanding the Linux operating system

OS, cell

32-bit, 64-bit

2^32 = 4G addresses => 4GB of addressable memory

CPU

  ALU (arithmetic/logic unit)

  Control unit

  Registers

PAE:

  Physical Address Extension

  32-bit + 4 bits = 36-bit physical addressing => 64GB
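The 64GB figure can be sanity-checked with shell arithmetic (a quick check, not from the original notes):

```shell
# PAE extends 32-bit physical addressing to 36 bits: 2^36 bytes = 64 GiB.
echo $(( (1 << 36) / (1 << 30) ))   # GiB addressable with 36 address bits
```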

Programs exhibit locality

Cache replacement policies

Spatial locality, temporal locality

N-way set associative

Write through: write to cache and memory at the same time

Write back: write to cache, flush to memory later

Video card

South bridge

SSD (solid-state drive)

I/O ports

65,536 ports (numbered 0-65535)

Each device is assigned a contiguous range of ports

poll (polling)

Interrupt controller

Critical section

DMA: Direct Memory Access

monitor

  OS --> VM

Process: a unit of execution

  System resources: CPU time, memory space

OS: VM

  CPU:

    Time: slicing

      Cache: holds the current program's data

    Process switching: save the context, restore the context

  Memory: linear addresses <-- physical addresses

    Space: mapping

4K page, page frame

  I/O:

    kernel --> process

Ready

Sleeping

Interruptible sleep

Uninterruptible sleep

Schematic interaction of different performance components

The process descriptor and task list

List: linked list

Process descriptor:

  Process metadata

  Doubly linked list

Context switching

Linux: preemptive

  tick

    Time resolution

  100Hz

  1000Hz

Timer interrupt

A: 5ms, 1ms

C:

Process classes:

  Interactive processes (I/O-bound)

  Batch processes (CPU-bound)

  Real-time processes

Desktop: interactive

Server: CPU

  CPU-bound: long time slices, low priority

  I/O-bound: short time slices, high priority

Linux priorities: priority

  Real-time priority: 1-99; the lower the number, the lower the priority

  Static priority: 100-139; the lower the number, the higher the priority

    Real-time priorities rank above static priorities

  nice value: adjusts the static priority

Scheduling classes:

  Real-time processes:

    SCHED_FIFO: First In First Out

    SCHED_RR: Round Robin

    SCHED_OTHER: schedules processes in the 100-139 range

      100-139

        10:100

        30:115

        2:130

Dynamic priority:

  dynamic priority = max(100, min(static priority - bonus + 5, 139))

    bonus: 0-10

  e.g., static priority 110 with bonus 10 => 105; with bonus 5 => 110
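The formula can be sketched as a small shell function (dynprio is a made-up helper name for illustration, not a real utility):

```shell
# dynamic priority = max(100, min(static priority - bonus + 5, 139))
dynprio() {
    sp=$1; bonus=$2
    dp=$(( sp - bonus + 5 ))
    [ "$dp" -gt 139 ] && dp=139    # clamp into the 100-139 static range
    [ "$dp" -lt 100 ] && dp=100
    echo "$dp"
}

dynprio 110 10   # maximum bonus: 110 - 10 + 5 = 105
dynprio 110 5    # neutral bonus: 110 -  5 + 5 = 110
```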

Manually adjusting priorities:

  100-139: nice

    nice -n N COMMAND          start a process at a given nice value

    renice -n # PID            change the nice value of a running process

    chrt -p PID                show the scheduling policy and priority of a running process
  1-99:
    chrt -f -p [prio] PID      move a running process to SCHED_FIFO at the given priority (-f = FIFO class)

    chrt -r -p [prio] PID      move a running process to SCHED_RR at the given priority (-r = RR class)

    chrt -f [prio] COMMAND     start a process with the given FIFO priority

    ps -e -o class,rtprio,pri,nice,cmd    view priorities
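A minimal demonstration of the unprivileged side of these commands; raising priority or setting a real-time class requires root, so those lines are only sketched in comments (PID 1234 and the priorities are illustrative):

```shell
# Lowering priority (a positive nice value) needs no privileges:
nice -n 10 sh -c 'echo started at nice 10'

# The following require root and are shown only as a sketch:
# renice -n -5 1234        # raise the priority of running PID 1234
# chrt -f -p 10 1234       # move PID 1234 to SCHED_FIFO, real-time priority 10
# chrt -r -p 20 1234       # move PID 1234 to SCHED_RR, real-time priority 20
# chrt -f 10 some_command  # start some_command under SCHED_FIFO at priority 10
```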

O(1)

SCHED_OTHER

CFS: Completely Fair Scheduler

Process address space

COW

  Kernel --> init

    init

      fork(): system call

      task_struct

        Memory --> Parent

        COW: Copy On Write


        prefork

[root@Smoke ~]# top    # view CPU usage
top - 19:56:19 up 1 day, 22:22,  3 users,  load average: 0.00, 0.00, 0.00
Tasks: 119 total,   1 running, 118 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  3.8%sy,  0.0%ni, 95.8%id,  0.0%wa,  0.0%hi,  0.4%si,  0.0%st
Mem:   1938968k total,  1909364k used,    29604k free,   106984k buffers
Swap:  2097144k total,      180k used,  2096964k free,  1521396k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   28 root      20   0     0    0    0 S 15.6  0.0   0:01.08 kswapd0
    4 root      20   0     0    0    0 S  0.3  0.0   0:01.78 ksoftirqd/0      
  369 root      20   0     0    0    0 S  0.3  0.0   0:44.95 kjournald      
 6382 root      20   0  2696 1124  876 R  0.3  0.1   0:00.42 top
    1 root      20   0  2900 1388 1224 S  0.0  0.1   0:01.65 init               
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd 
    3 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0                  
    5 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0               
    6 root      RT   0     0    0    0 S  0.0  0.0   0:19.02 watchdog/0                      
    7 root      20   0     0    0    0 S  0.0  0.0   0:13.24 events/0        
    8 root      20   0     0    0    0 S  0.0  0.0   0:00.00 cgroup            
    9 root      20   0     0    0    0 S  0.0  0.0   0:00.00 khelper                                
   10 root      20   0     0    0    0 S  0.0  0.0   0:00.00 netns               
   11 root      20   0     0    0    0 S  0.0  0.0   0:00.00 async/mgr            
   12 root      20   0     0    0    0 S  0.0  0.0   0:00.00 pm     
[root@Smoke ~]# ps -e -o class,rtprio,pri,nice,cmd    # view priorities
CLS RTPRIO PRI  NI CMD
TS       -  19   0 /sbin/init
TS       -  19   0 [kthreadd]
FF      99 139   - [migration/0]
TS       -  19   0 [ksoftirqd/0]
FF      99 139   - [migration/0]
FF      99 139   - [watchdog/0]
TS       -  19   0 [events/0]
TS       -  19   0 [cgroup]
TS       -  19   0 [khelper]
TS       -  19   0 [netns]
TS       -  19   0 [async/mgr]
TS       -  19   0 [pm]
TS       -  19   0 [sync_supers]
TS       -  19   0 [bdi-default]
TS       -  19   0 [kintegrityd/0]
TS       -  19   0 [kblockd/0]
TS       -  19   0 [kacpid]
TS       -  19   0 [kacpi_notify]
TS       -  19   0 [kacpi_hotplug]
TS       -  19   0 [ata/0]
TS       -  19   0 [ata_aux]
TS       -  19   0 [ksuspend_usbd]
TS       -  19   0 [khubd]
TS       -  19   0 [kseriod]
TS       -  19   0 [md/0]
TS       -  19   0 [md_misc/0]
TS       -  19   0 [khungtaskd]
TS       -  19   0 [kswapd0]
TS       -  14   5 [ksmd]
TS       -  19   0 [aio/0]
TS       -  19   0 [crypto/0]
TS       -  19   0 [kthrotld/0]
TS       -  19   0 [pciehpd]
TS       -  19   0 [kpsmoused]
TS       -  19   0 [usbhid_resumer]
TS       -  19   0 [scsi_eh_0]
TS       -  19   0 [scsi_eh_1]
TS       -  19   0 [mpt_poll_0]
TS       -  19   0 [mpt/0]
TS       -  19   0 [scsi_eh_2]
TS       -  19   0 [scsi_eh_3]
TS       -  19   0 [scsi_eh_4]
TS       -  19   0 [scsi_eh_5]
TS       -  19   0 [scsi_eh_6]
TS       -  19   0 [scsi_eh_7]
TS       -  19   0 [scsi_eh_8]
TS       -  19   0 [scsi_eh_9]
TS       -  19   0 [scsi_eh_10]
TS       -  19   0 [scsi_eh_11]
TS       -  19   0 [scsi_eh_12]
TS       -  19   0 [scsi_eh_13]
TS       -  19   0 [scsi_eh_14]
TS       -  19   0 [scsi_eh_15]
TS       -  19   0 [scsi_eh_16]
TS       -  19   0 [scsi_eh_17]
TS       -  19   0 [scsi_eh_18]
TS       -  19   0 [scsi_eh_19]
TS       -  19   0 [scsi_eh_20]
TS       -  19   0 [scsi_eh_21]
TS       -  19   0 [scsi_eh_22]
TS       -  19   0 [scsi_eh_23]
TS       -  19   0 [scsi_eh_24]
TS       -  19   0 [scsi_eh_25]
TS       -  19   0 [scsi_eh_26]
TS       -  19   0 [scsi_eh_27]
TS       -  19   0 [scsi_eh_28]
TS       -  19   0 [scsi_eh_29]
TS       -  19   0 [scsi_eh_30]
TS       -  19   0 [scsi_eh_31]
TS       -  19   0 [scsi_eh_32]
TS       -  19   0 [kjournald]
TS       -  23  -4 /sbin/udevd -d
TS       -  19   0 [vmmemctl]
TS       -  19   0 [flush-8:0]
TS       -  19   0 [bluetooth]
TS       -  19   0 [kstriped]
TS       -  19   0 [kjournald]
TS       -  19   0 [kauditd]
TS       -  19   0 /usr/local/memcached/bin/memcached -d -p 11211 -u nobody -c 1024 -m 128
TS       -  23  -4 auditd
TS       -  19   0 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
TS       -  19   0 dbus-daemon --system
TS       -  19   0 nginx: worker process                   
TS       -  19   0 /usr/sbin/sshd
TS       -  19   0 /usr/libexec/postfix/master
TS       -  19   0 qmgr -l -t fifo -u
TS       -  19   0 crond
TS       -  19   0 /usr/bin/rhsmcertd
TS       -  19   0 login -- root     
TS       -  19   0 /sbin/mingetty /dev/tty2
TS       -  19   0 /sbin/mingetty /dev/tty3
TS       -  19   0 /sbin/mingetty /dev/tty4
TS       -  21  -2 /sbin/udevd -d
TS       -  19   0 /sbin/mingetty /dev/tty5
TS       -  21  -2 /sbin/udevd -d
TS       -  19   0 /sbin/mingetty /dev/tty6
TS       -  19   0 /usr/sbin/console-kit-daemon --no-daemon
TS       -  19   0 -bash
TS       -  19   0 php-fpm: master process (/usr/local/php/etc/php-fpm.conf)  
TS       -  19   0 php-fpm: pool www 
TS       -  19   0 php-fpm: pool www 
TS       -  19   0 php-fpm: pool www
TS       -  19   0 php-fpm: pool www
TS       -  19   0 php-fpm: pool www
TS       -  19   0 php-fpm: pool www 
TS       -  19   0 php-fpm: pool www
TS       -  19   0 php-fpm: pool www
TS       -  19   0 sshd: root@pts/0 
TS       -  19   0 -bash
TS       -  19   0 sshd: root@notty 
TS       -  19   0 /usr/libexec/openssh/sftp-server
TS       -  19   0 /usr/libexec/openssh/sftp-server
TS       -  19   0 pickup -l -t fifo -u
TS       -  19   0 sshd: root@pts/1 
TS       -  19   0 -bash
TS       -  19   0 ps -e -o class,rtprio,pri,nice,cmd
TS       -  19   0 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
TS       -  19   0 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/mydata/data --pid-file=/mydata/data/Smoke.com.pid
TS       -  19   0 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/mydata/data --plugin-dir=/usr/local/mysql/lib/plugin --
user=mysql --log-error=/var/log/mysqld.log --pid-fi   
[root@Smoke ~]# man chrt    # view the chrt man page

       chrt - manipulate real-time attributes of a process

       chrt [options] prio command [arg]...
       chrt [options] -p [prio] pid

       -b, --batch
              set scheduling policy to SCHED_BATCH (Linux specific)

       -f, --fifo
              set scheduling policy to SCHED_FIFO

       -i, --idle
              set scheduling policy to SCHED_IDLE (Linux specific)

       -r, --rr
              set scheduling policy to SCHED_RR (the default)

调度类别:

  RT

    SCHED_FIFO

    SCHED_RR

  100-139

    SCHED_OTHER

  SCHED_BATCH

  SCHED_IDLE

Preemption

  tick: timer interrupt

    100Hz

    1000Hz

RHEL 6.4

  tickless

  interrupt-driven

    hard interrupts

    soft interrupts

    deep sleep states

I1 (L1 instruction cache), D1 (L1 data cache)

SMP

Symmetric multiprocessing

NUMA: Non-Uniform Memory Access

Local and Remote Memory Access in NUMA Topology

 

CPU affinity

numastat

numactl

numad only associates a process (or processes) with CPUs and NUMA nodes at the hardware level; to get real CPU affinity, the process must be bound to a CPU explicitly

taskset: bind a process to specific CPUs

  mask:

    0x00000001 -> binary 0001: CPU 0

    0x00000003 -> binary 0011: CPUs 0 and 1

    0x00000005 -> binary 0101: CPUs 0 and 2

    0x00000007 -> binary 0111: CPUs 0-2
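The masks above are plain bitmaps (bit N selects CPU N), so they can be derived with shell arithmetic; mask_for is a made-up helper for illustration, not a real command:

```shell
# Build a taskset-style CPU mask from a list of CPU numbers.
mask_for() {
    m=0
    for cpu in "$@"; do
        m=$(( m | (1 << cpu) ))    # set bit N for CPU N
    done
    printf '0x%08x\n' "$m"
}

mask_for 0        # CPU 0            -> 0x00000001
mask_for 0 1      # CPUs 0 and 1     -> 0x00000003
mask_for 0 2      # CPUs 0 and 2     -> 0x00000005
mask_for 0 1 2    # CPUs 0 through 2 -> 0x00000007
```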

# taskset -p mask pid

taskset -p 0x00000004 101    # bind PID 101 to CPU 2 (binary 0100, the third CPU)

taskset -p -c 3 101          # bind PID 101 to CPU 3

taskset -p -c 0,1 101        # bind PID 101 to CPUs 0 and 1

taskset -p -c 0-2 101        # bind PID 101 to CPUs 0-2

taskset -p -c 0-2,7 101      # bind PID 101 to CPUs 0-2 and 7

Tuning run queue length with taskset

Restrict the length of a CPU run queue

  Isolate a CPU from automatic scheduling with isolcpus=cpu_number,...,cpu_number in /etc/grub.conf (the kernel will not schedule ordinary processes on these CPUs, so they are effectively reserved)

  Pin tasks to that CPU with taskset

  Consider moving IRQs off the CPU

echo cpu_mask > /proc/irq/<irq_num>/smp_affinity    # cpu_mask is the CPU bitmask, irq_num the interrupt number

Interrupts should be bound to the non-isolated CPUs, so the isolated CPUs never run interrupt handlers
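Put together, isolating CPUs 2-3 and steering an interrupt away from them might look like this (a sketch only: it needs root, and IRQ number 19 is an arbitrary example, not from the source):

```shell
# /etc/grub.conf kernel line gains: isolcpus=2,3   (reserve CPUs 2 and 3)

# Allow IRQ 19 only on the non-isolated CPUs 0 and 1 (mask 0x3):
echo 3 > /proc/irq/19/smp_affinity

# Verify the new mask:
cat /proc/irq/19/smp_affinity
```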

Viewing CPU performance data

Load average: average length of the run queues

  Considers only tasks that are runnable (TASK_RUNNING) or in uninterruptible sleep (TASK_UNINTERRUPTIBLE)

  sar -q         # run queue length and load averages

  top            # view system tasks

  w              # logged-in users and what they are running

  uptime         # system uptime and load averages

  vmstat 1 5     # virtual memory statistics
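The load averages these tools report all come from the kernel; on Linux they can also be read straight from /proc/loadavg:

```shell
# Fields: 1-, 5-, 15-minute load averages, runnable/total tasks, last PID.
read one five fifteen tasks lastpid < /proc/loadavg
echo "1min=$one 5min=$five 15min=$fifteen"
```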

CPU utilization

  mpstat 1 2     # per-CPU utilization on an SMP system

  sar -P ALL 1 2 # per-CPU usage

  iostat -c 1 2

  /proc/stat

  dstat -c

sar -w

  average number of context switches per second, and the average process creation rate

Scheduler domains

Group processors into cpusets

  Each cpuset represents a scheduler domain (a process can be bound to the CPUs in a given set)

  Supports both multi-core and NUMA architectures

  Simple management interface through the cpuset virtual file system

The root cpuset contains all system resources

Child cpusets

  Each cpuset must contain at least one CPU and one memory zone

  Child cpusets can be nested

  Dynamically attach tasks to a cpuset

Consequences

  Control latency due to queue length, cache, and NUMA zones

  Assign processes with different CPU characteristics to different cpusets

  Scalable for complex performance scenarios

Configuring the root cpuset

Create a mount point at /cpusets

Add an entry to /etc/fstab

cpuset  /cpusets  cpuset  defaults  0 0    # device, mount point, fs type, options, dump, fsck

Mounting the filesystem automatically creates the cpuset files

/cpusets/cpus

/cpusets/mems

/cpusets/tasks

  All CPUs and memory zones belong to the root cpuset

  All existing PIDs are assigned to the root cpuset
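Creating a child cpuset then uses the same file interface (a sketch only: it requires root, and the name "web", CPU 1, memory node 0, and PID 1234 are illustrative, not from the source):

```shell
# Carve a child cpuset out of the root cpuset mounted at /cpusets:
mkdir /cpusets/web
echo 1 > /cpusets/web/cpus      # CPUs this cpuset may use
echo 0 > /cpusets/web/mems      # memory zones it may allocate from
echo 1234 > /cpusets/web/tasks  # attach PID 1234 to the cpuset
```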

 

Note: the NUMA commands below require a 64-bit RHEL system.

[root@node1 ~]# numa    # press Tab to list the numa* commands
numactl   numad     numademo  numastat  
[root@node1 ~]# numastat    # per-NUMA-node memory statistics
                           node0
numa_hit                  227475
numa_miss                      0
numa_foreign                   0
interleave_hit             14980
local_node                227475
other_node                     0
[root@node1 ~]# man numastat    # view the numastat man page

       numastat - Show per-NUMA-node memory statistics for processes and the operating system

       numa_hit is memory successfully allocated on this node as intended.

       numa_miss is memory allocated on this node despite the process preferring some different  node.  Each  numa_miss  has  a
       numa_foreign on another node.

       numa_foreign is memory intended for this node, but actually allocated on some different node.  Each numa_foreign has a
       numa_miss on another node.

       interleave_hit is interleaved memory successfully allocated on this node as intended.

       -p <PID> or <pattern>  (memory allocation for a specific process)
              Show per-node memory allocation information for the specified PID or pattern.  If the -p argument is only digits,
              it  is assumed to be a numerical PID.  If the argument characters are not only digits, it is assumed to be a text
              fragment pattern to search for in process command lines.  For example, numastat -p qemu will attempt to find  and
              show information for processes with "qemu" in the command line.  Any command line arguments remaining after numa-
              stat option flag processing is completed, are assumed to be additional <PID> or <pattern> process specifiers.  In
              this sense, the -p option flag is optional: numastat qemu is equivalent to numastat -p qemu

       -s[<node>]  (sort, optionally by a specific node's column)
              Sort  the  table data in descending order before displaying it, so the biggest memory consumers are listed first.
              With no specified <node>, the table will be sorted by the total column.  If the optional <node> argument is  sup-
              plied,  the  data  will  be sorted by the <node> column.  Note that <node> must follow the -s immediately with no
              intermediate white space (e.g., numastat -s2).
[root@node1 ~]# numastat -s    # sorted per-node memory statistics

Per-node numastat info (in MBs):
                          Node 0           Total
                 --------------- ---------------
Numa_Hit                  887.74          887.74
Local_Node                887.74          887.74
Interleave_Hit             58.52           58.52
Numa_Foreign                0.00            0.00
Numa_Miss                   0.00            0.00
Other_Node                  0.00            0.00
[root@node1 ~]# numastat -s node0    # node0 memory statistics
Found no processes containing pattern: "node0"

Per-node numastat info (in MBs):
                          Node 0           Total
                 --------------- ---------------
Numa_Hit                  886.74          886.74
Local_Node                886.74          886.74
Interleave_Hit             58.52           58.52
Numa_Foreign                0.00            0.00
Numa_Miss                   0.00            0.00
Other_Node                  0.00            0.00
[root@node1 ~]# man numactl    # view the numactl man page

       numactl - Control NUMA policy for processes or shared memory

       --cpunodebind=nodes, -N nodes  (run the command only on the CPUs of the given nodes, so it never touches other nodes)
              Only execute command on the CPUs of nodes.  Note that nodes may consist of multiple CPUs.  nodes may be specified as noted above.

       --physcpubind=cpus, -C cpus  (bind the process to specific CPUs)
              Only execute process on cpus.  This accepts cpu numbers as shown in the processor fields of /proc/cpuinfo, or relative cpus as in relative to the
              current cpuset.  You may specify "all", which means all cpus in the current cpuset.  Physical cpus may be specified as N,N,N or N-N or N,N-N or
              N-N,N-N and so forth.  Relative cpus may be specified as +N,N,N or +N-N or +N,N-N and so forth. The + indicates that the cpu numbers are relative
              to the process’ set of allowed cpus in its current cpuset.  A !N-N notation indicates the inverse of N-N, in other words all cpus except N-N.  If
              used with + notation, specify !+N-N.

       --show, -s  (show the current process's NUMA policy settings)
              Show NUMA policy settings of the current process.

[root@node1 ~]# numactl --show    # show the current process's NUMA policy settings
policy: default
preferred node: current
physcpubind: 0 1 
cpubind: 0 
nodebind: 0 
membind: 0 
[root@node1 ~]# cat /proc/cpuinfo    # view CPU information
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 58
model name	: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
stepping	: 9
microcode	: 18
cpu MHz		: 2494.389
cache size	: 3072 KB
physical id	: 0
siblings	: 1
core id		: 0
cpu cores	: 1
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp
 lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 pcid sse4_1 ss
e4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm ida arat epb pln pts dts fsgsbase smep
bogomips	: 4988.77
clflush size	: 64
cache_alignment	: 64
address sizes	: 42 bits physical, 48 bits virtual
power management:

processor	: 1
vendor_id	: GenuineIntel
cpu family	: 6
model		: 58
model name	: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
stepping	: 9
microcode	: 18
cpu MHz		: 2494.389
cache size	: 3072 KB
physical id	: 2
siblings	: 1
core id		: 0
cpu cores	: 1
apicid		: 2
initial apicid	: 2
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp
 lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 pcid sse4_1 ss
e4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm ida arat epb pln pts dts fsgsbase smep
bogomips	: 4988.77
clflush size	: 64
cache_alignment	: 64
address sizes	: 42 bits physical, 48 bits virtual
power management:
Note: bindings made with numactl do not survive a reboot; the command forcibly binds the process only for the current run.
[root@node1 ~]# man numad    # view the numad man page

       numad - A user-level daemon that provides placement advice and process management for efficient use of CPUs and memory
       on systems with NUMA topology.  (A user-space daemon that watches how each CPU is being used and automatically associates
       processes with CPUs, and those CPUs with their nodes.)
[root@node1 ~]# numastat    # per-NUMA-node memory statistics
                           node0
numa_hit                  230557
numa_miss                      0    # check whether the miss count is high
numa_foreign                   0
interleave_hit             14980
local_node                230557
other_node                     0
[root@node1 ~]# man taskset    # view the taskset man page

       taskset - retrieve or set a process’s CPU affinity

       0x00000001
              is processor #0

       0x00000003
              is processors #0 and #1

       -c, --cpu-list  (select CPUs by number instead of by bitmask)
              specify a numerical list of processors instead of a bitmask.  The list may contain multiple items, separated by
              comma, and ranges.  For example, 0,5,7,9-11.
[root@node1 ~]# cat /proc/irq/0/smp_affinity    # view IRQ 0's smp_affinity mask
ffffffff,ffffffff,ffffffff,ffffffff    # all bits set: IRQ 0 may be handled on any CPU
[root@node1 ~]# rpm -qf `which sar`    # find the package that owns sar
sysstat-9.0.4-31.el6.x86_64
[root@node1 ~]# rpm -ql sysstat    # list the files installed by sysstat
/etc/cron.d/sysstat
/etc/rc.d/init.d/sysstat
/etc/sysconfig/sysstat
/etc/sysconfig/sysstat.ioconf
/usr/bin/cifsiostat
/usr/bin/iostat
/usr/bin/mpstat
/usr/bin/pidstat
/usr/bin/sadf
/usr/bin/sar
/usr/lib64/sa
/usr/lib64/sa/sa1
/usr/lib64/sa/sa2
/usr/lib64/sa/sadc
/usr/share/doc/sysstat-9.0.4
/usr/share/doc/sysstat-9.0.4/CHANGES
/usr/share/doc/sysstat-9.0.4/COPYING
/usr/share/doc/sysstat-9.0.4/CREDITS
/usr/share/doc/sysstat-9.0.4/FAQ
/usr/share/doc/sysstat-9.0.4/README
/usr/share/doc/sysstat-9.0.4/TODO
/usr/share/locale/af/LC_MESSAGES/sysstat.mo
/usr/share/locale/da/LC_MESSAGES/sysstat.mo
/usr/share/locale/de/LC_MESSAGES/sysstat.mo
/usr/share/locale/es/LC_MESSAGES/sysstat.mo
/usr/share/locale/fi/LC_MESSAGES/sysstat.mo
/usr/share/locale/fr/LC_MESSAGES/sysstat.mo
/usr/share/locale/id/LC_MESSAGES/sysstat.mo
/usr/share/locale/it/LC_MESSAGES/sysstat.mo
/usr/share/locale/ja/LC_MESSAGES/sysstat.mo
/usr/share/locale/ky/LC_MESSAGES/sysstat.mo
/usr/share/locale/lv/LC_MESSAGES/sysstat.mo
/usr/share/locale/mt/LC_MESSAGES/sysstat.mo
/usr/share/locale/nb/LC_MESSAGES/sysstat.mo
/usr/share/locale/nl/LC_MESSAGES/sysstat.mo
/usr/share/locale/nn/LC_MESSAGES/sysstat.mo
/usr/share/locale/pl/LC_MESSAGES/sysstat.mo
/usr/share/locale/pt/LC_MESSAGES/sysstat.mo
/usr/share/locale/pt_BR/LC_MESSAGES/sysstat.mo
/usr/share/locale/ro/LC_MESSAGES/sysstat.mo
/usr/share/locale/ru/LC_MESSAGES/sysstat.mo
/usr/share/locale/sk/LC_MESSAGES/sysstat.mo
/usr/share/locale/sv/LC_MESSAGES/sysstat.mo
/usr/share/locale/vi/LC_MESSAGES/sysstat.mo
/usr/share/locale/zh_CN/LC_MESSAGES/sysstat.mo
/usr/share/locale/zh_TW/LC_MESSAGES/sysstat.mo
/usr/share/man/man1/cifsiostat.1.gz
/usr/share/man/man1/iostat.1.gz
/usr/share/man/man1/mpstat.1.gz
/usr/share/man/man1/pidstat.1.gz
/usr/share/man/man1/sadf.1.gz
/usr/share/man/man1/sar.1.gz
/usr/share/man/man5/sysstat.5.gz
/usr/share/man/man8/sa1.8.gz
/usr/share/man/man8/sa2.8.gz
/usr/share/man/man8/sadc.8.gz
/var/log/sa
[root@node1 ~]# vmstat 1 7    # sample CPU and memory usage once per second, 7 times
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 320912  10260 114364    0    0    16     1   16   26  0  0 100  0  0	
 0  0      0 320872  10260 114384    0    0     0     0   33   50  0  0 100  0  0	
 0  0      0 320872  10260 114384    0    0     0    16   36   55  0  0 100  0  0	
 0  0      0 320872  10260 114384    0    0     0     0   20   42  0  0 100  0  0	
 0  0      0 320872  10260 114384    0    0     0     0   26   49  0  0 100  0  0	
 0  0      0 320872  10260 114384    0    0     0     0   21   48  0  0 100  0  0	
 0  0      0 320872  10268 114380    0    0     0    12   40   57  0  1 100  0  0	
[root@node1 ~]# sar -q    # historical run queue length and load averages
Linux 2.6.32-431.23.3.el6.x86_64 (iZ25j00d1grZ) 	07/26/2016 	_x86_64_	(1 CPU)

12:00:01 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15
12:10:01 AM         0       179      0.00      0.00      0.00
12:20:01 AM         0       179      0.00      0.00      0.00
12:30:01 AM         0       179      0.00      0.00      0.00
12:40:01 AM         0       179      0.00      0.00      0.00
12:50:01 AM         0       179      0.00      0.00      0.00
01:00:01 AM         2       183      0.00      0.00      0.00
01:10:01 AM         0       179      0.00      0.00      0.00
01:20:01 AM         0       178      0.00      0.00      0.00
01:30:01 AM         0       178      0.00      0.00      0.00
01:40:01 AM         0       178      0.00      0.00      0.00
01:50:01 AM         0       178      0.00      0.00      0.00
02:00:01 AM         2       182      0.00      0.00      0.00
02:10:01 AM         0       178      0.00      0.00      0.00
02:20:01 AM         0       178      0.00      0.00      0.00
02:30:01 AM         0       178      0.00      0.00      0.00
02:40:01 AM         0       179      0.00      0.00      0.00
02:50:01 AM         0       178      0.00      0.00      0.00
03:00:01 AM         2       182      0.00      0.00      0.00
03:10:01 AM         0       179      0.00      0.00      0.00
03:20:01 AM         0       179      0.00      0.00      0.00
03:30:01 AM         0       179      0.00      0.01      0.00
03:40:01 AM         0       179      0.00      0.00      0.00
03:50:01 AM         0       179      0.00      0.00      0.00
04:00:01 AM         2       182      0.00      0.00      0.00
04:10:01 AM         0       178      0.00      0.00      0.00
04:20:01 AM         0       178      0.00      0.00      0.00
04:30:01 AM         0       178      0.00      0.00      0.00
04:40:01 AM         0       178      0.00      0.00      0.00
04:50:01 AM         0       178      0.00      0.00      0.00
05:00:01 AM         2       182      0.00      0.00      0.00
[root@node1 ~]# sar -q 1    # run queue and load averages, sampled live every second
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月25日 	_x86_64_	(2 CPU)

23时56分40秒   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15
23时56分41秒         0       114      0.00      0.00      0.00
23时56分42秒         0       114      0.00      0.00      0.00
23时56分43秒         0       114      0.00      0.00      0.00
23时56分44秒         0       114      0.00      0.00      0.00
23时56分45秒         0       114      0.00      0.00      0.00
23时56分46秒         0       114      0.00      0.00      0.00
23时56分47秒         0       114      0.00      0.00      0.00
23时56分48秒         0       114      0.00      0.00      0.00
23时56分49秒         0       114      0.00      0.00      0.00
23时56分50秒         0       114      0.00      0.00      0.00
[root@node1 ~]# man sar    # view the sar man page

       sar - Collect, report, or save system activity information.

       -b     Report I/O and transfer rate statistics.  The following values are displayed:

       -B     Report paging statistics. Some of the metrics below are available only with post 2.5 kernels. The  following  values
              are displayed:  (memory paging activity)

       -d     Report  activity  for each block device (kernels 2.4 and newer only).  When data is displayed, the device specifica-
              tion dev m-n is generally used ( DEV column).  m is the major number of the device.  With recent kernels (post 2.5),
              n  is  the  minor number of the device, but is only a sequence number with pre 2.5 kernels. Device names may also be
              pretty-printed if option -p is used or persistent device names can be printed if option -j is used (see below). Val-
              ues  for  fields  avgqu-sz,  await,  svctm and %util may be unavailable and displayed as 0.00 with some 2.4 kernels.
              Note that disk activity depends on sadc options "-S DISK" and "-S XDISK" to be collected. The following  values  are
              displayed:  (per-device transfers per second)

       -q     Report queue length and load averages. The following values are displayed:

              runq-sz  (run queue length)
                     Run queue length (number of tasks waiting for run time).

              plist-sz  (number of processes in the task list)
                     Number of tasks in the task list.

              ldavg-1  (average over the last minute)
                     System load average for the last minute.  The load average is calculated as the average number of runnable or
                     running tasks (R state), and the number of tasks in uninterruptible sleep (D state) over the specified inter-
                     val.

              ldavg-5  (average over the past 5 minutes)
                     System load average for the past 5 minutes.

              ldavg-15  (average over the past 15 minutes)
                     System load average for the past 15 minutes.

[root@node1 ~]# sar -q 1    # run queue and load averages, sampled live every second
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月26日 	_x86_64_	(2 CPU)

00时05分21秒   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15
00时05分22秒         0       120      0.00      0.00      0.00
00时05分23秒         0       120      0.00      0.00      0.00
00时05分24秒         0       120      0.00      0.00      0.00
00时05分25秒         0       120      0.00      0.00      0.00
00时05分26秒         0       120      0.00      0.00      0.00
00时05分27秒         0       120      0.00      0.00      0.00
00时05分28秒         0       120      0.00      0.00      0.00
00时05分29秒         0       120      0.00      0.00      0.00
00时05分30秒         0       120      0.00      0.00      0.00
00时05分31秒         0       120      0.00      0.00      0.00
00时05分32秒         0       120      0.00      0.00      0.00
00时05分33秒         0       120      0.00      0.00      0.00
00时05分34秒         0       120      0.00      0.00      0.00
00时05分35秒         0       120      0.00      0.00      0.00
00时05分36秒         0       120      0.00      0.00      0.00
00时05分37秒         0       120      0.00      0.00      0.00
00时05分38秒         0       120      0.00      0.00      0.00
[root@node1 ~]# ab -n 10000 -c 300 http://127.0.0.1/index.php    # load test: -n total requests, -c concurrency level
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        Apache/2.2.15
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /index.php
Document Length:        0 bytes

Concurrency Level:      300
Time taken for tests:   7.560 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2661596 bytes
HTML transferred:       0 bytes
Requests per second:    1322.84 [#/sec] (mean)
Time per request:       226.785 [ms] (mean)
Time per request:       0.756 [ms] (mean, across all concurrent requests)
Transfer rate:          343.83 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2  37.4      1    3000
Processing:     3  159 939.0     23    7539
Waiting:        0  158 938.8     22    7537
Total:         12  161 943.5     24    7549

Percentage of the requests served within a certain time (ms)
  50%     24
  66%     26
  75%     27
  80%     27
  90%     29
  95%     34
  98%    995
  99%   7543
 100%   7549 (longest request)
[root@node1 ~]# sar -q 1    # run queue and load averages, sampled live every second
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月26日 	_x86_64_	(2 CPU)

00时13分55秒   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15
00时13分56秒         1       363      0.33      0.08      0.03
00时13分57秒         1       362      0.33      0.08      0.03
00时13分58秒        11       361      0.33      0.08      0.03
00时13分59秒         2       360      0.33      0.08      0.03
00时14分00秒         1       360      0.31      0.08      0.02
00时14分01秒         1       360      0.31      0.08      0.02
00时14分02秒         1       359      0.31      0.08      0.02
00时14分03秒         2       358      0.31      0.08      0.02
00时14分04秒         1       357      0.31      0.08      0.02
00时14分05秒         2       356      0.44      0.11      0.04
00时14分06秒         2       355      0.44      0.11      0.04
00时14分07秒        13       354      0.44      0.11      0.04
00时14分08秒         2       353      0.44      0.11      0.04
00时14分09秒         1       352      0.44      0.11      0.04
00时14分10秒         6       351      0.49      0.12      0.04
00时14分11秒         1       350      0.49      0.12      0.04
00时14分12秒         3       349      0.49      0.12      0.04
00时14分13秒         1       348      0.49      0.12      0.04
00时14分14秒         2       347      0.49      0.12      0.04
00时14分15秒         2       346      0.45      0.12      0.04
00时14分16秒         3       345      0.45      0.12      0.04
00时14分17秒         2       344      0.45      0.12      0.04
00时14分18秒         2       343      0.45      0.12      0.04
Note: runq-sz is the run queue length.
[root@node1 ~]# man mpstat    # view the mpstat man page

       mpstat - Report processors related statistics.

       mpstat [ -A ] [ -I { SUM | CPU | ALL } ] [ -u ] [ -P { cpu [,...] | ON | ALL } ] [ -V ] [ interval [ count ] ]

       -I { SUM | CPU | ALL }  (interrupt counts handled per CPU)
              Report interrupts statistics.

[root@node1 ~]# mpstat    # overall CPU usage
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月26日 	_x86_64_	(2 CPU)

00时31分46秒  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
00时31分46秒  all    0.63    0.00    1.13    0.18    0.00    0.73    0.00    0.00   97.33
[root@node1 ~]# mpstat -P 0 1    # CPU 0 usage, refreshed every second
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月26日 	_x86_64_	(2 CPU)

00时32分21秒  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
00时32分22秒    0    0.00    0.00    0.99    0.00    0.00    0.00    0.00    0.00   99.01
00时32分23秒    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
00时32分24秒    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Note: %usr is user-space time, %sys kernel-space time, %iowait time waiting on I/O, %irq time servicing hardware interrupts, %soft software interrupts, %steal time stolen by the hypervisor, %guest time spent running virtual machines, %idle idle time.
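A quick way to reduce one of these lines to a single "busy" figure is to subtract %idle (the last field) from 100; a minimal awk sketch using the sample data line captured above:

```shell
# Overall busy percentage = 100 - %idle; %idle is the last field of an
# mpstat data line (values copied from the sample output above).
line='00:31:46 all 0.63 0.00 1.13 0.18 0.00 0.73 0.00 0.00 97.33'

busy=$(echo "$line" | awk '{ printf "%.2f", 100 - $NF }')
echo "CPU busy: ${busy}%"
```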
[root@node1 ~]# man mpstat(view the mpstat man page)

              %irq(hardware interrupts)
                     Show the percentage of time spent by the CPU or CPUs to service hardware interrupts.

              %soft(software interrupts)
                     Show the percentage of time spent by the CPU or CPUs to service software interrupts.

[root@node1 ~]# mpstat -I CPU 1(view per-CPU interrupt handling)
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月26日 	_x86_64_	(2 CPU)
02时02分26秒 CPU 0/s 1/s 8/s 9/s 12/s 14/s 15/s 16/s 17/s
02时02分27秒 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.00
18/s 19/s 24/s 25/s 26/s 27/s 28/s 29/s 30/s 31/s 
0.00 2.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
32/s 33/s 34/s 35/s 36/s 37/s 38/s 39/s 40/s 41/s 42/s
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
43/s 44/s 45/s 46/s 47/s 48/s 49/s 50/s 51/s 52/s 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
53/s 54/s 55/s NMI/s LOC/s SPU/s PMI/s IWI/s RES/s CAL/s TLB/s
0.00 0.00 0.00 0
TRM/s THR/s MCE/s MCP/s ERR/s MIS/s
(the wide per-IRQ table wraps across the lines above; the trailing columns were truncated in capture)


[root@node1 ~]# sar -P 0 1(view CPU 0, refreshing every second)
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月26日 	_x86_64_	(2 CPU)

02时04分01秒     CPU     %user     %nice   %system   %iowait    %steal     %idle
02时04分02秒       0      0.00      0.00      4.00      0.00      0.00     96.00
02时04分03秒       0      0.00      0.00      3.00      0.00      0.00     97.00
02时04分04秒       0      0.00      0.00      3.00      0.00      0.00     97.00
02时04分05秒       0      0.00      0.00      3.00      0.00      0.00     97.00
[root@node1 ~]# man iostat(view the iostat man page)

       iostat - Report Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions and network filesystems
 (NFS).(reports CPU statistics and I/O statistics for devices, partitions, and network filesystems)

       -c     Display the CPU utilization report.(show CPU)

       -d     Display the device utilization report.(show devices)

[root@node1 ~]# iostat -c(show CPU utilization)
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月26日 	_x86_64_	(2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.31    0.00    0.96    0.10    0.00   98.63

[root@node1 ~]# iostat -c 1(show CPU utilization, refreshing every 1 second)
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月26日 	_x86_64_	(2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.31    0.00    0.96    0.10    0.00   98.63

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00
[root@node1 ~]# iostat -c 1 6(show CPU utilization, refreshing every 1 second, 6 refreshes in total)
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	2016年07月26日 	_x86_64_	(2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.30    0.00    0.94    0.09    0.00   98.66

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.50    0.00    0.00   99.50
Note: if %iowait is unusually high, the I/O devices may be the bottleneck; if %system is high, too much time is being spent in kernel space.
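That rule of thumb can be automated; a sketch that checks an `iostat -c` avg-cpu data line (the 20% iowait and 30% system thresholds are illustrative only, tune them for your workload):

```shell
# Flag a possible I/O or kernel-space bottleneck from an iostat -c
# avg-cpu line (%user %nice %system %iowait %steal %idle).
# The 20/30 thresholds are illustrative, not canonical.
avg='0.31 0.00 0.96 0.10 0.00 98.63'

verdict=$(echo "$avg" | awk '{
    if ($4 > 20)      print "WARN: iowait " $4 "% - I/O may be the bottleneck"
    else if ($3 > 30) print "WARN: system " $3 "% - heavy kernel-space time"
    else              print "CPU looks healthy"
}')
echo "$verdict"
```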
[root@node1 ~]# cat /proc/stat(view CPU statistics)
cpu  8159 0 15627 2735069 2535 18 9714 0 0
cpu0 4185 0 7897 1367336 1257 18 4643 0 0
cpu1 3974 0 7730 1367733 1277 0 5071 0 0
intr 1391893 206 88 0 0 0 0 0 0 1 0 0 0 143 0 0 118 10860 7757 145 29467 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 (the intr line continues with hundreds of further per-IRQ counters, all 0; trimmed)
ctxt 4647980
btime 1469457843
processes 3153
procs_running 1
procs_blocked 0
softirq 9351126 0 799440 5552 6961625 7704 0 80 363031 989 1212705
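The leading fields of the cpu lines are cumulative jiffies in user, nice, system, idle, iowait, irq, softirq (and later steal/guest) order, so utilization can be computed from them directly. A sketch over the single sample captured above (one sample gives the since-boot average; real monitoring would take the delta between two samples):

```shell
# Since-boot CPU utilization from the cpu line of /proc/stat
# (user nice system idle iowait irq softirq steal guest);
# idle and iowait count as non-busy. Values copied from the
# sample output above.
cpuline='cpu 8159 0 15627 2735069 2535 18 9714 0 0'

util=$(echo "$cpuline" | awk '{
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    nonbusy = $5 + $6          # idle + iowait
    printf "%.2f", 100 * (total - nonbusy) / total
}')
echo "busy since boot: ${util}%"
```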
[root@node1 ~]# man dstat(view the dstat man page)

       dstat - versatile tool for generating system resource statistics

       -c, --cpu
              enable cpu stats (system, user, idle, wait, hardware interrupt, software interrupt)

       -C 0,3,total
              include cpu0, cpu3 and total

       -d, --disk
              enable disk stats (read, write)

       -m, --mem
              enable memory stats (used, buffers, cache, free)

       -n, --net
              enable network stats (receive, send)

       -N eth1,total
              include eth1 and total

       -p, --proc
              enable process stats (runnable, uninterruptible, new)

       -s, --swap
              enable swap stats (used, free)

       -r, --io
              enable I/O request stats (read, write requests)

       --aio  enable aio stats (asynchronous I/O)

       --fs   enable filesystem stats (open files, inodes)

       --ipc  enable ipc stats (message queue, semaphores, shared memory)

       --lock enable file lock stats (posix, flock, read, write)

       --socket
              enable socket stats (total, tcp, udp, raw, ip-fragments)

       --udp  enable udp stats (listen, active)

       --unix enable unix stats (datagram, stream, listen, active)

       --vm   enable vm stats (hard pagefaults, soft pagefaults, allocated, free)

       -v, --vmstat
              equals -pmgdsc -D total

       -a, --all
              equals -cdngy (default)

       --battery-remain(battery status)
              battery remaining in hours, minutes (needs ACPI)

       --cpufreq(CPU frequency)
              CPU frequency in percentage (needs ACPI)

       --disk-util(disk utilization)
              per disk utilization in percentage

       --gpfs GPFS read/write I/O (needs mmpmon)

       --innodb-buffer(InnoDB storage engine stats)
              show innodb buffer stats

       --lustre
              show lustre I/O throughput

       --memcache-hits(memcache hit rate)
              show the number of hits and misses from memcache

       --mysql5-cmds
              show the MySQL5 command stats

       --top-cpu(which process uses the most CPU)
              show most expensive CPU process

       --top-cputime(most CPU time used)
              show process using the most CPU time (in ms)

       --top-cputime-avg(highest average timeslice)
              show process with the highest average timeslice (in ms)

       --top-io
              show most expensive I/O process

       --top-latency(which process has the highest latency)
              show process with highest total latency (in ms)

       --top-mem(which process uses the most memory)
              show process using the most memory

[root@node1 ~]# dstat --top-cpu(show which process uses the most CPU)
-most-expensive-
  cpu process   
events/1     0.1
                
                
                
                
                
                
                
                
sshd: root@pt0.5
events/0     0.5
                
                
                
                
                
mysqld       0.5
[root@node1 ~]# dstat --top-mem(show which process uses the most memory)
--most-expensive-
  memory process 
mysqld       445M
mysqld       445M
mysqld       445M
mysqld       445M
[root@node1 ~]# dstat --top-mem --top-cpu(show the processes using the most memory and the most CPU)
--most-expensive- -most-expensive-
  memory process |  cpu process   
mysqld       445M|events/1     0.1
mysqld       445M|events/1     0.5
[root@node1 ~]# dstat --top-mem --top-cpu --top-io(show the processes using the most memory, CPU, and I/O)
--most-expensive- -most-expensive- ----most-expensive----
  memory process |  cpu process   |     i/o process      
mysqld       445M|events/1     0.1|sshd         44k 6867B
mysqld       445M|events/1     0.5|crond      5595B    0 
mysqld       445M|                |sshd: root@ 155B  208B
mysqld       445M|                |sshd: root@ 155B  208B
[root@node1 ~]# dstat -c(show CPU utilization)
----total-cpu-usage----
usr sys idl wai hiq siq
  0   1  99   0   0   0
  0   0 100   0   0   0
  0   0 100   0   0   0
  0   0 100   0   0   0
[root@node1 ~]# vmstat 1(view process and hardware usage, refreshing every second)
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0   2488 217176   8720 191188    0    0     5     4   47  148  0  1 99  0  0	
 0  0   2488 217168   8720 191188    0    0     0     0   44   57  0  1 100  0  0	
 0  0   2488 217168   8720 191188    0    0     0     0   34   59  0  0 100  0  0	
 0  0   2488 217168   8720 191188    0    0     0     0   29   51  0  0 100  0  0	
Note: cs is the number of context switches.
[root@node1 ~]# man sar(view the sar man page)

       -w     Report task creation and system switching activity.

              proc/s
                     Total number of tasks created per second.

              cswch/s
                      Total number of context switches per second.

[root@node1 ~]# sar -w 1(view context switching, refreshing every second)
Linux 2.6.32-504.el6.x86_64 (node1.Smoke.com) 	07/26/2016 	_x86_64_	(2 CPU)

03:21:15 AM    proc/s   cswch/s
03:21:16 AM      0.00    112.87
03:21:17 AM      0.00    116.00
03:21:18 AM      0.00    113.00
03:21:19 AM      0.00    114.14
Note: an unusually high context-switch rate may mean there are too many processes.
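The same figure can be derived from the cumulative ctxt counter in /proc/stat (4647980 in the capture above): read it twice and divide the delta by the interval. A sketch, with a hypothetical second reading standing in for the second sample:

```shell
# cswch/s from /proc/stat's cumulative ctxt counter: delta between
# two samples divided by the sampling interval. ctxt1 matches the
# capture above; ctxt2 and the interval are hypothetical.
ctxt1=4647980
ctxt2=4648210
interval=2   # seconds between samples

rate=$(( (ctxt2 - ctxt1) / interval ))
echo "context switches/s: $rate"
```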
[root@node1 ~]# mkdir /cpusets(create the /cpusets mount-point directory)
[root@node1 ~]# vim /etc/fstab(edit the fstab file)

#
# /etc/fstab
# Created by anaconda on Mon Jul 25 18:51:23 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=cf43ec31-f59f-423f-aa0b-d091c18b2fa4 /                       ext4    defaults        1 1
UUID=8d02aaa8-1714-4377-9925-a3d6e4cdca1e /boot                   ext4    defaults        1 2
UUID=2273581c-907e-423b-95f3-cc95e035dfda swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/cpuset                 /cpusets                cpuset  defaults        0 0
[root@node1 ~]# mount -a(mount every filesystem listed in /etc/fstab)
[root@node1 ~]# mount(list the mounted filesystems)
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/cpuset on /cpusets type cpuset (rw)
[root@node1 ~]# ls /cpusets/(list the contents of /cpusets)
cgroup.event_control  cpus           memory_migrate           memory_spread_page  notify_on_release   sched_relax_domain_level
cgroup.procs          mem_exclusive  memory_pressure          memory_spread_slab  release_agent       tasks
cpu_exclusive         mem_hardwall   memory_pressure_enabled  mems                sched_load_balance
Note: cpus lists the CPUs of the root domain; mems lists the memory nodes associated with it.
[root@node1 ~]# cat /cpusets/cpus(view the cpus file)
0-1
[root@node1 ~]# cat /cpusets/mems(view which memory nodes belong to this domain)
0
[root@node1 ~]# cat /cpusets/tasks(view which processes run in this domain)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
39
40
41
42
43
44
45
46
54
55
56
58
59
60
89
90
165
166
170
171
172
302
303
387
645
656
700
701
740
979
1024
1025
1044
1045
1046
1047
1091
1092
1129
1146
1432
1440
1441
1442
1443
1444
1445
1446
1447
1448
1449
1453
1454
1455
1456
1457
1458
1459
1460
1461
1462
1481
1491
1502
1515
1517
1519
1521
1523
1525
1533
1534
1981
2399
2426
2446
2455
2479
2498
2524
2544
2560
2561
2564
2569
2573
2574
2578
2581
2583
2590
2592
2615
2916
2917
2918
2919
2920
2921
2922
2923
2924
2925
2926
2927
2928
2929
2930
2931
2932
2933
2934
2935
2936
2937
2938
2939
2940
2941
2942
2943
2944
2945
2946
2947
2948
2949
2950
2951
2952
2953
2954
2955
2956
2957
2958
2959
2960
2961
2962
2963
2964
2965
2966
2967
2968
2969
2970
2971
2972
2973
2974
2975
2976
2977
2978
2980
2983
3001
3010
12828
12857
[root@node1 ~]# cd /cpusets/(change into the /cpusets directory)
[root@node1 cpusets]# ls(list the files and subdirectories here)
cgroup.event_control  cpus           memory_migrate           memory_spread_page  notify_on_release   sched_relax_domain_level
cgroup.procs          mem_exclusive  memory_pressure          memory_spread_slab  release_agent       tasks
cpu_exclusive         mem_hardwall   memory_pressure_enabled  mems                sched_load_balance
[root@node1 cpusets]# mkdir domain1(create the domain1 directory)
[root@node1 cpusets]# ls(list the files and subdirectories here)
cgroup.event_control  cpus           mem_hardwall     memory_pressure_enabled  mems               sched_load_balance
cgroup.procs          domain1        memory_migrate   memory_spread_page       notify_on_release  sched_relax_domain_level
cpu_exclusive         mem_exclusive  memory_pressure  memory_spread_slab       release_agent      tasks
[root@node1 cpusets]# cd domain1/(change into the domain1 directory)
[root@node1 domain1]# ls(list the files and subdirectories here)
cgroup.event_control  cpu_exclusive  mem_exclusive  memory_migrate   memory_spread_page  mems               sched_load_balance        tasks
cgroup.procs          cpus           mem_hardwall   memory_pressure  memory_spread_slab  notify_on_release  sched_relax_domain_level
Note: many files are created automatically; set which CPUs and memory nodes belong to this domain, then bind processes to it.
[root@node1 domain1]# cat cpus(view the cpus file)
Note: a freshly created subdomain has no CPUs bound yet.
[root@node1 domain1]# cat mems(view the mems file)
[root@node1 domain1]# cat tasks(view the tasks file)
[root@node1 domain1]# echo 0 > cpus(bind CPU 0)
[root@node1 domain1]# echo 0 > mems(bind memory node 0)
Note: a process bound to this domain can run only on this CPU and this memory node.
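The manual steps here can be collected into a small helper; `pin_to_domain` is a name invented for this sketch, and it only echoes the commands it would run (a dry run, safe to try anywhere; drop the echo and run as root with the cpuset filesystem mounted to apply for real):

```shell
# Dry-run sketch of the cpuset workflow above: create a domain, bind
# a CPU and a memory node, then pin a PID. pin_to_domain is a made-up
# helper; it echoes each command instead of executing it.
pin_to_domain() {
    domain="$1"; cpu="$2"; mem="$3"; pid="$4"
    echo "mkdir -p /cpusets/$domain"
    echo "echo $cpu > /cpusets/$domain/cpus"
    echo "echo $mem > /cpusets/$domain/mems"
    echo "echo $pid > /cpusets/$domain/tasks"
}

pin_to_domain domain1 0 0 2615
```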
[root@node1 domain1]# ps axo pid,cmd(view all processes, showing only the pid and cmd fields)
   PID CMD
     1 /sbin/init
     2 [kthreadd]
     3 [migration/0]
     4 [ksoftirqd/0]
     5 [stopper/0]
     6 [watchdog/0]
     7 [migration/1]
     8 [stopper/1]
     9 [ksoftirqd/1]
    10 [watchdog/1]
    11 [events/0]
    12 [events/1]
    13 [cgroup]
    14 [khelper]
    15 [netns]
    16 [async/mgr]
    17 [pm]
    18 [sync_supers]
    19 [bdi-default]
    20 [kintegrityd/0]
    21 [kintegrityd/1]
    22 [kblockd/0]
    23 [kblockd/1]
    24 [kacpid]
    25 [kacpi_notify]
    26 [kacpi_hotplug]
    27 [ata_aux]
    28 [ata_sff/0]
    29 [ata_sff/1]
    30 [ksuspend_usbd]
    31 [khubd]
    32 [kseriod]
    33 [md/0]
    34 [md/1]
    35 [md_misc/0]
    36 [md_misc/1]
    37 [linkwatch]
    39 [khungtaskd]
    40 [kswapd0]
    41 [ksmd]
    42 [khugepaged]
    43 [aio/0]
    44 [aio/1]
    45 [crypto/0]
    46 [crypto/1]
    54 [kthrotld/0]
    55 [kthrotld/1]
    56 [pciehpd]
    58 [kpsmoused]
    59 [usbhid_resumer]
    60 [deferwq]
    89 [kdmremove]
    90 [kstriped]
   165 [scsi_eh_0]
   166 [scsi_eh_1]
   170 [mpt_poll_0]
   171 [mpt/0]
   172 [scsi_eh_2]
   302 [jbd2/sda2-8]
   303 [ext4-dio-unwrit]
   387 /sbin/udevd -d
   645 [bluetooth]
   656 [vmmemctl]
   700 [jbd2/sda1-8]
   701 [ext4-dio-unwrit]
   740 [kauditd]
   979 /sbin/dhclient -1 -q -lf /var/lib/dhclient/dhclient-eth1.leases -pf /var/run/dhclient-eth1.pid eth1
  1024 auditd
  1044 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
  1091 dbus-daemon --system
  1129 /usr/sbin/sshd
  1146 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/mydata/data --pid-file=/mydata/data/node1.Smoke.com.pid
  1432 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/mydata/data --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql -
  1481 sendmail: accepting connections
  1491 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
  1502 crond
  1515 login -- root     
  1517 /sbin/mingetty /dev/tty2
  1519 /sbin/mingetty /dev/tty3
  1521 /sbin/mingetty /dev/tty4
  1523 /sbin/mingetty /dev/tty5
  1525 /sbin/mingetty /dev/tty6
  1533 /sbin/udevd -d
  1534 /sbin/udevd -d
  1981 /usr/sbin/httpd
  2399 /usr/sbin/httpd
  2426 /usr/sbin/httpd
  2446 /usr/sbin/httpd
  2455 /usr/sbin/httpd
  2479 /usr/sbin/httpd
  2498 /usr/sbin/httpd
  2524 /usr/sbin/httpd
  2544 /usr/sbin/httpd
  2560 /usr/sbin/httpd
  2561 /usr/sbin/httpd
  2564 /usr/sbin/httpd
  2569 /usr/sbin/httpd
  2573 /usr/sbin/httpd
  2574 /usr/sbin/httpd
  2578 /usr/sbin/httpd
  2581 /usr/sbin/httpd
  2583 /usr/sbin/httpd
  2590 /usr/sbin/httpd
  2592 /usr/sbin/httpd
  2615 /usr/sbin/httpd
  2916 /usr/sbin/console-kit-daemon --no-daemon
  2983 -bash
  3001 sshd: root@pts/0 
  3010 -bash
 12828 [flush-8:0]
 12892 ps axo pid,cmd
[root@node1 domain1]# echo 2615 > tasks(bind process 2615 to this domain)
[root@node1 domain1]# man ps(view the ps man page)

psr        PSR      processor that process is currently assigned to.(which CPU the process is assigned to)

[root@node1 domain1]# ps -e -o psr,pid,cmd | grep httpd(show the psr, pid, and cmd fields for every process, filtered to httpd)
  1   1981 /usr/sbin/httpd
  1   2399 /usr/sbin/httpd
  1   2426 /usr/sbin/httpd
  1   2446 /usr/sbin/httpd
  1   2455 /usr/sbin/httpd
  1   2479 /usr/sbin/httpd
  1   2498 /usr/sbin/httpd
  1   2524 /usr/sbin/httpd
  1   2544 /usr/sbin/httpd
  1   2560 /usr/sbin/httpd
  1   2561 /usr/sbin/httpd
  1   2564 /usr/sbin/httpd
  1   2569 /usr/sbin/httpd
  1   2573 /usr/sbin/httpd
  1   2574 /usr/sbin/httpd
  0   2578 /usr/sbin/httpd
  1   2581 /usr/sbin/httpd
  1   2583 /usr/sbin/httpd
  1   2590 /usr/sbin/httpd
  1   2592 /usr/sbin/httpd
  1   2615 /usr/sbin/httpd
  1  12921 grep httpd
[root@node1 domain1]# cat tasks(view the tasks file)
2615
[root@node1 domain1]# watch -n 0.5 'ps -e -o psr,pid,cmd | grep httpd'(run the ps command every 0.5 seconds, showing the psr, pid, and cmd fields filtered to httpd)

Every 0.5s: ps -e -o psr,pid,cmd | grep httpd                                                            Tue Jul 26 04:05:12 2016

  0   1981 /usr/sbin/httpd
  1   2399 /usr/sbin/httpd
  1   2426 /usr/sbin/httpd
  1   2446 /usr/sbin/httpd
  1   2455 /usr/sbin/httpd
  1   2479 /usr/sbin/httpd
  1   2498 /usr/sbin/httpd
  1   2524 /usr/sbin/httpd
  1   2544 /usr/sbin/httpd
  1   2560 /usr/sbin/httpd
  1   2561 /usr/sbin/httpd
  1   2564 /usr/sbin/httpd
  1   2569 /usr/sbin/httpd
  1   2573 /usr/sbin/httpd
  1   2574 /usr/sbin/httpd
  0   2578 /usr/sbin/httpd
  1   2581 /usr/sbin/httpd
[root@node1 ~]# ab -n 10000 -c 300 http://127.0.0.1/index.php(HTTP load test; -n specifies the total number of requests, -c the number of concurrent requests)
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        Apache/2.2.15
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /index.php
Document Length:        0 bytes

Concurrency Level:      300
Time taken for tests:   3.077 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2660000 bytes
HTML transferred:       0 bytes
Requests per second:    3250.03 [#/sec] (mean)
Time per request:       92.307 [ms] (mean)
Time per request:       0.308 [ms] (mean, across all concurrent requests)
Transfer rate:          844.25 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    6  54.9      2    1028
Processing:     0   63 295.7     27    3027
Waiting:        0   62 295.8     26    3027
Total:         20   69 302.7     30    3053

Percentage of the requests served within a certain time (ms)
  50%     30
  66%     31
  75%     33
  80%     33
  90%     35
  95%     37
  98%    561
  99%   1468
 100%   3053 (longest request)
Note: process 2615 never migrates; it stays on CPU 0 throughout.
[root@node1 domain1]# cd ..(change to the parent directory)
[root@node1 cpusets]# ls(list the files and subdirectories here)
cgroup.event_control  cpus           mem_hardwall     memory_pressure_enabled  mems               sched_load_balance
cgroup.procs          domain1        memory_migrate   memory_spread_page       notify_on_release  sched_relax_domain_level
cpu_exclusive         mem_exclusive  memory_pressure  memory_spread_slab       release_agent      tasks
[root@node1 cpusets]# cat /etc/fstab(view the fstab file)

#
# /etc/fstab
# Created by anaconda on Mon Jul 25 18:51:23 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=cf43ec31-f59f-423f-aa0b-d091c18b2fa4 /                       ext4    defaults        1 1
UUID=8d02aaa8-1714-4377-9925-a3d6e4cdca1e /boot                   ext4    defaults        1 2
UUID=2273581c-907e-423b-95f3-cc95e035dfda swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/cpuset			/cpusets 		cpuset	defaults 	0 0
[root@node1 cpusets]# ps -e -o psr,pid,cmd | grep httpd(show the psr, pid, and cmd fields for every process, filtered to httpd)
  0   1981 /usr/sbin/httpd
  1   2524 /usr/sbin/httpd
  1  14408 /usr/sbin/httpd
  1  14447 /usr/sbin/httpd
  1  14477 /usr/sbin/httpd
  1  14496 /usr/sbin/httpd
  1  14505 /usr/sbin/httpd
  1  14512 /usr/sbin/httpd
  1  14529 /usr/sbin/httpd
  1  14532 /usr/sbin/httpd
  1  14533 /usr/sbin/httpd
  1  14535 /usr/sbin/httpd
  1  14538 /usr/sbin/httpd
  1  14547 /usr/sbin/httpd
  1  14555 /usr/sbin/httpd
  1  14558 /usr/sbin/httpd
  1  14561 /usr/sbin/httpd
  1  14563 /usr/sbin/httpd
  1  14574 /usr/sbin/httpd
  0  14575 /usr/sbin/httpd
  1  14821 /usr/sbin/httpd
  1  14870 grep httpd
[root@node1 cpusets]# taskset -p -c 0 14821(bind process 14821 to CPU 0)
pid 14821's current affinity list: 0,1
pid 14821's new affinity list: 0
[root@node1 cpusets]# taskset -p -c 1 14821(bind process 14821 to CPU 1)
pid 14821's current affinity list: 0
pid 14821's new affinity list: 1
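Besides the -c CPU-list form, taskset also accepts a hexadecimal affinity bitmask where bit N stands for CPU N (so 0x3 allows CPUs 0 and 1, matching the "0,1" list shown above). A sketch converting a CPU list to that mask:

```shell
# Build a taskset-style hex affinity mask from a space-separated CPU
# list: setting bit N allows CPU N, so CPUs 0 and 1 give 0x3.
cpus="0 1"
mask=0
for c in $cpus; do
    mask=$(( mask | (1 << c) ))
done
hexmask=$(printf '0x%x' "$mask")
echo "affinity mask for CPUs [$cpus]: $hexmask"
```

The resulting mask can be passed as `taskset -p 0x3 PID` instead of `taskset -p -c 0,1 PID`.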