Memory subsystem components:
slab allocator
buddy system
kswapd
pdflush
mmu
Virtualized environments:
PA --> HA --> MA
Virtual machine translation: PA --> HA
GuestOS, OS
Shadow PT (shadow page table)
Memory:
TLB: improves performance
Huge pages
Hugetlbfs support is built on top of the multiple page size support provided by most modern architectures (multiple different page sizes)
Users can use the huge page support in the Linux kernel either via the mmap system call or via the standard SysV shared memory system calls (shmget, shmat)
cat /proc/meminfo | grep HugePage
Tuning TLB performance
Check size of huge pages
x86info -a | grep "Data TLB" (on x86 platforms)
dmesg
cat /proc/meminfo
Enable huge pages
In /etc/sysctl.conf (edit the sysctl.conf configuration file)
vm.nr_hugepages=n (n = number of huge pages to enable)
Kernel parameter (passed to the kernel at boot)
hugepages=n (n = number of huge pages)
Configure hugetlbfs if needed by the application
The mmap system call requires that hugetlbfs is mounted
mkdir /hugepages
mount -t hugetlbfs none /hugepages (mount the huge page filesystem)
The shmat and shmget system calls do not require hugetlbfs
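As a rough aid for choosing vm.nr_hugepages, a minimal sketch (assuming the 2048 kB Hugepagesize reported by /proc/meminfo on this platform) of how many huge pages a shared-memory segment of a given size needs:

```python
# Sketch: estimate vm.nr_hugepages for a given segment size.
# Assumes a 2048 kB huge page, as shown by Hugepagesize in /proc/meminfo;
# other platforms may use a different size.
import math

HUGEPAGE_KB = 2048  # 2 MiB pages

def hugepages_needed(segment_bytes: int) -> int:
    """Round the segment size up to whole huge pages."""
    return math.ceil(segment_bytes / (HUGEPAGE_KB * 1024))

# A 1 GiB SysV shared memory segment needs 512 two-MiB pages.
print(hugepages_needed(1 << 30))  # -> 512
```

The result is what you would pass as vm.nr_hugepages (plus headroom for any other huge page users).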
Viewing system calls
Trace every system call made by a program
strace -o /tmp/strace.out -p PID (trace the process's system calls and write them to strace.out)
grep mmap /tmp/strace.out
Summarize system calls
strace -c -p PID or (show which system calls were executed)
strace -c COMMAND
Other uses
Investigate lock contention
Identify problems caused by improper file permissions
Pinpoint IO problems
strace:
strace COMMAND: view the syscalls made by a command
strace -p PID: view the syscalls made by an already running process
-c: print only a summary
-o FILE: save the trace to a file for later analysis
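Once strace -o has written the trace, it is plain text, so filtering like the grep above can be scripted; a small sketch (the trace lines here are a made-up sample in the style of strace output, not from a real run):

```python
# Sketch: count occurrences of a syscall in a saved strace log.
# The sample lines below are illustrative only.
sample_trace = """\
open("/etc/ld.so.cache", O_RDONLY) = 3
mmap2(NULL, 24851, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7766000
close(3) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7765000
"""

def count_syscall(trace: str, name: str) -> int:
    """Count trace lines that begin with the given syscall name."""
    return sum(1 for line in trace.splitlines() if line.startswith(name + "("))

print(count_syscall(sample_trace, "mmap2"))  # -> 2
```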
Strategies for using memory
Reduce overhead for tiny memory objects
Slab cache
Reduce or defer service time for slower subsystems
Filesystem metadata: buffer cache (slab cache)
Disk IO: page cache
Interprocess communication: shared memory (use it between processes whenever possible)
Network IO: buffer cache, ARP cache, connection tracking (network IO is slow, so it relies on the buffer cache, ARP cache, and connection tracking)
Considerations when tuning memory
How should pages be reclaimed to avoid pressure?
Larger writes are usually more efficient due to re-sorting
Slab
The slab memory cache contains pre-allocated memory pools that the kernel pulls memory from when it needs space to store various types of data structures

1. Reduce overhead for tiny memory objects:
slab
2. Reduce service time for slower subsystems:
use the buffer cache to cache filesystem metadata;
use the page cache to cache disk IO data;
use shm for interprocess communication;
use the buffer cache, ARP cache, and connection tracking to improve network IO performance;
Tuning page allocation
Set using
vm.min_free_kbytes (minimum number of free kilobytes; a kernel memory parameter)
Tuning vm.min_free_kbytes is only necessary when an application regularly needs to allocate a large block of memory and then frees that same memory
It may well be the case that the system has too little disk bandwidth, too little CPU power, or too little memory to handle its load
Consequences
Reduces service time for demand paging
Memory is not available for other usage
Can cause pressure on ZONE_NORMAL
Tuning overcommit
Set using
vm.overcommit_memory (virtual memory parameter: whether memory may be overcommitted)
0 = heuristic overcommit (heuristics decide when and how much swap is used)
1 = always overcommit (always allow; on database servers avoid relying on swap, it is too slow)
2 = commit all RAM plus a percentage of swap (may be > 100) (all physical memory plus a portion of swap, so the commit limit can exceed physical memory)
vm.overcommit_ratio (the percentage by which commits may exceed physical memory; only meaningful when vm.overcommit_memory is 2; generally keep it at 50% or below)
Specifies the percentage of physical memory allowed to be overcommitted when vm.overcommit_memory is set to 2
View Committed_AS in /proc/meminfo
An estimate of how much RAM is required to avoid an out-of-memory (OOM) condition for the current workload on a system (it must never exceed physical memory plus swap, or the system runs out of memory)
Overcommit example:
2 + 2 + 2 + 2 = 8
Overcommitting physical memory presumes swap is available:
exceed physical memory by only a portion, e.g.:
Swap
2.5G
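With vm.overcommit_memory=2, the commit limit follows swap + RAM * overcommit_ratio / 100; a small sketch of that arithmetic (the RAM and swap sizes below are made-up examples):

```python
# Sketch: commit limit under vm.overcommit_memory=2.
# CommitLimit = swap + physical RAM * overcommit_ratio / 100.
def commit_limit_kb(ram_kb: int, swap_kb: int, overcommit_ratio: int) -> int:
    return swap_kb + ram_kb * overcommit_ratio // 100

# Example: 4 GiB RAM, 2.5 GiB swap, ratio 50 allows about
# 4.5 GiB of committed address space.
ram = 4 * 1024 * 1024   # kB
swap = 2560 * 1024      # kB
print(commit_limit_kb(ram, swap, 50))  # -> 4718592 (kB, i.e. 4.5 GiB)
```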
Slab cache (tuning the slab cache)
Tiny kernel objects are stored in the slab
The extra tracking overhead is better than using one page per object
Example: filesystem metadata (dentry and inode caches)
Monitoring
/proc/slabinfo
slabtop (monitor slab usage)
vmstat -m
Tuning a particular slab cache
echo "cache_name limit batchcount shared" > /proc/slabinfo
limit: the maximum number of objects that will be cached for each CPU
batchcount: the maximum number of global cache objects that will be transferred to the per-CPU cache when it becomes empty
shared: the sharing behavior for Symmetric MultiProcessing (SMP) systems (how many slab cache objects the CPUs may share)
Increasing the slab cache improves CPU performance when accessing small in-memory objects;
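The /proc/slabinfo format is whitespace-separated, so the current tunables can be read out before echoing new ones; a sketch using one sample line copied from the slabinfo dump at the end of these notes:

```python
# Sketch: read the tunables (limit, batchcount, sharedfactor) from a
# /proc/slabinfo line. The sample line is taken from the dump below.
line = ("dentry 18571 19200 192 20 1 : tunables 120 60 8 "
        ": slabdata 960 960 0")

def slab_tunables(line: str):
    """Return (name, limit, batchcount, shared) for one slabinfo line."""
    fields = line.split()
    i = fields.index("tunables")
    return fields[0], int(fields[i + 1]), int(fields[i + 2]), int(fields[i + 3])

print(slab_tunables(line))  # -> ('dentry', 120, 60, 8)
```

Those three numbers are exactly what the echo "cache_name limit batchcount shared" command above overwrites.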
ARP cache
ARP entries map hardware addresses to protocol addresses
Cached in /proc/net/arp
By default, the cache is limited to 512 entries as a soft limit and 1024 entries as a hard limit
Garbage collection removes stale or older entries
Insufficient ARP cache leads to
Intermittent timeouts between hosts
ARP thrashing
Too much ARP cache puts pressure on ZONE_NORMAL
List entries
ip neighbor list (view the ARP cache table)
Flush cache
ip neighbor flush dev ethX (flush the ARP cache of a given interface)
Tuning ARP cache
Adjust the point below which the gc will leave the ARP table alone
net.ipv4.neigh.default.gc_thresh1
default 128 (with fewer than 128 entries, the table is never cleaned automatically, even if entries have expired)
Soft upper limit
net.ipv4.neigh.default.gc_thresh2
default 512
Becomes hard limit after 5 seconds (expired and excess entries are cleaned)
Hard upper limit
net.ipv4.neigh.default.gc_thresh3
Garbage collection frequency in seconds
net.ipv4.neigh.default.gc_interval
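The three thresholds can be read as a simple decision rule; a sketch of the behavior described above (a deliberate simplification of the real neighbour gc, using 128/512/1024 as the default values):

```python
# Sketch: how the neighbour table reacts to its size, per the
# gc_thresh1/2/3 description above (simplified model, not kernel code).
THRESH1, THRESH2, THRESH3 = 128, 512, 1024  # defaults

def arp_gc_action(entries: int) -> str:
    if entries < THRESH1:
        return "leave alone"          # gc never touches the table
    if entries <= THRESH2:
        return "reclaim stale"        # periodic gc removes expired entries
    if entries <= THRESH3:
        return "aggressive reclaim"   # soft limit exceeded; hard after 5s
    return "refuse new entries"       # hard limit reached

print(arp_gc_action(100))   # -> leave alone
print(arp_gc_action(2000))  # -> refuse new entries
```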
Page cache (caches pages to speed up reads)
A large percentage of paging activity is due to I/O (the page cache mainly reduces disk IO)
File reads: each page of a file is read from disk into memory (this cuts the disk IO needed for file reads; repeated accesses to the same file are then served from memory)
These pages form the page cache
The page cache is always checked for IO requests
Directory reads
Reading and writing regular files
Reading and writing via block device files (disk IO)
Accessing memory mapped files (mmap)
Accessing swapped out pages
Pages in the page cache are associated with file data
Tuning page cache
View page cache allocation in /proc/meminfo
Tune length/size of memory
vm.lowmem_reserve_ratio (the proportion of low memory to reserve)
vm.vfs_cache_pressure
Tune arrival/completion rate
vm.page-cluster
vm.zone_reclaim_mode
vm.lowmem_reserve_ratio (how much lowmem to reserve)
For some specialised workloads on highmem machines it is dangerous for the kernel to allow process memory to be allocated from the "lowmem" zone
The Linux page allocator has a mechanism which prevents allocations which could use highmem from using too much lowmem
The lowmem_reserve_ratio tunable determines how aggressive the kernel is in defending these lower zones
If you have a machine which uses highmem or ISA DMA and your applications are using mlock(), or if you are running with no swap, then you probably should change the lowmem_reserve_ratio setting
vfs_cache_pressure (virtual filesystem cache)
Controls the tendency of the kernel to reclaim the memory which is used for caching directory and inode objects
At the default value of vfs_cache_pressure=100, the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to pagecache and swapcache reclaim (so whenever the page cache can be reclaimed, dentries can be reclaimed too)
Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches
When vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure, and this can easily lead to out-of-memory conditions
Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes
vfs_cache_pressure:
0: never reclaim dentries and inodes;
1-99: prefer to retain dentries and inodes;
100: treat them the same as page cache and swap cache;
100+: prefer to reclaim them;
page-cluster (how many pages are moved from memory to swap in one attempt)
page-cluster controls the number of pages which are written to swap in a single attempt
It is a logarithmic value: setting it to 0 means "1 page", setting it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
The default value is 3 (eight pages at a time)
There may be some small benefit in tuning this to a different value if your workload is swap-intensive
zone_reclaim_mode (which zone to prefer reclaiming from)
zone_reclaim_mode allows setting a more or less aggressive approach to reclaiming memory when a zone runs out of memory
If it is set to zero then no zone reclaim occurs
Allocations will be satisfied from other zones / nodes in the system
This value is an OR of:
1 = Zone reclaim on (enable zone reclaim)
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
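The logarithmic page-cluster value and the OR'd zone_reclaim_mode bits can both be sketched as arithmetic:

```python
# Sketch: page-cluster is a power-of-two exponent, and
# zone_reclaim_mode is a bitwise OR of the flags listed above.
def swap_cluster_pages(page_cluster: int) -> int:
    """Number of pages swapped per attempt for a given vm.page-cluster."""
    return 1 << page_cluster

ZONE_RECLAIM_ON, ZONE_RECLAIM_WRITE, ZONE_RECLAIM_SWAP = 1, 2, 4

print(swap_cluster_pages(3))                 # -> 8 (the default)
print(ZONE_RECLAIM_ON | ZONE_RECLAIM_WRITE)  # -> 3 (reclaim + write dirty pages)
```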
Anonymous pages (anonymous pages normally contain no file contents)
Anonymous pages can be another large consumer of data
They are not associated with a file, but instead contain:
Program data: arrays, heap allocations, etc.
Anonymous memory regions
Dirty memory-mapped process private pages
IPC shared memory region pages
View summary usage
grep Anon /proc/meminfo
cat /proc/PID/statm
Anonymous pages = RSS - Shared
Anonymous pages are eligible for swap
Anonymous pages can be swapped out; shared memory is only needed when two processes have to exchange data;
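The RSS - Shared estimate above can be read straight from /proc/PID/statm, whose fields are counted in pages; a sketch with a made-up statm line:

```python
# Sketch: estimate a process's anonymous pages as RSS - Shared, using
# the /proc/PID/statm field order: size resident shared text lib data dt.
# The sample line is illustrative, not from a real process.
statm_sample = "2900 1436 420 120 0 900 0"

def anon_pages(statm: str) -> int:
    fields = [int(f) for f in statm.split()]
    resident, shared = fields[1], fields[2]
    return resident - shared

print(anon_pages(statm_sample))  # -> 1016 (pages)
```

Multiply by the page size (4 KB on 32-bit x86) to get bytes.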
[root@Smoke ~]# cat /proc/meminfo | grep -i Huge (show only the Huge-related lines of meminfo)
HugePages_Total: 0 (huge pages not enabled)
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB (huge page size is 2M)
Note: 64-bit systems support more page sizes than 32-bit systems; on 32-bit, small pages are 4K and huge pages are 4M, while 64-bit systems may differ slightly;
[root@Smoke ~]# sysctl -w vm.nr_hugepages=10 (change the running kernel parameter: set the huge page count to 10)
vm.nr_hugepages = 10
[root@Smoke ~]# cat /proc/meminfo | grep -i Huge (show only the Huge-related lines of meminfo)
HugePages_Total: 10 (10 huge pages)
HugePages_Free: 10 (all 10 free, none in use)
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Note: once huge pages are created, they can be assigned to specific applications; the huge pages can even be used as a filesystem by mounting them on some directory;
[root@Smoke ~]# mkdir /hugepages (create the hugepages directory)
[root@Smoke ~]# mount -t hugetlbfs none /hugepages/ (mount the huge page filesystem on /hugepages)
[root@Smoke ~]# ls /hugepages/ (list /hugepages)
[root@Smoke ~]# dd if=/dev/zero of=/hugepages/a.test bs=1M count=5 (copy five 1M blocks from /dev/zero to a.test)
dd: writing "/hugepages/a.test": Invalid argument
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0026437 s, 0.0 kB/s
[root@Smoke ~]# ll /hugepages/ (list files and subdirectories under /hugepages)
total 0
-rw-r--r--. 1 root root 0 Jun 23 16:21 a.test
Note: the size is 0, because this is memory;
[root@Smoke ~]# ll -h /hugepages/ (list files under /hugepages with human-readable sizes)
total 0
-rw-r--r--. 1 root root 0 Jun 23 16:21 a.test
[root@Smoke ~]# cp /etc/issue /hugepages/ (copy the issue file to /hugepages)
cp: writing `/hugepages/issue': Invalid argument
Note: it cannot be written to directly; an application has to use it;
[root@Smoke ~]# umount /hugepages (unmount the filesystem from /hugepages)
[root@Smoke ~]# ps aux (show all users' processes on all terminals, BSD format)
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 2900 1436 ? Ss 15:05 0:01 /sbin/init
root 2 0.0 0.0 0 0 ? S 15:05 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 15:05 0:00 [migration/0]
root 4 0.0 0.0 0 0 ? S 15:05 0:00 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S 15:05 0:00 [migration/0]
root 6 0.0 0.0 0 0 ? S 15:05 0:02 [watchdog/0]
root 7 0.0 0.0 0 0 ? S 15:05 0:00 [events/0]
root 8 0.0 0.0 0 0 ? S 15:05 0:00 [cgroup]
root 9 0.0 0.0 0 0 ? S 15:05 0:00 [khelper]
root 10 0.0 0.0 0 0 ? S 15:05 0:00 [netns]
root 11 0.0 0.0 0 0 ? S 15:05 0:00 [async/mgr]
root 12 0.0 0.0 0 0 ? S 15:05 0:00 [pm]
root 13 0.0 0.0 0 0 ? S 15:05 0:00 [sync_supers]
root 14 0.0 0.0 0 0 ? S 15:05 0:00 [bdi-default]
root 15 0.0 0.0 0 0 ? S 15:05 0:00 [kintegrityd/0]
root 16 0.0 0.0 0 0 ? S 15:05 0:02 [kblockd/0]
root 17 0.0 0.0 0 0 ? S 15:05 0:00 [kacpid]
root 18 0.0 0.0 0 0 ? S 15:05 0:00 [kacpi_notify]
root 19 0.0 0.0 0 0 ? S 15:05 0:00 [kacpi_hotplug]
root 20 0.0 0.0 0 0 ? S 15:05 0:00 [ata/0]
root 21 0.0 0.0 0 0 ? S 15:05 0:00 [ata_aux]
root 22 0.0 0.0 0 0 ? S 15:05 0:00 [ksuspend_usbd]
root 23 0.0 0.0 0 0 ? S 15:05 0:00 [khubd]
root 24 0.0 0.0 0 0 ? S 15:05 0:00 [kseriod]
root 25 0.0 0.0 0 0 ? S 15:05 0:00 [md/0]
root 26 0.0 0.0 0 0 ? S 15:05 0:00 [md_misc/0]
root 27 0.0 0.0 0 0 ? S 15:05 0:00 [khungtaskd]
root 28 0.0 0.0 0 0 ? S 15:05 0:00 [kswapd0]
root 29 0.0 0.0 0 0 ? SN 15:05 0:00 [ksmd]
root 30 0.0 0.0 0 0 ? S 15:05 0:00 [aio/0]
root 31 0.0 0.0 0 0 ? S 15:05 0:00 [crypto/0]
root 36 0.0 0.0 0 0 ? S 15:05 0:00 [kthrotld/0]
root 37 0.0 0.0 0 0 ? S 15:05 0:00 [pciehpd]
root 39 0.0 0.0 0 0 ? S 15:05 0:00 [kpsmoused]
root 40 0.0 0.0 0 0 ? S 15:05 0:00 [usbhid_resumer]
root 210 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_0]
root 212 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_1]
root 218 0.0 0.0 0 0 ? S 15:05 0:00 [mpt_poll_0]
root 219 0.0 0.0 0 0 ? S 15:05 0:00 [mpt/0]
root 220 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_2]
root 238 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_3]
root 239 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_4]
root 240 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_5]
root 241 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_6]
root 242 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_7]
root 243 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_8]
root 244 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_9]
root 245 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_10]
root 246 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_11]
root 247 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_12]
root 248 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_13]
root 249 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_14]
root 250 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_15]
root 251 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_16]
root 252 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_17]
root 253 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_18]
root 254 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_19]
root 255 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_20]
root 256 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_21]
root 257 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_22]
root 258 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_23]
root 259 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_24]
root 260 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_25]
root 261 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_26]
root 262 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_27]
root 263 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_28]
root 264 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_29]
root 265 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_30]
root 266 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_31]
root 267 0.0 0.0 0 0 ? S 15:05 0:00 [scsi_eh_32]
root 372 0.0 0.0 0 0 ? S 15:05 0:00 [kjournald]
root 451 0.0 0.0 2692 988 ? S<s 15:05 0:00 /sbin/udevd -d
root 626 0.0 0.0 0 0 ? S 15:05 0:00 [vmmemctl]
root 649 0.0 0.0 0 0 ? S 15:05 0:02 [flush-8:0]
root 714 0.0 0.0 0 0 ? S 15:05 0:00 [bluetooth]
root 782 0.0 0.0 0 0 ? S 15:05 0:00 [kstriped]
root 811 0.0 0.0 0 0 ? S 15:05 0:00 [kjournald]
root 849 0.0 0.0 0 0 ? S 15:05 0:00 [kauditd]
root 1049 0.0 0.0 12932 812 ? S<sl 15:05 0:00 auditd
root 1065 0.0 0.0 35972 1512 ? Sl 15:05 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
dbus 1078 0.0 0.0 13244 1048 ? Ssl 15:05 0:00 dbus-daemon --system
root 1111 0.0 0.0 8576 1028 ? Ss 15:05 0:00 /usr/sbin/sshd
root 1187 0.0 0.1 12524 2520 ? Ss 15:05 0:00 /usr/libexec/postfix/master
root 1195 0.0 0.0 7080 1272 ? Ss 15:05 0:00 crond
postfix 1202 0.0 0.1 12668 2512 ? S 15:05 0:00 qmgr -l -t fifo -u
root 1210 0.0 0.0 6156 668 ? Ss 15:05 0:00 /usr/bin/rhsmcertd
root 1224 0.0 0.0 2008 508 tty1 Ss+ 15:05 0:00 /sbin/mingetty /dev/tty1
root 1226 0.0 0.0 2008 508 tty2 Ss+ 15:05 0:00 /sbin/mingetty /dev/tty2
root 1228 0.0 0.0 2008 508 tty3 Ss+ 15:05 0:00 /sbin/mingetty /dev/tty3
root 1230 0.0 0.0 2008 508 tty4 Ss+ 15:05 0:00 /sbin/mingetty /dev/tty4
root 1233 0.0 0.0 3348 1872 ? S< 15:05 0:00 /sbin/udevd -d
root 1234 0.0 0.0 3348 1792 ? S< 15:05 0:00 /sbin/udevd -d
root 1235 0.0 0.0 2008 512 tty5 Ss+ 15:05 0:00 /sbin/mingetty /dev/tty5
root 1237 0.0 0.0 2008 512 tty6 Ss+ 15:05 0:00 /sbin/mingetty /dev/tty6
root 1301 0.0 0.1 11652 3336 ? Rs 15:57 0:02 sshd: root@pts/0
root 1305 0.0 0.0 6852 1804 pts/0 Ss 15:57 0:00 -bash
postfix 1395 0.0 0.1 12600 2472 ? S 16:45 0:00 pickup -l -t fifo -u
root 1539 0.0 0.1 11200 3236 ? Ss 16:56 0:00 /usr/sbin/httpd
apache 1541 0.0 0.1 11200 2072 ? S 16:56 0:00 /usr/sbin/httpd
apache 1542 0.0 0.1 11200 2072 ? S 16:56 0:00 /usr/sbin/httpd
apache 1543 0.0 0.1 11200 2072 ? S 16:56 0:00 /usr/sbin/httpd
apache 1544 0.0 0.1 11200 2072 ? S 16:56 0:00 /usr/sbin/httpd
apache 1545 0.0 0.1 11200 2072 ? S 16:56 0:00 /usr/sbin/httpd
apache 1546 0.0 0.1 11200 2072 ? S 16:56 0:00 /usr/sbin/httpd
apache 1547 0.0 0.1 11200 2072 ? S 16:56 0:00 /usr/sbin/httpd
apache 1548 0.0 0.1 11200 2072 ? S 16:56 0:00 /usr/sbin/httpd
root 1549 0.0 0.0 4904 1000 pts/0 R+ 16:56 0:00 ps aux
[root@Smoke ~]# strace -p 1541 (view the system calls made by process 1541)
Process 1541 attached - interrupt to quit
accept(4,
Note: the process is blocked here, because this httpd process has received no request yet; once a request arrives, system calls will appear. If a user requests a page file that lives on this host, the system must open that file, and opening it means talking to the hardware, which requires system calls;
[root@Smoke ~]# cp /etc/fstab /var/www/html/index.html (copy fstab to /var/www/html as index.html)
[root@Smoke ~]# ab -n 200 -c 10 http://172.16.100.106/index.html (HTTP load test: -n total requests, -c concurrent requests)
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.100.106 (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests
Server Software: Apache/2.2.15
Server Hostname: 172.16.100.106
Server Port: 80
Document Path: /index.html
Document Length: 805 bytes
Concurrency Level: 10
Time taken for tests: 0.131 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 221450 bytes
HTML transferred: 165830 bytes
Requests per second: 1529.84 [#/sec] (mean)
Time per request: 6.537 [ms] (mean)
Time per request: 0.654 [ms] (mean, across all concurrent requests)
Transfer rate: 1654.21 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 3 1.2 3 7
Processing: 1 3 1.2 3 7
Waiting: 0 3 1.4 2 7
Total: 4 6 1.4 6 10
Percentage of the requests served within a certain time (ms)
50% 6
66% 7
75% 7
80% 7
90% 8
95% 9
98% 9
99% 9
100% 10 (longest request)
[root@Smoke ~]# strace -p 1541 (view the system calls made by process 1541)
Process 1541 attached - interrupt to quit
accept(4, {sa_family=AF_INET6, sin6_port=htons(46057), inet_pton(AF_INET6, "::ffff:172.16.100.106", &sin6_addr), sin6_flowinfo=0, sin6_sc
ope_id=0}, [28]) = 10
fcntl64(10, F_GETFD) = 0
fcntl64(10, F_SETFD, FD_CLOEXEC) = 0
getsockname(10, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::ffff:172.16.100.106", &sin6_addr), sin6_flowinfo=0, sin6
_scope_id=0}, [28]) = 0
fcntl64(10, F_GETFL) = 0x2 (flags O_RDWR)
fcntl64(10, F_SETFL, O_RDWR|O_NONBLOCK) = 0
read(10, "GET /index.html HTTP/1.0\r\nHost: "..., 8000) = 92
gettimeofday({1466672824, 957795}, NULL) = 0
stat64("/var/www/html/index.html", {st_mode=S_IFREG|0644, st_size=805, ...}) = 0
open("/var/www/html/index.html", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 11
fcntl64(11, F_GETFD) = 0x1 (flags FD_CLOEXEC)
fcntl64(11, F_SETFD, FD_CLOEXEC) = 0
setsockopt(10, SOL_TCP, TCP_CORK, [1], 4) = 0
writev(10, [{"HTTP/1.1 200 OK\r\nDate: Thu, 23 J"..., 270}], 1) = 270
sendfile64(10, 11, [0], 805) = 805
setsockopt(10, SOL_TCP, TCP_CORK, [0], 4) = 0
write(7, "172.16.100.106 - - [23/Jun/2016:"..., 105) = 105
shutdown(10, 1 /* send */) = 0
poll([{fd=10, events=POLLIN}], 1, 2000) = 1 ([{fd=10, revents=POLLIN|POLLHUP}])
read(10, "", 512) = 0
close(10) = 0
[root@Smoke ~]# strace cat /etc/fstab (system calls produced by the cat command)
execve("/bin/cat", ["cat", "/etc/fstab"], [/* 24 vars */]) = 0
brk(0) = 0x97ab000
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb776d000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=24851, ...}) = 0
mmap2(NULL, 24851, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7766000
close(3) = 0
open("/lib/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\3\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0@n\1\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1902708, ...}) = 0
mmap2(NULL, 1665416, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x124000
mprotect(0x2b4000, 4096, PROT_NONE) = 0
mmap2(0x2b5000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x190) = 0x2b5000
mmap2(0x2b8000, 10632, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x2b8000
close(3) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7765000
set_thread_area({entry_number:-1 -> 6, base_addr:0xb77656c0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1,
seg_not_present:0, useable:1}) = 0
mprotect(0x2b5000, 8192, PROT_READ) = 0
mprotect(0xb2d000, 4096, PROT_READ) = 0
munmap(0xb7766000, 24851) = 0
open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=99154448, ...}) = 0
mmap2(NULL, 2097152, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7565000
close(3) = 0
brk(0) = 0x97ab000
brk(0x97cc000) = 0x97cc000
fstat64(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
open("/etc/fstab", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=805, ...}) = 0
read(3, "\n#\n# /etc/fstab\n# Created by ana"..., 32768) = 805
write(1, "\n#\n# /etc/fstab\n# Created by ana"..., 805
#
# /etc/fstab
# Created by anaconda on Thu Jun 23 08:08:17 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=839d0d66-fd15-4f1d-8e9b-bc0721993249 / ext3 defaults 1 1
UUID=a4e9c558-055a-4c07-9c86-c755441c5fa5 /boot ext3 defaults 1 2
UUID=75d91000-509f-40aa-9407-ca377b5d1066 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
) = 805
read(3, "", 32768) = 0
close(3) = 0
close(1) = 0
close(2) = 0
exit_group(0) = ?
[root@Smoke ~]# tty (show the terminal currently in use)
/dev/pts/0
[root@Smoke ~]# strace -c cat /etc/fstab (view the system calls produced by cat; -c summarizes the whole run)
#
# /etc/fstab
# Created by anaconda on Thu Jun 23 08:08:17 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=839d0d66-fd15-4f1d-8e9b-bc0721993249 / ext3 defaults 1 1
UUID=a4e9c558-055a-4c07-9c86-c755441c5fa5 /boot ext3 defaults 1 2
UUID=75d91000-509f-40aa-9407-ca377b5d1066 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00 0.000229 229 1 execve
0.00 0.000000 0 3 read
0.00 0.000000 0 1 write
0.00 0.000000 0 4 open
0.00 0.000000 0 6 close
0.00 0.000000 0 1 1 access
0.00 0.000000 0 3 brk
0.00 0.000000 0 1 munmap
0.00 0.000000 0 3 mprotect
0.00 0.000000 0 7 mmap2
0.00 0.000000 0 5 fstat64
0.00 0.000000 0 1 set_thread_area
------ ----------- ----------- --------- --------- ----------------
100.00 0.000229 36 1 total
[root@Smoke ~]# cat /proc/slabinfo (view the contents of slabinfo)
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata
<active_slabs> <num_slabs> <sharedavail>
fib6_nodes 24 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 16 30 384 10 1 : tunables 54 27 8 : slabdata 3 3 0
ndisc_cache 1 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
RAWv6 131 132 1024 4 1 : tunables 54 27 8 : slabdata 33 33 0
UDPLITEv6 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
UDPv6 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
tw_sock_TCPv6 0 0 320 12 1 : tunables 54 27 8 : slabdata 0 0 0
request_sock_TCPv6 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
TCPv6 4 4 1920 2 1 : tunables 24 12 8 : slabdata 2 2 0
jbd2_1k 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
avtab_node 610413 610560 24 144 1 : tunables 120 60 8 : slabdata 4240 4240 0
ext4_inode_cache 6678 6684 1000 4 1 : tunables 54 27 8 : slabdata 1671 1671 0
ext4_xattr 1 44 88 44 1 : tunables 120 60 8 : slabdata 1 1 0
ext4_free_block_extents 0 0 56 67 1 : tunables 120 60 8 : slabdata 0 0 0
ext4_alloc_context 0 0 136 28 1 : tunables 120 60 8 : slabdata 0 0 0
ext4_prealloc_space 6 37 104 37 1 : tunables 120 60 8 : slabdata 1 1 0
ext4_system_zone 0 0 40 92 1 : tunables 120 60 8 : slabdata 0 0 0
jbd2_journal_handle 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
jbd2_journal_head 16 34 112 34 1 : tunables 120 60 8 : slabdata 1 1 0
jbd2_revoke_table 4 202 16 202 1 : tunables 120 60 8 : slabdata 1 1 0
jbd2_revoke_record 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
sd_ext_cdb 2 112 32 112 1 : tunables 120 60 8 : slabdata 1 1 0
scsi_sense_cache 4 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
scsi_cmd_cache 6 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
dm_raid1_read_record 0 0 1064 7 2 : tunables 24 12 8 : slabdata 0 0 0
kcopyd_job 0 0 3240 2 2 : tunables 24 12 8 : slabdata 0 0 0
io 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
dm_uevent 0 0 2608 3 2 : tunables 24 12 8 : slabdata 0 0 0
dm_rq_clone_bio_info 0 0 16 202 1 : tunables 120 60 8 : slabdata 0 0 0
dm_rq_target_io 0 0 392 10 1 : tunables 54 27 8 : slabdata 0 0 0
dm_target_io 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
dm_io 0 0 40 92 1 : tunables 120 60 8 : slabdata 0 0 0
flow_cache 0 0 104 37 1 : tunables 120 60 8 : slabdata 0 0 0
uhci_urb_priv 1 67 56 67 1 : tunables 120 60 8 : slabdata 1 1 0
cfq_io_context 27 84 136 28 1 : tunables 120 60 8 : slabdata 3 3 0
cfq_queue 26 48 240 16 1 : tunables 120 60 8 : slabdata 3 3 0
bsg_cmd 0 0 312 12 1 : tunables 54 27 8 : slabdata 0 0 0
mqueue_inode_cache 1 4 896 4 1 : tunables 54 27 8 : slabdata 1 1 0
isofs_inode_cache 0 0 640 6 1 : tunables 54 27 8 : slabdata 0 0 0
hugetlbfs_inode_cache 1 6 608 6 1 : tunables 54 27 8 : slabdata 1 1 0
dquot 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
kioctx 12 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
kiocb 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
inotify_event_private_data 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
inotify_inode_mark_entry 37 64 120 32 1 : tunables 120 60 8 : slabdata 2 2 0
dnotify_mark_entry 0 0 120 32 1 : tunables 120 60 8 : slabdata 0 0 0
dnotify_struct 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
dio 0 0 640 6 1 : tunables 54 27 8 : slabdata 0 0 0
fasync_cache 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
khugepaged_mm_slot 1 92 40 92 1 : tunables 120 60 8 : slabdata 1 1 0
ksm_mm_slot 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
ksm_stable_node 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
ksm_rmap_item 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
utrace_engine 0 0 56 67 1 : tunables 120 60 8 : slabdata 0 0 0
utrace 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
pid_namespace 0 0 2168 3 2 : tunables 24 12 8 : slabdata 0 0 0
posix_timers_cache 0 0 176 22 1 : tunables 120 60 8 : slabdata 0 0 0
uid_cache 4 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
UNIX 22 45 832 9 2 : tunables 54 27 8 : slabdata 5 5 0
ip_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
UDP-Lite 0 0 896 4 1 : tunables 54 27 8 : slabdata 0 0 0
tcp_bind_bucket 5 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
inet_peer_cache 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
secpath_cache 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
xfrm_dst_cache 0 0 448 8 1 : tunables 54 27 8 : slabdata 0 0 0
ip_fib_alias 1 112 32 112 1 : tunables 120 60 8 : slabdata 1 1 0
ip_fib_hash 14 53 72 53 1 : tunables 120 60 8 : slabdata 1 1 0
ip_dst_cache 11 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
arp_cache 4 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
PING 0 0 832 9 2 : tunables 54 27 8 : slabdata 0 0 0
RAW 129 135 832 9 2 : tunables 54 27 8 : slabdata 15 15 0
UDP 1 4 896 4 1 : tunables 54 27 8 : slabdata 1 1 0
tw_sock_TCP 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
request_sock_TCP 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
TCP 5 12 1728 4 2 : tunables 24 12 8 : slabdata 3 3 0
eventpoll_pwq 22 424 72 53 1 : tunables 120 60 8 : slabdata 8 8 0
eventpoll_epi 22 360 128 30 1 : tunables 120 60 8 : slabdata 12 12 0
sgpool-128 2 2 4096 1 1 : tunables 24 12 8 : slabdata 2 2 0
sgpool-64 2 2 2048 2 1 : tunables 24 12 8 : slabdata 1 1 0
sgpool-32 2 4 1024 4 1 : tunables 54 27 8 : slabdata 1 1 0
sgpool-16 2 8 512 8 1 : tunables 54 27 8 : slabdata 1 1 0
sgpool-8 5 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
scsi_data_buffer 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
blkdev_integrity 0 0 112 34 1 : tunables 120 60 8 : slabdata 0 0 0
blkdev_queue 26 28 2864 2 2 : tunables 24 12 8 : slabdata 14 14 0
blkdev_requests 9 33 352 11 1 : tunables 54 27 8 : slabdata 2 3 0
blkdev_ioc 28 48 80 48 1 : tunables 120 60 8 : slabdata 1 1 0
fsnotify_event_holder 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
fsnotify_event 0 0 104 37 1 : tunables 120 60 8 : slabdata 0 0 0
bio-0 6 20 192 20 1 : tunables 120 60 8 : slabdata 1 1 0
biovec-256 2 2 4096 1 1 : tunables 24 12 8 : slabdata 2 2 0
biovec-128 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
biovec-64 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
biovec-16 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
bip-256 2 2 4224 1 2 : tunables 8 4 0 : slabdata 2 2 0
bip-128 0 0 2176 3 2 : tunables 24 12 8 : slabdata 0 0 0
bip-64 0 0 1152 7 2 : tunables 24 12 8 : slabdata 0 0 0
bip-16 0 0 384 10 1 : tunables 54 27 8 : slabdata 0 0 0
bip-4 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
bip-1 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
sock_inode_cache 330 400 704 5 1 : tunables 54 27 8 : slabdata 80 80 0
skbuff_fclone_cache 3 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
skbuff_head_cache 517 630 256 15 1 : tunables 120 60 8 : slabdata 42 42 0
file_lock_cache 9 44 176 22 1 : tunables 120 60 8 : slabdata 2 2 0
net_namespace 0 0 2432 3 2 : tunables 24 12 8 : slabdata 0 0 0
shmem_inode_cache 677 685 784 5 1 : tunables 54 27 8 : slabdata 137 137 0
Acpi-Operand 5674 5777 72 53 1 : tunables 120 60 8 : slabdata 109 109 0
Acpi-ParseExt 0 0 72 53 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Parse 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-State 0 0 80 48 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Namespace 4492 4508 40 92 1 : tunables 120 60 8 : slabdata 49 49 0
task_delay_info 216 374 112 34 1 : tunables 120 60 8 : slabdata 11 11 0
taskstats 1 12 328 12 1 : tunables 54 27 8 : slabdata 1 1 0
proc_inode_cache 653 876 656 6 1 : tunables 54 27 8 : slabdata 146 146 0
sigqueue 1 24 160 24 1 : tunables 120 60 8 : slabdata 1 1 0
bdev_cache 14 16 832 4 1 : tunables 54 27 8 : slabdata 4 4 0
sysfs_dir_cache 10122 10233 144 27 1 : tunables 120 60 8 : slabdata 379 379 0
mnt_cache 28 45 256 15 1 : tunables 120 60 8 : slabdata 3 3 0
filp 588 1300 192 20 1 : tunables 120 60 8 : slabdata 65 65 0
inode_cache 7713 7998 592 6 1 : tunables 54 27 8 : slabdata 1333 1333 0
dentry 18571 19200 192 20 1 : tunables 120 60 8 : slabdata 960 960 0
names_cache 2 2 4096 1 1 : tunables 24 12 8 : slabdata 2 2 0
avc_node 502 885 64 59 1 : tunables 120 60 8 : slabdata 15 15 0
selinux_inode_security 16038 16907 72 53 1 : tunables 120 60 8 : slabdata 319 319 0
radix_tree_node 2467 2485 560 7 1 : tunables 54 27 8 : slabdata 355 355 0
key_jar 5 20 192 20 1 : tunables 120 60 8 : slabdata 1 1 0
buffer_head 25886 27602 104 37 1 : tunables 120 60 8 : slabdata 746 746 0
nsproxy 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
vm_area_struct 8780 10678 200 19 1 : tunables 120 60 8 : slabdata 562 562 0
mm_struct 57 115 1408 5 2 : tunables 24 12 8 : slabdata 23 23 0
fs_cache 61 295 64 59 1 : tunables 120 60 8 : slabdata 5 5 0
files_cache 62 198 704 11 2 : tunables 54 27 8 : slabdata 18 18 0
signal_cache 123 203 1088 7 2 : tunables 24 12 8 : slabdata 29 29 0
sighand_cache 123 141 2112 3 2 : tunables 24 12 8 : slabdata 47 47 0
task_xstate 149 270 832 9 2 : tunables 54 27 8 : slabdata 30 30 0
task_struct 211 246 2656 3 2 : tunables 24 12 8 : slabdata 82 82 0
cred_jar 239 460 192 20 1 : tunables 120 60 8 : slabdata 23 23 0
anon_vma_chain 11838 18865 48 77 1 : tunables 120 60 8 : slabdata 245 245 20
anon_vma 4600 8464 40 92 1 : tunables 120 60 8 : slabdata 92 92 0
pid 221 390 128 30 1 : tunables 120 60 8 : slabdata 13 13 0
shared_policy_node 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
numa_policy 1 28 136 28 1 : tunables 120 60 8 : slabdata 1 1 0
idr_layer_cache 234 238 544 7 1 : tunables 54 27 8 : slabdata 34 34 0
size-4194304(DMA) 0 0 4194304 1 1024 : tunables 1 1 0 : slabdata 0 0 0
size-4194304 0 0 4194304 1 1024 : tunables 1 1 0 : slabdata 0 0 0
size-2097152(DMA) 0 0 2097152 1 512 : tunables 1 1 0 : slabdata 0 0 0
size-2097152 0 0 2097152 1 512 : tunables 1 1 0 : slabdata 0 0 0
size-1048576(DMA) 0 0 1048576 1 256 : tunables 1 1 0 : slabdata 0 0 0
size-1048576 0 0 1048576 1 256 : tunables 1 1 0 : slabdata 0 0 0
size-524288(DMA) 0 0 524288 1 128 : tunables 1 1 0 : slabdata 0 0 0
size-524288 0 0 524288 1 128 : tunables 1 1 0 : slabdata 0 0 0
size-262144(DMA) 0 0 262144 1 64 : tunables 1 1 0 : slabdata 0 0 0
size-262144 0 0 262144 1 64 : tunables 1 1 0 : slabdata 0 0 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-131072 2 2 131072 1 32 : tunables 8 4 0 : slabdata 2 2 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 : slabdata 0 0 0
size-65536 3 3 65536 1 16 : tunables 8 4 0 : slabdata 3 3 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 : slabdata 0 0 0
size-32768 1 1 32768 1 8 : tunables 8 4 0 : slabdata 1 1 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 : slabdata 0 0 0
size-16384 10 10 16384 1 4 : tunables 8 4 0 : slabdata 10 10 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 : slabdata 0 0 0
size-8192 16 16 8192 1 2 : tunables 8 4 0 : slabdata 16 16 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8 : slabdata 0 0 0
size-4096 239 239 4096 1 1 : tunables 24 12 8 : slabdata 239 239 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
size-2048 730 730 2048 2 1 : tunables 24 12 8 : slabdata 365 365 0
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
size-1024 880 884 1024 4 1 : tunables 54 27 8 : slabdata 221 221 0
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8 : slabdata 0 0 0
size-512 1300 1336 512 8 1 : tunables 54 27 8 : slabdata 167 167 0
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
size-256 904 915 256 15 1 : tunables 120 60 8 : slabdata 61 61 0
size-192(DMA) 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
size-192 1810 2020 192 20 1 : tunables 120 60 8 : slabdata 101 101 0
size-128(DMA) 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
size-64(DMA) 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
size-64 14902 15635 64 59 1 : tunables 120 60 8 : slabdata 265 265 0
size-32(DMA) 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
size-128 3681 3780 128 30 1 : tunables 120 60 8 : slabdata 126 126 0
size-32 402150 403760 32 112 1 : tunables 120 60 8 : slabdata 3605 3605 0
kmem_cache 182 182 32896 1 16 : tunables 8 4 0 : slabdata 182 182 0
[root@Smoke ~]# cat /proc/sys/vm/overcommit_memory (view the current overcommit policy)
0
Note: the default value 0 means heuristic overcommit.
[root@Smoke ~]# ls /proc/1 (list the per-process files and subdirectories under /proc/1)
attr cgroup coredump_filter environ fdinfo loginuid mountinfo net oom_score_adj root sessionid stat syscall
autogroup clear_refs cpuset exe io maps mounts oom_adj pagemap sched smaps statm task
auxv cmdline cwd fd limits mem mountstats oom_score personality schedstat stack status wchan
Note: oom_score is the process's OOM score; oom_score_adj adjusts how that score is computed.
[root@Smoke ~]# slabtop (monitor which slab caches currently hold the most objects)
Active / Total Objects (% used) : 1004808 / 1009100 (99.6%) (active vs. total objects)
Active / Total Slabs (% used) : 9702 / 9704 (100.0%) (active vs. total slabs)
Active / Total Caches (% used) : 93 / 177 (52.5%)
Active / Total Size (% used) : 34345.42K / 34646.55K (99.1%)
Minimum / Average / Maximum Object : 0.01K / 0.03K / 4096.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
548100 547979 99% 0.02K 2700 203 10800K avtab_node
358662 358552 99% 0.03K 3174 113 12696K size-32
19343 19328 99% 0.13K 667 29 2668K dentry
13692 13653 99% 0.04K 163 84 652K selinux_inode_security
11050 11011 99% 0.07K 221 50 884K sysfs_dir_cache
8151 8143 99% 0.34K 741 11 2964K inode_cache
7504 7461 99% 0.05K 112 67 448K buffer_head
5655 5362 94% 0.02K 39 145 156K anon_vma_chain
4602 4537 98% 0.06K 78 59 312K size-64
4508 4428 98% 0.04K 49 92 196K Acpi-Operand
3997 3993 99% 0.50K 571 7 2284K ext3_inode_cache
3861 3688 95% 0.10K 99 39 396K vm_area_struct
3480 3431 98% 0.02K 24 145 96K Acpi-Namespace
2535 2242 88% 0.02K 15 169 60K anon_vma
1859 1848 99% 0.29K 143 13 572K radix_tree_node
1560 1560 100% 0.12K 52 30 208K size-96
1170 501 42% 0.05K 15 78 60K avc_node
820 802 97% 0.19K 41 20 164K size-192
690 604 87% 0.12K 23 30 92K filp
684 679 99% 0.43K 76 9 304K shmem_inode_cache
Note: when a cache's usage percentage is very high, allocating more objects of that type forces some cached entries to be evicted to make room, because each cache holds a limited number of objects.
[root@Smoke ~]# cat /proc/slabinfo (view the contents of the slabinfo file)
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
fib6_nodes 24 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 16 30 384 10 1 : tunables 54 27 8 : slabdata 3 3 0
ndisc_cache 1 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
RAWv6 131 132 1024 4 1 : tunables 54 27 8 : slabdata 33 33 0
UDPLITEv6 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
UDPv6 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
tw_sock_TCPv6 0 0 320 12 1 : tunables 54 27 8 : slabdata 0 0 0
request_sock_TCPv6 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
TCPv6 4 4 1920 2 1 : tunables 24 12 8 : slabdata 2 2 0
jbd2_1k 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
avtab_node 610413 610560 24 144 1 : tunables 120 60 8 : slabdata 4240 4240 0
ext4_inode_cache 6678 6684 1000 4 1 : tunables 54 27 8 : slabdata 1671 1671 0
ext4_xattr 1 44 88 44 1 : tunables 120 60 8 : slabdata 1 1 0
ext4_free_block_extents 0 0 56 67 1 : tunables 120 60 8 : slabdata 0 0 0
ext4_alloc_context 0 0 136 28 1 : tunables 120 60 8 : slabdata 0 0 0
ext4_prealloc_space 6 37 104 37 1 : tunables 120 60 8 : slabdata 1 1 0
ext4_system_zone 0 0 40 92 1 : tunables 120 60 8 : slabdata 0 0 0
jbd2_journal_handle 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
jbd2_journal_head 16 34 112 34 1 : tunables 120 60 8 : slabdata 1 1 0
jbd2_revoke_table 4 202 16 202 1 : tunables 120 60 8 : slabdata 1 1 0
jbd2_revoke_record 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
sd_ext_cdb 2 112 32 112 1 : tunables 120 60 8 : slabdata 1 1 0
scsi_sense_cache 4 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
scsi_cmd_cache 6 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
dm_raid1_read_record 0 0 1064 7 2 : tunables 24 12 8 : slabdata 0 0 0
kcopyd_job 0 0 3240 2 2 : tunables 24 12 8 : slabdata 0 0 0
io 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
dm_uevent 0 0 2608 3 2 : tunables 24 12 8 : slabdata 0 0 0
dm_rq_clone_bio_info 0 0 16 202 1 : tunables 120 60 8 : slabdata 0 0 0
dm_rq_target_io 0 0 392 10 1 : tunables 54 27 8 : slabdata 0 0 0
dm_target_io 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
dm_io 0 0 40 92 1 : tunables 120 60 8 : slabdata 0 0 0
flow_cache 0 0 104 37 1 : tunables 120 60 8 : slabdata 0 0 0
uhci_urb_priv 1 67 56 67 1 : tunables 120 60 8 : slabdata 1 1 0
cfq_io_context 27 84 136 28 1 : tunables 120 60 8 : slabdata 3 3 0
cfq_queue 26 48 240 16 1 : tunables 120 60 8 : slabdata 3 3 0
bsg_cmd 0 0 312 12 1 : tunables 54 27 8 : slabdata 0 0 0
mqueue_inode_cache 1 4 896 4 1 : tunables 54 27 8 : slabdata 1 1 0
isofs_inode_cache 0 0 640 6 1 : tunables 54 27 8 : slabdata 0 0 0
hugetlbfs_inode_cache 1 6 608 6 1 : tunables 54 27 8 : slabdata 1 1 0
dquot 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
kioctx 12 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
kiocb 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
inotify_event_private_data 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
inotify_inode_mark_entry 37 64 120 32 1 : tunables 120 60 8 : slabdata 2 2 0
dnotify_mark_entry 0 0 120 32 1 : tunables 120 60 8 : slabdata 0 0 0
dnotify_struct 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
dio 0 0 640 6 1 : tunables 54 27 8 : slabdata 0 0 0
fasync_cache 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
khugepaged_mm_slot 1 92 40 92 1 : tunables 120 60 8 : slabdata 1 1 0
ksm_mm_slot 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
ksm_stable_node 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
ksm_rmap_item 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
utrace_engine 0 0 56 67 1 : tunables 120 60 8 : slabdata 0 0 0
utrace 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
pid_namespace 0 0 2168 3 2 : tunables 24 12 8 : slabdata 0 0 0
posix_timers_cache 0 0 176 22 1 : tunables 120 60 8 : slabdata 0 0 0
uid_cache 4 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
UNIX 22 45 832 9 2 : tunables 54 27 8 : slabdata 5 5 0
ip_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
UDP-Lite 0 0 896 4 1 : tunables 54 27 8 : slabdata 0 0 0
tcp_bind_bucket 5 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
inet_peer_cache 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
secpath_cache 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
xfrm_dst_cache 0 0 448 8 1 : tunables 54 27 8 : slabdata 0 0 0
ip_fib_alias 1 112 32 112 1 : tunables 120 60 8 : slabdata 1 1 0
ip_fib_hash 14 53 72 53 1 : tunables 120 60 8 : slabdata 1 1 0
ip_dst_cache 11 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
arp_cache 4 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
PING 0 0 832 9 2 : tunables 54 27 8 : slabdata 0 0 0
RAW 129 135 832 9 2 : tunables 54 27 8 : slabdata 15 15 0
UDP 1 4 896 4 1 : tunables 54 27 8 : slabdata 1 1 0
tw_sock_TCP 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
request_sock_TCP 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
TCP 5 12 1728 4 2 : tunables 24 12 8 : slabdata 3 3 0
eventpoll_pwq 22 424 72 53 1 : tunables 120 60 8 : slabdata 8 8 0
eventpoll_epi 22 360 128 30 1 : tunables 120 60 8 : slabdata 12 12 0
sgpool-128 2 2 4096 1 1 : tunables 24 12 8 : slabdata 2 2 0
sgpool-64 2 2 2048 2 1 : tunables 24 12 8 : slabdata 1 1 0
sgpool-32 2 4 1024 4 1 : tunables 54 27 8 : slabdata 1 1 0
sgpool-16 2 8 512 8 1 : tunables 54 27 8 : slabdata 1 1 0
sgpool-8 5 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
scsi_data_buffer 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
blkdev_integrity 0 0 112 34 1 : tunables 120 60 8 : slabdata 0 0 0
blkdev_queue 26 28 2864 2 2 : tunables 24 12 8 : slabdata 14 14 0
blkdev_requests 9 33 352 11 1 : tunables 54 27 8 : slabdata 2 3 0
blkdev_ioc 28 48 80 48 1 : tunables 120 60 8 : slabdata 1 1 0
fsnotify_event_holder 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
fsnotify_event 0 0 104 37 1 : tunables 120 60 8 : slabdata 0 0 0
bio-0 6 20 192 20 1 : tunables 120 60 8 : slabdata 1 1 0
biovec-256 2 2 4096 1 1 : tunables 24 12 8 : slabdata 2 2 0
biovec-128 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
biovec-64 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
biovec-16 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
bip-256 2 2 4224 1 2 : tunables 8 4 0 : slabdata 2 2 0
bip-128 0 0 2176 3 2 : tunables 24 12 8 : slabdata 0 0 0
bip-64 0 0 1152 7 2 : tunables 24 12 8 : slabdata 0 0 0
bip-16 0 0 384 10 1 : tunables 54 27 8 : slabdata 0 0 0
bip-4 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
bip-1 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
sock_inode_cache 330 400 704 5 1 : tunables 54 27 8 : slabdata 80 80 0
skbuff_fclone_cache 3 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
skbuff_head_cache 517 630 256 15 1 : tunables 120 60 8 : slabdata 42 42 0
file_lock_cache 9 44 176 22 1 : tunables 120 60 8 : slabdata 2 2 0
net_namespace 0 0 2432 3 2 : tunables 24 12 8 : slabdata 0 0 0
shmem_inode_cache 677 685 784 5 1 : tunables 54 27 8 : slabdata 137 137 0
Acpi-Operand 5674 5777 72 53 1 : tunables 120 60 8 : slabdata 109 109 0
Acpi-ParseExt 0 0 72 53 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Parse 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-State 0 0 80 48 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Namespace 4492 4508 40 92 1 : tunables 120 60 8 : slabdata 49 49 0
task_delay_info 216 374 112 34 1 : tunables 120 60 8 : slabdata 11 11 0
taskstats 1 12 328 12 1 : tunables 54 27 8 : slabdata 1 1 0
proc_inode_cache 653 876 656 6 1 : tunables 54 27 8 : slabdata 146 146 0
sigqueue 1 24 160 24 1 : tunables 120 60 8 : slabdata 1 1 0
bdev_cache 14 16 832 4 1 : tunables 54 27 8 : slabdata 4 4 0
sysfs_dir_cache 10122 10233 144 27 1 : tunables 120 60 8 : slabdata 379 379 0
mnt_cache 28 45 256 15 1 : tunables 120 60 8 : slabdata 3 3 0
filp 588 1300 192 20 1 : tunables 120 60 8 : slabdata 65 65 0
inode_cache 7713 7998 592 6 1 : tunables 54 27 8 : slabdata 1333 1333 0
dentry 18571 19200 192 20 1 : tunables 120 60 8 : slabdata 960 960 0
names_cache 2 2 4096 1 1 : tunables 24 12 8 : slabdata 2 2 0
avc_node 502 885 64 59 1 : tunables 120 60 8 : slabdata 15 15 0
selinux_inode_security 16038 16907 72 53 1 : tunables 120 60 8 : slabdata 319 319 0
radix_tree_node 2467 2485 560 7 1 : tunables 54 27 8 : slabdata 355 355 0
key_jar 5 20 192 20 1 : tunables 120 60 8 : slabdata 1 1 0
buffer_head 25886 27602 104 37 1 : tunables 120 60 8 : slabdata 746 746 0
nsproxy 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
vm_area_struct 8780 10678 200 19 1 : tunables 120 60 8 : slabdata 562 562 0
mm_struct 57 115 1408 5 2 : tunables 24 12 8 : slabdata 23 23 0
fs_cache 61 295 64 59 1 : tunables 120 60 8 : slabdata 5 5 0
files_cache 62 198 704 11 2 : tunables 54 27 8 : slabdata 18 18 0
signal_cache 123 203 1088 7 2 : tunables 24 12 8 : slabdata 29 29 0
sighand_cache 123 141 2112 3 2 : tunables 24 12 8 : slabdata 47 47 0
task_xstate 149 270 832 9 2 : tunables 54 27 8 : slabdata 30 30 0
task_struct 211 246 2656 3 2 : tunables 24 12 8 : slabdata 82 82 0
cred_jar 239 460 192 20 1 : tunables 120 60 8 : slabdata 23 23 0
anon_vma_chain 11838 18865 48 77 1 : tunables 120 60 8 : slabdata 245 245 20
anon_vma 4600 8464 40 92 1 : tunables 120 60 8 : slabdata 92 92 0
pid 221 390 128 30 1 : tunables 120 60 8 : slabdata 13 13 0
shared_policy_node 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
numa_policy 1 28 136 28 1 : tunables 120 60 8 : slabdata 1 1 0
idr_layer_cache 234 238 544 7 1 : tunables 54 27 8 : slabdata 34 34 0
size-4194304(DMA) 0 0 4194304 1 1024 : tunables 1 1 0 : slabdata 0 0 0
size-4194304 0 0 4194304 1 1024 : tunables 1 1 0 : slabdata 0 0 0
size-2097152(DMA) 0 0 2097152 1 512 : tunables 1 1 0 : slabdata 0 0 0
size-2097152 0 0 2097152 1 512 : tunables 1 1 0 : slabdata 0 0 0
size-1048576(DMA) 0 0 1048576 1 256 : tunables 1 1 0 : slabdata 0 0 0
size-1048576 0 0 1048576 1 256 : tunables 1 1 0 : slabdata 0 0 0
size-524288(DMA) 0 0 524288 1 128 : tunables 1 1 0 : slabdata 0 0 0
size-524288 0 0 524288 1 128 : tunables 1 1 0 : slabdata 0 0 0
size-262144(DMA) 0 0 262144 1 64 : tunables 1 1 0 : slabdata 0 0 0
size-262144 0 0 262144 1 64 : tunables 1 1 0 : slabdata 0 0 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-131072 2 2 131072 1 32 : tunables 8 4 0 : slabdata 2 2 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 : slabdata 0 0 0
size-65536 3 3 65536 1 16 : tunables 8 4 0 : slabdata 3 3 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 : slabdata 0 0 0
size-32768 1 1 32768 1 8 : tunables 8 4 0 : slabdata 1 1 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 : slabdata 0 0 0
size-16384 10 10 16384 1 4 : tunables 8 4 0 : slabdata 10 10 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 : slabdata 0 0 0
size-8192 16 16 8192 1 2 : tunables 8 4 0 : slabdata 16 16 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8 : slabdata 0 0 0
size-4096 239 239 4096 1 1 : tunables 24 12 8 : slabdata 239 239 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
size-2048 730 730 2048 2 1 : tunables 24 12 8 : slabdata 365 365 0
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
size-1024 880 884 1024 4 1 : tunables 54 27 8 : slabdata 221 221 0
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8 : slabdata 0 0 0
size-512 1300 1336 512 8 1 : tunables 54 27 8 : slabdata 167 167 0
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
size-256 904 915 256 15 1 : tunables 120 60 8 : slabdata 61 61 0
size-192(DMA) 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
size-192 1810 2020 192 20 1 : tunables 120 60 8 : slabdata 101 101 0
size-128(DMA) 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
size-64(DMA) 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
size-64 14902 15635 64 59 1 : tunables 120 60 8 : slabdata 265 265 0
size-32(DMA) 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
size-128 3681 3780 128 30 1 : tunables 120 60 8 : slabdata 126 126 0
size-32 402150 403760 32 112 1 : tunables 120 60 8 : slabdata 3605 3605 0
kmem_cache 182 182 32896 1 16 : tunables 8 4 0 : slabdata 182 182 0
Note: the "tunables" fields (limit, batchcount, sharedfactor) are the adjustable parameters.
[root@node1 ~]# echo 'ext4_inode_cache 108 54 8' > /proc/slabinfo (tune the ext4_inode_cache limit, batchcount and sharedfactor)
[root@node1 ~]# cat /proc/slabinfo (verify the new tunables)
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
fib6_nodes 24 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 16 30 384 10 1 : tunables 54 27 8 : slabdata 3 3 0
ndisc_cache 1 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
RAWv6 131 132 1024 4 1 : tunables 54 27 8 : slabdata 33 33 0
UDPLITEv6 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
UDPv6 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
tw_sock_TCPv6 0 0 320 12 1 : tunables 54 27 8 : slabdata 0 0 0
request_sock_TCPv6 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
TCPv6 4 4 1920 2 1 : tunables 24 12 8 : slabdata 2 2 0
jbd2_1k 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
avtab_node 610413 610560 24 144 1 : tunables 120 60 8 : slabdata 4240 4240 0
ext4_inode_cache 6678 6684 1000 4 1 : tunables 108 54 8 : slabdata 1671 1671 0
ext4_xattr 1 44 88 44 1 : tunables 120 60 8 : slabdata 1 1 0
ext4_free_block_extents 0 0 56 67 1 : tunables 120 60 8 : slabdata 0 0 0
ext4_alloc_context 0 0 136 28 1 : tunables 120 60 8 : slabdata 0 0 0
ext4_prealloc_space 6 37 104 37 1 : tunables 120 60 8 : slabdata 1 1 0
ext4_system_zone 0 0 40 92 1 : tunables 120 60 8 : slabdata 0 0 0
jbd2_journal_handle 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
jbd2_journal_head 4 34 112 34 1 : tunables 120 60 8 : slabdata 1 1 0
jbd2_revoke_table 4 202 16 202 1 : tunables 120 60 8 : slabdata 1 1 0
jbd2_revoke_record 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
sd_ext_cdb 2 112 32 112 1 : tunables 120 60 8 : slabdata 1 1 0
scsi_sense_cache 3 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
scsi_cmd_cache 3 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
dm_raid1_read_record 0 0 1064 7 2 : tunables 24 12 8 : slabdata 0 0 0
kcopyd_job 0 0 3240 2 2 : tunables 24 12 8 : slabdata 0 0 0
io 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
dm_uevent 0 0 2608 3 2 : tunables 24 12 8 : slabdata 0 0 0
dm_rq_clone_bio_info 0 0 16 202 1 : tunables 120 60 8 : slabdata 0 0 0
dm_rq_target_io 0 0 392 10 1 : tunables 54 27 8 : slabdata 0 0 0
dm_target_io 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
dm_io 0 0 40 92 1 : tunables 120 60 8 : slabdata 0 0 0
flow_cache 0 0 104 37 1 : tunables 120 60 8 : slabdata 0 0 0
uhci_urb_priv 1 67 56 67 1 : tunables 120 60 8 : slabdata 1 1 0
cfq_io_context 27 84 136 28 1 : tunables 120 60 8 : slabdata 3 3 0
cfq_queue 26 48 240 16 1 : tunables 120 60 8 : slabdata 3 3 0
bsg_cmd 0 0 312 12 1 : tunables 54 27 8 : slabdata 0 0 0
mqueue_inode_cache 1 4 896 4 1 : tunables 54 27 8 : slabdata 1 1 0
isofs_inode_cache 0 0 640 6 1 : tunables 54 27 8 : slabdata 0 0 0
hugetlbfs_inode_cache 1 6 608 6 1 : tunables 54 27 8 : slabdata 1 1 0
dquot 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
kioctx 12 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
kiocb 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
inotify_event_private_data 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
inotify_inode_mark_entry 37 64 120 32 1 : tunables 120 60 8 : slabdata 2 2 0
dnotify_mark_entry 0 0 120 32 1 : tunables 120 60 8 : slabdata 0 0 0
dnotify_struct 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
dio 0 0 640 6 1 : tunables 54 27 8 : slabdata 0 0 0
fasync_cache 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
khugepaged_mm_slot 1 92 40 92 1 : tunables 120 60 8 : slabdata 1 1 0
ksm_mm_slot 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
ksm_stable_node 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
ksm_rmap_item 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
utrace_engine 0 0 56 67 1 : tunables 120 60 8 : slabdata 0 0 0
utrace 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
pid_namespace 0 0 2168 3 2 : tunables 24 12 8 : slabdata 0 0 0
posix_timers_cache 0 0 176 22 1 : tunables 120 60 8 : slabdata 0 0 0
uid_cache 4 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
UNIX 22 45 832 9 2 : tunables 54 27 8 : slabdata 5 5 0
ip_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
UDP-Lite 0 0 896 4 1 : tunables 54 27 8 : slabdata 0 0 0
tcp_bind_bucket 5 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
inet_peer_cache 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
secpath_cache 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
xfrm_dst_cache 0 0 448 8 1 : tunables 54 27 8 : slabdata 0 0 0
ip_fib_alias 1 112 32 112 1 : tunables 120 60 8 : slabdata 1 1 0
ip_fib_hash 14 53 72 53 1 : tunables 120 60 8 : slabdata 1 1 0
ip_dst_cache 11 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
arp_cache 4 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
PING 0 0 832 9 2 : tunables 54 27 8 : slabdata 0 0 0
RAW 129 135 832 9 2 : tunables 54 27 8 : slabdata 15 15 0
UDP 1 4 896 4 1 : tunables 54 27 8 : slabdata 1 1 0
tw_sock_TCP 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
request_sock_TCP 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
TCP 5 12 1728 4 2 : tunables 24 12 8 : slabdata 3 3 0
eventpoll_pwq 22 424 72 53 1 : tunables 120 60 8 : slabdata 8 8 0
eventpoll_epi 22 360 128 30 1 : tunables 120 60 8 : slabdata 12 12 0
sgpool-128 2 2 4096 1 1 : tunables 24 12 8 : slabdata 2 2 0
sgpool-64 2 2 2048 2 1 : tunables 24 12 8 : slabdata 1 1 0
sgpool-32 2 4 1024 4 1 : tunables 54 27 8 : slabdata 1 1 0
sgpool-16 2 8 512 8 1 : tunables 54 27 8 : slabdata 1 1 0
sgpool-8 2 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
scsi_data_buffer 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
blkdev_integrity 0 0 112 34 1 : tunables 120 60 8 : slabdata 0 0 0
blkdev_queue 26 28 2864 2 2 : tunables 24 12 8 : slabdata 14 14 0
blkdev_requests 8 22 352 11 1 : tunables 54 27 8 : slabdata 2 2 0
blkdev_ioc 28 48 80 48 1 : tunables 120 60 8 : slabdata 1 1 0
fsnotify_event_holder 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
fsnotify_event 0 0 104 37 1 : tunables 120 60 8 : slabdata 0 0 0
bio-0 2 20 192 20 1 : tunables 120 60 8 : slabdata 1 1 0
biovec-256 2 2 4096 1 1 : tunables 24 12 8 : slabdata 2 2 0
biovec-128 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
biovec-64 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
biovec-16 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
bip-256 2 2 4224 1 2 : tunables 8 4 0 : slabdata 2 2 0
bip-128 0 0 2176 3 2 : tunables 24 12 8 : slabdata 0 0 0
bip-64 0 0 1152 7 2 : tunables 24 12 8 : slabdata 0 0 0
bip-16 0 0 384 10 1 : tunables 54 27 8 : slabdata 0 0 0
bip-4 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
bip-1 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
sock_inode_cache 345 400 704 5 1 : tunables 54 27 8 : slabdata 80 80 0
skbuff_fclone_cache 7 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
skbuff_head_cache 546 630 256 15 1 : tunables 120 60 8 : slabdata 42 42 0
file_lock_cache 9 44 176 22 1 : tunables 120 60 8 : slabdata 2 2 0
net_namespace 0 0 2432 3 2 : tunables 24 12 8 : slabdata 0 0 0
shmem_inode_cache 677 685 784 5 1 : tunables 54 27 8 : slabdata 137 137 0
Acpi-Operand 5674 5777 72 53 1 : tunables 120 60 8 : slabdata 109 109 0
Acpi-ParseExt 0 0 72 53 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Parse 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-State 0 0 80 48 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Namespace 4492 4508 40 92 1 : tunables 120 60 8 : slabdata 49 49 0
task_delay_info 216 374 112 34 1 : tunables 120 60 8 : slabdata 11 11 0
taskstats 1 12 328 12 1 : tunables 54 27 8 : slabdata 1 1 0
proc_inode_cache 643 876 656 6 1 : tunables 54 27 8 : slabdata 146 146 0
sigqueue 1 24 160 24 1 : tunables 120 60 8 : slabdata 1 1 0
bdev_cache 14 16 832 4 1 : tunables 54 27 8 : slabdata 4 4 0
sysfs_dir_cache 10122 10233 144 27 1 : tunables 120 60 8 : slabdata 379 379 0
mnt_cache 28 45 256 15 1 : tunables 120 60 8 : slabdata 3 3 0
filp 652 1300 192 20 1 : tunables 120 60 8 : slabdata 65 65 0
inode_cache 7713 7998 592 6 1 : tunables 54 27 8 : slabdata 1333 1333 0
dentry 18629 19200 192 20 1 : tunables 120 60 8 : slabdata 960 960 0
names_cache 2 2 4096 1 1 : tunables 24 12 8 : slabdata 2 2 0
avc_node 510 885 64 59 1 : tunables 120 60 8 : slabdata 15 15 0
selinux_inode_security 16096 16907 72 53 1 : tunables 120 60 8 : slabdata 319 319 0
radix_tree_node 2468 2485 560 7 1 : tunables 54 27 8 : slabdata 355 355 0
key_jar 5 20 192 20 1 : tunables 120 60 8 : slabdata 1 1 0
buffer_head 25889 27602 104 37 1 : tunables 120 60 8 : slabdata 746 746 0
nsproxy 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
vm_area_struct 8819 10678 200 19 1 : tunables 120 60 8 : slabdata 562 562 0
mm_struct 57 115 1408 5 2 : tunables 24 12 8 : slabdata 23 23 0
fs_cache 61 295 64 59 1 : tunables 120 60 8 : slabdata 5 5 0
files_cache 62 198 704 11 2 : tunables 54 27 8 : slabdata 18 18 0
signal_cache 123 203 1088 7 2 : tunables 24 12 8 : slabdata 29 29 0
sighand_cache 123 141 2112 3 2 : tunables 24 12 8 : slabdata 47 47 0
task_xstate 149 270 832 9 2 : tunables 54 27 8 : slabdata 30 30 0
task_struct 211 246 2656 3 2 : tunables 24 12 8 : slabdata 82 82 0
cred_jar 194 460 192 20 1 : tunables 120 60 8 : slabdata 23 23 0
anon_vma_chain 11818 18865 48 77 1 : tunables 120 60 8 : slabdata 245 245 0
anon_vma 4612 8464 40 92 1 : tunables 120 60 8 : slabdata 92 92 0
pid 221 390 128 30 1 : tunables 120 60 8 : slabdata 13 13 0
shared_policy_node 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
numa_policy 1 28 136 28 1 : tunables 120 60 8 : slabdata 1 1 0
idr_layer_cache 234 238 544 7 1 : tunables 54 27 8 : slabdata 34 34 0
size-4194304(DMA) 0 0 4194304 1 1024 : tunables 1 1 0 : slabdata 0 0 0
size-4194304 0 0 4194304 1 1024 : tunables 1 1 0 : slabdata 0 0 0
size-2097152(DMA) 0 0 2097152 1 512 : tunables 1 1 0 : slabdata 0 0 0
size-2097152 0 0 2097152 1 512 : tunables 1 1 0 : slabdata 0 0 0
size-1048576(DMA) 0 0 1048576 1 256 : tunables 1 1 0 : slabdata 0 0 0
size-1048576 0 0 1048576 1 256 : tunables 1 1 0 : slabdata 0 0 0
size-524288(DMA) 0 0 524288 1 128 : tunables 1 1 0 : slabdata 0 0 0
size-524288 0 0 524288 1 128 : tunables 1 1 0 : slabdata 0 0 0
size-262144(DMA) 0 0 262144 1 64 : tunables 1 1 0 : slabdata 0 0 0
size-262144 0 0 262144 1 64 : tunables 1 1 0 : slabdata 0 0 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-131072 2 2 131072 1 32 : tunables 8 4 0 : slabdata 2 2 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 : slabdata 0 0 0
size-65536 3 3 65536 1 16 : tunables 8 4 0 : slabdata 3 3 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 : slabdata 0 0 0
size-32768 1 1 32768 1 8 : tunables 8 4 0 : slabdata 1 1 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 : slabdata 0 0 0
size-16384 10 10 16384 1 4 : tunables 8 4 0 : slabdata 10 10 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 : slabdata 0 0 0
size-8192 16 16 8192 1 2 : tunables 8 4 0 : slabdata 16 16 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8 : slabdata 0 0 0
size-4096 240 240 4096 1 1 : tunables 24 12 8 : slabdata 240 240 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
size-2048 730 730 2048 2 1 : tunables 24 12 8 : slabdata 365 365 0
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
size-1024 883 884 1024 4 1 : tunables 54 27 8 : slabdata 221 221 0
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8 : slabdata 0 0 0
size-512 1326 1336 512 8 1 : tunables 54 27 8 : slabdata 167 167 0
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
size-256 903 915 256 15 1 : tunables 120 60 8 : slabdata 61 61 0
size-192(DMA) 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
size-192 1810 2020 192 20 1 : tunables 120 60 8 : slabdata 101 101 0
size-128(DMA) 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
size-64(DMA) 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
size-64 14903 15635 64 59 1 : tunables 120 60 8 : slabdata 265 265 0
size-32(DMA) 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
size-128 3673 3780 128 30 1 : tunables 120 60 8 : slabdata 126 126 0
size-32 402141 403760 32 112 1 : tunables 120 60 8 : slabdata 3605 3605 0
kmem_cache 182 182 32896 1 16 : tunables 8 4 0 : slabdata 182 182 0
/proc, /sys
Process management, CPU
Memory tuning
I/O
Filesystems
Network subsystem
Tuning approach:
Pick performance metrics, locate the bottleneck
Then tune
Tuning is something of a black art
IPC management commands:
ipcs
ipcrm
shm:
shmmni: system-wide upper limit on the number of shared memory segments
shmall: system-wide maximum number of pages that can be allocated to shared memory
shmmax: maximum size of a single shared memory segment
message:
msgmnb: maximum size of a single message queue, in bytes
msgmni: system-wide upper limit on the number of message queues
msgmax: maximum size of a single message, in bytes
Manually flushing dirty caches and buffers
sync
echo s > /proc/sysrq-trigger (triggers an emergency sync)
Reclaiming:
Setting the /proc/sys/vm/panic_on_oom parameter to 0 instructs the kernel to call the oom_killer function when an OOM occurs (with 0, the kernel invokes oom_killer instead of panicking when memory is exhausted)
oom_adj values from -16 to 15 weight the calculation of oom_score
-17 disables the oom_killer for that process
Memory leak: memory that was allocated, is no longer used by anyone, yet can never be freed; it stays permanently "in use"
Place swap areas of equal priority on different disks; with only one disk this gains nothing
HugePage: improves TLB hit rates and memory allocation efficiency
IPC: inter-process communication
pdflush:
slab
swap
oom
Shared memory:
kernel.shmmni
Specifies the maximum number of shared memory segments system-wide, default = 4096 (how many shared memory segments may exist system-wide)
kernel.shmall
Specifies the total amount of shared memory, in pages, that can be used at one time on the system, default = 2097152 (system-wide maximum number of pages usable for shared memory at once)
This should be at least kernel.shmmax/PAGE_SIZE (no smaller than shmmax divided by the page size)
kernel.shmmax
Specifies the maximum size of a shared memory segment that can be created (largest single shared memory segment)
Messages:
kernel.msgmnb
Specifies the maximum number of bytes in a single message queue, default = 16384 (maximum size of a single message queue, in bytes)
kernel.msgmni
Specifies the maximum number of message queue identifiers, default = 16 (system-wide limit on the number of message queues)
kernel.msgmax
Specifies the maximum size of a message that can be passed between processes (largest single message usable for IPC)
This memory cannot be swapped, default = 8192
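As a worked example of the shmmax/shmall relationship above, this sketch derives a consistent kernel.shmall from a hypothetical kernel.shmmax. The 16 GiB target and the 4 KiB page size are assumptions, not values from the text:

```shell
# Hypothetical target: allow one 16 GiB shared memory segment.
# kernel.shmall is counted in pages, so it must be at least shmmax / PAGE_SIZE.
PAGE_SIZE=4096                          # assume 4 KiB pages (check: getconf PAGE_SIZE)
SHMMAX=$((16 * 1024 * 1024 * 1024))     # bytes
SHMALL=$((SHMMAX / PAGE_SIZE))          # pages
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
# To apply: put both settings in /etc/sysctl.conf and run `sysctl -p` (as root).
```

This keeps the two limits consistent with the "at least shmmax/PAGE_SIZE" rule stated above.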
Reclaiming dirty pages
Using memory as cache creates a strong need to control how pages are reclaimed
Memory is volatile
Dirty pages need to be written to disk
Free up pages for use by other processes
Handled by pdflush kernel threads
Default minimum of two threads
Additional threads created or destroyed according to IO activity
vm.nr_pdflush_threads shows the current number of pdflush threads (how many pdflush threads are currently running)
Tuning pdflush
Tune length/size of memory
vm.dirty_background_ratio
Percentage (of total memory) of dirty pages at which pdflush starts writing (once dirty pages reach this share of memory, pdflush begins flushing them)
vm.dirty_ratio
Percentage (of total memory) of dirty pages at which a process itself starts writing out dirty data (past this share, the dirtying process must flush its own dirty pages synchronously)
Tune observation period (between pdflush wakeups)
vm.dirty_writeback_centisecs
Interval between pdflush wakeups, in 100ths of a second; set to zero to disable periodic writeback (how often pdflush wakes up; 0 disables it)
Tune wait time
vm.dirty_expire_centisecs
Defines when data is old enough, in 100ths of a second, to be eligible for writeout by pdflush (pages that have been dirty longer than this are flushed at the next wakeup)
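A quick read-only sketch of the writeback tunables discussed above; the /proc/sys/vm paths are standard, but the printed values vary from system to system:

```shell
# Read the current writeback tunables (no root needed for reading).
bg_ratio=$(cat /proc/sys/vm/dirty_background_ratio)
ratio=$(cat /proc/sys/vm/dirty_ratio)
expire_cs=$(cat /proc/sys/vm/dirty_expire_centisecs)
echo "background flush starts at ${bg_ratio}% dirty memory"
echo "processes flush synchronously at ${ratio}% dirty memory"
echo "dirty pages expire after $((expire_cs / 100)) seconds"
```

Writing new values works the same way as the drop_caches example below: echo a number into the file as root, or set vm.* keys in /etc/sysctl.conf.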
Reclaiming clean pages
1. Flush all dirty buffers and pages (sync buffers and pages to disk)
sync command
fsync system call
Alt-SysRq-S magic system request
echo s > /proc/sysrq-trigger (triggers an emergency sync)
2. Reclaim clean pages
echo 3 > /proc/sys/vm/drop_caches (reclaim by freeing caches)
1 to free pagecache
2 to free dentries and inodes
3 to free pagecache, dentries and inodes
Eliminate bad data from cache
Reduce memory before laptop hibernate
Benchmark subsystem
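The two steps above can be combined into a small guarded function. This is a sketch: writing to drop_caches requires root, so the write is skipped when not permitted:

```shell
# Flush dirty data, then ask the kernel to drop clean caches.
reclaim_clean_pages() {
    sync                                    # step 1: flush dirty buffers and pages
    if [ -w /proc/sys/vm/drop_caches ]; then
        echo 3 > /proc/sys/vm/drop_caches   # step 2: free pagecache, dentries, inodes
    else
        echo "skipping drop_caches (needs root)" >&2
    fi
}
reclaim_clean_pages
```

Running sync first matters: drop_caches only discards clean pages, so un-synced dirty pages would simply stay in memory.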
Out-of-memory killer (kills the most memory-hungry processes to free up memory)
Kills processes if
All memory (incl. swap) is active
No pages are available in ZONE_NORMAL
No memory is available for page table mappings
Interactive processes are preserved if possible
High sleep average indicates interactive
View level of immunity from oom-kill
/proc/PID/oom_score (the higher the score, the more likely the process is to be killed)
Manually invoking oom-kill
echo f > /proc/sysrq-trigger
Will call oom_kill to kill a memory hog process
Does not kill processes if memory is available
Outputs verbose memory information in /var/log/messages
Tuning OOM policy
Protect daemons from oom-kill
echo n > /proc/PID/oom_adj
The process to be killed in an out-of-memory situation is selected based on its badness score
oom_score gets multiplied by 2^n
Caution: child processes inherit oom_adj from parent
Disable oom-kill in /etc/sysctl.conf
vm.panic_on_oom=1 (1 makes the kernel panic instead of invoking oom-kill, effectively disabling it; 0 enables oom-kill)
oom-kill is not a fix for memory leaks
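A sketch of how oom_adj scales the badness score, per the 2^n rule above — the function models the arithmetic only, and the PID in the trailing comment is hypothetical:

```shell
# Model of the oom_adj scaling: positive adj multiplies oom_score by 2^adj,
# negative adj divides it by 2^(-adj), and -17 makes the process immune.
adjusted_score() {   # usage: adjusted_score OOM_SCORE OOM_ADJ
  local score=$1 adj=$2
  if [ "$adj" -eq -17 ]; then echo 0; return; fi   # never killed
  if [ "$adj" -ge 0 ]; then
    echo $(( score << adj ))      # score * 2^adj
  else
    echo $(( score >> (-adj) ))   # score / 2^(-adj)
  fi
}

adjusted_score 1000 2     # prints 4000
adjusted_score 1000 -17   # prints 0

# On a live system, protecting a daemon would look like
# (as root; 1234 is a hypothetical PID):
#   echo -10 > /proc/1234/oom_adj
```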
Detecting memory leaks
Two types of memory leaks
Virtual: process requests but does not use virtual address space (vsize)
Real: process fails to free memory (rss)
Use sar to observe system-wide memory change
sar -R 1 120 (watch system-wide memory allocation, once a second for 120 samples)
Report memory statistics
Use watch with ps or pmap
watch -n1 'ps axo pid,comm,rss,vsize | grep httpd' (continuously watch rss and vsize of selected processes; if they only grow and never shrink, the process may be leaking)
Use valgrind
valgrind --tool=memcheck cat /proc/$$/maps (check for leaks; $$ expands to the current shell's PID)
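The watch/ps technique looks for an RSS that only ever grows; that check can be sketched as a small shell function over sampled values (the numbers below are made-up samples, not real measurements):

```shell
# Flag a potential leak: report "yes" if the sampled RSS values
# never decrease, "no" as soon as any sample drops.
rss_growing() {   # usage: rss_growing "v1 v2 v3 ..."
  local prev=-1 v
  for v in $1; do
    [ "$v" -lt "$prev" ] && { echo no; return; }
    prev=$v
  done
  echo yes
}

rss_growing "100 120 150 180"   # prints yes — candidate for a leak
rss_growing "100 120 90 180"    # prints no  — memory was freed at some point
```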
What is swap?
The unmapping of page frames from an active process
Swap-out: page frames are unmapped and placed in page slots on a swap device (pages are written from RAM out to the swap area)
Swap-in: page frames are read in from page slots on a swap device and mapped into a process address space (pages are read back in from the swap area)
Which pages get swapped? (which pages are candidates for swap)
Inactive pages
Anonymous pages
Swap cache (pages brought in from swap that have not been modified in memory)
Contains unmodified swapped-in pages
Avoids race conditions when multiple processes access a common page frame (the swap cache prevents races when several processes share the same page frame)
Improving swap performance
Defer swap until think time
Frequent, small swaps
Swap anonymous pages more readily
Reduce visit count
Distribute swap areas across a maximum of 32 LUNs
Assign an equal, high priority to multiple swap areas
Kernel uses highest priority first
Kernel uses round-robin for swap areas of equal priority
Reduce service time
Use partitions, not files
Place swap areas on lowest-numbered partitions of fastest LUNs (the outer, fastest regions of the fastest disks)
Tuning swappiness (how readily anonymous pages are swapped out)
Searching for inactive pages can consume the CPU
On large-memory boxes, finding and unmapping inactive pages consumes more disk and CPU resources than writing anonymous pages to disk
Prefer anonymous pages (higher value)
vm.swappiness (preference, as a percentage)
Linux prefers to swap anonymous pages when: (the rule deciding whether anonymous pages are swapped out)
% of memory mapped into page tables + vm.swappiness >= 100 (once the share of memory mapped into page tables plus vm.swappiness reaches 100, anonymous pages start being swapped)
Consequences
Reduced CPU utilization
Reduced disk bandwidth
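The threshold rule above can be modeled as a one-line shell check — a simplified sketch of the decision, not the kernel's actual reclaim logic; the percentages are example values:

```shell
# Model: anonymous pages are preferred for swap-out when
# (% of memory mapped into page tables) + vm.swappiness >= 100.
prefers_anon() {   # usage: prefers_anon MAPPED_PCT SWAPPINESS
  if [ $(( $1 + $2 )) -ge 100 ]; then echo yes; else echo no; fi
}

prefers_anon 50 60   # prints yes (50 + 60 = 110 >= 100)
prefers_anon 30 60   # prints no  (30 + 60 = 90  <  100)

# The live value can be read with:
#   cat /proc/sys/vm/swappiness
```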
Tuning swap size (how much swap space to provision)
Considerations
Kernel uses two bytes in ZONE_NORMAL to track each page of swap
Storage bandwidth cannot keep up with RAM and can lead to swaplock
If memory shortage is severe, kernel will kill user mode processes
General guidelines
Batch compute servers: up to 4*RAM (scientific/batch computation)
Database server: <= 1 GiB
Application server: >= 0.5*RAM (about half of RAM)
Consequences
Avoid swaplock
Tuning swap for think time
Swap smaller amounts
vm.page_cluster
Protect the current process from paging out
vm.swap_token_timeout
Used to control how long a process is protected from paging out when the system is thrashing, measured in seconds
The value would be useful to tune thrashing behavior
Consequences
Smoother swap behavior
Enable current process to clean up its pages
Tuning swap visit count (spread accesses across multiple swap areas)
Create up to 32 swap devices
Make swap signatures
mkswap -L SWAP_LABEL /path/to/device
Assign priority in /etc/fstab (create swap areas on several devices and give them equal priority; give a slower device a lower priority so it serves only as overflow)
/dev/sda1 swap swap pri=5 0 0
/dev/sdb1 swap swap pri=5 0 0
LABEL=testswap swap swap pri=5 0 0
/swaps/swapfil swap swap pri=5 0 0
Activate
swapon -a
View active swap devices in /proc/swaps
Monitoring memory usage
Memory activity
vmstat -n [interval] [count]
sar -r [interval] [count]
Report memory and swap space utilization statistics
Rate of change in memory
sar -R [interval] [count]
Report memory statistics
Swap activity
sar -W [interval] [count]
Report swapping statistics
ALL IO
sar -B [interval] [count]
Report paging statistics
[root@node1 ~]# man ipc (view the ipc man page)
ipc - System V IPC system calls
ipc() is a common kernel entry point for the System V IPC calls for messages, semaphores, and shared
memory. call determines which IPC function to invoke; the other arguments are passed through to the
appropriate call.
Note: the three common inter-process communication mechanisms are messages, semaphores, and shared memory.
[root@node1 ~]# ipcs -l (view the current IPC limits)
------ Shared Memory Limits --------
max number of segments = 4096 (maximum number of shared memory segments)
max seg size (kbytes) = 4194303 (maximum size of a single segment)
max total shared memory (kbytes) = 1073741824 (maximum shared memory usable system-wide)
min seg size (bytes) = 1 (minimum segment size is one byte)
------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767
------ Messages: Limits --------
max queues system wide = 16 (maximum number of message queues)
max size of message (bytes) = 65536 (maximum size of a single message)
default max size of queue (bytes) = 65536 (maximum total size of messages a queue can hold)
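The shared memory limits shown by ipcs -l map to kernel.* sysctls and can be raised in /etc/sysctl.conf; a sketch with example values (tune them to the application, e.g. a database):

```
# /etc/sysctl.conf — SysV shared memory limits (example values)
kernel.shmmni = 4096          # max number of segments
kernel.shmmax = 4294967295    # max size of a single segment, in bytes
kernel.shmall = 268435456     # max total shared memory, in pages
```

Apply with sysctl -p; ipcs -l should then reflect the new limits.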
[root@node1 ~]# man ipcs (view the ipcs man page)
ipcs - provide information on ipc facilities
-m shared memory segments
-q message queues
-s semaphore arrays
-a all (this is the default)
-p pid (show creator/last-operation PIDs)
-c creator (show creator and owner)
-l limits
[root@node1 ~]# ipcs -p
------ Shared Memory Creator/Last-op --------
shmid owner cpid lpid
4685824 root 2261 2261
5046273 root 2544 2562
4653058 root 2254 2254
5079043 root 2544 2562
5111812 root 2544 2562
------ Message Queues PIDs --------
msqid owner lspid lrpid
[root@node1 ~]# man ipcrm (view the ipcrm man page)
ipcrm - remove a message queue, semaphore set or shared memory id (useful for deleting, e.g., a message queue stuck in an unwakeable sleep state)
[root@node1 ~]# ps aux | grep pdf (list all users' processes on all terminals, filtered to pdflush)
root 250 0.0 0.0 0 0 ? S Jul05 0:00 [pdflush]
root 251 0.0 0.0 0 0 ? S Jul05 0:00 [pdflush]
root 19042 0.0 0.0 4220 604 pts/1 R+ 02:57 0:00 grep pdf
[root@node1 ~]# cat /proc/sys/vm/nr_pdflush_threads (view how many pdflush threads are currently running)
2
[root@node1 ~]# ls /proc/sys/vm/ (list the tunables under /proc/sys/vm)
block_dump flush_mmap_pages min_free_kbytes panic_on_oom
dirty_background_bytes hugetlb_shm_group mmap_min_addr percpu_pagelist_fraction
dirty_background_ratio laptop_mode nr_hugepages swappiness
dirty_bytes legacy_va_layout nr_pdflush_threads swap_token_timeout
dirty_expire_centisecs lowmem_reserve_ratio overcommit_memory topdown_allocate_fast
dirty_ratio max_map_count overcommit_ratio vdso_enabled
dirty_writeback_centisecs max_reclaims_in_progress pagecache vfs_cache_pressure
drop_caches max_writeback_pages page-cluster vm_devzero_optimized
[root@node1 ~]# cat /proc/sys/vm/dirty_background_ratio (view dirty_background_ratio)
10
[root@node1 ~]# cat /proc/sys/vm/dirty_ratio (view dirty_ratio)
40
[root@node1 ~]# cat /proc/sys/vm/dirty_bytes (view dirty_bytes)
0
[root@node1 ~]# free -m (view memory usage)
total used free shared buffers cached
Mem: 1010 986 23 0 101 697
-/+ buffers/cache: 187 823
Swap: 1027 0 1027
Note: the buffers and cached columns show the memory occupied by the buffer cache and page cache.
[root@node1 ~]# sync (flush dirty data to disk)
[root@node1 ~]# echo 1 > /proc/sys/vm/drop_caches (free pagecache)
[root@node1 ~]# free -m (view memory usage)
total used free shared buffers cached
Mem: 1010 201 808 0 0 24
-/+ buffers/cache: 176 834
Swap: 1027 0 1027
[root@node1 ~]# echo 3 > /proc/sys/vm/drop_caches (free pagecache, dentries and inodes)
[root@node1 ~]# free -m (view memory usage)
total used free shared buffers cached
Mem: 1010 177 832 0 0 24
-/+ buffers/cache: 152 857
Swap: 1027 0 1027
[root@node1 ~]# ls /proc/1/ (list the entries under /proc/1/)
attr coredump_filter environ fdinfo loginuid mounts oom_score smaps status
auxv cpuset exe io maps mountstats root stat task
cmdline cwd fd limits mem oom_adj schedstat statm wchan
Note: oom_score is computed automatically by the kernel from its observations of the process. The main input is oom_adj, which ranges from -17 to +15: setting oom_adj to -17 means the process will never be killed by the oom_killer, while values from -16 to +15 scale the resulting oom_score — the larger the value, the larger the oom_score and the more likely the process is to be killed.
[root@Smoke ~]# man valgrind (view the valgrind man page)
valgrind - a suite of tools for debugging and profiling programs
--tool=<toolname> [default: memcheck]
Run the Valgrind tool called toolname, e.g. Memcheck, Cachegrind, etc. (Cachegrind profiles cache hit rates)
[root@Smoke ~]# valgrind --tool=memcheck cat /proc/$$/maps (check the current shell; $$ expands to the shell's PID)
==1395== Memcheck, a memory error detector
==1395== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==1395== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==1395== Command: cat /proc/1306/maps
==1395==
007d7000-007f5000 r-xp 00000000 08:12 672751 /lib/ld-2.12.so
007f5000-007f6000 r--p 0001d000 08:12 672751 /lib/ld-2.12.so
007f6000-007f7000 rw-p 0001e000 08:12 672751 /lib/ld-2.12.so
0087b000-0087c000 r-xp 00000000 00:00 0 [vdso]
008ce000-00a5e000 r-xp 00000000 08:12 672758 /lib/libc-2.12.so
00a5e000-00a5f000 ---p 00190000 08:12 672758 /lib/libc-2.12.so
00a5f000-00a61000 r--p 00190000 08:12 672758 /lib/libc-2.12.so
00a61000-00a62000 rw-p 00192000 08:12 672758 /lib/libc-2.12.so
00a62000-00a65000 rw-p 00000000 00:00 0
00a9e000-00ab4000 r-xp 00000000 08:12 672804 /lib/libtinfo.so.5.7
00ab4000-00ab7000 rw-p 00015000 08:12 672804 /lib/libtinfo.so.5.7
00e91000-00e94000 r-xp 00000000 08:12 672764 /lib/libdl-2.12.so
00e94000-00e95000 r--p 00002000 08:12 672764 /lib/libdl-2.12.so
00e95000-00e96000 rw-p 00003000 08:12 672764 /lib/libdl-2.12.so
00f38000-00f44000 r-xp 00000000 08:12 672774 /lib/libnss_files-2.12.so
00f44000-00f45000 r--p 0000b000 08:12 672774 /lib/libnss_files-2.12.so
00f45000-00f46000 rw-p 0000c000 08:12 672774 /lib/libnss_files-2.12.so
08048000-08118000 r-xp 00000000 08:12 688130 /bin/bash
08118000-0811d000 rw-p 000cf000 08:12 688130 /bin/bash
0811d000-08122000 rw-p 00000000 00:00 0
098db000-09919000 rw-p 00000000 00:00 0 [heap]
b73c9000-b73cb000 rw-p 00000000 00:00 0
b73cb000-b73d2000 r--s 00000000 08:12 353213 /usr/lib/gconv/gconv-modules.cache
b73d2000-b7526000 r--p 0326f000 08:12 352957 /usr/lib/locale/locale-archive
b7526000-b7566000 r--p 02eb5000 08:12 352957 /usr/lib/locale/locale-archive
b7566000-b7766000 r--p 00000000 08:12 352957 /usr/lib/locale/locale-archive
b7766000-b7768000 rw-p 00000000 00:00 0
b776f000-b7770000 rw-p 00000000 00:00 0
bfbe2000-bfbf7000 rw-p 00000000 00:00 0 [stack]
==1395==
==1395== HEAP SUMMARY:
==1395== in use at exit: 0 bytes in 0 blocks
==1395== total heap usage: 33 allocs, 33 frees, 38,884 bytes allocated
==1395==
==1395== All heap blocks were freed -- no leaks are possible
==1395==
==1395== For counts of detected and suppressed errors, rerun with: -v
==1395== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 13 from 8)
[root@Smoke ~]# cat /proc/sys/vm/swappiness (view vm.swappiness)
60
[root@Smoke ~]# sar -B 1 (report paging statistics every second)
Linux 2.6.32-358.el6.i686 (Smoke.com) 2016年06月24日 _i686_ (1 CPU)
01时57分54秒 pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
01时57分55秒 0.00 0.00 36.63 0.00 102.97 0.00 0.00 0.00 0.00
01时57分56秒 0.00 92.93 38.38 0.00 107.07 0.00 0.00 0.00 0.00
01时57分57秒 0.00 0.00 32.32 0.00 105.05 0.00 0.00 0.00 0.00
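As a reading aid for the sar -B output above: %vmeff is, as I understand it, pages stolen as a percentage of pages scanned (pgscank/s + pgscand/s). A sketch with made-up sample numbers:

```shell
# %vmeff ~= pgsteal per second divided by total pages scanned per second,
# as an integer percentage.
vmeff() {   # usage: vmeff PGSTEAL PGSCAN
  [ "$2" -eq 0 ] && { echo 0; return; }   # nothing scanned: report 0
  echo $(( $1 * 100 / $2 ))
}

vmeff 90 100   # prints 90 — healthy reclaim efficiency
vmeff 0 0      # prints 0  — no scanning happened, as in the idle samples above
```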