NFV FD.io VPP VM System Performance Tuning

Host Settings:

1、Disable power-saving mode in the BIOS

2、Set /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor to performance
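A minimal sketch of applying this governor setting to every CPU via sysfs (requires root; `cpupower frequency-set -g performance` is an equivalent alternative where the kernel tools are installed):

```shell
# Write "performance" into each CPU's scaling_governor
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$gov"
done

# Verify: every line should now read "performance"
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```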

3、Add the following kernel boot parameters in /etc/default/grub:
GRUB_CMDLINE_LINUX="intel_iommu=on isolcpus=1-13 nohz_full=1-13 rcu_nocbs=1-13 hugepagesz=1GB hugepages=64 default_hugepagesz=1GB"
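After editing the GRUB defaults file, the configuration must be regenerated and the host rebooted. A sketch (the exact command and grub.cfg path vary by distribution):

```shell
# Regenerate the GRUB configuration (RHEL/CentOS style):
grub2-mkconfig -o /boot/grub2/grub.cfg
# On Debian/Ubuntu the equivalent is: update-grub

# After reboot, confirm the parameters took effect:
cat /proc/cmdline
grep -i huge /proc/meminfo
```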

 

Guest Settings (running in a KVM VM)

1、Disable the irqbalance service

The irqbalance service is enabled by default; it distributes IRQs across the CPUs of a multi-core system. However, it can stall the CPUs running VPP and thereby cause Rx packet drops. Once irqbalance is disabled, all IRQs are handled by cpu0, so do not run VPP on cpu0.
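Assuming a systemd-based guest, disabling the service might look like:

```shell
# Stop irqbalance now and prevent it from starting at boot
systemctl stop irqbalance
systemctl disable irqbalance

# Confirm it is no longer active
systemctl is-active irqbalance
```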

2、Disable kernel same-page merging (KSM):
echo 0 > /sys/kernel/mm/ksm/run

3、In order to run VPP in a VM, the following parameters must be configured on the QEMU command-line invocation or in the libvirt/virsh XML domain configuration:
-cpu host : this parameter causes the VM to inherit the host CPU flags (for example: -cpu host -m 8192 -smp 2,sockets=1,cores=4,threads=2)

8 GB of RAM is required for optimal zero-packet-drop rates.
TBD: Need to investigate why this is true. 4 GB shows Rx packet drops even though only 2.2 GB is allocated!
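Putting the flags above together, a hypothetical QEMU invocation might look like the following (the disk image path is a placeholder, and further -device options for NICs etc. would be appended):

```shell
# Hypothetical invocation; image path and additional device options are placeholders
qemu-system-x86_64 \
    -enable-kvm \
    -cpu host \
    -m 8192 \
    -smp 2,sockets=1,cores=4,threads=2 \
    -drive file=/path/to/vpp-vm.qcow2,if=virtio
```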

 

4、To disable PXE boot delays, add the ",rombar=0" option to the end of each "-device" option list, or add "<rom bar='off'/>" to the device XML configuration.
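In libvirt XML the rom element goes inside the device definition. A sketch for a hypothetical PCI-passthrough NIC (the PCI address is a placeholder):

```xml
<!-- Hypothetical passthrough NIC; <rom bar='off'/> disables the PXE option ROM -->
<interface type='hostdev'>
  <source>
    <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
  <rom bar='off'/>
</interface>
```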

 

5、Disable the memory balloon device: <memballoon model='none'/>

 

6、Set CPU Affinity and NUMA Memory Policy for the VPP VM threads

 <cputune>
        <vcpupin vcpu="0" cpuset="1"/>
        <vcpupin vcpu="1" cpuset="0,1"/>
        <vcpupin vcpu="2" cpuset="2,3"/>
        <vcpupin vcpu="3" cpuset="0,4"/>
</cputune>
<numatune>
        <memory mode='strict' nodeset='0,2-3'/>
</numatune>
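Once the domain is defined, the pinning and NUMA policy can be verified with virsh ("vpp-vm" below is a placeholder domain name):

```shell
# "vpp-vm" is a placeholder domain name
virsh vcpupin vpp-vm       # show vCPU-to-pCPU pinning
virsh numatune vpp-vm      # show the NUMA memory policy
```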
 
7、echo never > /sys/kernel/mm/transparent_hugepage/enabled
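This sysfs write does not survive a reboot. One way to check the current mode, and a persistent alternative via the kernel command line:

```shell
# Check the current THP mode; the active value is shown in brackets,
# e.g. "always madvise [never]"
cat /sys/kernel/mm/transparent_hugepage/enabled

# To disable THP persistently, add the kernel boot parameter instead:
#   transparent_hugepage=never
```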
 
 
8、We recommend turning KSM off when running a single VPP instance:
echo 0 > /sys/kernel/mm/ksm/run
If it is not practical to turn off KSM entirely, we recommend at least disabling merging across NUMA nodes:
echo 0 > /sys/kernel/mm/ksm/merge_across_nodes
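The KSM state can be confirmed through the same sysfs interface:

```shell
cat /sys/kernel/mm/ksm/run            # 0 = stopped
cat /sys/kernel/mm/ksm/pages_shared   # number of shared pages currently in use
```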
 
posted @ 2016-09-28 11:02 于杨