Getting Started with DPDK and UHD

Translator's note:
DPDK (Data Plane Development Kit) is a high-speed packet-processing framework from Intel that bypasses the traditional kernel socket driver path, so the system is not bogged down servicing large volumes of hardware interrupts and thread context switches. In short, it is a framework for building ultra-low-latency, high-speed network applications. Enabling DPDK support in the UHD driver greatly reduces network latency, and this document is therefore also useful as a guide to performance (low-latency) tuning of UHD applications.
Note that this document applies to Linux only. Windows can now also support DPDK development; in theory, DPDK could be enabled for UHD on Windows by configuring the dependent development libraries with cmake at build time. I may try this when I have time; if anyone has verified it, please leave a comment.
This is a hand-made translation, not machine translation; passages that do not translate literally have been restated based on my own understanding, so some content does not correspond one-to-one with the original. Apologies for any errors. Translated by 裤子 and published only on the 博客园 blog C+侦探; please credit when reposting. Original article: Getting Started with DPDK and UHD - Ettus Knowledge Base

Application Note Number and Authors

AN-500 by Nate Temple, Alex Williams, Wade Fife, and Matt Prost

Overview

This application note walks through the process to get started with the Data Plane Development Kit (DPDK) driver within UHD.

Abstract

Up until now, UHD's only support for networked devices was backed by the kernel's sockets implementation. Every call to send() or recv() would cause a context switch and invite the kernel's scheduler to replace our thread with something else. Because the typical scheduler is optimized to distribute CPU time fairly across multiple loads, the timing-critical threads might sporadically be hit with sleeping time, and the thread might be migrated off its current CPU and forced to run on another. The overhead and random latency spikes make it difficult to enable reliable real-time streaming at higher rates.

DPDK is a high-speed packet processing framework that enables a kernel bypass for network drivers. By putting the entire driver in user space, avoiding context switches, and pinning I/O threads to cores, UHD and DPDK combine to largely prevent the latency spikes induced by the scheduler. In addition, the overall overhead for packet processing is reduced.

Supported Devices

USRPs

DPDK is supported on the following USRP devices:

  • N300 / N310
  • N320 / N321
  • X300 / X310
  • E320

Host Network Cards

DPDK is supported on many Intel and Mellanox based 10Gb NICs. Below is a list of NICs Ettus Research has tested. For a full list of NICs supported by DPDK, please see the DPDK manual.

  • Intel X520-DA1
  • Intel X520-DA2
  • Intel X710-DA2
  • Intel X710-DA4
  • Intel XL710
  • Mellanox MCX4121A-ACAT ConnectX-4 Lx
  • Mellanox MCX516A-CCAT ConnectX-5

Dependencies

  • UHD 3.x requires DPDK 17.11, which is included in the default repos of Ubuntu 18.04.x
  • UHD 4.0 requires DPDK 18.11
  • DPDK support was added for the N3xx/E320 USRPs with UHD 3.13.x.x
  • DPDK support was added for the X3xx with UHD 3.14.1.0

Installing DPDK

On Ubuntu 18.04.x, it is possible to install DPDK 17.11 via apt:

   sudo apt install dpdk dpdk-dev

For DPDK 18.11, follow the instructions on the DPDK website to download, configure, and build DPDK (https://doc.dpdk.org/guides-18.11/linux_gsg/build_dpdk.html).

Installing UHD

Once the dpdk and dpdk-dev packages are installed, UHD will locate them during a build, and you should see DPDK in the enabled components list when running cmake.
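For reference, a typical out-of-tree UHD build looks like the following (a minimal sketch; the exact wording of the cmake component summary varies between UHD versions):

   cd uhd/host
   mkdir build && cd build
   cmake ..
   # Check the "enabled components" summary printed by cmake for an
   # entry mentioning DPDK, then build and install:
   make -j$(nproc)
   sudo make install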

Enable hugepages

Edit your grub configuration file, /etc/default/grub, and add the following parameters to GRUB_CMDLINE_LINUX_DEFAULT:

   iommu=pt intel_iommu=on hugepages=2048

On a vanilla Ubuntu system it should look like this:

   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=pt intel_iommu=on hugepages=2048"

Close /etc/default/grub and at the command prompt, update your grub configuration with the command:

   sudo update-grub

For these settings to take effect, reboot your host machine.
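After rebooting, you can confirm that the pages were actually reserved (a quick check using the standard kernel interface, not part of the original note):

   grep HugePages /proc/meminfo
   # HugePages_Total should match the hugepages=2048 setting above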

Preparing your UHD Configuration File

You should note the MAC addresses for your 10Gb NICs before proceeding.

The MAC addresses for your NICs can be found by running the command:

   ip a
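The MAC address is shown on the link/ether line of each interface. Illustrative output (interface names and addresses will differ on your system):

   3: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
       link/ether aa:bb:cc:dd:ee:f1 brd ff:ff:ff:ff:ff:ff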

You should then create a UHD configuration file at the location /root/.uhd/uhd.conf.

   sudo su
   mkdir -p /root/.uhd
   nano /root/.uhd/uhd.conf

UHD 3.x

An example uhd.conf file is listed below. Note that field names in UHD 3.x are slightly different from UHD 4.0.

You should update the following fields for your configuration from this example:

  • Update the MAC address variables, dpdk-mac, to match your NIC.
  • Update the dpdk-driver if the location is different on your system. /usr/lib/x86_64-linux-gnu/dpdk-17.11-drivers/ is the default location on Ubuntu 18.04.x when dpdk is installed via apt.
  • Update the dpdk-corelist and dpdk-io-cpu fields. In this example, a two-port NIC is used. There should be one core for the main dpdk thread (in this example, core #2), and then a separate core assigned to each NIC port (in this example, core #3 for the first port on the NIC and core #4 for the second port on the NIC).
  • Update the dpdk-ipv4 fields to your desired IP range.
    • 192.168.30.2 and 192.168.40.2 on a default X3xx system
    • 192.168.10.2 and 192.168.20.2 on a default N3xx system
    • 192.168.10.2 on a default E320 system
   [use_dpdk=1]
   dpdk-mtu=9000
   dpdk-driver=/usr/lib/x86_64-linux-gnu/dpdk-17.11-drivers/
   dpdk-corelist=2,3,4
   dpdk-num-mbufs=4095
   dpdk-mbufs-cache-size=315
   
   [dpdk-mac=aa:bb:cc:dd:ee:f1]
   dpdk-io-cpu = 3
   dpdk-ipv4 = 192.168.10.1/24
   
   [dpdk-mac=aa:bb:cc:dd:ee:f2]
   dpdk-io-cpu = 4
   dpdk-ipv4 = 192.168.20.1/24

Note: Additional information on the UHD configuration file can be found here: https://files.ettus.com/manual_archive/v3.15.0.0/html/page_dpdk.html#dpdk_nic_config

UHD 4.0

An example uhd.conf file is listed below. Note that the field names in UHD 4.0 are slightly different from UHD 3.x.

You should update the following fields for your configuration from this example:

  • Update the MAC address variables, dpdk_mac, to match your NIC.
  • Update the dpdk_driver if the location is different on your system. /usr/local/lib/ is the default location on Ubuntu 18.04.x when DPDK 18.11 is manually built and installed.
  • Update the dpdk_corelist and dpdk_lcore fields. In this example, a two-port NIC is used. There should be one core for the main dpdk thread (in this example, core #2), and then a separate core assigned to each NIC port (in this example, core #3 for the first port on the NIC and core #4 for the second port on the NIC).
  • Update the dpdk_ipv4 fields to your desired IP range.
    • 192.168.30.2 and 192.168.40.2 on a default X3xx system
    • 192.168.10.2 and 192.168.20.2 on a default N3xx system
    • 192.168.10.2 on a default E320 system
   [use_dpdk=1]
   dpdk_mtu=9000
   dpdk_driver=/usr/local/lib/
   dpdk_corelist=2,3,4
   dpdk_num_mbufs=4095
   dpdk_mbuf_cache_size=315
   
   [dpdk_mac=aa:bb:cc:dd:ee:f1]
   dpdk_lcore = 3
   dpdk_ipv4 = 192.168.10.1/24
   
   [dpdk_mac=aa:bb:cc:dd:ee:f2]
   dpdk_lcore = 4
   dpdk_ipv4 = 192.168.20.1/24

Note: Additional information on the UHD configuration file can be found here: https://files.ettus.com/manual/page_dpdk.html#dpdk_nic_config

Additional Host Configuration for NIC Vendors

The process for this step is different for Intel and Mellanox NICs and is detailed in individual sections below.

Intel X520 / X710

The Intel-based NICs use the vfio-pci driver, which must be loaded:

   sudo modprobe vfio-pci
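Loading the module this way does not persist across reboots. Optionally (an addition beyond the original note), you can have it loaded automatically at boot via the systemd modules-load mechanism:

   echo vfio-pci | sudo tee /etc/modules-load.d/vfio-pci.conf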

Next, you will need to rebind the NIC to the vfio-pci drivers.

First, identify the PCI address your NIC is at:

   dpdk-devbind --status
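Before rebinding, the 10Gb ports typically appear under the kernel-driver section. Illustrative output (device names and PCI addresses will differ):

   Network devices using kernel driver
   ===================================
   0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=enp2s0f0 drv=ixgbe unused=vfio-pci
   0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=enp2s0f1 drv=ixgbe unused=vfio-pci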

Note the PCI address that your NIC is connected to for the next step.

You will need to turn off the NIC before doing the rebind.

In Ubuntu, under System -> Network, click the switches to off for the 10Gb ports, then run the dpdk-devbind commands:
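On a headless system, the ports can instead be taken down from the command line (the interface names here are examples):

   sudo ip link set enp2s0f0 down
   sudo ip link set enp2s0f1 down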

Note: Your PCI address will likely be different than 02:00.0 as shown in the example below.

   sudo dpdk-devbind --bind=vfio-pci 02:00.0
   sudo dpdk-devbind --bind=vfio-pci 02:00.1

Now if you run dpdk-devbind --status again, you should see the NICs listed under DPDK devices.

   # dpdk-devbind --status
   
   Network devices using DPDK-compatible driver
   ============================================
   0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=vfio-pci unused=ixgbe
   0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=vfio-pci unused=ixgbe


Note: More info can be found here on the rebinding process: https://doc.dpdk.org/guides-17.11/linux_gsg/linux_drivers.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

Mellanox NICs

The Mellanox NICs do not require rebinding using the vfio-pci driver. Mellanox provides additional drivers for DPDK.

Install and activate the Mellanox drivers:

   sudo apt install librte-pmd-mlx5-17.11
   sudo modprobe -a ib_uverbs mlx5_core mlx5_ib

For DPDK 18.11, you can download and install the latest Mellanox drivers from the Mellanox website (https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed).

The MLX5 poll mode driver library (librte_pmd_mlx5) in DPDK provides support for Mellanox ConnectX-4 and ConnectX-5 cards. This driver must be enabled manually with the build option CONFIG_RTE_LIBRTE_MLX5_PMD=y when building DPDK.

Running UHD Applications with DPDK

UHD-based applications (including GNU Radio flowgraphs) can now be run using a DPDK transport by passing in the device argument use_dpdk=1.

Important Note: In order for UHD to use DPDK, the UHD application *must* be run as the root user. Using sudo will not work; you should switch to the root user by running sudo su.

For example, running the benchmark_rate utility:

# cd /usr/local/lib/uhd/examples

# ./benchmark_rate --rx_rate 125e6 --rx_subdev "A:0 B:0" --rx_channels 0,1 --tx_rate 125e6 --tx_subdev "A:0 B:0" --tx_channels 0,1 --args "addr=192.168.10.2,second_addr=192.168.20.2,mgmt_addr=10.2.1.19,master_clock_rate=125e6,use_dpdk=1"

[INFO] [UHD] linux; GNU C++ version 7.3.0; Boost_106501; UHD_3.14.0.HEAD-0-gabf0db4e
EAL: Detected 8 lcore(s)
EAL: Some devices want iova as va but pa will be used because.. EAL: IOMMU does not support IOVA as VA
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL:   using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(2)
EAL: PCI device 0000:02:00.1 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: Ignore mapping IO port bar(2)
PMD: ixgbe_dev_link_status_print():  Port 0: Link Down
EAL: Port 0 MAC: aa bb cc dd ee f1
EAL: Port 0 UP: 1
PMD: ixgbe_dev_link_status_print():  Port 1: Link Down
EAL: Port 1 MAC: aa bb cc dd ee f2
EAL: Port 1 UP: 1
EAL: Init DONE!
EAL: Starting I/O threads!
USER2: Thread 1 started
[00:00:00.000003] Creating the usrp device with: addr=192.168.10.2,second_addr=192.168.20.2,mgmt_addr=10.2.1.19,master_clock_rate=125e6,use_dpdk=1...
[INFO] [MPMD] Initializing 1 device(s) in parallel with args: mgmt_addr=10.2.1.19,type=n3xx,product=n310,serial=313ABDA,claimed=False,addr=192.168.10.2,second_addr=192.168.20.2,master_clock_rate=125e6,use_dpdk=1
[INFO] [MPM.PeriphManager] init() called with device args 'product=n310,time_source=internal,master_clock_rate=125e6,clock_source=internal,use_dpdk=1,second_addr=192.168.20.2,mgmt_addr=10.2.1.19'.
[INFO] [0/DmaFIFO_0] Initializing block control (NOC ID: 0xF1F0D00000000004)
[INFO] [0/DmaFIFO_0] BIST passed (Throughput: 1344 MB/s)
[INFO] [0/DmaFIFO_0] BIST passed (Throughput: 1341 MB/s)
[INFO] [0/DmaFIFO_0] BIST passed (Throughput: 1348 MB/s)
[INFO] [0/DmaFIFO_0] BIST passed (Throughput: 1347 MB/s)
[INFO] [0/Radio_0] Initializing block control (NOC ID: 0x12AD100000011312)
[INFO] [0/Radio_1] Initializing block control (NOC ID: 0x12AD100000011312)
[INFO] [0/DDC_0] Initializing block control (NOC ID: 0xDDC0000000000000)
[INFO] [0/DDC_1] Initializing block control (NOC ID: 0xDDC0000000000000)
[INFO] [0/DUC_0] Initializing block control (NOC ID: 0xD0C0000000000002)
[INFO] [0/DUC_1] Initializing block control (NOC ID: 0xD0C0000000000002)
Using Device: Single USRP:
  Device: N300-Series Device
  Mboard 0: ni-n3xx-313ABDA
  RX Channel: 0
    RX DSP: 0
    RX Dboard: A
    RX Subdev: Magnesium
  RX Channel: 1
    RX DSP: 0
    RX Dboard: B
    RX Subdev: Magnesium
  TX Channel: 0
    TX DSP: 0
    TX Dboard: A
    TX Subdev: Magnesium
  TX Channel: 1
    TX DSP: 0
    TX Dboard: B
    TX Subdev: Magnesium

[00:00:03.728707] Setting device timestamp to 0...
[INFO] [MULTI_USRP]     1) catch time transition at pps edge
[INFO] [MULTI_USRP]     2) set times next pps (synchronously)
[00:00:05.331920] Testing receive rate 125.000000 Msps on 2 channels
[00:00:05.610789] Testing transmit rate 125.000000 Msps on 2 channels
[00:00:15.878071] Benchmark complete.


Benchmark rate summary:
  Num received samples:     2557247854
  Num dropped samples:      0
  Num overruns detected:    0
  Num transmitted samples:  2504266704
  Num sequence errors (Tx): 0
  Num sequence errors (Rx): 0
  Num underruns detected:   0
  Num late commands:        0
  Num timeouts (Tx):        0
  Num timeouts (Rx):        0


Done!

Tuning Notes

General Host Performance Tuning App Note

The Application Note linked below covers general performance tuning tips that should be applied:

Increasing num_recv_frames

If you experience Overflows at higher data rates, adding the device argument num_recv_frames=512 can help.
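For example (an illustrative invocation; combine the argument with your usual device arguments):

   ./benchmark_rate --rx_rate 125e6 --args "addr=192.168.10.2,use_dpdk=1,num_recv_frames=512"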

Full Rate Streaming

If you're streaming data at the full master clock rate, and there is no interpolation or decimation being performed on the FPGA, you can skip the DUC and DDC blocks within the FPGA with the following parameters (see the example after this list):

  • skip_ddc=1
  • skip_duc=1
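For example, streaming at the full 125 MS/s master clock rate of an N310 (an illustrative invocation, modeled on the benchmark_rate run above):

   ./benchmark_rate --rx_rate 125e6 --rx_subdev "A:0 B:0" --rx_channels 0,1 --args "addr=192.168.10.2,master_clock_rate=125e6,use_dpdk=1,skip_ddc=1,skip_duc=1"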

Full Rate on X3xx

If you're streaming two transmit channels at full rate (200e6) on the X3xx platform, you should additionally set the device arg:

  • enable_tx_dual_eth=1
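For example (illustrative; the addresses correspond to the default X3xx setup noted earlier):

   ./benchmark_rate --tx_rate 200e6 --tx_channels 0,1 --args "addr=192.168.30.2,second_addr=192.168.40.2,master_clock_rate=200e6,use_dpdk=1,enable_tx_dual_eth=1"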

Isolate CPUs

Isolating the CPUs that are used for DPDK can improve performance. This can be done by adding the isolcpus parameter to your grub configuration (GRUB_CMDLINE_LINUX_DEFAULT):

   isolcpus=2,3,4
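Combined with the hugepage settings from earlier, the full line in /etc/default/grub would then look like this (assuming the core list 2,3,4 from the example configuration; run sudo update-grub and reboot for it to take effect):

   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=pt intel_iommu=on hugepages=2048 isolcpus=2,3,4"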

Disable System Interrupts

Disabling system interrupts on the DPDK cores can reduce jitter and improve performance, generally by 1-3%. This can be done by adding the parameters below to your grub configuration (GRUB_CMDLINE_LINUX_DEFAULT):

   nohz_full=2,3,4 rcu_nocbs=2,3,4
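After rebooting, you can verify that the kernel applied the isolation settings (a quick check, not part of the original note):

   cat /sys/devices/system/cpu/isolated
   cat /sys/devices/system/cpu/nohz_full
   # Both should print 2-4 with the example settings above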

Disable Hyper-threading

In some applications which require the highest possible CPU performance per core, disabling hyper-threading can provide roughly a 10% increase in core performance, at the cost of having fewer core threads. Hyper-threading can be disabled within the BIOS, and the procedure varies by manufacturer.

(Translator's note: if the application controls its own thread count, hyper-threading can safely be disabled; this optimization is also suitable for single-task, single-process server environments.)

Streaming on Multiple Channels

If you're streaming on multiple channels simultaneously, you can create multiple streamer objects on separate threads. This can be accomplished with the `benchmark_rate` example by using the parameter `--multi_streamer`.
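For example (illustrative):

   ./benchmark_rate --rx_rate 125e6 --rx_subdev "A:0 B:0" --rx_channels 0,1 --multi_streamer --args "addr=192.168.10.2,second_addr=192.168.20.2,use_dpdk=1"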

Elevated Streaming Thread Priority

In UHD 4, streaming thread priorities can be elevated with the `uhd::set_thread_priority_safe()` function call. This can be accomplished with the benchmark_rate example by using parameter `--priority high`.
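For example (illustrative; elevating thread priority generally requires root privileges, which DPDK already requires):

   ./benchmark_rate --rx_rate 125e6 --priority high --args "addr=192.168.10.2,use_dpdk=1"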

Additional Tuning Notes from Intel

Known Issues / Troubleshooting

Underruns Every Second with DPDK + Ubuntu

With Linux kernels 5.10 and beyond, we have observed periodic underruns on systems that otherwise have no issues. These Linux kernel versions are the default for Ubuntu 20.04.3 LTS and later. The underrun issue is due to the RT_RUNTIME_SHARE feature being disabled by default in these versions of the Linux kernel (shown as NO_RT_RUNTIME_SHARE). The following procedure can be used to enable this feature. This process was tested on Linux kernel version 5.13; the procedure may be slightly different on other kernel versions. To determine the Linux kernel version of your system, in a terminal issue the command uname -r.

$ sudo -s
$ cd /sys/kernel/debug/sched
$ cat features

GENTLE_FAIR_SLEEPERS START_DEBIT NO_NEXT_BUDDY LAST_BUDDY CACHE_HOT_BUDDY WAKEUP_PREEMPTION NO_HRTICK NO_HRTICK_DL NO_DOUBLE_TICK NONTASK_CAPACITY TTWU_QUEUE SIS_PROP NO_WARN_DOUBLE_CLOCK RT_PUSH_IPI NO_RT_RUNTIME_SHARE NO_LB_MIN ATTACH_AGE_LOAD WA_IDLE WA_WEIGHT WA_BIAS UTIL_EST UTIL_EST_FASTUP NO_LATENCY_WARN ALT_PERIOD BASE_SLICE

$ echo RT_RUNTIME_SHARE > features
$ cat features

GENTLE_FAIR_SLEEPERS START_DEBIT NO_NEXT_BUDDY LAST_BUDDY CACHE_HOT_BUDDY WAKEUP_PREEMPTION NO_HRTICK NO_HRTICK_DL NO_DOUBLE_TICK NONTASK_CAPACITY TTWU_QUEUE SIS_PROP NO_WARN_DOUBLE_CLOCK RT_PUSH_IPI RT_RUNTIME_SHARE NO_LB_MIN ATTACH_AGE_LOAD WA_IDLE WA_WEIGHT WA_BIAS UTIL_EST UTIL_EST_FASTUP NO_LATENCY_WARN ALT_PERIOD BASE_SLICE
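Note that this debugfs setting does not persist across reboots. One way to reapply it automatically at boot (an assumption, not part of the original procedure) is a root crontab entry:

   # In root's crontab (sudo crontab -e): run once at every boot
   @reboot echo RT_RUNTIME_SHARE > /sys/kernel/debug/sched/features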