
Installing NFS and an iSCSI Target on Linux

NFS vs. iSCSI performance comparison

| iodepth/_low/_batch/_complete | numjobs | iops(read) iscsi/nfs | bw(read) iscsi/nfs | iops(write) iscsi/nfs | bw(write) iscsi/nfs |
| --- | --- | --- | --- | --- | --- |
| 64/16/16/16 | 16 | 29.4k/10.9k | 115MiB/42.6MiB | 7355/2728 | 28.7MiB/10.7MiB |
| 32/8/8/8 | 16 | 29.8k/11.2k | 117MiB/43.6MiB | 7464/2798 | 29.2MiB/10.9MiB |
| 32/8/8/8 | 8 | 28.9k/10.8k | 113MiB/42.3MiB | 7229/2711 | 28.2MiB/10.6MiB |

Compared with NFS, iSCSI causes almost no increase in host load and the page cache barely changes. This is thanks to Linux-IO (LIO) and its zero-copy design; we will see the performance gains of zero-copy again later with SPDK and vhost-based I/O virtualization.

 

SCSI & iSCSI

SCSI (Small Computer System Interface) is based on a client-server model: the SCSI client is called the initiator, and the server is called the target. iSCSI (Internet Small Computer System Interface), also known as IP-SAN, is a storage technology that carries the SCSI-3 protocol over IP networks. iSCSI is a block-level transport protocol built on top of TCP; it defines a set of block-based commands, and the initiator sends SCSI commands to the target to carry out storage and data read/write operations.

Basic iSCSI concepts

The two key components are the initiator and the target.

Target: an iSCSI target is the storage-device side (or an emulated storage device); its purpose is to export network disks to other hosts. The Linux kernel LIO module implements an iSCSI target; see the Linux Storage Stack Diagram.

Initiator: an iSCSI initiator is the client that consumes a target, usually a server. A host that wants to connect to an iSCSI target needs initiator software installed, such as open-iscsi.

Network Portal: on the initiator side it is identified by an IP address; on the target side by IP address plus port (IP:PORT).

Session: the group of TCP connections linking an initiator to a target. TCP connections can be added to a session, so one session may contain several connections, but a session only ever sees one target (useful with multiple NICs).

Connection: a single TCP connection between initiator and target.

CID (Connection ID): every connection has a CID that is unique within its session. The CID is generated by the initiator and passed to the target in the login request and again when the connection is closed with a logout.

SSID (Session ID): a session between an iSCSI initiator and an iSCSI target is identified by a session ID (SSID), a tuple made up of an initiator part (ISID) and a target part (Target Portal Group Tag). The ISID is explicitly chosen by the initiator when the session is established; the Target Portal Group Tag is implied by the TCP port the initiator chose when setting up the connection.

Portal Groups: iSCSI sessions support multiple connections, and some implementations can bundle connections established through several portals into one session. The network portals of an iSCSI network entity are organised into portal groups; associating a group with a session allows that session to bundle connections made through any portal in the group so that they work together. A session does not have to include every portal in its group. An iSCSI node may have one or more portal groups, but each portal used by iSCSI belongs to exactly one group of that node.

Target Portal Group Tag: a 16-bit identifier for a portal group. Within an iSCSI node, all portals with the same tag form one portal group.

iSCSI Task: an iSCSI task is an iSCSI request that requires a response.
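
Most of these concepts can be inspected on a live system once an initiator has logged in (the login itself is shown later in this article). A hedged illustration using open-iscsi's standard query commands; the exact output depends on your setup:

```bash
# one line per session: "tcp: [SID] portal,TPGT target-IQN"
iscsiadm -m session

# verbose view: per-session state, current portal, negotiated iSCSI
# parameters and the SCSI devices attached to the session
iscsiadm -m session -P 3
```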

Building an iSCSI target with Linux-IO (LIO)

1. Install targetcli and start the target service

On ubuntu-live-server 22.04, install targetcli and start target.service (or refer to the environment preparation article):

```bash
root@nvme:~# apt install -y targetcli-fb
root@nvme:~# systemctl start target.service
root@nvme:~# systemctl enable target.service
```

2. Configure the iSCSI target with targetcli. First run lsblk to list the block devices:

```bash
root@nvme:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
......
sda 8:0 0 1.8T 0 disk
├─sda2 8:2 0 200G 0 part /
└─sda5 8:5 0 1007K 0 part
nvme0n1 259:0 0 931.5G 0 disk
```

Run targetcli to enter its interactive shell (the # ... # notes in the listing below are hand-added explanations):

```bash
root@nvme:~# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ................................................................................ [...]
o- backstores ..........# storage types supported as iSCSI backends #.............. [...]
| o- block .............# block devices: disks, partitions, logical volumes #. [Storage Objects: 0]
| o- fileio ............# a file of a given size #........................... [Storage Objects: 0]
| o- pscsi .............# physical (pass-through) SCSI devices #............. [Storage Objects: 0]
| o- ramdisk ...........# RAM disk, contents lost on reboot #................ [Storage Objects: 0]
o- iscsi ................................................. [mutual disc auth, Targets: 0]
o- loopback ................................................................ [Targets: 0]
o- vhost ................................................................... [Targets: 0]
/>
```

Step 1: create a backstores/block object named iscsi_nvme on top of /dev/nvme0n1.

```bash
/> cd backstores/block/
/backstores/block>
/backstores/block> create iscsi_nvme /dev/nvme0n1
Created block storage object iscsi_nvme using /dev/nvme0n1.
```

Step 2: create the iSCSI target. The name can be generated automatically by the system; it is a unique string describing the shared resource and follows the IQN (iSCSI Qualified Name) naming convention, iqn.yyyy-mm.*:*. Unless automatic portal creation has been turned off (/> set group=global auto_add_default_portal=false), a portal group and a default portal listening on 0.0.0.0:3260 are created together with the target:

```bash
/> get global
GLOBAL CONFIG GROUP
===================
auto_add_default_portal=true
----------------------------
If true, adds a portal listening on all IPs to new targets.

auto_add_mapped_luns=true
-------------------------
If true, automatically create node ACLs mapped LUNs after creating a new target LUN or a new node ACL

auto_cd_after_create=false
--------------------------
If true, changes current path to newly created objects.

auto_enable_tpgt=true
---------------------
If true, automatically enables TPGTs upon creation.
auto_save_on_exit=true
----------------------
If true, saves configuration on exit.
auto_use_daemon=false
---------------------
If true, commands will be sent to targetclid.
color_command=cyan
------------------
Color to use for command completions.
color_default=none
------------------
Default text display color.
color_keyword=cyan
------------------
Color to use for keyword completions.
color_mode=true
---------------
Console color display mode.
color_parameter=magenta
-----------------------
Color to use for parameter completions.
color_path=magenta
------------------
Color to use for path completions
daemon_use_batch_mode=false
---------------------------
If true, use batch mode for daemonized approach.
export_backstore_name_as_model=true
-----------------------------------
If true, the backstore name is used for the scsi inquiry model name.
logfile=/root/.targetcli/log.txt
--------------------------------
Logfile to use.
loglevel_console=info
---------------------
Log level for messages going to the console.
loglevel_file=debug
-------------------
Log level for messages going to the log file.
max_backup_files=10
-------------------
Max no. of configurations to be backed up in /etc/rtslib-fb-target/backup/ directory.
prompt_length=30
----------------
Max length of the shell prompt path, 0 for infinite.
tree_max_depth=0
----------------
Maximum depth of displayed node tree.
tree_round_nodes=true
---------------------
Tree node display style.
tree_show_root=true
-------------------
Whether or not to display tree root.
tree_status_mode=true
---------------------
Whether or not to display status in tree.
/>
/> cd /iscsi
/iscsi>
/iscsi> create iqn.2023-05.nvme.iscsi.com:nvmeiscsi
Created target iqn.2023-05.nvme.iscsi.com:nvmeiscsi.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
```

Step 3: create the LUN.

```bash
/> cd iscsi/iqn.2023-05.nvme.iscsi.com:nvmeiscsi/tpg1/luns
/iscsi/iqn.20...csi/tpg1/luns> create /backstores/block/iscsi_nvme
Created LUN 0
```

Step 4: configure discovery_auth and the ACL (lines starting with # below are hand-added comments).

```bash
# set discovery auth credentials
/iscsi> set discovery_auth userid=disUser password=disUserPwd
Parameter userid is now 'disUser'.
Parameter password is now 'disUserPwd'.
/iscsi> set discovery_auth mutual_userid=mDisUser mutual_password=mDisUserPwd
Parameter mutual_userid is now 'mDisUser'.
Parameter mutual_password is now 'mDisUserPwd'.
/iscsi> get discovery_auth
DISCOVERY_AUTH CONFIG GROUP
===========================
enable=True
-----------
The enable discovery_auth parameter.
mutual_password=mDisUserPwd
---------------------------
The mutual_password discovery_auth parameter.
mutual_userid=mDisUser
----------------------
The mutual_userid discovery_auth parameter.
password=disUserPwd
-------------------
The password discovery_auth parameter.
userid=disUser
--------------
The userid discovery_auth parameter.
/iscsi>
# set target (TPG) auth credentials (read-only access)
/iscsi> cd iqn.2023-05.nvme.iscsi.com:nvmeiscsi/tpg1/
/iscsi/iqn.20...vmeiscsi/tpg1> set attribute authentication=1 generate_node_acls=1
Parameter authentication is now '1'.
Parameter generate_node_acls is now '1'.
/iscsi/iqn.20...vmeiscsi/tpg1> set auth userid=clientUser password=clientUserPwd
Parameter userid is now 'clientUser'.
Parameter password is now 'clientUserPwd'.
/iscsi/iqn.20...vmeiscsi/tpg1> set auth mutual_userid=mClientUser mutual_password=mClientUserPwd
Parameter mutual_userid is now 'mClientUser'.
Parameter mutual_password is now 'mClientUserPwd'.
/iscsi/iqn.20...vmeiscsi/tpg1> get auth
AUTH CONFIG GROUP
=================
mutual_password=mClientUserPwd
------------------------------
The mutual_password auth parameter.
mutual_userid=mClientUser
-------------------------
The mutual_userid auth parameter.
password=clientUserPwd
----------------------
The password auth parameter.
userid=clientUser
-----------------
The userid auth parameter.
/iscsi/iqn.20...vmeiscsi/tpg1>
/iscsi/iqn.20...vmeiscsi/tpg1> cd acls/
# create the ACL and set its auth credentials; the ACL name is used as the InitiatorName (read/write access)
/iscsi/iqn.20...csi/tpg1/acls> create iqn.2023-05.vm-nvme.iscsi.com:client001
Created Node ACL for iqn.2023-05.vm-nvme.iscsi.com:client001
Created mapped LUN 0.
/iscsi/iqn.20...csi/tpg1/acls> cd iqn.2023-05.vm-nvme.iscsi.com:client001/
/iscsi/iqn.20...com:client001> set auth userid=rwClientUser password=rwClientUserPwd
Parameter userid is now 'rwClientUser'.
Parameter password is now 'rwClientUserPwd'.
/iscsi/iqn.20...com:client001> set auth mutual_userid=mRwClientUser mutual_password=mRwClientUserPwd
Parameter mutual_userid is now 'mRwClientUser'.
Parameter mutual_password is now 'mRwClientUserPwd'.
/iscsi/iqn.20...com:client001> get auth
AUTH CONFIG GROUP
=================
mutual_password=mRwClientUserPwd
--------------------------------
The mutual_password auth parameter.
mutual_userid=mRwClientUser
---------------------------
The mutual_userid auth parameter.
password=rwClientUserPwd
------------------------
The password auth parameter.
userid=rwClientUser
-------------------
The userid auth parameter.
/iscsi/iqn.20...com:client001>
/iscsi/iqn.20...com:client001> cd /
```

Step 5: finally, review the complete iSCSI configuration.

```bash
/> cd iscsi/iqn.2023-05.nvme.iscsi.com:nvmeiscsi/tpg1/
/iscsi/iqn.20...vmeiscsi/tpg1> cd /
/> ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 1]
| | o- iscsi_nvme ................................................................. [/dev/nvme0n1 (931.5GiB) write-thru activated]
| | o- alua ................................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi .......................................................................................... [mutual disc auth, Targets: 1]
| o- iqn.2023-05.nvme.iscsi.com:nvmeiscsi .............................................................................. [TPGs: 1]
| o- tpg1 .................................................................................... [gen-acls, tpg-auth, mutual auth]
| o- acls .......................................................................................................... [ACLs: 1]
| | o- iqn.2023-05.vm-nvme.iscsi.com:client001 ................................................ [auth via tpg, Mapped LUNs: 1]
| | o- mapped_lun0 ............................................................................ [lun0 block/iscsi_nvme (rw)]
| o- luns .......................................................................................................... [LUNs: 1]
| | o- lun0 ............................................................. [block/iscsi_nvme (/dev/nvme0n1) (default_tg_pt_gp)]
| o- portals .................................................................................................... [Portals: 1]
| o- 0.0.0.0:3260 ..................................................................................................... [OK]
o- loopback ......................................................................................................... [Targets: 0]
o- vhost ............................................................................................................ [Targets: 0]
/> exit
```
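
The same target can also be built non-interactively, which is convenient for provisioning scripts: targetcli accepts a configuration path plus command as arguments. The sketch below is not part of the original walkthrough; it simply replays the interactive steps above (device path, IQNs and credentials are the ones used in this article and should be adapted):

```bash
#!/bin/bash
# Non-interactive replay of the targetcli steps performed above (sketch).
set -e
targetcli /backstores/block create iscsi_nvme /dev/nvme0n1
targetcli /iscsi create iqn.2023-05.nvme.iscsi.com:nvmeiscsi
targetcli /iscsi/iqn.2023-05.nvme.iscsi.com:nvmeiscsi/tpg1/luns create /backstores/block/iscsi_nvme
targetcli /iscsi set discovery_auth userid=disUser password=disUserPwd \
          mutual_userid=mDisUser mutual_password=mDisUserPwd
targetcli /iscsi/iqn.2023-05.nvme.iscsi.com:nvmeiscsi/tpg1 set attribute authentication=1 generate_node_acls=1
targetcli /iscsi/iqn.2023-05.nvme.iscsi.com:nvmeiscsi/tpg1 set auth userid=clientUser password=clientUserPwd
targetcli /iscsi/iqn.2023-05.nvme.iscsi.com:nvmeiscsi/tpg1/acls create iqn.2023-05.vm-nvme.iscsi.com:client001
targetcli /iscsi/iqn.2023-05.nvme.iscsi.com:nvmeiscsi/tpg1/acls/iqn.2023-05.vm-nvme.iscsi.com:client001 \
          set auth userid=rwClientUser password=rwClientUserPwd
targetcli saveconfig
```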

Step 6: adjust the firewall (open the port, or simply disable the Ubuntu firewall).

```bash
# open port 3260
root@nvme:~# ufw allow 3260
Rules updated
Rules updated (v6)

# or disable the Ubuntu firewall entirely
root@nvme:~# ufw disable
Firewall stopped and disabled on system startup
root@nvme:~#
```
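
With the service running and the firewall open, it is worth a quick check that LIO is actually listening on the iSCSI port; a hedged check using ss from iproute2 (not part of the original walkthrough):

```bash
# expect a LISTEN entry for 0.0.0.0:3260 if the default portal is up
root@nvme:~# ss -lnt | grep 3260
```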

Configuring the iSCSI initiator

1. Start the qemu-kvm virtual machine (see the environment preparation article for how to launch the qemu VM).

2. Switch to the root user, then install and configure open-iscsi:

```bash
root@nvme:~# ssh ubuntu@192.168.2.112
Welcome to Ubuntu 22.04.2 LTS (GNU/Linux 5.15.0-71-generic x86_64)

......

Expanded Security Maintenance for Applications is not enabled.

5 updates can be applied immediately.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

Last login: Fri May 12 03:27:02 2023 from 192.168.2.111
ubuntu@vm-nvme:~$ sudo su -
root@vm-nvme:~#
root@vm-nvme:~# apt install -y open-iscsi

# edit /etc/iscsi/initiatorname.iscsi; after the change it reads:
root@vm-nvme:~# cat /etc/iscsi/initiatorname.iscsi | grep -v -E "^$|^#"
InitiatorName=iqn.2023-05.vm-nvme.iscsi.com:client001
root@vm-nvme:~#

# edit /etc/iscsi/iscsid.conf; after the change the auth settings read:
root@vm-nvme:~# cat /etc/iscsi/iscsid.conf | grep -v -E "^$|^#" | grep auth
node.session.auth.authmethod = CHAP
node.session.auth.username = rwClientUser
node.session.auth.password = rwClientUserPwd
node.session.auth.username_in = mRwClientUser
node.session.auth.password_in = mRwClientUserPwd
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = disUser
discovery.sendtargets.auth.password = disUserPwd
discovery.sendtargets.auth.username_in = mDisUser
discovery.sendtargets.auth.password_in = mDisUserPwd
```

3. Discover the iSCSI target, log in, and run lsblk to see the new iSCSI device (/dev/sdc):

```bash
root@vm-nvme:~# iscsiadm -m discovery -t sendtargets -p 192.168.2.111:3260
192.168.2.111:3260,1 iqn.2023-05.nvme.iscsi.com:nvmeiscsi
root@vm-nvme:~#
root@vm-nvme:~# iscsiadm -m node -T iqn.2023-05.nvme.iscsi.com:nvmeiscsi --login
Logging in to [iface: default, target: iqn.2023-05.nvme.iscsi.com:nvmeiscsi, portal: 192.168.2.111,3260]
Login to [iface: default, target: iqn.2023-05.nvme.iscsi.com:nvmeiscsi, portal: 192.168.2.111,3260] successful.
root@vm-nvme:~#
root@vm-nvme:~# iscsiadm -m session show
tcp: [2] 192.168.2.111:3260,1 iqn.2023-05.nvme.iscsi.com:nvmeiscsi (non-flash)
root@vm-nvme:~#
root@vm-nvme:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
fd0 2:0 1 4K 0 disk
......
sdb 8:16 0 32.2G 0 disk
├─sdb1 8:17 0 32.1G 0 part /
├─sdb14 8:30 0 4M 0 part
└─sdb15 8:31 0 106M 0 part /boot/efi
sdc 8:32 0 931.5G 1 disk
vda 252:0 0 366K 0 disk
root@vm-nvme:~#
```
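
Unless node.startup was changed in iscsid.conf, the login above is manual and will not survive a reboot. If the VM should re-attach the target automatically, the node's startup mode can be switched; a hedged sketch using standard iscsiadm options (adjust the IQN and portal to your setup):

```bash
# make the session come back automatically when iscsid starts
root@vm-nvme:~# iscsiadm -m node -T iqn.2023-05.nvme.iscsi.com:nvmeiscsi \
    -p 192.168.2.111:3260 -o update -n node.startup -v automatic
```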

4. Time for some fio benchmarking. Before testing, look at the network: the host (192.168.2.111) and the VM (192.168.2.112) are connected through the host's virtual bridge kvmbr0. First measure the bandwidth between the two systems with iperf, installing it on both with apt install -y iperf. If the install fails inside the VM, overwrite the VM's /etc/apt/sources.list with the host's /etc/apt/sources.list and run apt install -y iperf again.

```bash
# host 192.168.2.111: install and start the iperf server
root@nvme:~# apt install -y iperf
root@nvme:~# iperf -s -f M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 9.54 MByte (default)
------------------------------------------------------------
[ 1] local 192.168.2.111 port 5001 connected with 192.168.2.112 port 49926
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-10.0011 sec 37436 MBytes 3743 MBytes/sec
root@nvme:~#

# VM 192.168.2.112: run the iperf client
root@vm-nvme:~# apt install -y iperf
root@vm-nvme:~# iperf -f G -c 192.168.2.111
------------------------------------------------------------
Client connecting to 192.168.2.111, TCP port 5001
TCP window size: 595 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.2.112 port 56570 connected with 192.168.2.111 port 5001
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-10.0116 sec 36.5 GBytes 3.64 GBytes/sec
root@vm-nvme:~#
```

The iperf result shows the host-to-VM link sustaining roughly 3.6-3.7 GBytes/s (around 30 Gbps), so network bandwidth is clearly sufficient and will not be the bottleneck for iSCSI traffic.

With bandwidth confirmed, run fio against the iSCSI device, using the same fio parameters as the local test in the earlier article "Linux开源存储漫谈——IO性能测试利器":

```bash
root@vm-nvme:~/fio# lsscsi -l
[0:0:1:0] disk ATA QEMU HARDDISK 2.5+ /dev/sda
state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30
[1:0:0:0] disk ATA QEMU HARDDISK 2.5+ /dev/sdb
state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30
[2:0:0:0] disk LIO-ORG iscsi_nvme 4.0 /dev/sdc
state=running queue_depth=128 scsi_level=7 type=0 device_blocked=0 timeout=30
root@vm-nvme:~/fio#
root@vm-nvme:~/fio# cat nread.fio
[global]
bs=4096
rw=read
ioengine=libaio
size=50G
direct=1
iodepth=256
iodepth_batch=128
iodepth_low=128
iodepth_batch_complete=128
userspace_reap
group_reporting
[test]
numjobs=1
filename=/dev/sdc
root@vm-nvme:~/fio# fio nread.fio
test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=256
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=1183MiB/s][r=303k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2448: Fri May 12 06:25:25 2023
read: IOPS=299k, BW=1168MiB/s (1225MB/s)(50.0GiB/43839msec)
slat (usec): min=69, max=1855, avg=93.69, stdev=16.30
clat (usec): min=368, max=12692, avg=745.73, stdev=115.29
lat (usec): min=457, max=13858, avg=839.43, stdev=119.77
clat percentiles (usec):
| 1.00th=[ 586], 5.00th=[ 685], 10.00th=[ 693], 20.00th=[ 709],
| 30.00th=[ 717], 40.00th=[ 725], 50.00th=[ 734], 60.00th=[ 742],
| 70.00th=[ 758], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 865],
| 99.00th=[ 979], 99.50th=[ 1074], 99.90th=[ 1778], 99.95th=[ 2311],
| 99.99th=[ 4359]
bw ( MiB/s): min= 1001, max= 1217, per=100.00%, avg=1168.30, stdev=27.87, samples=87
iops : min=256256, max=311552, avg=299084.48, stdev=7134.34, samples=87
lat (usec) : 500=0.22%, 750=66.40%, 1000=32.58%
lat (msec) : 2=0.74%, 4=0.05%, 10=0.01%, 20=0.01%
cpu : usr=7.08%, sys=23.74%, ctx=535537, majf=0, minf=266
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
issued rwts: total=13107200,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
READ: bw=1168MiB/s (1225MB/s), 1168MiB/s-1168MiB/s (1225MB/s-1225MB/s), io=50.0GiB (53.7GB), run=43839-43839msec

Disk stats (read/write):
sdc: ios=203884/0, merge=12844944/0, ticks=149896/0, in_queue=149896, util=99.82%
root@vm-nvme:~/fio#
```

The numbers show the overhead of the kernel NVMe driver plus LIO iSCSI target path: compared with the local test, IOPS falls from 728k to 299k and bandwidth from 2842MiB/s to 1183MiB/s, a loss of roughly 60% in both, while average latency grows by nearly 1.5x, from 338usec to 839usec.
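
As a quick sanity check on those percentages (the 728k IOPS, 2842MiB/s and 338usec figures are the local NVMe results quoted from the earlier article):

```bash
# fraction of local performance retained over iSCSI, and the latency ratio
echo "scale=2; 299/728"   | bc   # ~0.41 of local IOPS -> ~59% lost
echo "scale=2; 1183/2842" | bc   # ~0.41 of local BW   -> ~58% lost
echo "scale=2; 839/338"   | bc   # ~2.48x latency, i.e. an increase of ~1.5x
```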

iSCSI mixed random read/write (80/20 read/write mix)

```bash
[global]
bs=4096
rw=randrw
rwmixwrite=20
ioengine=libaio
direct=1
group_reporting
iodepth=32
iodepth_batch=8
iodepth_low=8
iodepth_batch_complete=8
userspace_reap
runtime=60
[test]
numjobs=16
filename=/dev/sdc
```
| iodepth/_low/_batch/_complete | numjobs | iops(read) | bw(read) | iops(write) | bw(write) |
| --- | --- | --- | --- | --- | --- |
| 64/16/16/16 | 16 | 29.4k | 115MiB/s | 7355 | 28.7MiB/s |
| 32/8/8/8 | 16 | 29.8k | 117MiB/s | 7464 | 29.2MiB/s |
| 32/8/8/8 | 8 | 28.9k | 113MiB/s | 7229 | 28.2MiB/s |

Compared with the kernel NVMe driver results obtained with the same parameters in "Linux开源存储漫谈——IO性能测试利器", about 25% of the performance is lost overall.

NFS performance test

NFS is implemented inside the Linux kernel, and it is built on RPC. The kernel RPC implementation consists of roughly three parts: the interface to the layers above it, the control logic, and the transport framework; the upper-layer interface and the transport are both pluggable and replaceable, and NFS is simply an RPC application plugged into that upper-layer interface. A quick way to observe this layering is sketched below. With the implementation background covered, let's set up the NFS service.
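
Once the NFS server is running (it is set up below), that RPC layering can be observed directly; a hedged check, assuming rpcbind is active (program names and versions may vary by distribution):

```bash
# list the RPC programs registered with rpcbind; nfs, mountd and nlockmgr
# are the RPC services behind the NFS stack
root@nvme:~# rpcinfo -p localhost | grep -E "portmapper|nfs|mountd|nlockmgr"
```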

Tearing down the iSCSI configuration

On the VM, log out of the iSCSI session and disable the iSCSI service:

```bash
root@vm-nvme:~# iscsiadm -m node --logout
root@vm-nvme:~# systemctl stop iscsi.service
root@vm-nvme:~# systemctl disable iscsi.service
```

On the host, enter targetcli and delete the target and the backstores/block object:

```bash
# remove the iscsi target
root@nvme:~# targetcli
/> cd iscsi
/iscsi> delete iqn.2023-05.nvme.iscsi.com:nvmeiscsi
/iscsi> cd /backstores/block
/backstores/block> delete iscsi_nvme
/backstores/block> exit
root@nvme:~#

# stop and disable the target service
root@nvme:~# systemctl stop target.service
root@nvme:~# systemctl disable target.service
```

Configure the NFS server on the host (see: Install the NFS Server on Ubuntu):

```bash
# install nfs-server
root@nvme:~# apt install -y nfs-kernel-server

# mkfs the nvme disk and mount it at /mnt/nvme
root@nvme:~# mkdir -p /mnt/nvme
root@nvme:~# mkfs -t ext4 /dev/nvme0n1
root@nvme:~# mount -t ext4 /dev/nvme0n1 /mnt/nvme
root@nvme:~# mkdir /mnt/nvme/share
root@nvme:~# chown nobody:nogroup /mnt/nvme/share
root@nvme:~# chmod 777 /mnt/nvme/share

# append "/mnt/nvme/share *(rw,sync,fsid=0,crossmnt,no_subtree_check)" to /etc/exports
root@nvme:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check)
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
/mnt/nvme/share *(rw,sync,fsid=0,crossmnt,no_subtree_check)

# start the nfs service
root@nvme:~# systemctl start nfs-kernel-server
```

On the VM (see: Install the NFS Server on Ubuntu):

```bash
root@vm-nvme:~# apt install -y nfs-common
root@vm-nvme:~# mkdir -p /mnt/nvme
root@vm-nvme:~# mount -t nfs 192.168.2.111:/mnt/nvme/share /mnt/nvme
```
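
Before benchmarking it is worth confirming which NFS version and mount options were negotiated, since the protocol version and rsize/wsize influence the numbers below; a hedged check with standard tools (not part of the original walkthrough):

```bash
# show the NFS mount with its negotiated vers=, rsize and wsize options
root@vm-nvme:~# nfsstat -m
# alternatively
root@vm-nvme:~# findmnt /mnt/nvme -o TARGET,SOURCE,FSTYPE,OPTIONS
```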

Testing NFS performance with fio

```bash
# run the same command on both the host and the VM:
# fio -name=write -rw=write -ioengine=libaio -direct=1 -iodepth=64 -numjobs=1 -size=5G

# host first
root@nvme:~# fio -name=write -rw=write -ioengine=libaio -direct=1 -iodepth=64 -numjobs=1 -size=5G -filename=/mnt/nvme/share/local-fio-5g
write: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
......
Jobs: 1 (f=1): [W(1)][100.0%][w=527MiB/s][w=135k IOPS][eta 00m:00s]
write: (groupid=0, jobs=1): err= 0: pid=2458: Sun May 14 07:05:39 2023
write: IOPS=135k, BW=527MiB/s (553MB/s)(5120MiB/9715msec); 0 zone resets
......
Run status group 0 (all jobs):
WRITE: bw=527MiB/s (553MB/s), 527MiB/s-527MiB/s (553MB/s-553MB/s), io=5120MiB (5369MB), run=9715-9715msec

Disk stats (read/write):
nvme0n1: ios=0/1300604, merge=0/420, ticks=0/28956, in_queue=28957, util=98.93%
root@nvme:~#

# then inside the VM, over the NFS mount
root@vm-nvme:~# fio -name=write -rw=write -ioengine=libaio -direct=1 -iodepth=64 -numjobs=1 -size=5G -filename=/mnt/nvme/nfs-fio-5g
......
Jobs: 1 (f=1): [W(1)][100.0%][w=26.8MiB/s][w=6857 IOPS][eta 00m:00s]
write: (groupid=0, jobs=1): err= 0: pid=2022: Sun May 14 07:10:07 2023
write: IOPS=6815, BW=26.6MiB/s (27.9MB/s)(5120MiB/192325msec); 0 zone resets
......
Run status group 0 (all jobs):
WRITE: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=5120MiB (5369MB), run=192325-192325msec
root@vm-nvme:~#
```

The gap is striking: 135k vs. 6815 IOPS and 527MiB/s vs. 26.8MiB/s for sequential writes. Sequential reads show a smaller but still large gap. And if you run a few more tests while watching top, vmstat and sar, you will notice the host load climbing quickly as iowait rises; in other words, when resources are limited or contended, the NFS performance problem only gets worse. A sketch of how to observe this is given below.
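
To watch the host-side load while the VM runs fio over NFS, a hedged sketch with standard observation tools (sar and iostat come from the sysstat package; these commands are not part of the original test runs):

```bash
# on the host, in a second terminal while fio runs in the VM
vmstat 1              # watch the 'wa' (iowait) and 'b' (blocked) columns
sar -u 1              # per-second CPU breakdown including %iowait
iostat -x 1 nvme0n1   # per-device utilisation, queue size and await
```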

NFS mixed random read/write (80/20 read/write mix)

To keep the host page cache from skewing the results, run echo 3 > /proc/sys/vm/drop_caches on the host before every fio run; without dropping the caches the numbers look better (by roughly 1.8x). A small helper for this is sketched below.
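
A minimal helper for that methodology, assuming it is run as root on the host before each measurement (sync first so dirty pages are written back before the cache is dropped):

```bash
#!/bin/bash
# drop_caches.sh - flush dirty pages, then drop page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches
```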

```bash
[global]
bs=4096
rw=randrw
rwmixwrite=20
ioengine=libaio
direct=1
group_reporting
iodepth=32
iodepth_batch=8
iodepth_low=8
iodepth_batch_complete=8
userspace_reap
runtime=60
[test]
numjobs=16
filename=/mnt/nvme/nfs-fio-10g
```
| iodepth/_low/_batch/_complete | numjobs | iops(read) iscsi/nfs | bw(read) iscsi/nfs | iops(write) iscsi/nfs | bw(write) iscsi/nfs |
| --- | --- | --- | --- | --- | --- |
| 64/16/16/16 | 16 | 29.4k/10.9k | 115MiB/42.6MiB | 7355/2728 | 28.7MiB/10.7MiB |
| 32/8/8/8 | 16 | 29.8k/11.2k | 117MiB/43.6MiB | 7464/2798 | 29.2MiB/10.9MiB |
| 32/8/8/8 | 8 | 28.9k/10.8k | 113MiB/42.3MiB | 7229/2711 | 28.2MiB/10.6MiB |

Compared with NFS, iSCSI causes almost no increase in host load and the page cache barely changes, thanks to Linux-IO and its zero-copy design; we will see the performance gains of zero-copy again later with SPDK and vhost-based I/O virtualization.

 

Reposting is welcome; please credit the source.

 

 

 