
Resolving frequent iSCSI client errors in /var/log/messages

2023-01-17 13:53  AlfredZhao

Problem description:

In the RAC test environment installed on my workstation, iSCSI is used to simulate shared storage. The environment itself runs fine, but /var/log/messages keeps logging errors like the following:

[root@db01rac2 ~]# tail -20f /var/log/messages
Jan 13 23:08:37 db01rac2 iscsid: iscsid: connecting to 192.168.1.10:3260
Jan 13 23:08:37 db01rac2 iscsid: iscsid: connected local port 64350 to 192.168.1.10:3260
Jan 13 23:08:37 db01rac2 iscsid: iscsid: login response status 0000
Jan 13 23:08:37 db01rac2 iscsid: iscsid: deleting a scheduled/waiting thread!
Jan 13 23:08:37 db01rac2 iscsid: iscsid: connection1:0 is operational after recovery (1 attempts)
Jan 13 23:08:39 db01rac2 iscsid: iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
Jan 13 23:08:39 db01rac2 iscsid: iscsid: re-opening session 1 (reopen_cnt 0)
Jan 13 23:08:39 db01rac2 iscsid: iscsid: disconnecting conn 0x55beee57de90, fd 6
Jan 13 23:08:39 db01rac2 kernel: connection1:0: detected conn error (1020)
Jan 13 23:08:41 db01rac2 iscsid: iscsid: Poll was woken by an alarm
Jan 13 23:08:41 db01rac2 iscsid: iscsid: re-opening session 1 (reopen_cnt 0)

According to

the explanation is:

This is normal and can be safely ignored.
The error sequence indicates that there was a temporary problem in connectivity to the storage backend but it was safely recovered within seconds.

However, these errors keep appearing in /var/log/messages. Although the RAC itself is not affected, a log being flooded like this is surely not normal, so I kept digging:

A SUSE article puts it this way:

The systemd journal, by default also written to /var/log/messages, fills with similar messages when an iSCSI LUN is shared across multiple nodes.

This scenario matches mine exactly: two nodes share the same iSCSI LUN here. Presumably the target treats two logins that present the same initiator IQN as one conflicting session and keeps tearing the older one down, which is exactly the pattern in the log above.
Still, can the error simply be ignored, or is there a setting that fixes it?

1. Ensure each node's IQN is unique

To stop this from happening, the IQN on each node has to be changed so that the names do not collide.
In fact, this file was originally different on my two nodes, but I changed it to the same value during configuration:

cat /etc/iscsi/initiatorname.iscsi

[root@db01rac1 ~]# cat /etc/iscsi/initiatorname.iscsi
#InitiatorName=iqn.1988-12.com.oracle:178a747c44
InitiatorName=iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6:client

[root@db01rac2 ~]# cat /etc/iscsi/initiatorname.iscsi
#InitiatorName=iqn.1988-12.com.oracle:b8e5b14ad0fa
InitiatorName=iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6:client

The problem is that, with my current configuration, if the names are different the shared disks are no longer recognized and mounted at boot;

So why are they not recognized? Could it be an ACL configuration problem on the iSCSI target?

Let's verify. First, change /etc/iscsi/initiatorname.iscsi back to its default value, so the two nodes differ:

[root@db01rac1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:178a747c44

[root@db01rac2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:b8e5b14ad0fa
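
One detail worth noting: iscsid reads /etc/iscsi/initiatorname.iscsi only when the daemon starts, so editing the file does not change the name presented to the target until iscsid is restarted (or the node is rebooted). That is presumably why the logout/login test below still behaves as before, while a login after a reboot does not. A minimal sketch of applying the change immediately, assuming a systemd-based system such as this Oracle Linux setup:

[root@db01rac1 ~]# vi /etc/iscsi/initiatorname.iscsi       # set this node's own InitiatorName
[root@db01rac1 ~]# systemctl restart iscsid                # make iscsid pick up the new name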

Without rebooting, a direct logout and login works fine:

iscsiadm -m node                              # list the recorded node entries
iscsiadm -m discovery -t st -p 192.168.1.10   # sendtargets discovery against the storage server
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6 -p 192.168.1.10 --login   # log in to the target

-- Using node 2 as an example:
[root@db01rac2 ~]# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6 -p 192.168.1.10 --logout
Logging out of session [sid: 1, target: iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6, portal: 192.168.1.10,3260]
Logout of [sid: 1, target: iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6, portal: 192.168.1.10,3260] successful.
[root@db01rac2 ~]#
[root@db01rac2 ~]#
[root@db01rac2 ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb         251:16   0  100G  0 disk
└─ora-u01   252:2    0  100G  0 lvm  /u01
sr0          11:0    1 1024M  0 rom
vda         251:0    0   10G  0 disk
├─vda2      251:2    0    9G  0 part
│ ├─ol-swap 252:1    0    1G  0 lvm  [SWAP]
│ └─ol-root 252:0    0    8G  0 lvm  /
└─vda1      251:1    0    1G  0 part /boot
[root@db01rac2 ~]#
[root@db01rac2 ~]#
[root@db01rac2 ~]# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6 -p 192.168.1.10 --login
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6, portal: 192.168.1.10,3260] (multiple)
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6, portal: 192.168.1.10,3260] successful.
[root@db01rac2 ~]#
[root@db01rac2 ~]#
[root@db01rac2 ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdf           8:80   0    1G  0 disk
sdd           8:48   0    1G  0 disk
vdb         251:16   0  100G  0 disk
└─ora-u01   252:2    0  100G  0 lvm  /u01
sr0          11:0    1 1024M  0 rom
sdg           8:96   0   30G  0 disk
sde           8:64   0    1G  0 disk
vda         251:0    0   10G  0 disk
├─vda2      251:2    0    9G  0 part
│ ├─ol-swap 252:1    0    1G  0 lvm  [SWAP]
│ └─ol-root 252:0    0    8G  0 lvm  /
└─vda1      251:1    0    1G  0 part /boot
sdh           8:112  0   16G  0 disk

But after rebooting the machine, the login no longer works with the existing configuration:

[root@db01rac1 ~]# iscsiadm -m node
192.168.1.10:3260,1 iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6
[root@db01rac1 ~]# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6 -p 192.168.1.10 --login
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6, portal: 192.168.1.10,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6, portal: 192.168.1.10,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals

2. Fix the ACL configuration on the iSCSI target

So it really does look like an ACL configuration problem on the iSCSI target. Let's check the ACL configuration on the server side:

-- On the storage server, in the targetcli shell:
/> cd /iscsi/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6/tpg1/acls/
/iscsi/iqn.20...2b6/tpg1/acls> create iqn.1988-12.com.oracle:178a747c44
/iscsi/iqn.20...2b6/tpg1/acls> create iqn.1988-12.com.oracle:b8e5b14ad0fa
/iscsi/iqn.20...2b6/tpg1/acls> delete iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6:client
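
Depending on the targetcli defaults (auto_save_on_exit), these changes may only be written to disk when the shell exits cleanly. To make sure the new ACLs survive a reboot of the storage server, they can be persisted explicitly; a minimal sketch:

/iscsi/iqn.20...2b6/tpg1/acls> cd /
/> saveconfig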

Trying to log in again from node 1, it now succeeds:

[root@db01rac1 ~]# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6 -p 192.168.1.10 --login
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6, portal: 192.168.1.10,3260] (multiple)
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6, portal: 192.168.1.10,3260] successful.

I changed the ACLs to the two RAC nodes' own IQNs and deleted the previous shared entry. (Presumably because targetcli's default auto_add_mapped_luns setting is enabled, creating each ACL automatically mapped all existing LUNs, which is why each node shows five mapped LUNs below.) The final configuration looks like this:

/iscsi/iqn.20...2b6/tpg1/acls> pwd
/iscsi/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.d5fd6c3922b6/tpg1/acls
/iscsi/iqn.20...2b6/tpg1/acls> ls
o- acls .................................................................................................................. [ACLs: 2]
  o- iqn.1988-12.com.oracle:178a747c44 ............................................................................ [Mapped LUNs: 5]
  | o- mapped_lun0 ......................................................................................... [lun0 block/disk1 (rw)]
  | o- mapped_lun1 ......................................................................................... [lun1 block/disk2 (rw)]
  | o- mapped_lun2 ......................................................................................... [lun2 block/disk3 (rw)]
  | o- mapped_lun3 ......................................................................................... [lun3 block/disk4 (rw)]
  | o- mapped_lun4 ......................................................................................... [lun4 block/disk5 (rw)]
  o- iqn.1988-12.com.oracle:b8e5b14ad0fa .......................................................................... [Mapped LUNs: 5]
    o- mapped_lun0 ......................................................................................... [lun0 block/disk1 (rw)]
    o- mapped_lun1 ......................................................................................... [lun1 block/disk2 (rw)]
    o- mapped_lun2 ......................................................................................... [lun2 block/disk3 (rw)]
    o- mapped_lun3 ......................................................................................... [lun3 block/disk4 (rw)]
    o- mapped_lun4 ......................................................................................... [lun4 block/disk5 (rw)]
/iscsi/iqn.20...2b6/tpg1/acls>

With that, the final test: after rebooting both RAC nodes, the disks mount normally, and /var/log/messages no longer shows any iSCSI errors.
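
For reference, a quick way to double-check on each node after the reboot, for example:

[root@db01rac1 ~]# iscsiadm -m session                     # confirm the iSCSI session is established
[root@db01rac1 ~]# grep iscsid /var/log/messages | tail    # confirm no new connection errors are logged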
I actually ran into this problem in tests many years ago. Back then I never found the cause and simply ignored it, since it did not affect the tests and was not my area of expertise. This time, with a growth mindset and more attention to detail, I finally tracked down the root cause. Looking at a clean messages log in the end does feel quite satisfying.