Ceph MDS High Availability (Part 11)

Ceph MDS (Metadata Server) is the access entry point for CephFS, so it needs both high performance and redundancy. MDS supports a multi-MDS layout and can even form a multi-active/standby structure similar to a Redis Cluster,
achieving both high performance and high availability for the MDS service. For example, with 4 MDS daemons running and max_mds set to 2, two MDS daemons become active and the other two act as standbys.

You can also designate a dedicated standby MDS for each active one, so that if an active MDS fails, its standby immediately takes over and continues serving metadata reads and writes. The common options for configuring standby MDS daemons are:

mds_standby_replay: true or false. When true, replay mode is enabled: the standby continuously replays the active MDS's journal in real time, so failover is fast if the active MDS goes down. When false, the standby only catches up at takeover time, causing a short interruption.
mds_standby_for_name: make this MDS daemon a standby only for the MDS with the given name.
mds_standby_for_rank: make this MDS daemon a standby only for the given rank, usually specified as a rank number. When multiple CephFS filesystems exist, mds_standby_for_fscid can additionally be used to pin the standby to a specific filesystem.
mds_standby_for_fscid: specify a CephFS filesystem ID; used together with mds_standby_for_rank. If mds_standby_for_rank is set, the standby covers that rank of the given filesystem; if it is not set, the standby covers all ranks of that filesystem.
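As an illustration, a minimal ceph.conf fragment combining these options might look like the following (the daemon names, rank, and fscid are placeholders, not from this cluster). Note also that recent Ceph releases deprecate the per-daemon `mds_standby_*` options in favour of the per-filesystem setting `ceph fs set <fs> allow_standby_replay true`, so check the documentation for your release:

```ini
# Hypothetical example: mds-b acts as a dedicated hot standby for mds-a.
[mds.mds-b]
mds_standby_for_name = mds-a   ; only back up the MDS named mds-a
mds_standby_replay = true      ; replay mds-a's journal continuously
; Alternatively, pin the standby to rank 0 of filesystem id 1:
; mds_standby_for_rank = 0
; mds_standby_for_fscid = 1
```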

## Current MDS server state
cephadmin@ceph-deploy:~$ ceph mds stat
mycephfs:1 {0=ceph-mgr1=up:active}

## Add MDS servers
#Add ceph-mgr1, ceph-mgr2, ceph-mon2 and ceph-mon3 to the cluster in the MDS role, aiming for a 2-active/2-standby MDS layout for high availability and high performance
root@ceph-mon2:~# apt install -y ceph-mds
root@ceph-mon3:~# apt install -y ceph-mds
root@ceph-mgr2:~# apt install -y ceph-mds
#Add the MDS daemons with ceph-deploy
cephadmin@ceph-deploy:~$ ceph-deploy mds create ceph-mon2
cephadmin@ceph-deploy:~$ ceph-deploy mds create ceph-mon3
cephadmin@ceph-deploy:~$ ceph-deploy mds create ceph-mgr2
#Check again: there are now 4 MDS daemons, 1 active and 3 standby
cephadmin@ceph-deploy:~$ ceph -s
  cluster:
    id:     0d8fb726-ee6d-4aaf-aeca-54c68e2584af
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled

  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 30h)
    mgr: ceph-mgr1(active, since 30h), standbys: ceph-mgr2
    mds: 1/1 daemons up, 3 standby
    osd: 9 osds: 9 up (since 30h), 9 in (since 2d)

  data:
    volumes: 1/1 healthy
    pools:   6 pools, 225 pgs
    objects: 112 objects, 168 MiB
    usage:   3.1 GiB used, 267 GiB / 270 GiB avail
    pgs:     225 active+clean
## Verify the current cluster state: one MDS server is active and three are standby
cephadmin@ceph-deploy:~$ ceph fs status
mycephfs - 1 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mgr1  Reqs:    0 /s    17     14     12      1
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata   831k  84.3G
  cephfs-data      data    12.0k  84.3G
STANDBY MDS
 ceph-mon2
 ceph-mon3
 ceph-mgr2
MDS version: ceph version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific (stable)
## Current filesystem state
cephadmin@ceph-deploy:~$ ceph fs get mycephfs
Filesystem 'mycephfs' (1)
fs_name mycephfs
epoch   18
flags   12
created 2024-03-11T16:43:48.430223+0800
modified        2024-03-13T10:21:40.750894+0800
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
required_client_features        {}
last_failure    0
last_failure_osd_epoch  132
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in      0
up      {0=54104}
failed
damaged
stopped
data_pools      [4]
metadata_pool   3
inline_data     disabled
balancer
standby_count_wanted    1
[mds.ceph-mgr1{0:54104} state up:active seq 19 addr [v2:192.168.40.154:6800/4207641365,v1:192.168.40.154:6801/4207641365] compat {c=[1],r=[1],i=[7ff]}]
## Set the number of active MDS daemons
#There are currently four MDS servers but only one is active with three standby; the deployment can be improved to two active and two standby.
cephadmin@ceph-deploy:~$ ceph fs set mycephfs max_mds 2 #cap active MDS at 2; with 4 MDS daemons total (recommended) this yields 2 active + 2 standby
cephadmin@ceph-deploy:~$ ceph fs status
mycephfs - 1 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mgr1  Reqs:    0 /s    17     14     12      1
 1    active  ceph-mgr2  Reqs:    0 /s    10     13     11      0
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata   903k  84.3G
  cephfs-data      data    12.0k  84.3G
STANDBY MDS
 ceph-mon2
 ceph-mon3
MDS version: ceph version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific (stable)
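A quick way to double-check the 2-active/2-standby split is to count entries in the `ceph fs status` output. A minimal shell sketch — the sample text below is the output captured above; on a live cluster substitute `status=$(ceph fs status)`:

```shell
# Count active vs standby MDS daemons from `ceph fs status` output.
status=' 0    active  ceph-mgr1  Reqs:    0 /s    17     14     12      1
 1    active  ceph-mgr2  Reqs:    0 /s    10     13     11      0
STANDBY MDS
 ceph-mon2
 ceph-mon3'
active=$(printf '%s\n' "$status" | grep -c ' active ')
standby=$(printf '%s\n' "$status" | sed -n '/^STANDBY MDS/,$p' | grep -c '^ ')
echo "active=$active standby=$standby"   # → active=2 standby=2
```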

Optimizing MDS high availability

#Currently ceph-mgr1 and ceph-mgr2 are both active, while ceph-mon2 and ceph-mon3 are both standby.
#We can now make ceph-mgr2 the standby for ceph-mgr1, and ceph-mon3 the standby for ceph-mon2 (and vice versa), so that each active MDS has a fixed, dedicated standby
cephadmin@ceph-deploy:~$ cat ceph.conf
[global]
fsid = 0d8fb726-ee6d-4aaf-aeca-54c68e2584af
public_network = 192.168.40.0/24
cluster_network = 172.31.40.0/24
mon_initial_members = ceph-mon1
mon_host = 192.168.40.151
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[mds.ceph-mgr2]
#mds_standby_for_fscid=mycephfs
mds_standby_for_name=ceph-mgr1
mds_standby_replay=true

[mds.ceph-mgr1]
#mds_standby_for_fscid=mycephfs
mds_standby_for_name=ceph-mgr2
mds_standby_replay=true

[mds.ceph-mon2]
#mds_standby_for_fscid=mycephfs
mds_standby_for_name=ceph-mon3
mds_standby_replay=true

[mds.ceph-mon3]
#mds_standby_for_fscid=mycephfs
mds_standby_for_name=ceph-mon2
mds_standby_replay=true

#Push the config to all MDS nodes and restart the mds services
cephadmin@ceph-deploy:~$ ceph-deploy --overwrite-conf config push ceph-mon2
cephadmin@ceph-deploy:~$ ceph-deploy --overwrite-conf config push ceph-mon3
cephadmin@ceph-deploy:~$ ceph-deploy --overwrite-conf config push ceph-mgr1
cephadmin@ceph-deploy:~$ ceph-deploy --overwrite-conf config push ceph-mgr2

root@ceph-mon2:~# systemctl restart ceph-mds@ceph-mon2.service
root@ceph-mon3:~# systemctl restart ceph-mds@ceph-mon3.service
root@ceph-mgr1:~# systemctl restart ceph-mds@ceph-mgr1.service
root@ceph-mgr2:~# systemctl restart ceph-mds@ceph-mgr2.service
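The four restarts above can also be driven from the deploy node in one loop. A sketch, assuming passwordless root SSH to each host; `echo` is kept as a dry run — replace it with `ssh root@"$host"` to actually execute:

```shell
# Dry run: print the restart command for every MDS host.
# Swap `echo` for an ssh invocation to run for real.
out=$(for host in ceph-mon2 ceph-mon3 ceph-mgr1 ceph-mgr2; do
  echo "root@${host}: systemctl restart ceph-mds@${host}.service"
done)
printf '%s\n' "$out"
```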

## MDS high-availability state of the cluster
cephadmin@ceph-deploy:~$ ceph fs status
mycephfs - 1 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  ceph-mon3  Reqs:    0 /s    22     14     12      1
 1    active  ceph-mgr1  Reqs:    0 /s    10     13     11      0
      POOL         TYPE     USED  AVAIL
cephfs-metadata  metadata   915k  84.3G
  cephfs-data      data    12.0k  84.3G
STANDBY MDS
 ceph-mon2
 ceph-mgr2
MDS version: ceph version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific (stable)
#Check the active/standby mapping
cephadmin@ceph-deploy:~$ ceph fs get mycephfs
Filesystem 'mycephfs' (1)
fs_name mycephfs
epoch   43
flags   12
created 2024-03-11T16:43:48.430223+0800
modified        2024-03-14T16:53:20.778278+0800
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
required_client_features        {}
last_failure    0
last_failure_osd_epoch  176
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 2
in      0,1
up      {0=54662,1=55740}
failed
damaged
stopped
data_pools      [4]
metadata_pool   3
inline_data     disabled
balancer
standby_count_wanted    1
[mds.ceph-mon3{0:54662} state up:active seq 5 addr [v2:192.168.40.153:6800/3612421810,v1:192.168.40.153:6801/3612421810] compat {c=[1],r=[1],i=[7ff]}]
[mds.ceph-mgr1{1:55740} state up:active seq 8 addr [v2:192.168.40.154:6800/1423113127,v1:192.168.40.154:6801/1423113127] compat {c=[1],r=[1],i=[7ff]}]

posted @ 2024-03-14 16:56  しみずよしだ