05-cephFS
I. CephFS Fundamentals
1. CephFS overview
RBD provides remote disk mounting, but it cannot let multiple hosts share a single disk. What if many clients all need to read and write the same data? That is where CephFS, the file-system solution, comes in.
CephFS is a POSIX-compliant file system that stores its data directly in a Ceph storage cluster. Its implementation differs somewhat from that of Ceph block devices (RBD), of Ceph object storage with its S3 and Swift APIs, and of the native library (librados).

CephFS can be accessed either through the kernel ceph module or through FUSE. If the host does not have the ceph kernel module, consider using the FUSE approach instead. You can check whether the host has the ceph kernel module with "modinfo ceph", for example:
[root@ceph141 ~]# modinfo ceph
filename: /lib/modules/5.15.0-135-generic/kernel/fs/ceph/ceph.ko
license: GPL
description: Ceph filesystem for Linux
author: Patience Warnick <patience@newdream.net>
author: Yehuda Sadeh <yehuda@hq.newdream.net>
author: Sage Weil <sage@newdream.net>
alias: fs-ceph
srcversion: 2FCF4F127F7C5521F58D0F2
depends: libceph,fscache,netfs
retpoline: Y
intree: Y
name: ceph
vermagic: 5.15.0-135-generic SMP mod_unload modversions
sig_id: PKCS#7
signer: Build time autogenerated kernel key
sig_key: 28:C9:87:73:6F:1C:0B:8E:97:3A:43:19:64:D6:FA:D2:96:00:31:1C
sig_hashalgo: sha512
signature: 8F:4B:9B:F3:E7:4B:D5:8C:B0:82:8C:4F:68:96:F8:22:D1:01:AD:8E:
23:29:F7:7C:94:A1:20:F3:87:BF:1E:B1:BE:2E:2C:2E:21:CB:FC:B4:
C3:6C:64:B8:B2:DD:CD:D6:A0:62:72:AA:1E:27:E2:A0:9C:41:BE:F8:
40:B2:F7:8F:FE:94:5C:3C:45:0B:5A:1C:53:7D:9B:DE:6D:53:0F:84:
83:7E:45:81:B4:CF:D9:02:44:B9:D9:31:89:01:D0:05:48:81:20:83:
76:57:5E:C3:93:B0:BB:42:60:98:0E:7F:4A:C8:40:25:6E:25:79:4C:
28:09:F3:D6:4B:67:ED:2D:68:0F:2D:C3:50:6A:68:5F:6E:21:9C:A4:
6F:6E:14:36:EE:13:39:EE:6B:A4:0C:2C:BA:4F:10:15:FC:2B:B4:E7:
52:78:81:D8:9A:6D:70:88:96:9A:FD:33:1F:4A:CB:5A:37:87:F0:3C:
DB:AF:72:7E:1F:B8:BE:31:C8:26:8C:B9:A5:31:FA:9D:A6:63:21:5A:
C8:8E:2E:C0:44:5A:FB:ED:49:47:75:9A:2C:91:DF:C3:80:2C:C4:5F:
0C:33:DA:9E:4F:E7:7E:68:60:D9:82:06:21:D6:8D:14:63:23:3C:07:
5C:EA:2D:3A:B8:6E:CC:7E:8C:B9:81:A8:3C:98:09:27:1D:AC:B6:AC:
1F:5E:07:16:A0:2A:8B:79:37:5C:90:6B:E1:6E:97:9F:FB:01:8E:A3:
6F:AF:72:E0:1A:7B:FE:A3:2A:2C:45:E7:B2:06:BD:C8:24:25:D6:49:
7E:81:DB:D1:79:89:A0:D6:0F:73:A5:1E:7F:CC:BE:DC:45:CB:71:75:
F4:7C:C9:4D:95:17:B6:76:CF:A0:1C:CA:B6:E6:30:FF:E6:51:27:A0:
9D:95:8B:5A:A8:F5:AC:6C:8A:79:4B:48:42:55:0C:31:16:21:BB:7B:
44:59:78:1B:B5:C5:DE:93:EF:41:DB:2E:15:CC:37:6F:3D:33:F4:71:
37:D9:7E:C4:E5:4E:B5:CC:B6:C8:50:E8:98:91:25:97:00:E5:DD:FB:
64:2A:67:58:B3:34:CD:11:78:B3:A5:48:C9:85:19:20:93:64:C6:DD:
E4:C1:7D:24:44:6C:F3:75:46:41:A7:05:F7:92:93:1C:E8:E9:CC:89:
81:E1:31:23:7C:69:5C:F4:3B:69:17:04:5A:6D:02:D9:97:43:4D:A3:
88:F5:03:42:22:9A:B0:CC:35:15:BC:73:6A:14:B0:25:93:73:FA:C0:
DC:58:D6:36:BE:50:16:87:0E:B2:73:50:2D:96:FB:8A:43:01:DC:13:
31:67:2F:74:03:5D:C0:B2:71:2E:29:5A
parm: disable_send_metrics:Enable sending perf metrics to ceph cluster (default: on)
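If modinfo shows nothing, the module is either not loaded or not shipped with the kernel. A minimal fallback check on the Ubuntu clients used later in this document (the host name is illustrative):
[root@test01 ~]# modprobe ceph || apt -y install ceph-fuse    # load the kernel module, or fall back to the FUSE client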
2. Official documentation
https://docs.ceph.com/en/reef/architecture/#ceph-file-system
https://docs.ceph.com/en/reef/cephfs/#ceph-file-system
3. CephFS architecture
CephFS requires at least one metadata server (MDS) daemon (ceph-mds) to be running; this daemon manages the metadata for the files stored on CephFS.
Although the MDS is called a metadata server, it does not persist any metadata itself: the metadata is stored in the RADOS cluster, and the MDS exists to expose the file-system interface on top of it.
When a client accesses the file interface, it first connects to the MDS; the MDS keeps an in-memory index of the metadata, which the client uses to find which data nodes to read from. In this respect it is similar to HDFS.
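A quick way to see which MDS daemons a cluster is running is shown below; this assumes the cluster is managed by cephadm, as the ceph orch commands later in this document suggest (at this point in the walkthrough no MDS has been deployed yet, so the listing would be empty):
[root@ceph141 ~]# ceph orch ps --daemon-type mds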
4. CephFS vs. NFS
Compared with NFS, CephFS has the following advantages:
- 1. Built-in data redundancy: the underlying RADOS layer already provides data redundancy, so there is no single point of failure as with NFS;
- 2. The underlying RADOS cluster is composed of N storage nodes, so I/O is spread across them and throughput is higher;
- 3. Because the underlying RADOS cluster is composed of N storage nodes, Ceph also offers far better scalability;
II. Deploying CephFS with One Active and One Standby MDS
Official documentation
https://docs.ceph.com/en/reef/cephfs/createfs/
1. Create two pools, one for the MDS metadata and one for the data
# Check the cluster status before deployment
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 41h)
mgr: ceph141.mbakds(active, since 41h), standbys: ceph142.qgifwo
osd: 9 osds: 9 up (since 41h), 9 in (since 41h)
data:
pools: 3 pools, 25 pgs
objects: 307 objects, 644 MiB
usage: 2.8 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 25 active+clean
# Create the cephfs_data pool
[root@ceph141 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
# Create the cephfs_metadata pool
[root@ceph141 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
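Before moving on, you can confirm that both pools exist; the command below should list cephfs_data and cephfs_metadata (the grep pattern simply matches the pool names chosen above):
[root@ceph141 ~]# ceph osd pool ls | grep cephfs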
2. Create a file system named "dezyan-cephfs"
[root@ceph141 ~]# ceph fs new dezyan-cephfs cephfs_metadata cephfs_data
# Mark the 'cephfs_data' pool as a bulk (data-heavy) pool
[root@ceph141 ~]# ceph osd pool set cephfs_data bulk true
[root@ceph141 ~]# ceph osd pool get cephfs_data bulk
bulk: true
3. View the newly created file system
[root@ceph141 ~]# ceph fs ls
name: dezyan-cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
# Check the status of the CephFS metadata servers (MDS)
[root@ceph141 ~]# ceph mds stat
dezyan-cephfs:0 # there is no active MDS daemon yet
# Check the cluster status
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_ERR
1 filesystem is offline
1 filesystem is online with fewer MDS than max_mds
services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 41h)
mgr: ceph141.mbakds(active, since 41h), standbys: ceph142.qgifwo
# an mds line now appears, showing that there are no metadata servers yet
mds: 0/0 daemons up
osd: 9 osds: 9 up (since 41h), 9 in (since 41h)
data:
volumes: 1/1 healthy
pools: 5 pools, 313 pgs
objects: 307 objects, 645 MiB
usage: 2.9 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 313 active+clean
4. Deploy MDS daemons for the file system
# Deploy (or update) the metadata server (MDS) service for the CephFS file system
[root@ceph141 ~]# ceph orch apply mds dezyan-cephfs
Scheduled mds.dezyan-cephfs update...
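The bare apply command lets the orchestrator decide how many MDS daemons to run and where. If you want to pin them to specific hosts, cephadm also accepts a placement spec; a sketch using the hosts of this cluster, not required for this walkthrough:
[root@ceph141 ~]# ceph orch apply mds dezyan-cephfs --placement="2 ceph141 ceph142"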
# Check the CephFS MDS status again; one daemon is now active
[root@ceph141 ~]# ceph mds stat
dezyan-cephfs:1 {0=dezyan-cephfs.ceph142.pmzglk=up:active} 1 up:standby
# Check the cluster status again
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 41h)
mgr: ceph141.mbakds(active, since 41h), standbys: ceph142.qgifwo
# one MDS daemon is now up
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 41h), 9 in (since 42h)
data:
volumes: 1/1 healthy
pools: 5 pools, 313 pgs
objects: 329 objects, 645 MiB
usage: 2.9 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 313 active+clean
io:
client: 5.8 KiB/s rd, 0 B/s wr, 5 op/s rd, 0 op/s wr
5. View detailed information about the CephFS cluster
# Check the STATE column; "active" means the MDS is alive and serving
[root@ceph141 ~]# ceph fs status dezyan-cephfs
dezyan-cephfs - 0 clients
================
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active dezyan-cephfs.ceph142.khlacd Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs_metadata metadata 96.0k 1732G
cephfs_data data 0 1732G
STANDBY MDS
dezyan-cephfs.ceph141.zlsaoa
MDS version: ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)
6. Add additional MDS daemons
# Manually add an MDS daemon on a specific node (ceph141 here) to serve the Ceph file system dezyan-cephfs
[root@ceph141 ~]# ceph orch daemon add mds dezyan-cephfs ceph141
Deployed mds.dezyan-cephfs.ceph141.pthitg on host 'ceph141'
# View the detailed cluster information again
[root@ceph141 ~]# ceph fs status dezyan-cephfs
dezyan-cephfs - 0 clients
================
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active dezyan-cephfs.ceph142.khlacd Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs_metadata metadata 96.0k 1732G
cephfs_data data 0 1732G
STANDBY MDS
dezyan-cephfs.ceph141.zlsaoa
MDS version: ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)
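Even with a second daemon added, the file system keeps one active MDS and leaves the rest as standbys, because max_mds is still 1. If you wanted two active ranks (multi-active MDS), you would raise it explicitly; shown here only as a hedged example, it is not done in this walkthrough:
[root@ceph141 ~]# ceph fs set dezyan-cephfs max_mds 2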
III. Verifying High Availability of the Active/Standby Setup
1. View detailed information about the CephFS cluster
[root@ceph141 ~]# ceph fs status dezyan-cephfs
dezyan-cephfs - 0 clients
================
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active dezyan-cephfs.ceph142.khlacd Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs_metadata metadata 96.0k 1732G
cephfs_data data 0 1732G
STANDBY MDS
dezyan-cephfs.ceph141.zlsaoa
MDS version: ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)
2. Power off ceph142 directly
[root@ceph142 ~]# init 0
3. Check the CephFS cluster status (it takes roughly 30 seconds for the effect to become visible)
# Failover in progress: the state changes to replay and the MDS moves to the ceph141 node
[root@ceph141 ~]# ceph fs status dezyan-cephfs
dezyan-cephfs - 0 clients
================
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 replay dezyan-cephfs.ceph141.pthitg 0 0 0 0
POOL TYPE USED AVAIL
cephfs_metadata metadata 96.0k 2595G
cephfs_data data 0 1730G
MDS version: ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)
# Failover complete: the state changes back to active
[root@ceph141 ~]# ceph fs status dezyan-cephfs
dezyan-cephfs - 0 clients
================
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active dezyan-cephfs.ceph141.pthitg Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs_metadata metadata 96.0k 2595G
cephfs_data data 0 1730G
MDS version: ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)
4. Check the cluster status
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_WARN
insufficient standby MDS daemons available
1/3 mons down, quorum ceph141,ceph143
3 osds down
1 host (3 osds) down
Degraded data redundancy: 331/993 objects degraded (33.333%), 29 pgs degraded, 313 pgs undersized
services:
mon: 3 daemons, quorum ceph141,ceph143 (age 2m), out of quorum: ceph142
mgr: ceph141.mbakds(active, since 44h)
mds: 1/1 daemons up
osd: 9 osds: 6 up (since 2m), 9 in (since 44h)
data:
volumes: 1/1 healthy
pools: 5 pools, 313 pgs
objects: 331 objects, 655 MiB
usage: 2.9 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 331/993 objects degraded (33.333%)
284 active+undersized
29 active+undersized+degraded
io:
client: 12 KiB/s wr, 0 op/s rd, 0 op/s wr
5. After booting ceph142 back up, observe the cluster status again
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 26s)
mgr: ceph141.mbakds(active, since 44h)
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 9s), 9 in (since 44h)
data:
volumes: 1/1 healthy
pools: 5 pools, 313 pgs
objects: 331 objects, 655 MiB
usage: 2.6 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 313 active+clean
io:
client: 3.9 KiB/s wr, 0 op/s rd, 0 op/s wr
recovery: 26 KiB/s, 0 objects/s
[root@ceph141 ~]# ceph fs status dezyan-cephfs
dezyan-cephfs - 0 clients
================
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active dezyan-cephfs.ceph141.pthitg Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs_metadata metadata 106k 1730G
cephfs_data data 0 1730G
STANDBY MDS
dezyan-cephfs.ceph142.pmzglk
MDS version: ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)
IV. CephFS Client: Mounting via the Kernel ceph Module
This method works regardless of whether ceph-common is installed.
1. On the admin node, create a user and export the keyring and key file
1.1 Create the user and grant permissions
[root@ceph141 ~]# ceph auth add client.dezyan-os mon 'allow r' mds 'allow rw' osd 'allow rwx'
added key for client.dezyan-os
[root@ceph141 ~]# ceph auth get client.dezyan-os
[client.dezyan-os]
key = AQBlNO1nplqaHxAAxpM2m90FsuZAjnm+CtQ9sw==
caps mds = "allow rw"
caps mon = "allow r"
caps osd = "allow rwx"
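As an aside, `ceph fs authorize` can create a client scoped to a single file system in one step instead of spelling out the caps by hand; a sketch with an illustrative client name, not used in the rest of this document:
[root@ceph141 ~]# ceph fs authorize dezyan-cephfs client.dezyan-fs / rw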
1.2 Export the authentication information
# Export the keyring
[root@ceph141 ~]# ceph auth get client.dezyan-os > ceph.client.dezyan-os.keyring
[root@ceph141 ~]# cat ceph.client.dezyan-os.keyring
[client.dezyan-os]
key = AQBlNO1nplqaHxAAxpM2m90FsuZAjnm+CtQ9sw==
caps mds = "allow rw"
caps mon = "allow r"
caps osd = "allow rwx"
# Export the key itself
[root@ceph141 ~]# ceph auth print-key client.dezyan-os > dezyan-os.key
[root@ceph141 ~]# cat dezyan-os.key;echo
AQBlNO1nplqaHxAAxpM2m90FsuZAjnm+CtQ9sw==
1.3 Copy the keyring and key file to the client's /etc/ceph directory
[root@ceph141 ~]# scp ceph.client.dezyan-os.keyring 10.0.0.150:/etc/ceph
[root@ceph141 ~]# scp dezyan-os.key 10.0.0.150:/etc/ceph
2. Mount using the key directly (no need to copy the key file)
2.1 Check the local files
[root@test01 ~]# ll /etc/ceph/
total 24
drwxr-xr-x 2 root root 4096 Apr 2 21:03 ./
drwxr-xr-x 130 root root 4096 Apr 2 15:56 ../
# This file can be deleted; in 19.2.1 the mount then complains about the missing file, but the mount still succeeds
-rw-r--r-- 1 root root 136 Apr 2 21:02 ceph.client.dezyan-os.keyring
-rw-r--r-- 1 root root 259 Apr 2 20:01 ceph.conf
-rw-r--r-- 1 root root 92 Dec 18 22:48 rbdmap
2.2 Mount
# Configure the local hosts file on the client
[root@test01 ~]# vim /etc/hosts
10.0.0.141 ceph141
10.0.0.142 ceph142
10.0.0.143 ceph143
# Look up the key value
[root@test01 ~]# cat /etc/ceph/ceph.client.dezyan-os.keyring
[client.dezyan-os]
key = AQBlNO1nplqaHxAAxpM2m90FsuZAjnm+CtQ9sw==
caps mds = "allow rw"
caps mon = "allow r"
caps osd = "allow rwx"
# Mount; the secret value is the key from the keyring
[root@test01 ~]# mount -t ceph ceph141:6789,ceph142:6789,ceph143:6789:/ /data -o name=dezyan-os,secret=AQBlNO1nplqaHxAAxpM2m90FsuZAjnm+CtQ9sw==
# Check the mounted filesystem
[root@test01 ~]# df -h |grep 6789
10.0.0.141:6789,10.0.0.142:6789,10.0.0.143:6789:/ 1.7T 0 1.7T 0% /data
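If the cluster ever hosts more than one CephFS file system, the kernel mount should name the target file system explicitly with the fs= option; with a single file system, as here, it is optional (a hedged variation of the command above):
[root@test01 ~]# mount -t ceph ceph141:6789,ceph142:6789,ceph143:6789:/ /data -o name=dezyan-os,secret=AQBlNO1nplqaHxAAxpM2m90FsuZAjnm+CtQ9sw==,fs=dezyan-cephfs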
2.3 Try writing some test data
[root@test01 ~]# cp /etc/os-release /etc/fstab /data/
[root@test01 ~]# ll /data/
total 6
drwxr-xr-x 2 root root 2 Apr 2 21:09 ./
drwxr-xr-x 21 root root 4096 Apr 2 21:07 ../
-rw-r--r-- 1 root root 657 Apr 2 21:09 fstab
-rw-r--r-- 1 root root 427 Apr 2 21:09 os-release
3. Mount using a secretfile
3.1 Check the local files
[root@test02 ~]# ll /etc/ceph/
total 24
drwxr-xr-x 2 root root 4096 Apr 2 21:03 ./
drwxr-xr-x 130 root root 4096 Apr 2 21:05 ../
-rw-r--r-- 1 root root 259 Apr 2 20:01 ceph.conf
-rw-r--r-- 1 root root 40 Apr 2 21:02 dezyan-os.key
-rw-r--r-- 1 root root 92 Dec 18 22:48 rbdmap
3.2 Mount using the key file and try writing data
[root@test02 ~]# mount -t ceph ceph141:6789,ceph142:6789,ceph143:6789:/ /data -o name=dezyan-os,secretfile=/etc/ceph/dezyan-os.key
# The mount may print a warning, which can be ignored
[root@test02 ~]# df -h | grep 6789
10.0.0.141:6789,10.0.0.142:6789,10.0.0.143:6789:/ 1.7T 0 1.7T 0% /data
3.3 Write test data
[root@test02 ~]# cp /etc/passwd /data/
[root@test02 ~]# ll /data/
total 9
drwxr-xr-x 2 root root 3 Apr 2 14:56 ./
drwxr-xr-x 23 root root 4096 Apr 2 14:55 ../
-rw-r--r-- 1 root root 657 Apr 2 14:54 fstab
-rw-r--r-- 1 root root 427 Apr 2 14:54 os-release
-rw-r--r-- 1 root root 2941 Apr 2 14:56 passwd
3.4 Check the test01 node again; the data is shared
[root@test01 ~]# ll /data/
total 9
drwxr-xr-x 2 root root 3 Apr 2 14:56 ./
drwxr-xr-x 22 root root 4096 Apr 2 14:45 ../
-rw-r--r-- 1 root root 657 Apr 2 14:54 fstab
-rw-r--r-- 1 root root 427 Apr 2 14:54 os-release
-rw-r--r-- 1 root root 2941 Apr 2 14:56 passwd
V. CephFS Client: Access via Userspace FUSE
1. FUSE overview
Some operating systems do not ship a ceph kernel module. If we still want to use CephFS on them, we can do so via FUSE.
FUSE stands for "Filesystem in Userspace"; it lets unprivileged users create file systems without touching the kernel, but it requires installing the separate "ceph-fuse" package.
2. Install the ceph-fuse package
# On Kylin, the package still could not be installed even after adding a repository
# On CentOS, not even the repository could be added, so the package could not be installed either
# So an Ubuntu system is used for this test
[root@docker02 ~]# apt -y install ceph-fuse
3. Create the mount point
[root@docker02 ~]# mkdir -pv /dezyan/cephfs
4. Copy the authentication file
[root@docker02 ~]# mkdir /etc/ceph
[root@docker02 ~]# scp 10.0.0.141:/etc/ceph/ceph.client.admin.keyring /etc/ceph
[root@docker02 ~]# ll /etc/ceph/
total 12
drwxr-xr-x 2 root root 4096 Apr 2 21:32 ./
drwxr-xr-x 131 root root 4096 Apr 2 21:31 ../
-rw------- 1 root root 151 Apr 2 21:32 ceph.client.admin.keyring
5. Mount CephFS with the ceph-fuse tool
[root@docker02 ~]# ceph-fuse -n client.admin -m 10.0.0.141:6789,10.0.0.142:6789,10.0.0.143:6789 /dezyan/cephfs/
did not load config file, using default settings.
2025-04-02T21:33:24.502+0800 7fd7757fd3c0 -1 Errors while parsing config file!
2025-04-02T21:33:24.502+0800 7fd7757fd3c0 -1 can't open ceph.conf: (2) No such file or directory
2025-04-02T21:33:24.518+0800 7fd7757fd3c0 -1 Errors while parsing config file!
2025-04-02T21:33:24.518+0800 7fd7757fd3c0 -1 can't open ceph.conf: (2) No such file or directory
2025-04-02T21:33:24.526+0800 7fd7757fd3c0 -1 init, newargv = 0x55f6f760e470 newargc=13
2025-04-02T21:33:24.526+0800 7fd7757fd3c0 -1 init, args.argv = 0x55f6f760e5c0 args.argc=4
ceph-fuse[2936]: starting ceph client
ceph-fuse[2936]: starting fuse
6. Check the mount and the shared test data
[root@docker02 ~]# ll /dezyan/cephfs/
total 6
drwxr-xr-x 2 root root 1084 Apr 2 21:09 ./
drwxr-xr-x 3 root root 4096 Apr 2 21:31 ../
-rw-r--r-- 1 root root 657 Apr 2 21:09 fstab
-rw-r--r-- 1 root root 427 Apr 2 21:09 os-release
[root@docker02 ~]# rm -rf /dezyan/cephfs/*
[root@docker02 ~]# ll /dezyan/cephfs/
total 5
drwxr-xr-x 2 root root 0 Apr 2 21:34 ./
drwxr-xr-x 3 root root 4096 Apr 2 21:31 ../
# Since the data is shared, it should be gone on the other nodes as well
[root@test01 ~]# ll /data/
total 4
drwxr-xr-x 2 root root 0 Apr 2 21:34 ./
drwxr-xr-x 21 root root 4096 Apr 2 21:07 ../
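When a FUSE mount is no longer needed, it can be detached like any other filesystem; either of the following works on the Ubuntu client used here:
[root@docker02 ~]# umount /dezyan/cephfs
[root@docker02 ~]# fusermount -u /dezyan/cephfs    # equivalent, via the FUSE helper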
VI. Two Ways to Auto-Mount CephFS at Boot
1. Auto-mount at boot via the rc.local script (recommended)
1.1 Edit the startup script
[root@prometheus-server31 ~]# cat /etc/rc.local
#!/bin/bash
# Kernel mount using the key
mount -t ceph ceph141:6789,ceph142:6789,ceph143:6789:/ /data -o name=dezyan-os,secret=AQBlNO1nplqaHxAAxpM2m90FsuZAjnm+CtQ9sw==
# FUSE mount via ceph-fuse
ceph-fuse -n client.admin -m 10.0.0.141:6789,10.0.0.142:6789,10.0.0.143:6789 /dezyan/cephfs/
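On Ubuntu, /etc/rc.local is only run at boot if the file is executable (the rc-local compatibility service picks it up then), so remember to set the execute bit:
[root@prometheus-server31 ~]# chmod +x /etc/rc.local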
1.2 Reboot the server
[root@prometheus-server31 ~]# reboot
1.3 Verify
[root@prometheus-server31 ~]# df -h | grep data
10.0.0.141:6789,10.0.0.142:6789,10.0.0.143:6789:/ 1.7T 0 1.7T 0% /data
2. Auto-mount at boot via the fstab file
2.1 Edit the fstab configuration file
[root@elk93 ~]# tail -1 /etc/fstab
ceph141:6789,ceph142:6789,ceph143:6789:/ /data ceph name=dezyan-os,secretfile=/etc/ceph/dezyan-os.key,noatime,_netdev 0 2
2.2 Reboot the system
[root@elk93 ~]# reboot
2.3 Verify
[root@elk93 ~]# df -h | grep data
10.0.0.141:6789,10.0.0.142:6789,10.0.0.143:6789:/ 1.7T 0 1.7T 0% /data
VII. Practical Case: Docker with Ceph
1. Create an RBD block device on the server side
[root@ceph141 ~]# ceph osd pool create docker-data
[root@ceph141 ~]# rbd create -s 100G docker-data/docker
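If the cluster later warns that no application is enabled on this pool, initialize it for RBD use; a harmless extra step if the application tag is already set:
[root@ceph141 ~]# rbd pool init docker-data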
2. Create a new Docker data directory and install the ceph-common tools
# The server currently has no extra mounts
[root@docker02 ~]# df -h | grep ubuntu--vg-ubuntu
/dev/mapper/ubuntu--vg-ubuntu--lv 48G 9.3G 37G 21% /
# Create the new data directory
[root@docker02 ~]# mkdir -p /data/docker
# Install the tools
[root@docker02 ~]# apt -y install ceph-common
3. Copy the authentication files
[root@ceph141 ~]# scp /etc/ceph/ceph.{client.admin.keyring,conf} 10.0.0.96:/etc/ceph
[root@docker02 ~]# ll /etc/ceph
total 28
drwxr-xr-x 2 root root 4096 Apr 2 21:57 ./
drwxr-xr-x 131 root root 12288 Apr 2 21:57 ../
-rw------- 1 root root 151 Apr 2 21:57 ceph.client.admin.keyring
-rw-r--r-- 1 root root 259 Apr 2 21:57 ceph.conf
-rw-r--r-- 1 root root 92 Dec 18 22:48 rbdmap
4. Map the RBD device
[root@docker02 ~]# rbd map docker-data/docker
/dev/rbd0
[root@docker02 ~]# rbd showmapped
id pool namespace image snap device
0 docker-data docker - /dev/rbd0
[root@docker02 ~]# fdisk -l /dev/rbd0
Disk /dev/rbd0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
5. Format the device, mount it, and configure auto-mount at boot
[root@docker02 ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=16, agsize=1638400 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=26214400, imaxpct=25
= sunit=16 swidth=16 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=12800, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.
[root@docker02 ~]# mount /dev/rbd0 /data/docker/
[root@docker02 ~]# cat /etc/rc.local
#!/bin/bash
rbd map docker-data/docker
mount /dev/rbd0 /data/docker/
[root@docker02 ~]# chmod +x /etc/rc.local
6. Reboot the server
[root@docker02 ~]# reboot
7. Verify the mount succeeded
[root@docker02 ~]# rbd showmapped
id pool namespace image snap device
0 docker-data docker - /dev/rbd0
[root@docker02 ~]# df -h | grep data
/dev/rbd0 100G 747M 100G 1% /data/docker
8. Migrate Docker's existing data
# If containers are running, it is recommended to stop them manually first (which automatically removes their overlayFS mounts) before stopping the service.
[root@docker02 ~]# systemctl stop docker
# Size of the original directory
[root@docker02 ~]# du -sh /var/lib/docker/
320K /var/lib/docker/
# Migrate the data
[root@docker02 ~]# mv /var/lib/docker/* /data/docker/
9. Modify Docker's systemd unit and restart the docker service
[root@docker02 ~]# vim /lib/systemd/system/docker.service
[Unit]
Description=linux Docker Engine
Documentation=https://docs.docker.com,https://www.dezyan.com
Wants=network-online.target
[Service]
Type=notify
#Modify this line to point at the new data directory
ExecStart=/usr/bin/dockerd --data-root /data/docker
[Install]
WantedBy=multi-user.target
[root@docker02 ~]# systemctl daemon-reload
[root@docker02 ~]# systemctl restart docker
# This can also be done via the daemon.json configuration file
[root@docker02 ~]# cat /etc/docker/daemon.json
{
"insecure-registries": ["10.0.0.92:5000","10.0.0.93","10.0.0.91"],
"data-root": "/data/docker"
}
[root@docker02 ~]# systemctl restart docker
10. Verify that Docker's image data was migrated successfully
At this point the images should have been migrated; since the Docker instance used in this test has no images, the listing below is empty.
[root@docker02 ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
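A quick way to confirm that dockerd is really using the new data root is docker info, which should report /data/docker as the Docker Root Dir (the grep is just a convenience filter):
[root@docker02 ~]# docker info 2>/dev/null | grep -i "docker root dir"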