03-Ceph Block Storage (RBD)

I. Workflow for creating a block device

1. Create a storage pool

When creating a block device in Ceph, you must specify a storage pool.

A storage pool is a logical partition in Ceph used to store objects; a block device's data is stored as objects in the specified pool.

# Set the PG and PGP counts to 8 and the replica count (size) to 3
[root@ceph141 ~]# ceph osd pool create rdb-test 8 8 --autoscale_mode off --size 3 
pool 'rdb-test' created

2. Declare the pool as an rbd application type

After creating a storage pool, checking the cluster status will warn that some pools do not have an application enabled:

[root@ceph141 ~]# ceph -s 
  cluster:
    id:     de7264fa-0e36-11f0-8f7b-9771d5b41507
    health: HEALTH_WARN
            clock skew detected on mon.ceph143
            3 pool(s) do not have an application enabled
……………………

Declare the application:

[root@ceph141 ~]# rbd pool init rdb-test

Before and after comparison:

# Before: there is no 'application rbd' entry
[root@ceph141 ~]# ceph osd pool ls detail | grep application
# After:
[root@ceph141 ~]# ceph osd pool ls detail | grep application
pool 12 'rdb-test' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode off last_change 499 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd read_balance_score 3.37

Check the cluster status again; the cluster is now healthy.

3. Create a block device

[root@ceph141 ~]# rbd create -s 5G rdb-test/t1

4. List the block devices

[root@ceph141 ~]# rbd ls rdb-test
t1

5. View detailed information about a block device

[root@ceph141 ~]# rbd info rdb-test/t1
rbd image 't1':
	size 5 GiB in 1280 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: d4a82eabbd41
	block_name_prefix: rbd_data.d4a82eabbd41
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Tue Apr  1 21:48:12 2025
	access_timestamp: Tue Apr  1 21:48:12 2025
	modify_timestamp: Tue Apr  1 21:48:12 2025
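
The block_name_prefix above is how this image's data objects are named in RADOS. As a quick sketch (the listing stays empty until data is actually written, because RBD allocates objects thinly), the backing objects can be listed directly from the pool:

[root@ceph141 ~]# rados ls -p rdb-test | grep rbd_data.d4a82eabbd41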

II. Modifying a block device

1. Expand a block device

[root@ceph141 ~]# rbd resize -s 10G rdb-test/t1
Resizing image: 100% complete...done.
[root@ceph141 ~]# rbd info rdb-test/t1
rbd image 't1':
	size 10 GiB in 2560 objects
…………

2. Shrink a block device

  • Warning: shrinking is a very dangerous operation; done incorrectly it can easily cause data loss.

Try shrinking directly:

[root@ceph141 ~]# rbd resize -s 2G rdb-test/t1
rbd: shrinking an image is only allowed with the --allow-shrink flag

The error tells us to add the --allow-shrink flag:

[root@ceph141 ~]# rbd resize -s 2G rdb-test/t1 --allow-shrink
Resizing image: 100% complete...done.

3. Rename a block device

[root@ceph141 ~]# rbd rename -p rdb-test t1 T1
[root@ceph141 ~]# rbd ls rdb-test
T1

III. Deleting a block device

[root@ceph141 ~]# rbd rm rdb-test/T1
Removing image: 100% complete...done.

IV. Mounting an RBD device on an Ubuntu client

1. Create an RBD device on the Ceph cluster

[root@ceph141 ~]# ceph osd pool create rdb-test 8 8 --autoscale_mode off --size 3 
[root@ceph141 ~]# rbd pool init rdb-test
[root@ceph141 ~]# rbd create -s 5G rdb-test/t1

2. Mount an RBD device on the Ubuntu client with an ext4 filesystem

2.1 Install the ceph-common package

[root@prometheus-server31 ~]# apt -y install ceph-common

2.2 Copy the Ceph cluster authentication files

[root@ceph141 ~]# scp /etc/ceph/{ceph.conf,ceph.client.admin.keyring} 10.0.0.31:/etc/ceph

2.3 Map the RBD device on the client

# Map the device
[root@prometheus-server31 ~]# rbd map rdb-test/t1
/dev/rbd0
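
The map succeeds here. On some older client kernels, rbd map can fail because the kernel does not support every image feature shown in rbd info (object-map, fast-diff, deep-flatten). If that happens, a common workaround is to disable those features on the image first (a sketch, not needed in this walkthrough):

[root@ceph141 ~]# rbd feature disable rdb-test/t1 object-map fast-diff deep-flatten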

2.4 Format the device with ext4 on the client

[root@prometheus-server31 ~]# mkfs.ext4 /dev/rbd0
mke2fs 1.46.5 (30-Dec-2021)
Discarding device blocks: done                            
Creating filesystem with 1048576 4k blocks and 262144 inodes
Filesystem UUID: 96976f6e-fef9-437a-9d4f-6c1490ee0426
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

2.5 Mount the block device

[root@prometheus-server31 ~]# mount /dev/rbd0 /mnt/
[root@prometheus-server31 ~]# df -h | grep mnt
/dev/rbd0                          3.9G   24K  3.7G   1% /mnt

2.6 Test writing data
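
A minimal write test (the file name is just an example):

[root@prometheus-server31 ~]# echo "hello rbd" > /mnt/test.txt
[root@prometheus-server31 ~]# cat /mnt/test.txt
hello rbd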

3. Mount an RBD device on the Ubuntu client with an XFS filesystem

3.1 Create a block device on the server side

[root@ceph141 ~]# rbd create -s 5G rdb-test/t2

3.2 Map the device on the client

[root@prometheus-server31 ~]# rbd map rdb-test/t2
/dev/rbd1

3.3 Format the block device

[root@prometheus-server31 ~]# mkfs.xfs /dev/rbd1 

3.4 Mount the block device

[root@prometheus-server31 ~]# mount /dev/rbd1 /opt/
[root@prometheus-server31 ~]# df -h | grep opt
/dev/rbd1                          8.0G   90M  8.0G   2% /opt

3.5 Test writing data

V. Hot-resizing the filesystem on an RBD client

1. Increase the device size on the server side

[root@ceph141 ~]# rbd resize -s 40G rdb-test/t1
[root@ceph141 ~]# rbd resize -s 100G rdb-test/t2

2. Grow the ext4 filesystem online with resize2fs

# The filesystem capacity will not update until this command is run
[root@prometheus-server31 ~]# resize2fs /dev/rbd0

3. Grow the XFS filesystem online with xfs_growfs

# The filesystem capacity will not update until this command is run
[root@prometheus-server31 ~]# xfs_growfs /opt/
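
To confirm the result, check that the kernel sees the new device sizes and that the filesystems have grown:

[root@prometheus-server31 ~]# lsblk /dev/rbd0 /dev/rbd1
[root@prometheus-server31 ~]# df -h | grep rbd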

VI. Using RBD snapshots for backup and recovery

Reference:
https://docs.ceph.com/en/squid/rbd/rbd-snapshot/

1. What are snapshots for?

RBD snapshots can be used to back up and restore data.

2. Create a snapshot

# Syntax: rbd snap create -p <pool> --image <image> --snap <snapshot-name>
[root@ceph141 ~]# rbd snap create -p rdb-test --image t1 --snap snap01
Creating snap: 100% complete...done.
[root@ceph141 ~]# rbd snap create -p rdb-test --image t2 --snap snap02
Creating snap: 100% complete...done.

3. View snapshot information

[root@ceph141 ~]# rbd snap ls rdb-test/t1
SNAPID  NAME    SIZE   PROTECTED  TIMESTAMP               
     3  snap01  40 GiB             Tue Apr  1 22:09:49 2025
[root@ceph141 ~]# rbd snap ls rdb-test/t2
SNAPID  NAME    SIZE     PROTECTED  TIMESTAMP               
     4  snap02  100 GiB             Tue Apr  1 22:27:56 2025

4. Tamper with the data on the client

On the client, create a few new files in the mount directory and delete a few existing ones.

5. Before restoring data, unmount and unmap the block device on the client

# Unmount
[root@prometheus-server31 ~]# umount /opt
# Unmap
[root@prometheus-server31 ~]# rbd unmap /dev/rbd1

6. Roll back the data on the server side

# rbd snap rollback <pool>/<image>@<snapshot>
[root@ceph141 ~]# rbd snap rollback rdb-test/t1@snap01
Rolling back to snapshot: 100% complete...done.

7. Remap and remount on the client to verify that the data has been restored

[root@prometheus-server31 ~]# rbd map rdb-test/t1
[root@prometheus-server31 ~]# mount /dev/rbd1  /opt/

VII. Deleting snapshots

1. Unprotected snapshots can be deleted

On the server, the snapshot list shows whether a snapshot is protected.

# Check the PROTECTED column
[root@ceph141 ~]# rbd snap ls rdb-test/t2
SNAPID  NAME    SIZE     PROTECTED  TIMESTAMP               
     4  snap02  100 GiB             Tue Apr  1 22:27:56 2025

2. Protect a snapshot

A protected snapshot shows 'yes' in the PROTECTED column of the snapshot list.

# rbd snap protect <pool>/<image>@<snapshot>
[root@ceph141 ~]# rbd snap protect rdb-test/t2@snap02
[root@ceph141 ~]# rbd snap ls rdb-test/t2
SNAPID  NAME    SIZE     PROTECTED  TIMESTAMP               
     4  snap02  100 GiB  yes        Tue Apr  1 22:27:56 2025

3. Verify that a protected snapshot cannot be deleted

[root@ceph141 ~]# rbd snap rm rdb-test/t2@snap02
Removing snap: 0% complete...failed.2025-04-01T22:38:22.500+0800 7ff0b1436640 -1 librbd::Operations: snapshot is protected


rbd: snapshot 'snap02' is protected from removal.

VIII. Cloning snapshots

1. Clone a snapshot

# rbd clone <pool>/<image>@<snapshot> <pool>/<new-image>
[root@ceph141 ~]# rbd clone rdb-test/t2@snap02 rdb-test/kl01

2. Check whether the snapshot has child images

[root@ceph141 ~]# rbd snap ls rdb-test/t2
SNAPID  NAME    SIZE     PROTECTED  TIMESTAMP               
     4  snap02  100 GiB  yes        Tue Apr  1 22:27:56 2025

[root@ceph141 ~]# rbd children rdb-test/t2@snap02
rdb-test/kl01
[root@ceph141 ~]# rbd ls rdb-test -l
NAME       SIZE     PARENT              FMT  PROT  LOCK
kl01       100 GiB  rdb-test/t2@snap02    2            
t1           5 GiB                        2            
t1@snap01    5 GiB                        2            
t2         100 GiB                        2            
t2@snap02  100 GiB                        2  yes 

IX. Restoring data from a cloned child image (faster than rolling back a snapshot)

  • Rolling an image back to a snapshot means overwriting the current version of the image with the data from the snapshot. The time it takes to perform a rollback increases with the size of the image.

  • Cloning from a snapshot is faster than rolling an image back to a snapshot, and it is the preferred way to return to a pre-existing state.

1. Tamper with the client data, then unmount and unmap

Check the existing data on the client, then add and delete a few files at random
[root@prometheus-server31 ~]# umount /opt
[root@prometheus-server31 ~]# rbd unmap /dev/rbd1

Map the child image instead
# This image was cloned from the parent snapshot, so recovery takes almost no time (faster than a rollback!)
[root@prometheus-server31 ~]# rbd map rdb-test/kl01
Mount it
[root@prometheus-server31 ~]# mount /dev/rbd1 /opt/

X. Unprotecting a snapshot that has child images

1. First, the protected snapshot obviously cannot be removed

[root@ceph141 ~]# rbd ls rdb-test -l
NAME       SIZE     PARENT              FMT  PROT  LOCK
kl01       100 GiB  rdb-test/t2@snap02    2            
t1           5 GiB                        2            
t1@snap01    5 GiB                        2            
t2         100 GiB                        2            
t2@snap02  100 GiB                        2  yes 

2. A protected snapshot with child images cannot be unprotected

# rdb-test/t2@snap02 has a child image
# The unprotect attempt fails
[root@ceph141 ~]# rbd snap unprotect  rdb-test/t2@snap02
2025-04-01T22:48:16.910+0800 7f506bfff640 -1 librbd::SnapshotUnprotectRequest: cannot unprotect: at least 1 child(ren) [d65c9b99a4e5] in pool 'rdb-test'

2025-04-01T22:48:16.910+0800 7f5078f89640 -1 librbd::SnapshotUnprotectRequest: encountered error: (16) Device or resource busy

2025-04-01T22:48:16.910+0800 7f5078f89640 -1 librbd::SnapshotUnprotectRequest: 0x55693c2e92a0 should_complete_error: ret_val=-16

rbd: unprotecting snap failed: (16) Device or resource busy
2025-04-01T22:48:16.914+0800 7f506bfff640 -1 librbd::SnapshotUnprotectRequest: 0x55693c2e92a0 should_complete_error: ret_val=-16

3. Use flatten to break the link between the parent and child images

Flattening copies all of the data the child references from its parent into the child itself, making it independent so it no longer depends on the parent image.

[root@ceph141 ~]# rbd children  rdb-test/t2@snap02
rdb-test/kl01

# If the parent image holds a lot of data, copying it may take quite a while
[root@ceph141 ~]# rbd flatten rdb-test/kl01
Image flatten: 100% complete...done.

[root@ceph141 ~]# rbd children  rdb-test/t2@snap02
[root@ceph141 ~]# 

4. Unprotect the snapshot

# This time it succeeds
[root@ceph141 ~]# rbd snap unprotect rdb-test/t2@snap02

5. Once unprotected, the snapshot can be removed

[root@ceph141 ~]# rbd snap rm rdb-test/t2@snap02
Removing snap: 100% complete...done.

XI. Unmapping RBD devices on the client

1. View the local RBD device mappings

[root@prometheus-server31 ~]# rbd showmapped 
id  pool       namespace  image              snap  device   
0   rdb-test             prometheus-server  -     /dev/rbd0
1   rdb-test             child-xixi-001     -     /dev/rbd1

2. View the local mount information

[root@prometheus-server31 ~]# df -h | grep rbd
/dev/rbd0                           40G  184M   38G   1% /mnt
/dev/rbd1                           20G  177M   20G   1% /opt

3. Unmount the mount point

[root@prometheus-server31 ~]# umount /opt

4. Remove the mapping

[root@prometheus-server31 ~]# rbd unmap -p dezyan --image child-xixi-001

5. An alternative way to unmap

[root@prometheus-server31 ~]# umount /mnt 

[root@prometheus-server31 ~]# rbd unmap dezyan/prometheus-server

XII. Mapping and mounting RBD devices at boot

1. Write a boot-time startup script

[root@prometheus-server31 ~]# cat /etc/rc.local 
#!/bin/bash

rbd map dezyan/prometheus-server
rbd map dezyan/child-xixi-001
mount /dev/rbd0 /mnt
mount /dev/rbd1 /opt
[root@prometheus-server31 ~]# 
[root@prometheus-server31 ~]# chmod +x /etc/rc.local
[root@prometheus-server31 ~]# 
[root@prometheus-server31 ~]# ll /etc/rc.local
-rwxr-xr-x 1 root root 111 Apr  1 16:33 /etc/rc.local*
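
An alternative to rc.local is the rbdmap facility shipped with ceph-common: images listed in /etc/ceph/rbdmap are mapped by rbdmap.service at boot, and matching /etc/fstab entries marked noauto are then mounted. This is only a sketch based on the upstream documentation (the filesystem types are assumed from the earlier examples; verify the details against your Ceph release):

[root@prometheus-server31 ~]# cat /etc/ceph/rbdmap
# RbdDevice                  Parameters
dezyan/prometheus-server     id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
dezyan/child-xixi-001        id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
[root@prometheus-server31 ~]# grep rbd /etc/fstab
/dev/rbd/dezyan/prometheus-server  /mnt  ext4  noauto  0 0
/dev/rbd/dezyan/child-xixi-001     /opt  xfs   noauto  0 0
[root@prometheus-server31 ~]# systemctl enable rbdmap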

2. Reboot the server

[root@prometheus-server31 ~]# reboot 

3. Verify

[root@prometheus-server31 ~]# df -h | grep rbd
/dev/rbd0                           40G  184M   38G   1% /mnt
/dev/rbd1                           20G  177M   20G   1% /opt

XIII. Case study: multiple nodes cannot use the same block device at the same time

1. An 'excl' value in the LOCK column means the device is currently in use

[root@ceph141 ~]# rbd ls rdb-test  -l
NAME       SIZE     PARENT  FMT  PROT  LOCK
kl01       100 GiB            2        excl    
t1           5 GiB            2            
t1@snap01    5 GiB            2            
t2         100 GiB            2        excl    
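
To see which client currently holds an image open, check its watchers on the server (the output lists the watching client's address):

[root@ceph141 ~]# rbd status rdb-test/kl01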

2. Unmap on the client

[root@prometheus-server31 ~]# umount /opt
[root@prometheus-server31 ~]# rbd unmap /dev/rbd1 

3. Check the server side again

[root@ceph141 ~]# rbd ls rdb-test  -l
NAME       SIZE     PARENT  FMT  PROT  LOCK
kl01       100 GiB            2            
t1           5 GiB            2            
t1@snap01    5 GiB            2            
t2         100 GiB            2        excl  

4. Map the device from multiple clients

4.1 Terminal 1: install ceph-common

[root@elk93 ~]# apt -y install ceph-common

4.2 Copy the authentication files

[root@ceph141 ~]# scp /etc/ceph/ceph{.client.admin.keyring,.conf} 10.0.0.93:/etc/ceph

4.3 Map the device

[root@elk93 ~]# rbd map rdb-test/kl01
/dev/rbd0

4.4 Write test data
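
kl01 was cloned from t2, so it already contains an XFS filesystem and can be mounted and written to directly (the mount point and file name are just examples):

[root@elk93 ~]# mount /dev/rbd0 /mnt/
[root@elk93 ~]# echo "written from elk93" > /mnt/from-elk93.txt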

4.5 Check the image lock status on the server

[root@ceph141 ~]# rbd ls rdb-test  -l
NAME       SIZE     PARENT  FMT  PROT  LOCK
kl01       100 GiB            2        excl    
t1           5 GiB            2            
t1@snap01    5 GiB            2            
t2         100 GiB            2        excl    

4.6 Terminal 2: map the device that already holds the excl lock

[root@prometheus-server31 ~]# rbd map rdb-test/kl01
/dev/rbd1
[root@prometheus-server31 ~]# mount /dev/rbd1 /opt/

4.7 Switching back to terminal 1, the data shows no changes at all

At this point the data has already started to conflict. In production, never let two hosts use the same image at the same time!

XIV. Hands-on: resource quotas on an RBD storage pool

1. Overview of pool resource quotas

Ceph officially supports limiting pool resources in two ways: by the number of stored objects and by the total size of stored data.

Official documentation:
https://docs.ceph.com/en/latest/rados/operations/pools/#setting-pool-quotas

2. Create a storage pool

[root@ceph141 ~]# ceph osd pool create linux96 32 32 --size 3 --autoscale_mode off

3. View the pool's quota settings

[root@ceph141 ~]# ceph osd pool get-quota linux96
quotas for pool 'linux96':
  max objects: N/A
  max bytes  : N/A

4. Limit the pool to a maximum of 30,000 objects

[root@ceph141 ~]# ceph osd pool set-quota linux96  max_objects 30000
set-quota max_objects = 30000 for pool linux96

[root@ceph141 ~]# ceph osd pool get-quota linux96
quotas for pool 'linux96':
  max objects: 30k objects  (current num objects: 0 objects)
  max bytes  : N/A

5. Limit the pool to a maximum of 10 MiB of data

[root@ceph141 ~]# ceph osd pool set-quota linux96  max_bytes 10485760
set-quota max_bytes = 10485760 for pool linux96

[root@ceph141 ~]# ceph osd pool get-quota linux96
quotas for pool 'linux96':
  max objects: 30k objects  (current num objects: 0 objects)
  max bytes  : 10 MiB  (current num bytes: 0 bytes)

6. Verify the data size limit

Upload a file smaller than 10 MiB:
[root@ceph141 ~]# rados put file01 ./install-docker.sh -p linux96
[root@ceph141 ~]# ceph osd pool get-quota linux96
quotas for pool 'linux96':
  max objects: 30k objects  (current num objects: 1 objects)
  max bytes  : 10 MiB  (current num bytes: 3513 bytes)  # still well under 10 MiB
  
Upload a file larger than 10 MiB:
[root@ceph141 ~]# rados put file02 ./node-exporter-v1.7.0.tar.gz -p linux96
[root@ceph141 ~]# ceph osd pool get-quota linux96
quotas for pool 'linux96':
  max objects: 30k objects  (current num objects: 2 objects)
  max bytes  : 10 MiB  (current num bytes: 23871929 bytes)  # the pool was still under 10 MiB before this upload started, so the object was accepted
  
[root@ceph141 ~]# rados put file03 ./node-exporter-v1.7.0.tar.gz -p linux96  # this upload fails: the 10 MiB quota has already been exceeded (almost 22 MiB used now)
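
To check pool usage against the quota from the cluster side, ceph df detail includes per-pool quota columns (the exact column layout varies by Ceph release):

[root@ceph141 ~]# ceph df detail | grep -E 'POOL|linux96'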

7. Clear the quotas

[root@ceph141 ~]# ceph osd pool set-quota linux96  max_objects 0 
set-quota max_objects = 0 for pool linux96

[root@ceph141 ~]# ceph osd pool set-quota linux96  max_bytes 0

[root@ceph141 ~]# ceph osd pool get-quota linux96
quotas for pool 'linux96':
  max objects: N/A
  max bytes  : N/A

XV. RBD block device case study: MySQL

1. Create an image on the Ceph cluster

[root@ceph141 ~]# rbd create -s 500G dezyan/mysql80

2. Map and mount the Ceph image (block) device on the MySQL host

[root@elk93 ~]# rbd map dezyan/mysql80
[root@elk93 ~]# mkfs.xfs /dev/rbd1
[root@elk93 ~]# install -d /dezyan/data/mysql80 -o mysql -g mysql
[root@elk93 ~]# mount /dev/rbd1 /dezyan/data/mysql80/

3. Change the MySQL data directory

Stop the database, then update the init script:
[root@elk93 ~]# grep ^datadir= /etc/init.d/mysql.server 
datadir=/var/lib/mysql

[root@elk93 ~]# sed -ri '/^datadir=/s#/var/lib/mysql#/dezyan/data/mysql80#' /etc/init.d/mysql.server 

[root@elk93 ~]# grep ^datadir= /etc/init.d/mysql.server 
datadir=/dezyan/data/mysql80

4. Initialize the MySQL service

[root@elk93 ~]# cat /etc/my.cnf 
[mysqld]
basedir=/usr/local/mysql844
#datadir=/var/lib/mysql
datadir=/dezyan/data/mysql80
socket=/tmp/mysql80.sock
port=3306
mysql_native_password=on

[client]
socket=/tmp/mysql80.sock

[root@elk93 ~]# mysqld --initialize-insecure  --user=mysql  --datadir=/dezyan/data/mysql80  --basedir=/usr/local/mysql844

5. Start the MySQL service

[root@elk93 ~]# /etc/init.d/mysql.server start

6. Verify

[root@elk93 ~]# mysql
mysql> CREATE DATABASE db01;
Query OK, 1 row affected (0.02 sec)

mysql> CREATE DATABASE db02;
Query OK, 1 row affected (0.01 sec)

mysql> CREATE DATABASE db03;
Query OK, 1 row affected (0.01 sec)

mysql> QUIT
Bye

XVI. RBD block device case study: Harbor

1. Create the block device

[root@ceph141 ~]# rbd create -s 1T dezyan/harbor

2. Map and mount the Ceph device on the Harbor server

[root@elk93 harbor]# rbd map dezyan/harbor
/dev/rbd2
[root@elk93 harbor]# mkfs.xfs /dev/rbd2
[root@elk93 harbor]# install -d /dezyan/data/harbor-2025
[root@elk93 harbor]# mount /dev/rbd2 /dezyan/data/harbor-2025

3. Update the Harbor configuration file

[root@elk93 ~]# cd /usr/local/harbor/
[root@elk93 harbor]# grep ^data_volume harbor.yml
data_volume: /dezyan/data/harbor
[root@elk93 harbor]# sed -ri '/^data_volume/s#(/dezyan/data/harbor)#\1-2025#' harbor.yml
[root@elk93 harbor]# grep ^data_volume harbor.yml
data_volume: /dezyan/data/harbor-2025
[root@elk93 harbor]# ./install.sh 

4. Verify that Harbor is working
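
A minimal check (assuming Harbor was brought up with Docker Compose, as install.sh normally does) is to confirm that all containers are healthy, then log in to the web UI and push a test image:

[root@elk93 harbor]# docker compose ps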

5. Check the Ceph cluster's space usage
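
The cluster-wide and per-pool usage can be checked on any admin node with ceph df; because the harbor image is thin-provisioned, usage only grows as images are pushed:

[root@ceph141 ~]# ceph df
[root@ceph141 ~]# rbd du dezyan/harbor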

XVII. RBD block device case study: Prometheus

1. Create a block device on the server side

[root@ceph141 ~]# rbd create -s 500M dezyan/prometheus

2. Mount on the client (boot-time mounting omitted here for brevity; the device is mapped manually)

[root@prometheus-server31 prometheus-2.53.4.linux-amd64]# rbd map dezyan/prometheus
/dev/rbd2
[root@prometheus-server31 prometheus-2.53.4.linux-amd64]# mkfs.xfs /dev/rbd2 
[root@prometheus-server31 prometheus-2.53.4.linux-amd64]# mkdir /dezyan/data/prometheus-2025
[root@prometheus-server31 prometheus-2.53.4.linux-amd64]# mount /dev/rbd2 /dezyan/data/prometheus-2025

3. Point Prometheus at the new data directory

[root@prometheus-server31 ~]# cd /dezyan/softwares/prometheus-2.53.4.linux-amd64/

[root@prometheus-server31 prometheus-2.53.4.linux-amd64]# ll /dezyan/data/prometheus-2025/
total 4
drwxr-xr-x 2 root root    6 Apr  1 17:35 ./
drwxr-xr-x 4 root root 4096 Apr  1 17:36 ../

[root@prometheus-server31 prometheus-2.53.4.linux-amd64]# ./prometheus --storage.tsdb.path="/dezyan/data/prometheus-2025/" --web.listen-address="0.0.0.0:19090"

4. Access test

http://10.0.0.31:19090/

5. Verify that data was written successfully

[root@prometheus-server31 ~]# ll /dezyan/data/prometheus-2025/
total 24
drwxr-xr-x 4 root root    58 Apr  1 17:39 ./
drwxr-xr-x 4 root root  4096 Apr  1 17:36 ../
drwxr-xr-x 2 root root     6 Apr  1 17:38 chunks_head/
-rw-r--r-- 1 root root 20001 Apr  1 17:39 queries.active
drwxr-xr-x 2 root root    22 Apr  1 17:38 wal/
[root@prometheus-server31 ~]# 

XVIII. The Ceph block device trash (recycle bin) mechanism (recommended)

1. Prepare a test block device

[root@ceph141 ~]# rbd ls mysql
ubt-2204

2. View the trash list of a pool

[root@ceph141 ~]# rbd trash ls -p mysql
# View more detailed information about the trash
[root@ceph141 ~]# rbd trash ls -p mysql -l
ID            NAME      SOURCE  DELETED_AT                STATUS                               PARENT
d6cb675c29d6  ubt-2204  USER    Wed Apr  2 10:13:55 2025  expired at Wed Apr  2 10:13:55 2025

3. Move the block device to the trash to simulate deletion

[root@ceph141 ~]# rbd trash move mysql/ubt-2204
[root@ceph141 ~]# rbd trash ls -p mysql
d6cb675c29d6 ubt-2204

4. List the pool again (the block device is gone)

[root@ceph141 ~]# rbd  ls mysql
[root@ceph141 ~]# 

5. Restore the block device

# rbd trash restore -p <pool> --image <image> --image-id <id-from-trash>
[root@ceph141 ~]# rbd trash restore -p mysql --image ubt-2204 --image-id d6cb675c29d6

6. Verify that the restore succeeded

[root@ceph141 ~]# rbd ls mysql
ubt-2204
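
For reference, an image sitting in the trash can also be deleted permanently instead of restored, and rbd trash move accepts an expiration time that is honored by trash purging (a sketch; d6cb675c29d6 is the ID from the trash listing above):

# Permanently delete an image from the trash by its ID (instead of restoring it)
[root@ceph141 ~]# rbd trash rm mysql/d6cb675c29d6
# Optionally set an expiration time when moving an image to the trash
[root@ceph141 ~]# rbd trash move mysql/ubt-2204 --expires-at "2025-05-01"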

XIX. Limiting the number of snapshots on an RBD block device

1. Add a snapshot limit

1.1 View the snapshot details before adding the limit

# There is currently only one snapshot
[root@ceph141 ~]# rbd snap ls  mysql/ubt-2204
SNAPID  NAME    SIZE    PROTECTED  TIMESTAMP               
     3  snap01  10 MiB             Wed Apr  2 10:23:30 2025
# View the block device details
[root@ceph141 ~]# rbd info mysql/ubt-2204
rbd image 'ubt-2204':
	size 10 MiB in 3 objects
	order 22 (4 MiB objects)
	snapshot_count: 1
	id: d6cb675c29d6
	block_name_prefix: rbd_data.d6cb675c29d6
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Wed Apr  2 10:13:10 2025
	access_timestamp: Wed Apr  2 10:13:10 2025
	modify_timestamp: Wed Apr  2 10:13:10 2025

1.2 Add the snapshot limit

[root@ceph141 ~]# rbd snap limit set mysql/ubt-2204 --limit 2
# View the block device details
[root@ceph141 ~]#  rbd info mysql/ubt-2204
rbd image 'ubt-2204':
	size 10 MiB in 3 objects
	order 22 (4 MiB objects)
	snapshot_count: 2
	id: d6cb675c29d6
	block_name_prefix: rbd_data.d6cb675c29d6
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Wed Apr  2 10:13:10 2025
	access_timestamp: Wed Apr  2 10:13:10 2025
	modify_timestamp: Wed Apr  2 10:13:10 2025
	# The limit now appears in the output
	snapshot_limit: 2

2. Test creating snapshots

# The second snapshot is created successfully
[root@ceph141 ~]# rbd snap create mysql/ubt-2204@snap02
Creating snap: 100% complete...done.
# The third hits the limit and fails
[root@ceph141 ~]# rbd snap create mysql/ubt-2204@snap03
Creating snap: 10% complete...failed.
rbd: failed to create snapshot: (122) Disk quota exceeded

3. Clear the snapshot limit

[root@ceph141 ~]# rbd  snap limit clear mysql/ubt-2204

4. Test again; the snapshot can now be created

[root@ceph141 ~]#  rbd snap create mysql/ubt-2204@snap03
Creating snap: 100% complete...done.

5. Delete all snapshots

# Delete a single snapshot
## Method 1
[root@ceph141 ~]# rbd snap  rm mysql/ubt-2204@snap01
Removing snap: 100% complete...done.
## Method 2
[root@ceph141 ~]# rbd snap rm mysql/ubt-2204 --snap snap02
Removing snap: 100% complete...done.

# Delete all snapshots
[root@ceph141 ~]# rbd snap purge mysql/ubt-2204
Removing all snapshots: 100% complete...done.
