Ceph RBD Block Storage Usage in Detail [Part 9]

Creating a Storage Pool

##Ceph can simultaneously provide RADOSGW (object storage gateway), RBD (block storage), and CephFS (file system storage).
#RBD is short for RADOS Block Device. RBD block storage is one of the most commonly used storage types; an RBD block device behaves like a disk and can be mounted.
#RBD block devices support snapshots, multiple replicas, cloning, and consistency; data is striped across multiple OSDs in the Ceph cluster.
#Striping is a technique that automatically balances I/O load across multiple physical disks: a contiguous block of data is split into many small pieces that are stored on different disks.
#This lets multiple processes access different parts of the data at the same time without disk contention, and sequential access to such data gains the maximum degree of I/O parallelism,
#which yields very good performance.
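The striping idea above can be made concrete with a little arithmetic. With RBD's default layout (stripe_count = 1 and 4 MiB objects, i.e. order 22), a byte offset inside an image maps directly to object number offset / object_size. A minimal sketch, assuming that default object size:

```shell
#!/usr/bin/env bash
# Which RADOS object does a given byte offset of an RBD image land in?
# Assumes the default layout: 4 MiB objects (order 22), stripe_count=1.
object_size=$((4 * 1024 * 1024))

object_for_offset() {
  local offset="$1"
  echo $(( offset / object_size ))
}

object_for_offset 0                          # start of the image -> object 0
object_for_offset $((5 * 1024 * 1024))       # 5 MiB in -> object 1
object_for_offset $((1024 * 1024 * 1024))    # 1 GiB in -> object 256
```

With a larger stripe_count, small stripe units would instead round-robin across a set of objects, which is what spreads sequential I/O over many OSDs.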

cephadmin@ceph-deploy:~$ ceph osd pool create rbd-ibm 32 32 #create the storage pool
pool 'rbd-ibm' created
cephadmin@ceph-deploy:~$ ceph osd pool ls #list the pools to verify
device_health_metrics
mypool
cephfs-metadata
cephfs-data
rbd-ibm
cephadmin@ceph-deploy:~$ ceph osd pool application enable rbd-ibm rbd #enable the rbd application on the pool
enabled application 'rbd' on pool 'rbd-ibm'
cephadmin@ceph-deploy:~$ rbd pool init -p rbd-ibm #initialize the pool for rbd
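The pg_num of 32 used above is a fixed choice. A common rule of thumb (a sizing guideline, not a Ceph API) targets roughly 100 PGs per OSD divided by the replica count, rounded down to a power of two, shared among all pools. A sketch of that arithmetic:

```shell
#!/usr/bin/env bash
# Rule-of-thumb PG count: (osds * 100 / replicas), rounded down to a power of two.
# This is only a sizing guideline; the pg_autoscaler can manage pg_num automatically.
suggest_pg_num() {
  local osds="$1" replicas="$2"
  local target=$(( osds * 100 / replicas ))
  local pg=1
  while [ $(( pg * 2 )) -le "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}

suggest_pg_num 9 3   # 9 OSDs, 3 replicas -> target 300 -> 256
```

For the 9-OSD cluster in this article that suggests about 256 PGs cluster-wide, so 32 for one of several pools is in a sensible range.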

 Creating Images

#An RBD pool cannot be used as a block device directly; images must first be created in it as needed, and the image file is what gets used as the block device.
#The rbd command can create, view, and delete block-device images, and also clone images, create snapshots, roll an image back to a snapshot, view snapshots, and perform other management operations.
#For example, the commands below create images such as data-img1 in the rbd-ibm pool:
cephadmin@ceph-deploy:~$ rbd help create #usage of the image-creation command
usage: rbd create [--pool <pool>] [--namespace <namespace>] [--image <image>]
                  [--image-format <image-format>] [--new-format]
                  [--order <order>] [--object-size <object-size>]
                  [--image-feature <image-feature>] [--image-shared]
                  [--stripe-unit <stripe-unit>]
                  [--stripe-count <stripe-count>] [--data-pool <data-pool>]
                  [--mirror-image-mode <mirror-image-mode>]
                  [--journal-splay-width <journal-splay-width>]
                  [--journal-object-size <journal-object-size>]
                  [--journal-pool <journal-pool>]
                  [--thick-provision] --size <size> [--no-progress]
                  <image-spec>

Create an empty image.

Positional arguments
  <image-spec>              image specification
                            (example: [<pool-name>/[<namespace>/]]<image-name>)

Optional arguments
  -p [ --pool ] arg         pool name
  --namespace arg           namespace name
  --image arg               image name
  --image-format arg        image format [default: 2]
  --object-size arg         object size in B/K/M [4K <= object size <= 32M]
  --image-feature arg       image features
                            [layering(+), exclusive-lock(+*), object-map(+*),
                            deep-flatten(+-), journaling(*)]
  --image-shared            shared image
  --stripe-unit arg         stripe unit in B/K/M
  --stripe-count arg        stripe count
  --data-pool arg           data pool
  --mirror-image-mode arg   mirror image mode [journal or snapshot]
  --journal-splay-width arg number of active journal objects
  --journal-object-size arg size of journal objects [4K <= size <= 64M]
  --journal-pool arg        pool for journal objects
  --thick-provision         fully allocate storage and zero image
  -s [ --size ] arg         image size (in M/G/T) [default: M]
  --no-progress             disable progress output

Image Features:
  (*) supports enabling/disabling on existing images
  (-) supports disabling-only on existing images
  (+) enabled by default for new images if features not specified

cephadmin@ceph-deploy:~$ rbd create data-img1 --size 3G --pool rbd-ibm --image-format 2 --image-feature layering #create an image named data-img1
cephadmin@ceph-deploy:~$ rbd create data-img2 --size 5G --pool rbd-ibm --image-format 2 --image-feature layering #create an image named data-img2
cephadmin@ceph-deploy:~$ rbd ls --pool rbd-ibm #list the images to verify
data-img1
data-img2
cephadmin@ceph-deploy:~$ rbd ls --pool rbd-ibm -l #list the images with more detail
NAME       SIZE   PARENT  FMT  PROT  LOCK
data-img1  3 GiB            2
data-img2  5 GiB            2
cephadmin@ceph-deploy:~$ rbd --image data-img1 --pool rbd-ibm info #show detailed image information
rbd image 'data-img1':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: d5966048d97b
        block_name_prefix: rbd_data.d5966048d97b
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Wed Mar 13 13:55:12 2024
        access_timestamp: Wed Mar 13 13:55:12 2024
        modify_timestamp: Wed Mar 13 13:55:12 2024
cephadmin@ceph-deploy:~$ rbd --image data-img2 --pool rbd-ibm info #show detailed image information
rbd image 'data-img2':
        size 5 GiB in 1280 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: d59f2b9eae3
        block_name_prefix: rbd_data.d59f2b9eae3
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Wed Mar 13 13:55:19 2024
        access_timestamp: Wed Mar 13 13:55:19 2024
        modify_timestamp: Wed Mar 13 13:55:19 2024
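The two `rbd info` outputs are easy to cross-check: order 22 means 2^22-byte (4 MiB) objects, and the object count is simply the image size divided by the object size. A quick sanity check of the numbers reported above:

```shell
#!/usr/bin/env bash
# Cross-check the numbers reported by `rbd info`:
# order 22 -> object size 2^22 bytes; object count = image size / object size.
order=22
object_size=$(( 1 << order ))          # 4194304 bytes = 4 MiB

objects_in_image() {
  local size_gib="$1"
  echo $(( size_gib * 1024 * 1024 * 1024 / object_size ))
}

echo "object size: $object_size bytes"
echo "data-img1 (3 GiB): $(objects_in_image 3) objects"
echo "data-img2 (5 GiB): $(objects_in_image 5) objects"
```

The results (768 and 1280) match the "size ... in N objects" lines printed by `rbd info`.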

cephadmin@ceph-deploy:~$ rbd ls --pool rbd-ibm -l --format json --pretty-format #show image information in JSON format
[
    {
        "image": "data-img1",
        "id": "d5966048d97b",
        "size": 3221225472,
        "format": 2
    },
    {
        "image": "data-img2",
        "id": "d59f2b9eae3",
        "size": 5368709120,
        "format": 2
    }
]
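The JSON form is convenient for scripting. A sketch of post-processing it into human-readable sizes; the heredoc stands in for output saved from the `rbd ls` command above, and python3 is assumed to be available on the client:

```shell
#!/usr/bin/env bash
# Turn the JSON listing from `rbd ls -l --format json` into readable sizes.
# The heredoc substitutes for real output captured from the command.
cat > /tmp/rbd-ls.json <<'EOF'
[
  {"image": "data-img1", "id": "d5966048d97b", "size": 3221225472, "format": 2},
  {"image": "data-img2", "id": "d59f2b9eae3",  "size": 5368709120, "format": 2}
]
EOF

python3 - /tmp/rbd-ls.json <<'PY'
import json, sys
for img in json.load(open(sys.argv[1])):
    print(f"{img['image']}: {img['size'] / 2**30:.1f} GiB")
PY
```

This prints one line per image, e.g. "data-img1: 3.0 GiB", since the size field is in bytes.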

##Other image features
#Feature overview
layering: layered snapshot support, used for snapshots and copy-on-write. An image snapshot can be created and protected, and new images can be cloned from it; parent and child images share object data via COW.
striping: striping v2 support, similar to RAID 0 except that in Ceph the data is spread across different objects; it can improve performance in workloads dominated by sequential reads and writes.
exclusive-lock: exclusive-lock support, restricting an image to one client at a time.
object-map: object-map support (depends on exclusive-lock), which speeds up data import/export and used-space accounting. When enabled, a bitmap of all the image's objects is kept to record whether each object actually exists, which can accelerate I/O in some scenarios.
fast-diff: fast computation of differences between an image and its snapshots (depends on object-map).
deep-flatten: snapshot flattening support, used to resolve snapshot dependencies during snapshot management.
journaling: whether modifications are journaled. With this feature, data can be recovered by replaying the journal (depends on exclusive-lock); enabling it increases the system's disk I/O.
Features enabled by default since jewel: layering/exclusive-lock/object-map/fast-diff/deep-flatten
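Internally these features are stored on the image as a bitmask, the same encoding used by the `rbd_default_features` config option. A small decoder, assuming the librbd bit values (layering=1, striping=2, exclusive-lock=4, object-map=8, fast-diff=16, deep-flatten=32, journaling=64):

```shell
#!/usr/bin/env bash
# Decode an RBD feature bitmask into feature names.
# Bit values follow the librbd constants; 61 is the usual default feature set.
decode_features() {
  local mask="$1" out=""
  local names=(layering striping exclusive-lock object-map fast-diff deep-flatten journaling)
  for i in "${!names[@]}"; do
    if (( mask & (1 << i) )); then out="$out ${names[$i]}"; fi
  done
  echo "${out# }"
}

decode_features 61   # the post-jewel default set
```

Decoding 61 yields exactly the default set listed above: layering, exclusive-lock, object-map, fast-diff, deep-flatten.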
#Enabling image features
cephadmin@ceph-deploy:~$ rbd feature enable exclusive-lock --pool rbd-ibm --image data-img1 #enable the feature on the given image in the pool
cephadmin@ceph-deploy:~$ rbd feature enable object-map --pool rbd-ibm --image data-img1     #enable the feature on the given image in the pool
cephadmin@ceph-deploy:~$ rbd --image data-img1 --pool rbd-ibm info #verify the image features
rbd image 'data-img1':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: d5966048d97b
        block_name_prefix: rbd_data.d5966048d97b
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff
        op_features:
        flags: object map invalid, fast diff invalid
        create_timestamp: Wed Mar 13 13:55:12 2024
        access_timestamp: Wed Mar 13 13:55:12 2024
        modify_timestamp: Wed Mar 13 13:55:12 2024
cephadmin@ceph-deploy:~$ rbd feature disable object-map --pool rbd-ibm --image data-img1 #disable a feature on an image in the pool
cephadmin@ceph-deploy:~$ rbd --image data-img1 --pool rbd-ibm info #verify the image features
rbd image 'data-img1':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: d5966048d97b
        block_name_prefix: rbd_data.d5966048d97b
        format: 2
        features: layering, exclusive-lock
        op_features:
        flags:
        create_timestamp: Wed Mar 13 13:55:12 2024
        access_timestamp: Wed Mar 13 13:55:12 2024
        modify_timestamp: Wed Mar 13 13:55:12 2024

Configuring a Client to Mount RBD with the admin User

#Mount the /IBM1 directory using the admin user
cephadmin@ceph-deploy:~$ ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    270 GiB  267 GiB  2.9 GiB   2.9 GiB       1.06
TOTAL  270 GiB  267 GiB  2.9 GiB   2.9 GiB       1.06

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0     84 GiB
mypool                  2   64    628 B        2   12 KiB      0     84 GiB
cephfs-metadata         3   32   75 KiB       23  315 KiB      0     84 GiB
cephfs-data             4   64  100 MiB       25  300 MiB   0.12     84 GiB
rbd-ibm                 6   32     50 B        6   36 KiB      0     84 GiB

#Install ceph-common on a CentOS 7 client
[root@k8s-haproxy02 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

[root@k8s-haproxy02 ~]# yum install epel-release
[root@k8s-haproxy02 ~]# yum install -y https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
[root@k8s-haproxy02 ~]# yum install -y ceph-common
cephadmin@ceph-deploy:~$ scp ceph.conf ceph.client.admin.keyring root@192.168.40.110:/etc/ceph #sync the auth files from the deploy server

[root@k8s-haproxy02 ceph]# rbd -p rbd-ibm map data-img1 #this may fail; if so, disable some features: rbd feature disable rbd-ibm/data-img1 object-map
/dev/rbd0
[root@k8s-haproxy02 ceph]# rbd -p rbd-ibm map data-img2
/dev/rbd1
[root@k8s-haproxy02 ceph]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sr0              11:0    1 1024M  0 rom
rbd0            252:0    0    3G  0 disk
rbd1            252:16   0    5G  0 disk
[root@k8s-haproxy02 ceph]# mkfs.xfs /dev/rbd0
Discarding blocks...Done.
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=98304 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=786432, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@k8s-haproxy02 ceph]# mkfs.xfs /dev/rbd1
Discarding blocks...Done.
meta-data=/dev/rbd1              isize=512    agcount=8, agsize=163840 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@k8s-haproxy02 ceph]# mkdir /IBM1 /IBM2 -p
[root@k8s-haproxy02 ceph]# mount /dev/rbd0 /IBM1/
[root@k8s-haproxy02 ceph]# mount /dev/rbd1 /IBM2/

##Verify data writes from the client
#Install docker and create a mysql container to verify that container data can be written to the rbd-mounted path /IBM1
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install -y docker-ce
service docker start
docker run -it -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=sheca -v /IBM1:/var/lib/mysql mysql:5.6.46
ll /IBM1/
[root@k8s-haproxy02 ceph]# ll /IBM1
total 110604
-rw-rw---- 1 polkitd input       56 Mar 13 15:24 auto.cnf
-rw-rw---- 1 polkitd input 12582912 Mar 13 15:24 ibdata1
-rw-rw---- 1 polkitd input 50331648 Mar 13 15:24 ib_logfile0
-rw-rw---- 1 polkitd input 50331648 Mar 13 15:23 ib_logfile1
drwx------ 2 polkitd input     4096 Mar 13 15:24 mysql
drwx------ 2 polkitd input     4096 Mar 13 15:23 performance_schema
#Verify mysql access
[root@k8s-haproxy02 ceph]# yum install -y mysql
[root@k8s-haproxy02 ceph]# mysql -uroot -psheca -h 192.168.40.110
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

MySQL [(none)]> create database mydatabase;
Query OK, 1 row affected (0.00 sec)

cephadmin@ceph-deploy:~$ ceph df #verify that pool usage has grown
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    270 GiB  267 GiB  3.3 GiB   3.3 GiB       1.22
TOTAL  270 GiB  267 GiB  3.3 GiB   3.3 GiB       1.22

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0     84 GiB
mypool                  2   64    628 B        2   12 KiB      0     84 GiB
cephfs-metadata         3   32   75 KiB       23  315 KiB      0     84 GiB
cephfs-data             4   64  100 MiB       25  300 MiB   0.12     84 GiB
rbd-ibm                 6   32  136 MiB       60  409 MiB   0.16     84 GiB

Configuring a Client to Mount RBD with a Normal User

#Mount the /IBM2 directory using a normal user
cephadmin@ceph-deploy:~$ ceph auth add client.dzzz mon 'allow r' osd 'allow rwx pool=rbd-ibm'
added key for client.dzzz
cephadmin@ceph-deploy:~$ ceph auth get client.dzzz
[client.dzzz]
        key = AQCtWPFlDatSAxAADaqJjSC1iW/r6BYE+Eg4hQ==
        caps mon = "allow r"
        caps osd = "allow rwx pool=rbd-ibm"
exported keyring for client.dzzz
cephadmin@ceph-deploy:~$ ceph-authtool --create-keyring ceph.client.dzzz.keyring
creating ceph.client.dzzz.keyring
cephadmin@ceph-deploy:~$ ceph auth get client.dzzz -o ceph.client.dzzz.keyring
exported keyring for client.dzzz
cephadmin@ceph-deploy:~$ cat ceph.client.dzzz.keyring
[client.dzzz]
        key = AQCtWPFlDatSAxAADaqJjSC1iW/r6BYE+Eg4hQ==
        caps mon = "allow r"
        caps osd = "allow rwx pool=rbd-ibm"

#Install the ceph client
CentOS:
[root@k8s-haproxy02 ~]# yum install epel-release
[root@k8s-haproxy02 ~]# yum install -y https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
[root@k8s-haproxy02 ~]# yum install -y ceph-common
cephadmin@ceph-deploy:~$ scp ceph.conf ceph.client.dzzz.keyring root@192.168.40.110:/etc/ceph #sync the auth files from the deploy server
[root@k8s-haproxy02 ceph]# cd  /etc/ceph/
[root@k8s-haproxy02 ceph]# ls
ceph.client.admin.keyring  ceph.client.dzzz.keyring  ceph.conf  rbdmap
[root@k8s-haproxy02 ceph]# ceph --user dzzz -s
  cluster:
    id:     0d8fb726-ee6d-4aaf-aeca-54c68e2584af
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled

  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 5h)
    mgr: ceph-mgr1(active, since 5h), standbys: ceph-mgr2
    mds: 1/1 daemons up
    osd: 9 osds: 9 up (since 5h), 9 in (since 30h)

  data:
    volumes: 1/1 healthy
    pools:   5 pools, 193 pgs
    objects: 110 objects, 252 MiB
    usage:   3.3 GiB used, 267 GiB / 270 GiB avail
    pgs:     193 active+clean

[root@k8s-haproxy02 ceph]# rbd --user dzzz -p rbd-ibm map data-img2
rbd: warning: image already mapped as /dev/rbd1
/dev/rbd2
[root@k8s-haproxy02 ceph]# fdisk -l /dev/rbd2

Disk /dev/rbd1: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

[root@k8s-haproxy02 ceph]# mkfs.ext4 /dev/rbd2
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
327680 inodes, 1310720 blocks
65536 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@k8s-haproxy02 ceph]# mount /dev/rbd2 /IBM2
[root@k8s-haproxy02 ceph]# cp /var/log/messages /IBM2/
[root@k8s-haproxy02 ceph]# ll /IBM2/
total 1100
drwx------ 2 root root   16384 Mar 13 15:46 lost+found
-rw------- 1 root root 1108262 Mar 13 15:47 messages
[root@k8s-haproxy02 ceph]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  1.1G     0  1.1G   0% /dev
tmpfs                   tmpfs     1.1G     0  1.1G   0% /dev/shm
tmpfs                   tmpfs     1.1G   11M  1.1G   1% /run
tmpfs                   tmpfs     1.1G     0  1.1G   0% /sys/fs/cgroup
/dev/mapper/centos-root xfs        19G  3.1G   16G  17% /
/dev/sda1               xfs       1.1G  144M  920M  14% /boot
tmpfs                   tmpfs     208M     0  208M   0% /run/user/0
/dev/rbd0               xfs       3.3G  156M  3.1G   5% /IBM1
/dev/rbd2               ext4      5.2G   23M  4.9G   1% /IBM2
overlay                 overlay    19G  3.1G   16G  17% /var/lib/docker/overlay2/56a8e7548b8938ff8f76f3a9d5fa4b56b798459a775b93b4dc2ebc30b248e5a6/merged
[root@k8s-haproxy02 ceph]# rbd ls -p rbd-ibm -l
NAME       SIZE   PARENT  FMT  PROT  LOCK
data-img1  3 GiB            2        excl #exclusive lock held: the image is mapped by a client
data-img2  5 GiB            2

##Verify the ceph kernel modules:
#After an rbd device is mapped, the kernel automatically loads the libceph.ko module
CentOS:
[root@k8s-haproxy02 ceph]# lsmod | grep ceph
libceph               306750  1 rbd
dns_resolver           13140  1 libceph
libcrc32c              12644  5 xfs,ip_vs,libceph,nf_nat,nf_conntrack

Resizing an RBD Image

#Space can be expanded; shrinking is not recommended
cephadmin@ceph-deploy:~$  rbd ls -p rbd-ibm -l
NAME       SIZE   PARENT  FMT  PROT  LOCK
data-img1  3 GiB            2        excl
data-img2  5 GiB            2
cephadmin@ceph-deploy:~$  rbd resize --pool rbd-ibm --image data-img2 --size 6G
Resizing image: 100% complete...done.
cephadmin@ceph-deploy:~$  rbd ls -p rbd-ibm -l
NAME       SIZE   PARENT  FMT  PROT  LOCK
data-img1  3 GiB            2        excl
data-img2  6 GiB            2
[root@k8s-haproxy02 ceph]# fdisk -l /dev/rbd2

Disk /dev/rbd2: 6442 MB, 6442450944 bytes, 12582912 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

#On the node, the filesystem must be grown to recognize the new device size
[root@k8s-haproxy02 ceph]# resize2fs /dev/rbd2 #for ext4
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/rbd2 is mounted on /IBM2; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/rbd2 is now 1572864 blocks long.

[root@k8s-haproxy02 ceph]# df -h | grep rbd2
/dev/rbd2                5.8G   22M  5.5G   1% /IBM2
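The grow step is filesystem-specific: ext4 grows online with resize2fs on the device node, while the XFS filesystem on /dev/rbd0 would need xfs_growfs on its mount point instead. A dry-run sketch (grow_fs is a hypothetical helper that only prints the command it would run):

```shell
#!/usr/bin/env bash
# Hypothetical helper: print the right grow command for a freshly resized RBD
# device. ext4 is grown via the device node, xfs via the mount point.
grow_fs() {
  local fstype="$1" dev="$2" mountpoint="$3"
  case "$fstype" in
    ext4) echo "resize2fs $dev" ;;
    xfs)  echo "xfs_growfs $mountpoint" ;;
    *)    echo "unsupported filesystem: $fstype" >&2; return 1 ;;
  esac
}

grow_fs ext4 /dev/rbd2 /IBM2   # -> resize2fs /dev/rbd2
grow_fs xfs  /dev/rbd0 /IBM1   # -> xfs_growfs /IBM1
```

In both cases the filesystem can be grown while mounted; shrinking would require unmounting and is not supported at all for XFS.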

Mounting Automatically at Boot

[root@k8s-haproxy02 ceph]# cat /etc/rc.d/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
rbd --user dzzz -p rbd-ibm map data-img2
mount /dev/rbd0 /IBM2

[root@k8s-haproxy02 ceph]# chmod a+x /etc/rc.d/rc.local
[root@k8s-haproxy02 ceph]# reboot

[root@k8s-haproxy02 IBM2]# ll /IBM2/
total 1100
drwx------ 2 root root   16384 Mar 13 15:46 lost+found
-rw------- 1 root root 1108262 Mar 13 15:47 messages

#View the mappings
[root@k8s-haproxy02 IBM2]# rbd showmapped
id  pool     namespace  image      snap  device
0   rbd-ibm             data-img2  -     /dev/rbd0
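One caveat with the rc.local approach: the /dev/rbdN name depends on mapping order, so a mount of /dev/rbd0 can point at the wrong image once more devices are mapped. ceph-common ships the rbdmap service for this: it maps the images listed in /etc/ceph/rbdmap at boot and udev exposes a stable /dev/rbd/&lt;pool&gt;/&lt;image&gt; symlink. A sketch of the two files involved, reusing the dzzz keyring from earlier (enable it with `systemctl enable rbdmap`):

```shell
# /etc/ceph/rbdmap -- one image per line: pool/image followed by map options
rbd-ibm/data-img2 id=dzzz,keyring=/etc/ceph/ceph.client.dzzz.keyring

# /etc/fstab -- mount via the stable udev symlink; noauto keeps systemd from
# racing the map, and the rbdmap service mounts matching entries after mapping
/dev/rbd/rbd-ibm/data-img2  /IBM2  ext4  noauto,_netdev  0  0
```

These are config fragments, not commands to run; the exact paths follow the rbdmap(8) convention.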

 Unmapping an RBD Image

[root@k8s-haproxy02 ~]# cd
[root@k8s-haproxy02 ~]# umount /IBM2
[root@k8s-haproxy02 ~]# rbd --user dzzz -p rbd-ibm unmap data-img2
[root@k8s-haproxy02 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 979M     0  979M   0% /dev
tmpfs                    991M     0  991M   0% /dev/shm
tmpfs                    991M  9.6M  981M   1% /run
tmpfs                    991M     0  991M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  2.9G   15G  17% /
/dev/sda1               1014M  138M  877M  14% /boot
tmpfs                    199M     0  199M   0% /run/user/0

Deleting an RBD Image

#Once an image is deleted its data is deleted as well and cannot be recovered, so be careful with delete operations.
cephadmin@ceph-deploy:~$ rbd rm --pool rbd-ibm --image data-img2 #delete the data-img2 image from pool rbd-ibm
Removing image: 100% complete...done.

The RBD Image Trash Mechanism

#Deleted image data cannot be recovered, but there is an alternative: move the image to the trash first, and delete it from the trash only once the deletion is confirmed.
cephadmin@ceph-deploy:~$ rbd status --pool rbd-ibm --image data-img1 #check usage: a client currently has the image mapped
Watchers:
        watcher=192.168.40.110:0/3226063138 client.54419 cookie=18446462598732840961

cephadmin@ceph-deploy:~$ rbd trash move --pool rbd-ibm --image data-img1 #move the image to the trash
cephadmin@ceph-deploy:~$ rbd trash list --pool rbd-ibm #list the images in the trash
d5966048d97b data-img1

#If the image is no longer needed, it can be deleted from the trash with remove
cephadmin@ceph-deploy:~$ rbd trash remove --pool rbd-ibm d5966048d97b

cephadmin@ceph-deploy:~$ rbd trash restore --pool rbd-ibm --image data-img1 --image-id d5966048d97b #restore the image from the trash
cephadmin@ceph-deploy:~$ rbd ls --pool rbd-ibm -l
NAME       SIZE   PARENT  FMT  PROT  LOCK
data-img1  3 GiB            2        excl

Image Snapshots

cephadmin@ceph-deploy:~$ rbd create data-oracle --size 1G --pool rbd-ibm --image-format 2 --image-feature layering
cephadmin@ceph-deploy:~$ ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    270 GiB  267 GiB  3.3 GiB   3.3 GiB       1.21
TOTAL  270 GiB  267 GiB  3.3 GiB   3.3 GiB       1.21

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0     84 GiB
mypool                  2   64    628 B        2   12 KiB      0     84 GiB
cephfs-metadata         3   32   75 KiB       23  315 KiB      0     84 GiB
cephfs-data             4   64  100 MiB       25  300 MiB   0.12     84 GiB
rbd-ibm                 6   32  126 MiB       48  378 MiB   0.15     84 GiB

cephadmin@ceph-deploy:~$ rbd ls --pool rbd-ibm
data-img1
data-oracle

cephadmin@ceph-deploy:~$ rbd ls --pool rbd-ibm -l
NAME         SIZE   PARENT  FMT  PROT  LOCK
data-img1    3 GiB            2
data-oracle  1 GiB            2

[root@k8s-haproxy02 ~]# rbd -p rbd-ibm map data-oracle
/dev/rbd2
[root@k8s-haproxy02 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sr0              11:0    1 1024M  0 rom
rbd0            252:0    0    3G  0 disk /IBM1
rbd1            252:16   0    3G  0 disk /IBM2
rbd2            252:32   0    1G  0 disk
[root@k8s-haproxy02 ~]#  mkfs.xfs /dev/rbd2
Discarding blocks...Done.
meta-data=/dev/rbd2              isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@k8s-haproxy02 ~]# mkdir /u01
[root@k8s-haproxy02 ~]# mount /dev/rbd2 /u01/
[root@k8s-haproxy02 u01]# echo "11111111" > test.txt
[root@k8s-haproxy02 u01]# ll
total 4
-rw-r--r-- 1 root root 9 Mar 13 16:26 test.txt
[root@k8s-haproxy02 u01]# cat test.txt
11111111
cephadmin@ceph-deploy:~$ rbd snap create --pool rbd-ibm --image data-oracle --snap oracle-data-img1-202403131616
Creating snap: 100% complete...done.
cephadmin@ceph-deploy:~$ rbd snap list --pool rbd-ibm --image data-oracle
SNAPID  NAME                           SIZE   PROTECTED  TIMESTAMP
     4  oracle-data-img1-202403131616  1 GiB             Wed Mar 13 16:27:12 2024
#Delete the file
[root@k8s-haproxy02 u01]# rm -rf /u01/test.txt
[root@k8s-haproxy02 u01]# ll /u01/
total 0
[root@k8s-haproxy02 u01]# cd
[root@k8s-haproxy02 ~]# umount /u01/
[root@k8s-haproxy02 ~]# rbd unmap /dev/rbd2

#Rollback command
cephadmin@ceph-deploy:~$ rbd snap rollback --pool rbd-ibm --image data-oracle --snap oracle-data-img1-202403131616
Rolling back to snapshot: 100% complete...done.
#Verify the data again from the client
[root@k8s-haproxy02 ~]# rbd -p rbd-ibm map data-oracle
/dev/rbd2
[root@k8s-haproxy02 ~]# mount /dev/rbd2 /u01/
[root@k8s-haproxy02 ~]# ll /u01/
total 0
-rw-r--r-- 1 root root 0 Mar 13 16:26 test.txt
[root@k8s-haproxy02 ~]# cat /u01/test.txt
111111

#Delete the snapshot
cephadmin@ceph-deploy:~$ rbd snap list --pool rbd-ibm --image data-oracle
SNAPID  NAME                           SIZE   PROTECTED  TIMESTAMP
     4  oracle-data-img1-202403131616  1 GiB             Wed Mar 13 16:27:12 2024
cephadmin@ceph-deploy:~$ rbd snap remove --pool rbd-ibm --image data-oracle --snap oracle-data-img1-202403131616
Removing snap: 100% complete...done.
cephadmin@ceph-deploy:~$ rbd snap list --pool rbd-ibm --image data-oracle
#Limit the number of snapshots
cephadmin@ceph-deploy:~$ rbd snap limit set --pool rbd-ibm --image data-oracle --limit 30
#Clear the snapshot count limit
cephadmin@ceph-deploy:~$ rbd snap limit clear --pool rbd-ibm --image data-oracle
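Snapshot names like oracle-data-img1-202403131616 above embed a timestamp; generating them the same way every time keeps them sortable and collision-free. A small helper (the name pattern is just this article's convention, nothing rbd requires):

```shell
#!/usr/bin/env bash
# Build a timestamped snapshot name in the pattern used above,
# e.g. data-oracle-202403131616.
snap_name() {
  printf '%s-%s\n' "$1" "$(date +%Y%m%d%H%M)"
}

snap_name data-oracle
# e.g.: rbd snap create --pool rbd-ibm --image data-oracle --snap "$(snap_name data-oracle)"
```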

 

posted @ 2024-03-13 11:19  しみずよしだ