Ceph Daily Administration

 

1. Check ceph-mgr status

systemctl status ceph-mgr@master.service
systemctl status ceph-mgr@node1.service
systemctl status ceph-mgr@node2.service

 

[root@master ~]# ceph mgr services
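
The active/standby split can also be checked from the cluster itself rather than per node; a minimal sketch using standard commands:

ceph mgr stat   # JSON with the active mgr's name and the number of standbys
ceph -s         # the "mgr:" line lists the active daemon and its standbys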

 

2. List pools

[root@master rbd]# ceph osd lspools
1 rbd-static-pool
2 dynamics-pool
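
Beyond the names, ceph osd pool ls detail prints each pool's settings in one listing:

ceph osd pool ls detail   # pool id, name, replicated size, pg_num, flags, ...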

  

3. When creating a pool, pg_num and pgp_num are usually set equal
ceph osd pool create mypool 128 128 # pg_num=128, pgp_num=128
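
pg_num can also be raised later on a live pool; on releases before Nautilus, pgp_num must then be raised to match by hand, while Nautilus and later adjust it automatically. A sketch, assuming the mypool pool created above:

ceph osd pool set mypool pg_num 256    # split data into more placement groups
ceph osd pool set mypool pgp_num 256   # let data rebalance into the new PGs (pre-Nautilus)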

 

4. Get pool configuration values

[root@master rbd]# ceph osd pool get dynamics-pool pg_num
pg_num: 8
[root@master rbd]# ceph osd pool get dynamics-pool pgp_num
pgp_num: 8

pg_num is the "total number of rooms": it determines how much data can be stored.
pgp_num is the "number of rooms open for placement": it determines which rooms new data may be placed in.
When the two are equal, every room is free to be used.
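
Rather than querying keys one by one, every gettable parameter can be dumped at once:

ceph osd pool get dynamics-pool all   # prints size, pg_num, pgp_num, crush_rule, ...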

 

5. Create an OSD pool and view its quota

 

ceph osd pool create dynamics-pool 8
ceph osd pool get-quota dynamics-pool

[root@master rbd]# ceph osd pool get-quota dynamics-pool
quotas for pool 'dynamics-pool':
  max objects: N/A
  max bytes  : N/A
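
"N/A" means no quota is set. Quotas can be applied with set-quota and removed by setting the value back to 0; a sketch with example limits:

ceph osd pool set-quota dynamics-pool max_objects 10000        # cap the object count
ceph osd pool set-quota dynamics-pool max_bytes 10737418240    # cap the size at 10 GiB
ceph osd pool set-quota dynamics-pool max_objects 0            # 0 removes a quota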

 

6. Delete a pool

[root@master /]# ceph osd pool delete rbd-static-pool
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool rbd-static-pool.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.

[root@master /]# ceph osd pool delete rbd-static-pool rbd-static-pool --yes-i-really-really-mean-it
pool 'rbd-static-pool' removed

Deletion only succeeds because the monitors allow it; check that mon_allow_pool_delete is enabled:

[root@master /]# cat /etc/ceph/ceph.conf|grep mon_allow_pool_delete
mon_allow_pool_delete = true
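
If the flag is not yet set, it can be enabled on the running monitors without editing ceph.conf; a sketch (the ceph config form requires Mimic or later):

ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=true'   # apply to running mons
ceph config set mon mon_allow_pool_delete true                # persist in the config database (Mimic+)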

 

7. Delete a CephFS filesystem

[root@master ceph]# ceph fs rm cephfs --yes-i-really-mean-it
Error EINVAL: all MDS daemons must be inactive/failed before removing filesystem. See `ceph fs fail`.
Solution:
Stop the MDS daemons first:
systemctl stop ceph-mds@master
systemctl stop ceph-mds@node1
systemctl stop ceph-mds@node2
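
Once the MDS daemons are down, re-run the removal. Alternatively, following the hint in the error message, the filesystem can be marked failed without touching the systemd units; a sketch:

ceph fs fail cephfs                          # mark all MDS ranks failed
ceph fs rm cephfs --yes-i-really-mean-it     # now succeeds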

 
