Removing a Ceph OSD

The following session removes osd.0 from the cluster and wipes its backing device so the partition can be reused.

First, mark the OSD out so Ceph migrates its data elsewhere. Wait for rebalancing to finish (check with ceph -s) before stopping the daemon.

[ceph@storage-ceph02 ~]$ ceph osd out 0
marked out osd.0.

Stop the OSD daemon, then remove the OSD from the CRUSH map, delete its authentication key, and finally remove it from the cluster:

[ceph@storage-ceph02 ~]$ sudo systemctl stop ceph-osd@0.service

[ceph@storage-ceph02 ~]$ ceph osd crush remove osd.0
removed item id 0 name 'osd.0' from crush map

[ceph@storage-ceph02 ~]$ ceph auth del osd.0
updated

[ceph@storage-ceph02 ~]$ ceph osd rm 0
removed osd.0

The OSD is gone from the cluster, but lsblk shows its LVM volume is still mapped on the host:

[ceph@storage-ceph02 ~]$ lsblk
NAME                                                                                                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                                                                                                      11:0    1 1024M  0 rom  
vda                                                                                                     253:0    0  128G  0 disk 
├─vda1                                                                                                  253:1    0    1G  0 part /boot
├─vda2                                                                                                  253:2    0    8G  0 part [SWAP]
├─vda3                                                                                                  253:3    0   50G  0 part /
├─vda4                                                                                                  253:4    0  512B  0 part 
└─vda5                                                                                                  253:5    0   69G  0 part 
  └─ceph--43a319ff--96e0--4115--91d0--09a44cc3a6b8-osd--block--019e7cc2--d1b9--4756--894f--1dfc8aef98ca 252:0    0   69G  0 lvm  

Tear down the device-mapper mapping (while it exists, the partition stays busy and cannot be wiped or repartitioned):

[ceph@storage-ceph02 ~]$ sudo dmsetup remove ceph--43a319ff--96e0--4115--91d0--09a44cc3a6b8-osd--block--019e7cc2--d1b9--4756--894f--1dfc8aef98ca

Finally, erase the LVM signature from the partition so it can be reused:

[ceph@storage-ceph02 ~]$ sudo wipefs -af /dev/vda5
/dev/vda5: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
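The steps above can be collected into one script. This is a minimal sketch, not from the original post: the remove_osd function name is hypothetical, and it uses the "ceph osd safe-to-destroy" check (available since Luminous) in place of manually watching ceph -s. Review and adapt it before running against a real cluster.

```shell
#!/usr/bin/env bash
set -euo pipefail

# remove_osd <osd-id> <data-partition>, e.g. remove_osd 0 /dev/vda5
# Hypothetical helper consolidating the steps shown above.
remove_osd() {
  local osd_id="$1" part="$2"

  ceph osd out "${osd_id}"
  # Wait until Ceph reports the OSD safe to remove (data fully migrated).
  while ! ceph osd safe-to-destroy "osd.${osd_id}"; do sleep 10; done

  sudo systemctl stop "ceph-osd@${osd_id}.service"
  ceph osd crush remove "osd.${osd_id}"
  ceph auth del "osd.${osd_id}"
  ceph osd rm "${osd_id}"

  # Tear down the leftover device-mapper mapping, then wipe LVM signatures.
  local dm_name
  dm_name="$(lsblk -ln -o NAME "${part}" | sed -n '2p')"
  if [ -n "${dm_name}" ]; then
    sudo dmsetup remove "${dm_name}"
  fi
  sudo wipefs -af "${part}"
}
```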
posted @ 2022-02-20 23:32 jiaxzeng