Integrating openstack-cinder with Ceph

Ceph side

Create two pools, vms and volumes.
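
A minimal sketch, assuming a pg_num of 128 (size this for your own cluster; rbd pool init needs Luminous or later):

ceph osd pool create volumes 128
ceph osd pool create vms 128
rbd pool init volumes
rbd pool init vms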

Create the openstack client authorization.

ceph auth get-or-create client.openstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms'
ceph auth get-or-create client.openstack > /etc/ceph/ceph.client.openstack.keyring
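
To confirm the key and caps were created as intended:

ceph auth get client.openstack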

Copy /etc/ceph/ceph.client.openstack.keyring to /etc/ceph on the controller and compute nodes.

ceph auth get-key client.openstack > client.openstack.key

Copy client.openstack.key to /etc/ceph on the compute nodes, as sketched below.
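
One way to distribute the files, assuming the hostnames controller and compute1 (adjust to your environment):

scp /etc/ceph/ceph.client.openstack.keyring controller:/etc/ceph/
scp /etc/ceph/ceph.client.openstack.keyring compute1:/etc/ceph/
scp client.openstack.key compute1:/etc/ceph/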

Controller side

chown cinder:cinder /etc/ceph/ceph.client.openstack.keyring

Compute side

chown nova:nova /etc/ceph/ceph.client.openstack.keyring

Generate a UUID, use it in the steps below, and record it (the same value goes into cinder.conf as rbd_secret_uuid later).
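
uuidgen is a handy way to produce one; the 5675b467-... value used below came from such a run:

uuidgen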

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>5675b467-b203-4b8a-825c-9b43f0102eb8</uuid>
  <usage type='ceph'>
    <name>client.openstack secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret 5675b467-b203-4b8a-825c-9b43f0102eb8 --base64 $(cat client.openstack.key) && rm -f client.openstack.key secret.xml
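
To verify that libvirt stored the secret:

virsh secret-list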

cinder-volume side

Edit /etc/cinder/cinder.conf. Note that rbd_user must match the Ceph client created earlier (openstack) and rbd_secret_uuid must be the UUID of the libvirt secret defined above; default_volume_type and glance_api_version belong in [DEFAULT], not in the backend section. An [lvm] backend section is assumed to exist already if lvm stays in enabled_backends.

[DEFAULT]
enabled_backends = lvm,ceph
default_volume_type = ceph
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = openstack
rbd_secret_uuid = 5675b467-b203-4b8a-825c-9b43f0102eb8

Restart the cinder-volume service.

systemctl restart openstack-cinder-volume
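
Because default_volume_type = ceph, a matching volume type must exist. A minimal sketch using the openstack CLI (the volume name test-ceph is arbitrary):

openstack volume type create ceph
openstack volume type set --property volume_backend_name=ceph ceph
openstack volume service list    # cinder-volume for the ceph backend should be up
openstack volume create --type ceph --size 1 test-ceph
rbd -n client.openstack ls volumes    # the new volume should appear here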