OpenStack Train - 16. Integrating with Ceph Storage
1. Create the storage pools and users that OpenStack needs on the Ceph cluster (Ceph mon node)
Create the storage pools required by OpenStack
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128
rbd pool init volumes
rbd pool init images
rbd pool init backups
rbd pool init vms
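To confirm the pools were created, a quick check on the mon node:

ceph osd lspools    # should list volumes, images, backups and vms
ceph df             # shows per-pool usage and PG counts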
Synchronize the Ceph configuration file to the OpenStack nodes
ssh controller tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
ssh compute01 tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
ssh compute02 tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
Create the users and grant them permissions
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups'
ceph auth get-or-create client.glance >/etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder >/etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup >/etc/ceph/ceph.client.cinder-backup.keyring
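To double-check the capabilities that were granted, each client can be queried on the mon node, for example:

ceph auth get client.glance          # prints the key plus the mon/osd caps
ceph auth get client.cinder
ceph auth get client.cinder-backup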
Synchronize the user keyrings and change their ownership
ssh controller tee /etc/ceph/ceph.client.glance.keyring </etc/ceph/ceph.client.glance.keyring
ssh controller chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh controller tee /etc/ceph/ceph.client.cinder.keyring </etc/ceph/ceph.client.cinder.keyring
ssh controller chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ssh compute01 tee /etc/ceph/ceph.client.cinder.keyring </etc/ceph/ceph.client.cinder.keyring
ssh compute01 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ssh compute02 tee /etc/ceph/ceph.client.cinder.keyring </etc/ceph/ceph.client.cinder.keyring
ssh compute02 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ssh controller tee /etc/ceph/ceph.client.cinder-backup.keyring </etc/ceph/ceph.client.cinder-backup.keyring
ssh controller chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
2. Install the Ceph client (make sure the version matches the cluster) and configure the libvirt secret (storage and compute nodes)
rpm -ivh http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm    # L release (Luminous)
# rpm -ivh http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm     # M release (Mimic)
# rpm -ivh http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm  # N release (Nautilus)
# rpm -ivh http://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm   # O release (Octopus)
yum install -y ceph-common python-rbd
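Because version consistency matters here, it is worth confirming the installed client version after the install and comparing it with the cluster (for example against the output of "ceph versions" on a mon node):

ceph --version                               # Ceph client version on this node
rpm -qa | grep -E 'ceph-common|python-rbd'   # installed client packages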
Generate a UUID; for consistency across the cluster, every node can use the same UUID when adding the secret to libvirt
UUID=$(uuidgen)
echo $UUID
02aa5663-b0b5-453d-a0ce-1f24c61716c6    # example output; your UUID will differ
Generate secret.xml and synchronize it to all compute nodes (the synchronization and secret-import steps on the other nodes are omitted here)
cat > /etc/ceph/secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${UUID}</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
Add the secret to libvirt
virsh secret-define --file /etc/ceph/secret.xml
virsh secret-set-value --secret ${UUID} --base64 $(cat /etc/ceph/ceph.client.cinder.keyring | grep key | awk -F ' ' '{ print $3 }')
View the secret and its value after it has been added
virsh secret-list
virsh secret-get-value $UUID
3. Configure Glance to use Ceph as its backend store and verify (controller node)
Edit the glance-api configuration file
vim /etc/glance/glance-api.conf

[DEFAULT]
show_image_direct_url = True

[glance_store]
stores = rbd,file,http
default_store = rbd
filesystem_store_datadir = /var/lib/glance/images/
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
#rbd_store_chunk_size = 8

[paste_deploy]
flavor = keystone
Restart the glance-api service
systemctl restart openstack-glance-api.service
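A minimal verification, assuming admin credentials are sourced and a local test image file (the filename and image name below are just examples; raw images are preferred over qcow2 for copy-on-write cloning from Ceph):

openstack image create "cirros-ceph-test" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
rbd ls images --id glance    # the new image's ID should appear as an RBD image in the images pool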
4. Configure Cinder to use Ceph as a backend store and verify (controller and storage nodes)
Controller node: set the default volume type
vim /etc/cinder/cinder.conf

[DEFAULT]
default_volume_type = ceph
Controller node: restart the cinder-api and cinder-scheduler services
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
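After the restart, the scheduler should report as up (admin credentials assumed):

openstack volume service list    # cinder-scheduler should show state "up"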
Storage node: configure the Ceph backend
vim /etc/cinder/cinder.conf

[DEFAULT]
enabled_backends = ceph,lvm

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 02aa5663-b0b5-453d-a0ce-1f24c61716c6    # the UUID generated in step 2
Storage node: restart the cinder-volume service
systemctl restart openstack-cinder-volume.service
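Since default_volume_type is set to ceph on the controller, a volume type named ceph mapped to this backend also needs to exist. A minimal sketch of creating it and checking that new volumes land in the volumes pool (admin credentials assumed; the volume name is just an example):

openstack volume type create ceph
openstack volume type set --property volume_backend_name=ceph ceph
openstack volume create --size 1 --type ceph test-ceph-vol
openstack volume list          # test-ceph-vol should become "available"
rbd ls volumes --id cinder     # a volume-<UUID> image should appear in the volumes pool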
5. Configure Nova to integrate with Ceph (compute nodes)
Edit the configuration file
vim /etc/nova/nova.conf

[DEFAULT]
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"

[libvirt]
virt_type = kvm
inject_partition = -2
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
disk_cachemodes = "network=writeback"
rbd_user = cinder
rbd_secret_uuid = 02aa5663-b0b5-453d-a0ce-1f24c61716c6    # the UUID generated in step 2
Restart the nova-compute service
systemctl restart openstack-nova-compute.service
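To check that new instances are backed by RBD, boot a test instance and list the vms pool; the flavor, image, and network names below are placeholders for your environment:

openstack server create --flavor m1.tiny --image cirros-ceph-test --network <your-network> test-ceph-vm
rbd ls vms --id cinder    # an <instance-uuid>_disk image should appear once the instance is active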
6. Configure Cinder backup to use Ceph (controller node; installing the backup service on the controller is sufficient)
Edit /etc/cinder/cinder.conf and add the following to the [DEFAULT] section
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 4194304
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
# When configuring backup_driver, comment out any other backup_driver entries
Edit the dashboard configuration file and add the following to OPENSTACK_CINDER_FEATURES
vim /etc/openstack-dashboard/local_settings

OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}

# Restart the httpd service
systemctl restart httpd
Start the cinder-backup service and enable it at boot
systemctl enable openstack-cinder-backup.service
systemctl restart openstack-cinder-backup.service
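A quick end-to-end check of the backup path, assuming the test volume created in step 4 still exists (names are examples):

openstack volume service list                       # cinder-backup should be listed and "up"
openstack volume backup create --name test-backup test-ceph-vol
openstack volume backup list                        # the backup should reach "available"
rbd ls backups --id cinder-backup                   # backup data is stored in the backups pool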