openstack-rocky: cinder-extra
1. Configuring multiple storage backends
To enable multiple storage backends, set enabled_backends in /etc/cinder/cinder.conf and configure a section for each volume driver backend:
[DEFAULT]
enabled_backends = lvm,ceph-data
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[ceph-data]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-data-backend
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_max_clone_depth = 5
max_over_subscription_ratio = 10.0
rbd_store_chunk_size = 4
qos_enabled=false
iops_max=1100
bandwidth_max=100
iops_base=1100
iops_ratio=0.0
bandwidth_base=100
bandwidth_ratio=0
This flag defines the names of the configuration groups for the different backends (comma-separated); each name is associated with one configuration group for a backend, i.e. one [section] in the file above.
Configure the Block Storage scheduler for multiple backends
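For the scheduler to place volumes by volume_backend_name, the filter scheduler must be in use. On Rocky it is the default, so the following is only needed if scheduler_driver was overridden elsewhere (a minimal sketch):
[DEFAULT]
# Filter scheduler honors the volume_backend_name extra spec (Rocky default)
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler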
Volume types
Declare a volume type to Block Storage:
openstack volume type create ceph-data
Create an extra specification that links the volume type to a backend name:
openstack volume type set ceph-data --property volume_backend_name=ceph-data-backend
List the extra specifications:
openstack volume type list --long
+--------------------------------------+-----------+-----------+-------------+------------------------------------------+
| ID                                   | Name      | Is Public | Description | Properties                               |
+--------------------------------------+-----------+-----------+-------------+------------------------------------------+
| a028ec93-b2e3-4217-ac18-0d40d359e9d0 | ceph-data | True      | None        | volume_backend_name='ceph-data-backend'  |
+--------------------------------------+-----------+-----------+-------------+------------------------------------------+
Usage:
When creating a volume, you must specify the volume type. The volume type's extra specifications determine which backend is used:
openstack volume create --size 1 --type ceph-data test_multi_backend
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2019-06-05T03:57:28.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | a69586d5-44d0-4ca5-8425-a95109392706 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | test_multi_backend                   |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | ceph-data                            |
| updated_at          | None                                 |
| user_id             | cf86a58ebc3f462c9465beda84ec705c     |
+---------------------+--------------------------------------+
openstack volume show a69586d5-44d0-4ca5-8425-a95109392706
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-06-05T03:57:28.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | a69586d5-44d0-4ca5-8425-a95109392706 |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test_multi_backend                   |
| os-vol-host-attr:host          | ceph3@ceph-data#ceph-data-backend    |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | c1e6cbf1502141dca4a70c7f500688f3     |
| properties                     |                                      |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| type                           | ceph-data                            |
| updated_at                     | 2019-06-05T03:57:31.000000           |
| user_id                        | cf86a58ebc3f462c9465beda84ec705c     |
+--------------------------------+--------------------------------------+
On the backend, confirm that the volume exists in the Ceph cluster:
rbd ls volumes
volume-a69586d5-44d0-4ca5-8425-a95109392706
[root@ceph3 ceph]# rbd info volumes/volume-a69586d5-44d0-4ca5-8425-a95109392706
rbd image 'volume-a69586d5-44d0-4ca5-8425-a95109392706':
    size 1GiB in 256 objects
    order 22 (4MiB objects)
    block_name_prefix: rbd_data.20c181f383f50
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Wed Jun 5 11:57:31 2019
[root@ceph3 ceph]# rbd du volumes/volume-a69586d5-44d0-4ca5-8425-a95109392706
NAME                                        PROVISIONED USED
volume-a69586d5-44d0-4ca5-8425-a95109392706        1GiB   0B
A problem encountered during this process was insufficient permissions. The errors were as follows, and the volume being created ended up in the error state:
2019-06-05 11:55:07.741 23075 ERROR cinder.service [-] Manager for service cinder-volume ceph3@ceph-data is reporting problems, not sending heartbeat. Service will appear "down".
2019-06-05 11:55:08.574 23075 WARNING cinder.volume.manager [req-72c373ce-1503-4861-b340-40dd7aca9fe4 - - - - -] Update driver status failed: (config name ceph-data) is uninitialized.
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-06-05T03:36:33.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 8abfdd3c-9086-4e64-81b6-ac12bfe196b6 |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test_multi_backend                   |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | c1e6cbf1502141dca4a70c7f500688f3     |
| properties                     |                                      |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | error                                |
| type                           | ceph-data                            |
| updated_at                     | 2019-06-05T03:36:33.000000           |
| user_id                        | cf86a58ebc3f462c9465beda84ec705c     |
+--------------------------------+--------------------------------------+
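The RBD driver stays uninitialized when the Ceph client used by cinder-volume cannot reach the pool. A minimal sketch of the kind of fix involved, assuming the client is named client.cinder and the pool is volumes (both names are assumptions; adjust to your deployment):
# Grant the assumed client.cinder rbd access to the assumed volumes pool
ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes'
# Ensure the [ceph-data] section names that user (rbd_user = cinder) and,
# for attach to work, the matching libvirt secret (rbd_secret_uuid = <SECRET_UUID>)
systemctl restart openstack-cinder-volume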
2. Migrating volumes
List the available backends:
# cinder get-pools
+----------+-----------------------------------+
| Property | Value                             |
+----------+-----------------------------------+
| name     | ceph3@ceph-data#ceph-data-backend |
+----------+-----------------------------------+
+----------+---------------+
| Property | Value         |
+----------+---------------+
| name     | ceph3@lvm#LVM |
+----------+---------------+

# cinder-manage host list
Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
host                 zone
ceph1                nova
ceph3@lvm            nova
ceph3@ceph-data      nova
Attributes:
os-vol-host-attr:host - the volume's current backend.
os-vol-mig-status-attr:migstat - the status of this volume's migration (None means a migration is not currently in progress).
os-vol-mig-status-attr:name_id - the volume ID that this volume's name on the backend is based on.
Before a volume is ever migrated, its name on the backend storage may be based on the volume's ID (see the volume_name_template configuration parameter). For example, if volume_name_template is kept at its default value (volume-%s), the first LVM backend has a logical volume named volume-6088f80a-f116-4331-ad48-9afb0dfb196c.
During a migration, if a new volume is created and the data copied over, the volume gets the new volume's name but keeps its original ID. This is exposed by the name_id attribute.
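After a copy-based migration you can see this split between id and name_id directly (the values shown are hypothetical):
openstack volume show $VOLUME_ID -c id -c os-vol-mig-status-attr:name_id
# id      -> 6088f80a-f116-4331-ad48-9afb0dfb196c  (unchanged; what users reference)
# name_id -> <new-uuid>                            (backend object is volume-<new-uuid>)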
Migrate the volume to a second backend:
openstack volume migrate $volume_id --host server2@lvmstorage-2#lvmstorage-2
Demo of migrating from LVM to Ceph, which failed (cause still to be investigated):
[root@ceph1 ~]# openstack volume list
+--------------------------------------+--------------------+-----------+------+-------------+
| ID                                   | Name               | Status    | Size | Attached to |
+--------------------------------------+--------------------+-----------+------+-------------+
| a69586d5-44d0-4ca5-8425-a95109392706 | test_multi_backend | available |    1 |             |
| 29bf3e76-6b29-4c86-b1ad-5539d1be0248 | volume1            | available |    1 |             |
+--------------------------------------+--------------------+-----------+------+-------------+
[root@ceph1 ~]# openstack volume show 29bf3e76-6b29-4c86-b1ad-5539d1be0248
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-04-12T06:43:35.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 29bf3e76-6b29-4c86-b1ad-5539d1be0248 |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | volume1                              |
| os-vol-host-attr:host          | ceph3@lvm#LVM                        |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | c1e6cbf1502141dca4a70c7f500688f3     |
| properties                     |                                      |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| type                           | None                                 |
| updated_at                     | 2019-04-13T09:26:09.000000           |
| user_id                        | cf86a58ebc3f462c9465beda84ec705c     |
+--------------------------------+--------------------------------------+
[root@ceph1 ~]# openstack volume migrate 29bf3e76-6b29-4c86-b1ad-5539d1be0248 --host ceph3@ceph-data#ceph-data-backend
[root@ceph1 ~]# openstack volume show 29bf3e76-6b29-4c86-b1ad-5539d1be0248
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-04-12T06:43:35.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 29bf3e76-6b29-4c86-b1ad-5539d1be0248 |
| migration_status               | error                                |
| multiattach                    | False                                |
| name                           | volume1                              |
| os-vol-host-attr:host          | ceph3@lvm#LVM                        |
| os-vol-mig-status-attr:migstat | error                                |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | c1e6cbf1502141dca4a70c7f500688f3     |
| properties                     |                                      |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| type                           | None                                 |
| updated_at                     | 2019-06-14T03:40:15.000000           |
| user_id                        | cf86a58ebc3f462c9465beda84ec705c     |
+--------------------------------+--------------------------------------+
Use the openstack volume show command to check the status of the migration. While migrating, the migstat attribute shows states such as migrating or completing.
On error, migstat is set to None or error and the host attribute keeps the original host; on success, host shows the new backend.
Migrating volumes that have snapshots is currently not allowed.
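It is therefore worth checking for snapshots before attempting a migration; a quick pre-check:
openstack volume snapshot list --volume $VOLUME_ID
# any output here means the migration will be refused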
3. Backing up and restoring volumes and snapshots
Create a volume backup:
openstack volume backup create [--incremental] [--force] VOLUME
--incremental: create an incremental backup. The first backup must be a full one; incrementals are based on their parent backup.
--force: by default only volumes in the available state can be backed up; with --force, in-use volumes can be backed up as well.
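A minimal sketch of a backup chain (the names are placeholders):
openstack volume backup create --name full-0 $VOLUME_ID
openstack volume backup create --name incr-1 --incremental $VOLUME_ID
openstack volume backup create --name incr-2 --incremental $VOLUME_ID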
Restore:
openstack volume backup restore BACKUP_ID VOLUME_ID
Restoring from a full backup is a full restore.
When restoring from an incremental backup, a list of backups is built based on the IDs of the parent backups: a full restore is performed from the full backup first, then each incremental backup in the chain is applied on top of it in order.
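So restoring the newest incremental replays the whole chain; you only name that one backup (continuing the placeholder chain above):
# Restores full-0 first, then applies incr-1 and incr-2 in order
openstack volume backup restore $INCR_2_BACKUP_ID $VOLUME_ID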
List backups:
openstack volume backup list
Optional arguments for narrowing down the list include:
--name NAME
--status STATUS
--volume VOLUME_ID
These filter the backups shown; add --all-projects to list backups belonging to all projects.
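For example, to find failed backups across all projects:
openstack volume backup list --status error --all-projects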
Reset a backup's state
While creating or restoring a backup, trouble with the database or RabbitMQ can sometimes leave it stuck in the creating or restoring state. In those cases, resetting the backup state can return it to a functional state:
cinder backup-reset-state [--state STATE] BACKUP_ID-1 BACKUP_ID-2 ...
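For example, to return a stuck backup to a usable state:
cinder backup-reset-state --state available $BACKUP_ID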
Cancel a backup
Since Liberty, it has been possible to cancel an in-progress backup operation on any chunked backup driver (Swift, NFS, Google, GlusterFS, and POSIX).
To cancel a backup, request a force delete on it:
openstack volume backup delete --force BACKUP_ID
Since Rocky, an in-progress restore operation can also be canceled on any chunked backup driver.
To cancel a restore, change the backup's status to anything other than restoring. Using the error state is strongly recommended, to avoid any confusion about whether the restore succeeded:
openstack volume backup set --state error BACKUP_ID
Warning: if a restore operation is canceled after it has started, the destination volume is useless: there is no way to know how much data, if any, was actually restored. This is why the error state is recommended.
4. Exporting and importing backup metadata
Export a volume backup's metadata:
cinder backup-export BACKUP_ID
Import the backup's metadata:
cinder backup-import METADATA
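A sketch of the round trip; to my understanding, the exported metadata is a backup_service/backup_url pair, which the import consumes (for example on a rebuilt cloud whose database lost the backup records):
cinder backup-export $BACKUP_ID
# note the backup_service and backup_url values it prints, then:
cinder backup-import $BACKUP_SERVICE $BACKUP_URL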
5. Quotas
Get the tenant (project) ID:
openstack project show -f value -c id PROJECT_NAME

# openstack project show -f value -c id admin
c1e6cbf1502141dca4a70c7f500688f3
List the default quotas for a project:
openstack quota show --default c1e6cbf1502141dca4a70c7f500688f3
+----------------------+-------+
| Field                | Value |
+----------------------+-------+
| backup-gigabytes     | 1000  |
| backups              | 10    |
| cores                | 20    |
| fixed-ips            | -1    |
| floating-ips         | 50    |
| gigabytes            | 1000  |
| gigabytes_ceph-data  | -1    |
| groups               | 10    |
| health_monitors      | None  |
| injected-file-size   | 10240 |
| injected-files       | 5     |
| injected-path-size   | 255   |
| instances            | 10    |
| key-pairs            | 100   |
| l7_policies          | None  |
| listeners            | None  |
| load_balancers       | None  |
| location             | None  |
| name                 | None  |
| networks             | 100   |
| per-volume-gigabytes | -1    |
| pools                | None  |
| ports                | 500   |
| project              | None  |
| project_name         | admin |
| properties           | 128   |
| ram                  | 51200 |
| rbac_policies        | 10    |
| routers              | 10    |
| secgroup-rules       | 100   |
| secgroups            | 10    |
| server-group-members | 10    |
| server-groups        | 10    |
| snapshots            | 10    |
| snapshots_ceph-data  | -1    |
| subnet_pools         | -1    |
| subnets              | 100   |
| volumes              | 10    |
| volumes_ceph-data    | -1    |
+----------------------+-------+
View Block Storage quota usage:
cinder quota-usage c1e6cbf1502141dca4a70c7f500688f3
+----------------------+--------+----------+-------+-----------+
| Type                 | In_use | Reserved | Limit | Allocated |
+----------------------+--------+----------+-------+-----------+
| backup_gigabytes     | 0      | 0        | 1000  |           |
| backups              | 0      | 0        | 10    |           |
| gigabytes            | 2      | 0        | 1000  |           |
| gigabytes_ceph-data  | 1      | 0        | -1    |           |
| groups               | 0      | 0        | 10    |           |
| per_volume_gigabytes | 0      | 0        | -1    |           |
| snapshots            | 0      | 0        | 10    |           |
| snapshots_ceph-data  | 0      | 0        | -1    |           |
| volumes              | 2      | 0        | 10    |           |
| volumes_ceph-data    | 1      | 0        | -1    |           |
+----------------------+--------+----------+-------+-----------+
Edit and update Block Storage service quotas
1. To update the default quotas applied to new projects, edit the quota options in /etc/cinder/cinder.conf (cinder.quota).
2. To update the Block Storage quotas of an existing project:
openstack quota set --QUOTA_NAME QUOTA_VALUE PROJECT_ID
openstack quota set --volumes 15 $PROJECT_ID
3. To clear per-project quota limits:
cinder quota-delete $PROJECT_ID

Managing Block Storage scheduling
(Note: in my own testing, same_host actually placed volumes on different backends and different_host on the same one, the opposite of what the names suggest.)
As an administrative user, you can control which volume backend a volume is placed on and declare affinity or anti-affinity between two volumes. Affinity between volumes means they are stored on the same backend; anti-affinity means they are stored on different backends.
Examples:
1. Create a new volume on the same backend as volume_A:
openstack volume create --hint same_host=volume_A-UUID --size SIZE VOLUME_NAME
2. Create a new volume on a different backend than volume_A:
openstack volume create --hint different_host=volume_A-UUID --size SIZE VOLUME_NAME
3. Create a new volume on the same backend as both volume_A and volume_B:
openstack volume create --hint same_host=volume_A-UUID --hint same_host=volume_B-UUID --size SIZE VOLUME_NAME
or
openstack volume create --hint same_host="[volume_A-UUID,volume_B-UUID]" --size SIZE VOLUME_NAME
4. Create a new volume on a different backend than both volume_A and volume_B:
openstack volume create --hint different_host=volume_A-UUID --hint different_host=volume_B-UUID --size SIZE VOLUME_NAME
or
openstack volume create --hint different_host="[volume_A-UUID,volume_B-UUID]" --size SIZE VOLUME_NAME
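Given the surprising same/different behavior noted above, verify where a hinted volume actually landed:
openstack volume show VOLUME_NAME -c os-vol-host-attr:host
openstack volume show volume_A-UUID -c os-vol-host-attr:host
# the two host values should match for same_host and differ for different_host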
