Integrating OpenStack with Ceph: The Basics (Part 1)

For detailed configuration you can also refer to the official Ceph documentation: http://docs.ceph.org.cn/rbd/rbd-openstack/

Environment:

Openstack   controller+compute

Openstack-compute   compute

ceph001 OSD+Mon+MDS

ceph002 OSD+MDS

The environment above has already been prepared; now let's get started.

Configuration on the Ceph side

(1) Create three pools in Ceph, for Cinder, Glance and Nova respectively

cephadmin@ceph001:~$ ceph osd pool create volumes 64
pool 'volumes' created
cephadmin@ceph001:~$ ceph osd pool create images 64
pool 'images' created
cephadmin@ceph001:~$ ceph osd pool create vms 64
pool 'vms' created
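
As a quick sanity check (an optional step, not part of the original walkthrough), the three new pools should now show up when listing the cluster's pools:

cephadmin@ceph001:~$ ceph osd lspools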

 

(2) Copy the Ceph configuration file to every OpenStack node

The nodes running glance-api, cinder-volume, nova-compute and cinder-backup act as Ceph clients. Each requires the ceph.conf file:

root@ceph001:~# ssh openstack tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
root@openstack's password: 
[global]
fsid = 4f1100a0-bc37-4472-b0b0-58b44eabac97
mon_initial_members = ceph001
mon_host = 192.168.20.178
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2

root@ceph001:~# ssh openstack-compute tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
The authenticity of host 'openstack-compute (192.168.20.182)' can't be established.
ECDSA key fingerprint is b7:bf:c5:81:0d:a0:2a:2d:94:2f:c1:16:78:f3:9f:b2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'openstack-compute,192.168.20.182' (ECDSA) to the list of known hosts.
root@openstack-compute's password: 
[global]
fsid = 4f1100a0-bc37-4472-b0b0-58b44eabac97
mon_initial_members = ceph001
mon_host = 192.168.20.178
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2

 

(3) Install the Ceph client on each node

 http://docs.ceph.com/docs/master/rbd/rbd-openstack/

On the glance-api node, you’ll need the Python bindings for librbd:

#apt-get install python-ceph   # The Newton release of OpenStack seems to ship with this already. The official docs actually say to install python-rbd, but that package cannot be found in any of the repositories, and installing a manually downloaded copy fails with a conflict against python-ceph.

On the nova-compute, cinder-backup and cinder-volume nodes, use both the Python bindings and the client command line tools:

#apt-get install ceph-common 

 

(4) SET UP CEPH CLIENT AUTHENTICATION

If you have cephx authentication enabled, create a new user for Nova/Cinder and Glance. Execute the following:

# The cinder user is used by both Cinder and Nova; it needs access to three pools: rwx on volumes and vms, rx on images
# The glance user is used only by Glance; it only needs access to the images pool
root@ceph001:~# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
	key = AQBJg5hYuD0gHBAAproupKDYIuq0QemTnPMYdA==
root@ceph001:~# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
	key = AQC4g5hYMEVgGxAAoaVq50KgWV/y6YPzeHteGA==
root@ceph001:~# 
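
As an optional check (not part of the original steps), the capabilities of the two new clients can be printed back to confirm they were created as intended:

root@ceph001:~# ceph auth get client.cinder
root@ceph001:~# ceph auth get client.glance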

 

(5) Copy the client.cinder and client.glance keyring files to the respective nodes and set access permissions

ceph auth get-or-create client.glance | ssh openstack sudo tee /etc/ceph/ceph.client.glance.keyring
ssh openstack sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring    # My OpenStack environment here is DevStack, so the glance user should be replaced with stack

ceph auth get-or-create client.cinder | ssh openstack  sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh openstack sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring  # My OpenStack environment here is DevStack, so the cinder user should be replaced with stack
ceph auth get-or-create client.cinder | ssh openstack-compute sudo tee /etc/ceph/ceph.client.cinder.keyring

Create a temporary copy of the secret key on the nodes running nova-compute:

ceph auth get-key client.cinder | ssh openstack tee client.cinder.key
ceph auth get-key client.cinder | ssh openstack-compute tee client.cinder.key

Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key:

root@openstack:~# uuidgen
2862d317-e8df-4a5c-af7a-387ab2bc7ef5
root@openstack:/etc/ceph# ls
ceph.client.cinder.keyring  ceph.client.glance.keyring  ceph.conf
root@openstack:/etc/ceph# cat > secret.xml <<EOF
> <secret ephemeral='no' private='no'>
>  <uuid>2862d317-e8df-4a5c-af7a-387ab2bc7ef5</uuid>
>  <usage type='ceph'>
>   <name>client.cinder secret</name>
>  </usage>
> </secret>
> 
> EOF
root@openstack:~#  virsh secret-define --file secret.xml 
Secret 2862d317-e8df-4a5c-af7a-387ab2bc7ef5 created
root@openstack:~# virsh secret-set-value --secret 2862d317-e8df-4a5c-af7a-387ab2bc7ef5 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml 
Secret value set
root@openstack:~# 
# The steps above need to be performed on every nova-compute node.
# openstack-compute uuid: c0f5d1a4-b086-4c0c-984e-2f4e84f0f9c5
# Important: You don't necessarily need the UUID on all the compute nodes. However, from a platform consistency perspective, it's better to keep the same UUID.
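
For the second compute node, a minimal sketch of the same procedure (assuming the same secret.xml is reused there, either with the same UUID or with that node's own one as noted above, and that client.cinder.key has already been copied over):

root@openstack-compute:~# virsh secret-define --file secret.xml
root@openstack-compute:~# virsh secret-set-value --secret <uuid-from-secret.xml> --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
root@openstack-compute:~# virsh secret-list    # confirm the secret is registered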

CONFIGURE OPENSTACK TO USE CEPH

(1) CONFIGURING GLANCE

Edit /etc/glance/glance-api.conf and add the following under the [glance_store] section:

[glance_store]
stores = glance.store.rbd.Store
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

If you want to enable copy-on-write cloning of images, also add under the [DEFAULT] section:
show_image_direct_url = True
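
For reference, after both changes the relevant parts of glance-api.conf would look roughly like this (a sketch assembled from the values above):

[DEFAULT]
# enables copy-on-write cloning of RBD-backed images
show_image_direct_url = True

[glance_store]
stores = glance.store.rbd.Store
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8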

(2) CONFIGURING CINDER

OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:

[DEFAULT]
...
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = stack
rbd_secret_uuid = 2862d317-e8df-4a5c-af7a-387ab2bc7ef5
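
Optionally, a Cinder volume type can be mapped to this backend so that volumes can be requested on Ceph explicitly. A rough sketch (the type name "ceph" is arbitrary, and it assumes volume_backend_name = ceph has also been added to the [ceph] section above):

# first add to the [ceph] section of cinder.conf:
#   volume_backend_name = ceph
cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph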

(3) CONFIGURING NOVA

Make the following changes to /etc/nova/nova.conf on every compute node:

[libvirt]
images_type = rbd # only needed if the boot disk itself is to live in Ceph; otherwise set this to qcow2
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = e21a123a-31f8-425a-86db-7204c33a6161 # set this to the UUID of the libvirt secret defined on this compute node earlier

disk_cachemodes="network=writeback"
hw_disk_discard = unmap
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"

Restart the services

glance, cinder and nova
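
On a package-based Ubuntu installation the restarts would look roughly like the commands below (exact service names vary by distribution and release; on DevStack, which this walkthrough uses, the services instead run inside the stack screen session and are restarted there):

service glance-api restart
service cinder-volume restart
service nova-compute restart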

Verifying the environment
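
A minimal verification sketch (the image and volume names are placeholders, and it assumes the OpenStack credentials are sourced and some local raw image file is at hand): upload an image, create a volume, then check that the corresponding RBD images appear in the Ceph pools:

glance image-create --name test-image --disk-format raw --container-format bare --file ./some-image.raw
cinder create --name test-volume 1
rbd ls images --id glance     # the uploaded image should show up here
rbd ls volumes --id cinder    # the new volume should show up here
# to exercise the vms pool, boot an instance from the image and run: rbd ls vms --id cinder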


To be continued...
