VSM Import Cluster Feature Verification, Part 2 (Import)

3. VSM Import Cluster

3.1 Log in to the VSM Web UI

Log in to the VSM web UI at https://172.16.34.51/dashboard/vsm/ and click Import Cluster in the Cluster Management menu. The page is shown below:

3.2 Start vsm-agent on the Ceph Nodes

Run the following on each Ceph node:

python /usr/bin/vsm-agent --config-file /etc/vsm/vsm.conf --log-file /var/log/vsm/vsm-agent.log 2>&1 &
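To confirm that the agent is actually running on a node, a quick check such as the following can be used (a minimal sketch assuming a standard Linux userland; the log path is the one passed to --log-file above):

# Check that the vsm-agent process is alive and look at its latest log output
ps aux | grep "[v]sm-agent"
tail -n 20 /var/log/vsm/vsm-agent.log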

After vsm-agent has been started on each Ceph node, the Import Cluster page looks like this:

3.3 Generate the OSD Keyrings and Modify ceph.conf (on the ceph-deploy node)

3.3.1 Generate the OSD keyrings

ceph auth get-or-create osd.0 | tee /home/cephcluster_yhc/keyring.osd.0
ceph auth get-or-create osd.1 | tee /home/cephcluster_yhc/keyring.osd.1
ceph auth get-or-create osd.2 | tee /home/cephcluster_yhc/keyring.osd.2

Copy each keyring to its corresponding node:

cp  /home/cephcluster_yhc/keyring.osd.0 /etc/ceph/
scp /home/cephcluster_yhc/keyring.osd.1 ceph02:/etc/ceph/
scp /home/cephcluster_yhc/keyring.osd.2 ceph03:/etc/ceph/
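As an optional sanity check (a sketch, not part of the original procedure), the keyring file on each node should contain the same key that the cluster reports for that OSD, e.g. on the node holding osd.0:

# Compare the key stored in the cluster with the copied keyring file
ceph auth get osd.0
cat /etc/ceph/keyring.osd.0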

3.3.2 Modify ceph.conf and push it to each Ceph node

The modified ceph.conf is as follows:

[global]
fsid = add3d8a4-f6aa-4d6b-a3ce-aa285d55ae56
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.1.35.52,192.1.35.53,192.1.35.54
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 192.1.35.0/24
cluster network = 192.2.35.0/24
[mon]
mon data = /var/lib/ceph/mon/$cluster-$id
mon clock drift allowed = .200

[mon.ceph01]
host = ceph01
mon addr = 192.1.35.52:6789


[mon.ceph02]
host = ceph02
mon addr = 192.1.35.53:6789

[mon.ceph03]
host = ceph03
mon addr = 192.1.35.54:6789

[osd]
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
osd crush update on start = false
filestore xattr use omap = true
keyring = /etc/ceph/keyring.$name
osd data = /var/lib/ceph/osd/ceph-$id
osd heartbeat grace = 10
osd heartbeat interval = 10
osd mkfs type = xfs
osd mkfs options xfs = -f
osd journal size = 0

[osd.0]
osd journal = /dev/sdb2
devs = /dev/sdb1
host = ceph01
cluster addr = 192.2.35.52
public addr = 192.1.35.52

[osd.1]
osd journal = /dev/sdb2
devs = /dev/sdb1
host = ceph02
cluster addr = 192.2.35.53
public addr = 192.1.35.53


[osd.2]
osd journal = /dev/sdb2
devs = /dev/sdb1
host = ceph03
cluster addr = 192.2.35.54
public addr = 192.1.35.54

Push it to each Ceph node:

ceph-deploy --overwrite-conf admin ceph01 ceph02 ceph03

Restart the Ceph daemons on each Ceph node:

service ceph restart
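Before starting the import, it is worth confirming that the cluster came back healthy after the restart; these are standard Ceph status commands, not VSM-specific ones:

# Overall health and OSD layout should look normal before importing into VSM
ceph -s
ceph osd tree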

3.4 Perform the Import Cluster Operation

1) Click the Import Cluster button on the Import Cluster page. The following page appears:

2) For the AutoDetect button next to the Crushmap field: select ceph01 as the Monitor Host, enter /etc/ceph/ceph.client.admin.keyring as the Monitor Keyring, and click AutoDetect, as shown below.

3) After AutoDetect is clicked, the Crushmap field is automatically filled with the output of ceph osd crush dump (a manual fallback is sketched after this list). The page looks like this:

4) Fill the Ceph.conf field with the cluster's configuration and click the "Validate" button. If the configuration is correct, "Validate Cluster Successfully!" pops up in the upper-right corner and the Crushmap topology is displayed.

5) Click "Submit". If the import succeeds, a confirmation message pops up in the upper-right corner and the browser is redirected to the Cluster Status page, shown below:
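If AutoDetect in step 2) does not populate the Crushmap field, the same JSON can be generated manually on a monitor node and pasted in by hand; this fallback is an assumption based on the note in step 3) that the field holds the output of ceph osd crush dump:

# Dump the CRUSH map as JSON, which is what the Crushmap field expects per step 3)
ceph osd crush dump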

4. Performance Display on the Cluster Status Page

4.1 Configure the Display Settings

On the Settings page of the VSM Management menu, set both the CPU_DIAMOND_COLLECT_INTERVAL and CEPH_DIAMOND_COLLECT_INTERVAL properties to 5.

posted @ 2017-02-17 15:17 蓝色的海008