Ceph disk replacement (1)

1. Preparation

## 1. Disk replacement scenario (4-2)
1) Run ceph -s and find that the number of OSDs ("osds") does not match the number of "up" OSDs (the commands for steps 1–4 are sketched after this list).
2) Run ceph osd tree | grep down to find the OSD that is down.
3) Run ceph osd find $osd_id to find the node ($host_ip) that hosts that OSD.
4) Log in to $host_ip and run lsblk. The OSD maps to a device such as sdl; once a disk has failed, lsblk usually no longer shows that device, so compare against the previous day's inspection record to work out that the missing device is sdl.
5) Notify the 全通 colleagues and contact the vendor to replace the disk.
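
A minimal command sketch for steps 1)–4) above (osd.1129 and sdl are the example OSD and device used later in this post; $host_ip stands for the host reported by ceph osd find):

ceph -s                      # compare the total "osds" count with the "up" count
ceph osd tree | grep down    # e.g. osd.1129 reported as down
ceph osd find 1129           # prints the host/crush location holding osd.1129
ssh $host_ip lsblk           # on a failed disk, the device (e.g. sdl) is usually missing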
### 1.1 Data disk (OSD) replacement procedure
1. Log in to the host where the disk was replaced and check the performance of the new disk.

 

Reference: http://blog.itpub.net/26855487/viewspace-754346/

 

1. Check the new disk's performance

tmp]$ cat fio-baseline-hdd.cfg 
[global]
buffered=0
group_reporting=1
unified_rw_reporting=1
norandommap=1
thread=1
time_based=1
wait_for_previous=1
ramp_time=15
runtime=30
direct=1
filename=/dev/sdl
size=10G

[hdd_4kws]
# 4 KiB random write, synchronous (sync=1), 32 threads, iodepth 1
blocksize=4k
rw=randwrite
sync=1
numjobs=32
iodepth=1

[hdd_4kwd]
# 4 KiB random write, no sync, 32 threads, iodepth 2
blocksize=4k
rw=randwrite
sync=0
numjobs=32
iodepth=2

[hdd_4krd]
# 4 KiB random read, 32 threads, iodepth 1
blocksize=4k
rw=randread
sync=1
numjobs=32
iodepth=1

[hdd_1mwd]
# 1 MiB sequential write, single thread
blocksize=1m
rw=write
sync=0
numjobs=1
iodepth=2

[hdd_1mwd]
# 1 MiB sequential read, single thread (the section name repeats the previous job's; fio still runs it as a separate group)
blocksize=1m
rw=read
sync=0
numjobs=1
iodepth=2

Run the baseline and pull out the summary ("mixed:") lines:

env DEVICE=/dev/sdl    # note: the config above hardcodes filename=/dev/sdl, so DEVICE is not actually consumed
sudo fio /tmp/fio-baseline-hdd.cfg --output fio_sdl.res
sudo cat fio_sdl.res | grep "mixed:"
[onest@BFJD-PSC-oNest-YW-SV114 ~]$ cat fio_sdl.res 
cat: fio_sdl.res: Permission denied
[onest@BFJD-PSC-oNest-YW-SV114 ~]$ sudo cat fio_sdl.res 
hdd_4kws: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
hdd_4kwd: (g=1): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=2
...
hdd_4krd: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
hdd_1mwd: (g=3): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=sync, iodepth=2
hdd_1mwd: (g=4): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=sync, iodepth=2
fio-2.2.8
Starting 98 threads

fio: terminating on signal 2

hdd_4kws: (groupid=0, jobs=32): err= 0: pid=53378: Wed May 17 17:35:24 2017
  mixed: io=48896KB, bw=1628.5KB/s, iops=407, runt= 30026msec
    clat (msec): min=57, max=128, avg=78.59, stdev=11.32
     lat (msec): min=57, max=128, avg=78.59, stdev=11.32
    clat percentiles (msec):
     |  1.00th=[   62],  5.00th=[   64], 10.00th=[   65], 20.00th=[   70],
     | 30.00th=[   72], 40.00th=[   75], 50.00th=[   78], 60.00th=[   81],
     | 70.00th=[   84], 80.00th=[   88], 90.00th=[   95], 95.00th=[   99],
     | 99.00th=[  110], 99.50th=[  114], 99.90th=[  122], 99.95th=[  123],
     | 99.99th=[  129]
    bw (KB  /s): min=    0, max=   59, per=3.05%, avg=49.60, stdev= 7.46
    lat (msec) : 100=95.52%, 250=4.48%
  cpu          : usr=0.02%, sys=0.09%, ctx=37025, majf=0, minf=10
  IO depths    : 1=151.5%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=12224/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1
hdd_4kwd: (groupid=1, jobs=32): err= 0: pid=53471: Wed May 17 17:35:24 2017
  mixed: io=47232KB, bw=1570.4KB/s, iops=392, runt= 30084msec
    clat (msec): min=7, max=785, avg=81.49, stdev=45.98
     lat (msec): min=7, max=785, avg=81.49, stdev=45.98
    clat percentiles (msec):
     |  1.00th=[   14],  5.00th=[   62], 10.00th=[   64], 20.00th=[   69],
     | 30.00th=[   73], 40.00th=[   75], 50.00th=[   78], 60.00th=[   81],
     | 70.00th=[   84], 80.00th=[   89], 90.00th=[   96], 95.00th=[  105],
     | 99.00th=[  196], 99.50th=[  515], 99.90th=[  758], 99.95th=[  766],
     | 99.99th=[  783]
    bw (KB  /s): min=    0, max=   79, per=3.09%, avg=48.53, stdev=10.39
    lat (msec) : 10=0.13%, 20=1.89%, 50=0.94%, 100=90.04%, 250=6.46%
    lat (msec) : 750=0.43%, 1000=0.11%
  cpu          : usr=0.01%, sys=0.04%, ctx=17860, majf=0, minf=0
  IO depths    : 1=150.6%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=11808/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=2
hdd_4krd: (groupid=2, jobs=32): err= 0: pid=53670: Wed May 17 17:35:24 2017
  mixed: io=60564KB, bw=2011.2KB/s, iops=502, runt= 30114msec
    clat (msec): min=1, max=995, avg=63.48, stdev=74.26
     lat (msec): min=1, max=995, avg=63.48, stdev=74.26
    clat percentiles (msec):
     |  1.00th=[    4],  5.00th=[    6], 10.00th=[    8], 20.00th=[   13],
     | 30.00th=[   20], 40.00th=[   28], 50.00th=[   38], 60.00th=[   52],
     | 70.00th=[   72], 80.00th=[  100], 90.00th=[  149], 95.00th=[  206],
     | 99.00th=[  359], 99.50th=[  429], 99.90th=[  594], 99.95th=[  685],
     | 99.99th=[  898]
    bw (KB  /s): min=    0, max=  151, per=3.11%, avg=62.47, stdev=25.65
    lat (msec) : 2=0.02%, 4=1.73%, 10=13.23%, 20=16.19%, 50=27.64%
    lat (msec) : 100=21.29%, 250=16.91%, 500=2.73%, 750=0.22%, 1000=0.04%
  cpu          : usr=0.01%, sys=0.04%, ctx=22503, majf=0, minf=0
  IO depths    : 1=148.1%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=15141/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1
hdd_1mwd: (groupid=3, jobs=1): err= 0: pid=53778: Wed May 17 17:35:24 2017
  mixed: io=1422.0MB, bw=181246KB/s, iops=176, runt=  8034msec
    clat (usec): min=4614, max=13857, avg=5613.53, stdev=457.03
     lat (usec): min=5078, max=13874, avg=5645.03, stdev=456.43
    clat percentiles (usec):
     |  1.00th=[ 5088],  5.00th=[ 5152], 10.00th=[ 5280], 20.00th=[ 5344],
     | 30.00th=[ 5344], 40.00th=[ 5472], 50.00th=[ 5536], 60.00th=[ 5600],
     | 70.00th=[ 5664], 80.00th=[ 5792], 90.00th=[ 6112], 95.00th=[ 6112],
     | 99.00th=[ 7584], 99.50th=[ 7776], 99.90th=[ 8032], 99.95th=[13888],
     | 99.99th=[13888]
    bw (KB  /s): min=   68, max=186368, per=93.81%, avg=170032.69, stdev=45510.55
    lat (msec) : 10=99.93%, 20=0.07%
  cpu          : usr=1.90%, sys=5.14%, ctx=4141, majf=0, minf=0
  IO depths    : 1=287.3%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1422/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=2

Run status group 0 (all jobs):
  MIXED: io=48896KB, aggrb=1628KB/s, minb=1628KB/s, maxb=1628KB/s, mint=30026msec, maxt=30026msec

Run status group 1 (all jobs):
  MIXED: io=47232KB, aggrb=1570KB/s, minb=1570KB/s, maxb=1570KB/s, mint=30084msec, maxt=30084msec

Run status group 2 (all jobs):
  MIXED: io=60564KB, aggrb=2011KB/s, minb=2011KB/s, maxb=2011KB/s, mint=30114msec, maxt=30114msec

Run status group 3 (all jobs):
  MIXED: io=1422.0MB, aggrb=181245KB/s, minb=181245KB/s, maxb=181245KB/s, mint=8034msec, maxt=8034msec

Run status group 4 (all jobs):

Disk stats (read/write):
  sdl: ios=22593/71119, merge=0/0, ticks=1477583/2925983, in_queue=4403520, util=98.97%

  

 

2. Partition the new disk

sudo parted -a optimal -s /dev/sdl mktable gpt; 
sudo parted -a optimal -s /dev/sdl mkpart ceph 0% 20GB;
sudo parted -a optimal -s /dev/sdl mkpart ceph 20GB 100%;
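
This creates a roughly 20 GB journal partition (sdl1) and a data partition (sdl2) on the rest of the disk. A quick sanity check of the resulting layout (a suggested verification, not part of the original steps):

sudo parted /dev/sdl print    # should show two "ceph" partitions: ~20 GB plus the remainder
lsblk /dev/sdl                # sdl1 (journal) and sdl2 (data) should both be listed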

  

3. Delete the old OSD's authentication key

ceph auth del osd.1129
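
Optionally confirm the key is really gone (a suggested check; puppet re-registers osd.1129 later in step 10):

ceph auth get osd.1129    # should now fail with "Error ENOENT"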

  

4. Get the OSD's uuid

ceph osd dump | grep "^osd.1129"

52c57f1a-834e-4b58-b350-c29663be95c7
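
In the ceph osd dump output the uuid is the last field of the osd.1129 line, so it can also be extracted directly (a sketch assuming that field layout):

ceph osd dump | awk '/^osd.1129 /{print $NF}'    # prints 52c57f1a-834e-4b58-b350-c29663be95c7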

  

5. Recreate the XFS filesystem

sudo mkfs.xfs  -f -n size=64k -i size=512 -d agcount=24 -l size=1024m  /dev/sdl2
Newer formatting method:
sudo mkfs.xfs  -f -i size=2048 -b size=4096  -d agcount=32  /dev/sdn2

 

  

6. Set the filesystem uuid to the same value as before

sudo xfs_admin -U 52c57f1a-834e-4b58-b350-c29663be95c7  /dev/sdl2
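
To confirm the new filesystem now carries the old OSD's uuid (a suggested check, not in the original procedure):

sudo xfs_admin -u /dev/sdl2    # prints: UUID = 52c57f1a-834e-4b58-b350-c29663be95c7
sudo blkid /dev/sdl2           # alternative check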

  

7. Zero out the journal partition

sudo dd if=/dev/zero of=/dev/sdl1 bs=100M count=1
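
Optionally verify that the start of the journal partition really is zeroed (a suggested spot check):

sudo dd if=/dev/sdl1 bs=4096 count=1 2>/dev/null | hexdump -C | head -n 3    # expect only zero bytes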

  

8. Update the disk WWN in site.pp (/etc/puppet/manifests/site.pp). (This is the only step performed on SV1, the puppet server; all other steps are performed on the server with the failed disk.)

cd /home/onest/deploy

vi osd-nodes.txt

Change the hostname in it to the host where the disk was replaced:

BFJD-PSC-oNest-YW-SV114

cat get-wwns.sh
#!/bin/sh
# For every host listed in osd-nodes.txt, collect the whole-disk WWN link names
# from /dev/disk/by-id/ (partition links filtered out) into wwn/<host>-disk-wwn.txt.

for node in `cat osd-nodes.txt`
do
    ssh $node 'ls -l /dev/disk/by-id/ | grep wwn-0x500 | grep -v part' | awk '{print $9}' > wwn/${node}-disk-wwn.txt
done

[onest@BFJD-PSC-oNest-YW-SV1 deploy]$ sh get-wwns.sh
SSH warring: Authorized users only. All activity may be monitored and reported 

[onest@BFJD-PSC-oNest-YW-SV1 deploy]$ ls
add-osd.sh  crushmap  generate.sh  mon-nodes.txt  osd            pool  sw01  xx.pp
bakosd      ddddd.sh  get-wwns.sh  nodes.txt      osd-nodes.txt  rgws  wwn
[onest@BFJD-PSC-oNest-YW-SV1 deploy]$ cd wwn/
[onest@BFJD-PSC-oNest-YW-SV1 wwn]$ ls
BFJD-PSC-oNest-YW-SV114-disk-wwn.txt  BFJD-PSC-oNest-YW-SV21-disk-wwn.txt


[onest@BFJD-PSC-oNest-YW-SV1 wwn]$ cat BFJD-PSC-oNest-YW-SV114-disk-wwn.txt 
wwn-0x5000c500929c9827
wwn-0x50014ee20ce5c2e9
wwn-0x50014ee20cea8b58
wwn-0x50014ee2623fd42a
wwn-0x50014ee2623fe3f3
wwn-0x50014ee2624046af
wwn-0x50014ee26240b085
wwn-0x50014ee2b794eac7
wwn-0x50014ee2b7955f54
wwn-0x50014ee2b795a290
wwn-0x50014ee2b795d0ee
wwn-0x50014ee2b7961342
First, back up site.pp (in /etc/puppet/manifests):

sudo cp site.pp site.pp.bak-20170517

Edit site.pp and replace the failed disk's WWN with the new disk's WWN.
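
A hedged way to locate and update the entry (the old and new WWNs below are taken from the puppet run log in step 10):

cd /etc/puppet/manifests
sudo grep -n "wwn-0x50014ee2b795277c" site.pp    # lines still pointing at the failed disk
# then substitute the new disk's WWN, for example:
# sudo sed -i 's/wwn-0x50014ee2b795277c/wwn-0x5000c500929c9827/g' site.pp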

After editing, run diff to verify that the only difference between the modified site.pp and the backup is the WWN:

sudo diff site.pp site.pp.bak-20170517

  

9. If the failed disk's mount is still present, unmount it:

sudo umount /var/lib/ceph/osd/ceph-1129

  

10. On the server with the failed disk, run:

sudo puppet agent -vt

The output log looks like this:

[onest@BFJD-PSC-oNest-YW-SV114 ~]$ sudo puppet agent -vt
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/service_provider.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Loading facts in /var/lib/puppet/lib/facter/ceph_osd.rb
Info: Loading facts in /var/lib/puppet/lib/facter/package_provider.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Error: NetworkManager is not running.
Info: Caching catalog for BFJD-PSC-oNest-YW-SV114
Info: Applying configuration version '1495015624'
Notice: debug is false
Notice: /Stage[main]/Ceph::Debug/Notify[debug is false]/message: defined 'message' as 'debug is false'
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fe3f3-part2]/Exec[sdi_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Exec[sdl_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795a290-part2]/Exec[sde_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20ce5c2e9-part2]/Exec[sda_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7961342-part2]/Exec[sdj_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee26240b085-part2]/Exec[sdd_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7955f54-part2]/Exec[sdf_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20cea8b58-part2]/Exec[sdk_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795d0ee-part2]/Exec[sdh_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2624046af-part2]/Exec[sdb_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fd42a-part2]/Exec[sdg_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Ceph_config[osd.1129/osd_journal]/value: value changed '/dev/disk/by-id/wwn-0x50014ee2b795277c-part1' to '/dev/disk/by-id/wwn-0x5000c500929c9827-part1'
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Ceph_config[osd.1129/onest_osd_name]/value: value changed 'wwn-0x50014ee2b795277c-part2' to 'wwn-0x5000c500929c9827-part2'
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Ceph_config[osd.1129/onest_osd_journal_name]/value: value changed 'wwn-0x50014ee2b795277c-part1' to 'wwn-0x5000c500929c9827-part1'
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Ceph_config[osd.1129/onest_osd_mount_device]/value: value changed '/dev/disk/by-id/wwn-0x50014ee2b795277c-part2' to '/dev/disk/by-id/wwn-0x5000c500929c9827-part2'
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795d0ee-part2]/Exec[chown-osd-data-1084-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795d0ee-part2]/Exec[chown-osd-1084-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7961342-part2]/Exec[chown-osd-1107-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7961342-part2]/Exec[chown-osd-data-1107-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7961342-part2]/Service[ceph-osd.1107]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7961342-part2]/Service[ceph-osd.1107]: Unscheduling refresh on Service[ceph-osd.1107]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795d0ee-part2]/Service[ceph-osd.1084]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795d0ee-part2]/Service[ceph-osd.1084]: Unscheduling refresh on Service[ceph-osd.1084]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2624046af-part2]/Exec[chown-osd-data-1148-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2624046af-part2]/Exec[chown-osd-1148-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2624046af-part2]/Service[ceph-osd.1148]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2624046af-part2]/Service[ceph-osd.1148]: Unscheduling refresh on Service[ceph-osd.1148]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Exec[mount-/var/lib/ceph/osd/ceph-1129]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Exec[chown-osd-1129-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Exec[chown-osd-data-1129-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Exec[ceph-osd-mkfs-1129]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fe3f3-part2]/Exec[chown-osd-1088-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fe3f3-part2]/Exec[chown-osd-data-1088-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fe3f3-part2]/Service[ceph-osd.1088]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fe3f3-part2]/Service[ceph-osd.1088]: Unscheduling refresh on Service[ceph-osd.1088]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fd42a-part2]/Exec[chown-osd-1069-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fd42a-part2]/Exec[chown-osd-data-1069-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fd42a-part2]/Service[ceph-osd.1069]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2623fd42a-part2]/Service[ceph-osd.1069]: Unscheduling refresh on Service[ceph-osd.1069]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Exec[ceph-osd-register-1129]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/File[/var/lib/ceph/osd/ceph-1129/sysvinit]/ensure: created
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Service[ceph-osd.1129]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x5000c500929c9827-part2]/Service[ceph-osd.1129]: Unscheduling refresh on Service[ceph-osd.1129]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795a290-part2]/Exec[chown-osd-data-1169-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7955f54-part2]/Exec[chown-osd-1192-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7955f54-part2]/Exec[chown-osd-data-1192-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7955f54-part2]/Service[ceph-osd.1192]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b7955f54-part2]/Service[ceph-osd.1192]: Unscheduling refresh on Service[ceph-osd.1192]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20cea8b58-part2]/Exec[chown-osd-1206-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20cea8b58-part2]/Exec[chown-osd-data-1206-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20cea8b58-part2]/Service[ceph-osd.1206]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20cea8b58-part2]/Service[ceph-osd.1206]: Unscheduling refresh on Service[ceph-osd.1206]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795a290-part2]/Exec[chown-osd-1169-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795a290-part2]/Service[ceph-osd.1169]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b795a290-part2]/Service[ceph-osd.1169]: Unscheduling refresh on Service[ceph-osd.1169]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20ce5c2e9-part2]/Exec[chown-osd-data-1224-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20ce5c2e9-part2]/Exec[chown-osd-1224-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20ce5c2e9-part2]/Service[ceph-osd.1224]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee20ce5c2e9-part2]/Service[ceph-osd.1224]: Unscheduling refresh on Service[ceph-osd.1224]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b794eac7-part2]/Exec[sdc_block_scheduler]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b794eac7-part2]/Exec[chown-osd-data-1237-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b794eac7-part2]/Exec[chown-osd-1237-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b794eac7-part2]/Service[ceph-osd.1237]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee2b794eac7-part2]/Service[ceph-osd.1237]: Unscheduling refresh on Service[ceph-osd.1237]
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee26240b085-part2]/Exec[chown-osd-data-1145-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee26240b085-part2]/Exec[chown-osd-1145-journal-owner-to-ceph]/returns: executed successfully
Notice: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee26240b085-part2]/Service[ceph-osd.1145]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Wwn_osds/Ceph::Osd[wwn-0x50014ee26240b085-part2]/Service[ceph-osd.1145]: Unscheduling refresh on Service[ceph-osd.1145]
Notice: Finished catalog run in 6.04 seconds

  

11. Verify that osd.1129 comes back up and that the cluster recovers (see the sketch below).
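
A minimal verification sketch, reusing the checks from section 1 that detected the failure:

ceph osd tree | grep osd.1129    # osd.1129 should now show as up
ceph -s                          # the osd and up counts should match again; wait for recovery to complete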

 
