A Simple Ceph Installation Tutorial

Installation environment: Ubuntu 14.04 (three Ceph nodes plus one ceph-deploy admin node)

ceph01     192.168.20.178    monitor + OSD

ceph02     192.168.20.179    monitor + OSD

ceph03     192.168.20.180    monitor + OSD

cephadmin  192.168.20.177    ceph-deploy

 

Preparation:

On each OSD node: partition the OSD data disk with fdisk, then format it with mkfs.ext4; the journal disk needs no preparation.
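
A minimal preparation sketch (assuming the OSD data disk is /dev/sdb; adjust the device names to your hardware):

# on each OSD node: create one partition on the data disk, then format it
fdisk /dev/sdb           # interactively: n (new partition), then w (write and exit)
mkfs.ext4 /dev/sdb1      # format the resulting partition as ext4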

 

 

I. Update the package sources and install ceph-deploy

Do the following configuration on the ceph-deploy server.

Set the repository environment variables directly (pointing ceph-deploy at a domestic mirror):

CentOS:
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-jewel/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc

Ubuntu:
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/debian-jewel
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc

root@ceph01:~# sudo apt-get update && sudo apt-get install ceph-deploy -y

Check the version after installation:

root@ceph01:/etc/apt# dpkg -l | grep ceph
ii  ceph-deploy                         1.4.0-0ubuntu1                all          Deployment and configuration of Ceph.
root@ceph01:/etc/apt# 

 

II. Configure passwordless SSH login

Generate an SSH key pair with ssh-keygen on the ceph-deploy server, so that the ceph-deploy server can log in to the other Ceph node servers without a password.

For the detailed procedure, see the separate post "Configuring passwordless SSH login on Ubuntu".
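
A minimal sketch of that setup, run as root on the ceph-deploy server (hostnames taken from the table above):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # generate a key pair with an empty passphrase
for host in ceph01 ceph02 ceph03; do
    ssh-copy-id root@$host                    # push the public key to each Ceph node
done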

 

III. Create the Ceph cluster

1) Deploy the initial monitor nodes

Create the monitor node(s): ceph-deploy new {initial-monitor-node(s)}

(Running ceph-deploy generates a number of configuration files, so it is recommended to create a dedicated working directory for them, e.g. my-cluster.)

root@ceph01:~# mkdir my-cluster
root@ceph01:~# ls
my-cluster
root@ceph01:~# cd my-cluster/
root@ceph01:~/my-cluster# ceph-deploy new ceph01 ceph02 ceph03
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy new ceph01
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host ceph01
[ceph_deploy.new][DEBUG ] Monitor ceph01 at 192.168.20.177
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.20.177']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
root@ceph01:~/my-cluster# ls
ceph.conf  ceph.log  ceph.mon.keyring
root@ceph01:~/my-cluster# 

Edit the ceph.conf configuration file. The generated file contains the defaults; adjust and extend it to fit your environment (an example of common additions follows the listing).

root@ceph01:~/my-cluster# cat ceph.conf 
[global]
fsid = 6414a297-4b20-4c1c-bd61-26dd60fb6b5f
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.20.177,192.168.20.178,192.168.20.179
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

root@ceph01:~/my-cluster# 
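
As an illustration, two options that are often added to the [global] section; these are example values, not part of the original deployment:

public_network = 192.168.20.0/24      # the network monitors and clients communicate on
osd_pool_default_size = 2             # replica count for newly created pools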

2) Install Ceph on each node

root@ceph01:~/my-cluster# ceph-deploy install ceph01 ceph02 ceph03
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy install ceph03
[ceph_deploy.install][DEBUG ] Installing stable version emperor on cluster ceph hosts ceph03
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph03 ...
[ceph03][DEBUG ] connected to host: ceph03 
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph03][INFO  ] installing ceph on ceph03
[ceph03][INFO  ] using custom repository location: http://mirrors.aliyun.com/ceph/debian-jewel
[ceph03][INFO  ] Running command: apt-get -q update
[ceph03][DEBUG ] Ign http://cn.archive.ubuntu.com trusty InRelease
[ceph03][DEBUG ] Get:1 http://cn.archive.ubuntu.com trusty-updates InRelease [65.9 kB]
[ceph03][DEBUG ] Get:2 http://cn.archive.ubuntu.com trusty-backports InRelease [65.9 kB]
[ceph03][DEBUG ] Get:3 http://cn.archive.ubuntu.com trusty Release.gpg [933 B]
[ceph03][DEBUG ] Get:4 http://cn.archive.ubuntu.com trusty Release [58.5 kB]
[ceph03][DEBUG ] Get:5 http://cn.archive.ubuntu.com trusty-updates/main Sources [388 kB]
... (remaining output omitted)

3) Gather the initial keys

This generates several keyrings in the my-cluster directory:

root@ceph01:~/my-cluster# ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph01 ceph02 ceph03
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph01 ...
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 14.04 trusty
[ceph01][DEBUG ] determining if provided host has same hostname in remote
[ceph01][DEBUG ] get remote short hostname
[ceph01][DEBUG ] deploying mon to ceph01
[ceph01][DEBUG ] get remote short hostname
[ceph01][DEBUG ] remote hostname: ceph01
... (output omitted)
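
If the step succeeds, the working directory should also contain the admin and bootstrap keyrings (standard names from the ceph-deploy workflow), which you can verify with:

ls ~/my-cluster/*.keyring
# typical result: ceph.mon.keyring  ceph.client.admin.keyring
#                 ceph.bootstrap-osd.keyring  ceph.bootstrap-mds.keyring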

4) Initialize (zap) the disks (this step seems to have problems and can be skipped)

The disks on ceph01, ceph02 and ceph03 all need to be zapped; only ceph02 is shown below, and the full set of commands follows the output.

root@ceph01:~/my-cluster# ceph-deploy disk zap ceph02:sdb
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy disk zap ceph02:sdb
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph02
[ceph02][DEBUG ] connected to host: ceph02 
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph02][DEBUG ] zeroing last few blocks of device
[ceph02][INFO  ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/sdb
[ceph02][DEBUG ] 
[ceph02][DEBUG ] ***************************************************************
[ceph02][DEBUG ] Found invalid GPT and valid MBR; converting MBR to GPT format
[ceph02][DEBUG ] in memory. 
[ceph02][DEBUG ] ***************************************************************
[ceph02][DEBUG ] 
[ceph02][DEBUG ] 
[ceph02][DEBUG ] Warning! Secondary partition table overlaps the last partition by
[ceph02][DEBUG ] 33 blocks!
[ceph02][DEBUG ] You will need to delete this partition or resize it in another utility.
[ceph02][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph02][DEBUG ] other utilities.
[ceph02][DEBUG ] The operation has completed successfully.
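
The same zap is then repeated for the other nodes (a sketch, assuming the data disk is sdb on every host):

ceph-deploy disk zap ceph01:sdb
ceph-deploy disk zap ceph02:sdb
ceph-deploy disk zap ceph03:sdb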

5) Prepare the OSDs

Prepare sdb1 on ceph01, ceph02 and ceph03 as OSDs.

Syntax: ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}]    # the journal disk is optional

root@ceph01:~/my-cluster# ceph-deploy osd prepare ceph01:sdb1
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy osd prepare ceph02:sdb1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph02:/dev/sdb1:
[ceph02][DEBUG ] connected to host: ceph02 
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph02
[ceph02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph02][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph02 disk /dev/sdb1 journal None activate False
[ceph02][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb1
[ceph_deploy.osd][DEBUG ] Host ceph02 is now ready for osd use.
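
The prepare step for all three nodes then looks like this (a sketch, assuming sdb1 exists on each host):

ceph-deploy osd prepare ceph01:sdb1
ceph-deploy osd prepare ceph02:sdb1
ceph-deploy osd prepare ceph03:sdb1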

6) Activate the OSDs

Syntax: ceph-deploy osd activate {node-name}:{data-disk-partition}[:{journal-disk-partition}]    # the journal partition is optional

root@cephadmin:~/ceph-cluster# ceph-deploy osd activate ceph001:sdb1:sdc
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy osd activate ceph003:sdb1:sdc
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph003:/dev/sdb1:/dev/sdc
[ceph003][DEBUG ] connected to host: ceph003 
[ceph003][DEBUG ] detect platform information from remote host
[ceph003][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host ceph003 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[ceph003][INFO  ] Running command: ceph-disk-activate --mark-init upstart --mount /dev/sdb1
[ceph003][WARNIN] got monmap epoch 1
[ceph003][WARNIN] SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ceph003][WARNIN] 2017-01-13 17:36:48.350357 7f9c2dd0a800 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected e3e6359b-1696-4499-a313-b25f546121f8, invalid (someone else's?) journal
[ceph003][WARNIN] SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ceph003][WARNIN] SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ceph003][WARNIN] SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ceph003][WARNIN] 2017-01-13 17:36:48.371240 7f9c2dd0a800 -1 filestore(/var/lib/ceph/tmp/mnt.Iz8Bow) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[ceph003][WARNIN] 2017-01-13 17:36:48.381383 7f9c2dd0a800 -1 created object store /var/lib/ceph/tmp/mnt.Iz8Bow journal /var/lib/ceph/tmp/mnt.Iz8Bow/journal for osd.2 fsid 4f1100a0-bc37-4472-b0b0-58b44eabac97
[ceph003][WARNIN] 2017-01-13 17:36:48.381450 7f9c2dd0a800 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.Iz8Bow/keyring: can't open /var/lib/ceph/tmp/mnt.Iz8Bow/keyring: (2) No such file or directory
[ceph003][WARNIN] 2017-01-13 17:36:48.381772 7f9c2dd0a800 -1 created new key in keyring /var/lib/ceph/tmp/mnt.Iz8Bow/keyring
[ceph003][WARNIN] added key for osd.2
root@cephadmin:~/ceph-cluster# ceph osd tree
The program 'ceph' is currently not installed. You can install it by typing:
apt-get install ceph-common
root@cephadmin:~/ceph-cluster# 
... (output omitted)
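
As the error at the end of the paste shows, the ceph CLI only becomes available on the admin node once ceph-common is installed; after that, the remaining OSDs are activated the same way (a sketch, reusing the data/journal layout from the example above):

apt-get install ceph-common -y               # provides the ceph CLI on the admin node
ceph-deploy osd activate ceph002:sdb1:sdc
ceph-deploy osd activate ceph003:sdb1:sdc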

7) Distribute the keys

ceph-deploy copies the configuration file and the admin key to the admin node and the Ceph nodes, so you no longer need to specify the monitor address or ceph.client.admin.keyring every time you run a Ceph command.

Syntax: ceph-deploy admin {admin-node} {ceph-node}

root@cephadmin:~/ceph-cluster# ceph-deploy admin ceph001 ceph002 ceph003
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy admin ceph001 ceph002 ceph003
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph001
[ceph001][DEBUG ] connected to host: ceph001 
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] get remote short hostname
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph002
[ceph002][DEBUG ] connected to host: ceph002 
[ceph002][DEBUG ] detect platform information from remote host
[ceph002][DEBUG ] detect machine type
[ceph002][DEBUG ] get remote short hostname
[ceph002][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph003
[ceph003][DEBUG ] connected to host: ceph003 
[ceph003][DEBUG ] detect platform information from remote host
[ceph003][DEBUG ] detect machine type
[ceph003][DEBUG ] get remote short hostname
[ceph003][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
root@cephadmin:~/ceph-cluster# 
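
If ceph commands on a node then fail with keyring permission errors, a commonly recommended follow-up (from the upstream quick-start guide) is to make the admin keyring readable:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring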

8) Check OSD and cluster status

root@ceph001:/etc/ceph# ceph osd tree
# id    weight    type name    up/down    reweight
-1    0.12    root default
-2    0.03999        host ceph001
0    0.03999            osd.0    up    1    
-3    0.03999        host ceph002
1    0.03999            osd.1    up    1    
-4    0.03999        host ceph003
2    0.03999            osd.2    up    1    
root@ceph001:/etc/ceph# ceph health
HEALTH_OK
root@ceph001:/etc/ceph# ceph -s
    cluster 4f1100a0-bc37-4472-b0b0-58b44eabac97
     health HEALTH_OK
     monmap e1: 1 mons at {ceph001=192.168.20.178:6789/0}, election epoch 2, quorum 0 ceph001
     osdmap e12: 3 osds: 3 up, 3 in
      pgmap v24: 192 pgs, 3 pools, 0 bytes data, 0 objects
            104 MB used, 119 GB / 119 GB avail
                 192 active+clean

9) View various cluster information

Watch the cluster's status in real time: ceph -w

root@ceph001:/etc/ceph# ceph -w
    cluster 4f1100a0-bc37-4472-b0b0-58b44eabac97
     health HEALTH_OK
     monmap e1: 1 mons at {ceph001=192.168.20.178:6789/0}, election epoch 2, quorum 0 ceph001
     osdmap e12: 3 osds: 3 up, 3 in
      pgmap v24: 192 pgs, 3 pools, 0 bytes data, 0 objects
            104 MB used, 119 GB / 119 GB avail
                 192 active+clean

2017-01-13 17:41:53.356238 mon.0 [INF] pgmap v24: 192 pgs: 192 active+clean; 0 bytes data, 104 MB used, 119 GB / 119 GB avail

Check storage usage: ceph df

root@ceph001:/etc/ceph# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    119G      119G         104M          0.09 
POOLS:
    NAME         ID     USED     %USED     MAX AVAIL     OBJECTS 
    data         0         0         0        40903M           0 
    metadata     1         0         0        40903M           0 
    rbd          2         0         0        40903M           0 
root@ceph001:/etc/ceph# 

View the cluster's monitor information: ceph mon stat

root@ceph001:/etc/ceph# ceph mon stat
e1: 1 mons at {ceph001=192.168.20.178:6789/0}, election epoch 2, quorum 0 ceph001
root@ceph001:/etc/ceph# 
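
Beyond ceph mon stat, two related commands from the stock ceph CLI give more monitor detail:

ceph mon dump                              # full monitor map, including addresses and epoch
ceph quorum_status --format json-pretty    # current quorum membership as JSON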

 

At this point the basic monitors and OSDs are installed; a follow-up article will cover how to expand the cluster.

posted @ 2017-01-10 23:08  Vincen_shen