Deploying Ceph

Background: during deployment ceph normally pulls packages from overseas repositories, which makes downloads stall, so instead we fetch the corresponding rpm packages from a domestic (Chinese) mirror and install them locally.

I. Environment Preparation

Four machines: one serves as both the deployment node and the client, and three are ceph nodes. Each ceph node has two disks; the second disk is used as the OSD data disk.

1. Configure static hostname resolution on all nodes

[root@ceph ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.42.129 node1
192.168.42.130 node2
192.168.42.128 node3
192.168.42.131 ceph

2. Create the cent user on all nodes and grant it root privileges

# useradd cent && echo "123" | passwd --stdin cent
# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
# chmod 440 /etc/sudoers.d/ceph
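
A quick sanity check (a sketch; run on any node) to confirm the sudo rule took effect:

# su - cent -c 'sudo whoami'    # should print "root" with no password prompt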

3. On the deployment node, set up passwordless SSH login to all nodes, including the deployment node itself (run once as root and once as cent)

# ssh-keygen
# ssh-copy-id node1
# ssh-copy-id node2
# ssh-copy-id node3
# ssh-copy-id ceph

Note: switch to the cent user and run the commands above once more; a loop sketch follows below.
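
The copy step can also be looped over all hosts (a minimal sketch; you will be asked for the cent password once per host):

$ for h in ceph node1 node2 node3; do ssh-copy-id cent@$h; done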

4. On the deployment node, switch to the cent user and create a config file defining all nodes and users

$ vim ~/.ssh/config

Host ceph
    Hostname ceph
    User cent
Host node1
    Hostname node1
    User cent
Host node2
    Hostname node2
    User cent
Host node3
    Hostname node3
    User cent

$ chmod 600 ~/.ssh/config    # ssh refuses a config file that is group-writable, so 600 rather than 660
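
To verify (a sketch), each host should now be reachable as cent without a password, thanks to the config file above:

$ ssh node1 hostname    # should print "node1" with no password prompt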

II. Configure a Domestic Ceph Repository on All Nodes

1. Create the repo file on every node:

# cat /etc/yum.repos.d/ceph-test.repo

[ceph-yunwei]
name=ceph-yunwei-install
baseurl=https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
enabled=1
gpgcheck=0
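
After adding the repo file, refresh the yum metadata so the mirror is picked up:

# yum clean all && yum makecache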

2. Download the following packages to all nodes. Of these, the ceph-deploy… package only needs to be installed on the deployment node; the other nodes do not need it (a batch-download sketch follows the list).

ceph-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm
ceph-deploy-1.5.39-0.noarch.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm
ceph-resource-agents-10.2.11-0.el7.x86_64.rpm
ceph-selinux-10.2.11-0.el7.x86_64.rpm
ceph-test-10.2.11-0.el7.x86_64.rpm
libcephfs1-10.2.11-0.el7.x86_64.rpm
libcephfs1-devel-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm
librados2-10.2.11-0.el7.x86_64.rpm
librados2-devel-10.2.11-0.el7.x86_64.rpm
libradosstriper1-10.2.11-0.el7.x86_64.rpm
libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm
librbd1-10.2.11-0.el7.x86_64.rpm
librbd1-devel-10.2.11-0.el7.x86_64.rpm
librgw2-10.2.11-0.el7.x86_64.rpm
librgw2-devel-10.2.11-0.el7.x86_64.rpm
python-ceph-compat-10.2.11-0.el7.x86_64.rpm
python-cephfs-10.2.11-0.el7.x86_64.rpm
python-rados-10.2.11-0.el7.x86_64.rpm
python-rbd-10.2.11-0.el7.x86_64.rpm
rbd-fuse-10.2.11-0.el7.x86_64.rpm
rbd-mirror-10.2.11-0.el7.x86_64.rpm
rbd-nbd-10.2.11-0.el7.x86_64.rpm
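
A minimal sketch for batch-downloading these packages from the mirror configured above (assumes the filenames are saved one per line in rpm-list.txt, a hypothetical helper file):

# while read rpm; do curl -O "https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/${rpm}"; done < rpm-list.txt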

3. On the deployment node, as the cent user, install ceph-deploy

$ sudo yum install ceph-deploy

4. Switch to the root user and install the previously downloaded rpm packages on all nodes, as sketched below.
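
A sketch, assuming all the rpms sit in the current directory; yum localinstall resolves the remaining dependencies from the configured repos:

# yum localinstall -y ./*.rpm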

5. If the installation reports errors:
a. Remove python-distribute and install again (or run: yum remove python-setuptools -y).
Note: if you did not install the rpms downloaded above and are using a network repo instead, every node must run: yum install ceph ceph-radosgw -y
b. Install the dependency package python-distribute, install ceph-deploy-1.5.39-0.noarch.rpm again, and remove python2-setuptools-22.0.5-1.el7.noarch.

6. On the deployment node, as the cent user, execute:

$ mkdir ceph
$ cd ceph

7. On the deployment node (as the cent user): configure the new cluster

$ ceph-deploy new node1 node2 node3
$ vim ceph.conf

[cent@ceph ceph]$ cat ceph.conf

[global]
fsid = 442ab1b1-13ab-4c92-ad05-1ffb09d0d24e
mon_initial_members = node1, node2, node3
mon_host = 192.168.42.129,192.168.42.130,192.168.42.128
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128
osd_crush_chooseleaf_type = 1
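
The pg numbers above follow the usual rule of thumb: total PGs per pool ≈ (number of OSDs × 100) / replica count = (3 × 100) / 3 = 100, rounded up to the next power of two, which gives 128.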

8. Run on the deployment node to install the ceph software on all nodes (as root)

# ceph-deploy install ceph node1 node2 node3

If this step reports an error (the original post showed it only as a screenshot; it is typically a conflict with an already-installed ceph-release package):

Fix: run # yum remove ceph-release on the affected node, then retry.

9. Initialize the cluster on the deployment node (run as the cent user)

$ ceph-deploy mon create-initial
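
On success the keyrings are gathered into the working directory; a quick check (sketch):

$ ls -l ceph.client.admin.keyring ceph.bootstrap-*.keyring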

10. List and zap the node disks (run as the cent user on the deployment node)

List a node's disks:
$ ceph-deploy disk list node1
Zap (wipe) a node's data disk (here /dev/sdb, the disk used for the OSDs below):
$ ceph-deploy disk zap node1:/dev/sdb

11. Prepare the Object Storage Daemons

$ ceph-deploy osd prepare node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb

12. Activate the Object Storage Daemons

$ ceph-deploy osd activate node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb
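
Once activation succeeds the OSDs register with the cluster; a quick check from any mon node (with three data disks you would expect 3 osds: 3 up, 3 in):

# ceph osd tree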

13. From the deployment node, distribute the config files to the ceph nodes

$ ceph-deploy admin ceph node1 node2 node3
Then, on each node, make the admin keyring readable:
$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

14. Check from any node in the ceph cluster (a healthy cluster reports HEALTH_OK)

# ceph -s

III. Client Setup

1. Create the cent user on the client

# useradd cent && echo "123" | passwd --stdin cent
# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
# chmod 440 /etc/sudoers.d/ceph

2. Run on the deployment node to install the ceph client and push its configuration

# ceph-deploy install client
# ceph-deploy admin client
(here "client" is the client node's hostname; in this setup the deployment node doubles as the client)

3. Run on the client

# sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

4. Run on the client: configure an RBD block device

# rbd create disk01 --size 5G --image-feature layering   # create an rbd image
# rbd ls -l                                              # list rbd images
# rbd map disk01                                         # map the image to a kernel block device
# rbd showmapped                                         # show the mapping
# mkfs.xfs /dev/rbd0                                     # format disk01 with an xfs filesystem
# mount /dev/rbd0 /mnt                                   # mount the device
# df -hT                                                 # verify the mount succeeded
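
To undo the mapping later (a sketch, reversing the steps above):

# umount /mnt && rbd unmap /dev/rbd0 && rbd rm disk01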

5. Filesystem (CephFS) configuration

a. Run on the deployment node; pick one node to host the MDS (as the cent user):
$ ceph-deploy mds create node1

b. The following operations run on node1:
# chmod 644 /etc/ceph/ceph.client.admin.keyring
On the MDS node node1, create the cephfs_data and cephfs_metadata pools:

# ceph osd pool create cephfs_data 128       # pool that stores the data
# ceph osd pool create cephfs_metadata 128   # pool that stores the metadata
# ceph osd lspools                           # list the created pools

Create the filesystem on the pools (ceph fs new takes the metadata pool first, then the data pool):

# ceph fs new cephfs cephfs_metadata cephfs_data
Show the ceph filesystems:
# ceph fs ls
# ceph mds stat

c. For a client to use the filesystem that ceph provides, the following steps are needed:
1. Run on the client: install ceph-fuse:
$ sudo yum -y install ceph-fuse
2. Fetch the admin key:
$ ssh cent@node1 "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
$ chmod 600 admin.key
3. Mount the ceph filesystem:
$ sudo mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key
# df -hT    # switch back to the root user to verify the mount
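
Note that the mount above uses the kernel CephFS client rather than the ceph-fuse package installed in step 1; the FUSE equivalent would be roughly (a sketch; assumes /etc/ceph/ceph.conf and the admin keyring are present on the client):

$ sudo ceph-fuse -m node1:6789 /mnt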

d. Stopping the ceph-mds service:
1. Unmount the filesystem
# umount /mnt
2. Stop the ceph-mds service on the ceph node that runs it; here I do it on node1
# systemctl stop ceph-mds@node1
# systemctl status ceph-mds@node1
3. Mark the mds rank as failed; run on node1
# ceph mds fail 0
4. Remove cephfs (the ceph filesystem)
# ceph fs rm cephfs --yes-i-really-mean-it
5. List the ceph pools
# ceph osd lspools
6. Delete a ceph pool (the pool name must be given twice)
# ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
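
The command above removes only the metadata pool; the data pool is removed the same way:

# ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it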

IV. Removing the Environment

# ceph-deploy purge ceph node1 node2 node3
# ceph-deploy purgedata ceph node1 node2 node3
# ceph-deploy forgetkeys
# rm -rf ceph*

V. Common Ceph Commands

 1 ceph -s                                              # show cluster status
 2 ceph osd tree                                        # show the osd tree/map
 3 ceph osd lspools                                     # list all pools in the cluster
 4 rbd create name --size 10G --image-feature layering  # create an rbd image
 5 rbd ls -l                                            # list rbd images
 6 rbd remove rbd-name                                  # delete an rbd image
 7 rbd map disk01                                       # map an rbd image on the client
 8 rbd showmapped                                       # show rbd mappings
 9 ceph osd pool create poolname pg_num                 # create a pool (pg_num = number of placement groups)
10 ceph osd pool rm poolname poolname --yes-i-really-really-mean-it  # delete a pool
11 ceph fs new cephfs cephfs_metadata cephfs_data       # create a ceph filesystem on the pools
12 ceph fs ls                                           # show ceph filesystems
13 ceph mds stat                                        # check mds status
14 ssh cent@mds-nodename "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key   # fetch the admin key
15 systemctl stop ceph-mds@mds-nodename                 # stop the ceph-mds service
16 ceph mds fail 0                                      # mark mds rank 0 as failed
17 ceph fs rm cephfs --yes-i-really-mean-it             # delete the ceph filesystem

 
