Ceph Deployment

Environment:
OS: CentOS 7

 

1. Disable the Firewall

# Disable the firewall
systemctl disable firewalld 
systemctl stop firewalld 

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
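A quick sanity check (optional) that both are really off:

# firewalld should report inactive; getenforce should report Permissive
# (it becomes Disabled after a reboot, once the config change takes effect)
systemctl is-active firewalld
getenforce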

 

2. Configure Passwordless SSH

# Set up passwordless SSH on the single Ceph node
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# Set the permissions to 644
chmod 644 ~/.ssh/authorized_keys
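Before ceph-deploy relies on it, it is worth confirming that passwordless login works (master is the hostname used throughout this post):

# Should print the hostname without prompting for a password;
# StrictHostKeyChecking=no just auto-accepts the host key on first connect
ssh -o StrictHostKeyChecking=no root@master hostname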

 

3. Configure the Yum Repository

[root@master yum.repos.d]# more /etc/yum.repos.d/ceph.repo 
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
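After writing the repo file, refresh the yum metadata so the new repository is picked up:

# Rebuild the yum cache for the newly added Ceph repo
yum clean all
yum makecache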

 

4. Install ceph-deploy

# Install via yum
yum install -y ceph-deploy

# Check the version
ceph-deploy --version

[root@master yum.repos.d]# ceph-deploy --version
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
ModuleNotFoundError: No module named 'ceph_deploy'

Solution:
The default python here points to Python 3, but ceph-deploy requires Python 2, so repoint the symlink:
[root@master yum.repos.d]# rm /usr/local/bin/python
[root@master yum.repos.d]# ln -s /usr/bin/python2 /usr/local/bin/python
[root@master yum.repos.d]# python -V
Python 2.7.5

Run it again:
[root@master yum.repos.d]# ceph-deploy --version
2.0.1

 

5. Create the Ceph Cluster
Since this is a single-node deployment, the cluster replica count must be set to 1: create the cluster first, then edit the ceph.conf file afterwards.

# Create a directory to hold the Ceph configuration and keys
[root@master yum.repos.d]# mkdir -p /opt/ceph

# Create the Ceph cluster
[root@master yum.repos.d]# cd /opt/ceph 
[root@master ceph]# ceph-deploy new master

Here, master is the hostname of this machine. The command's output:

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new master
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fb2ca8b0398>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb2ca8dc200>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['master']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[master][DEBUG ] connected to host: master
[master][DEBUG ] detect platform information from remote host
[master][DEBUG ] detect machine type
[master][DEBUG ] find the location of an executable
[master][INFO  ] Running command: /usr/sbin/ip link show
[master][INFO  ] Running command: /usr/sbin/ip addr show
[master][DEBUG ] IP addresses found: [u'192.168.1.108', u'10.244.219.64', u'172.17.0.1', u'192.168.122.1', u'192.168.1.103']
[ceph_deploy.new][DEBUG ] Resolving host master
[ceph_deploy.new][DEBUG ] Monitor master at 192.168.1.108
[ceph_deploy.new][DEBUG ] Monitor initial members are ['master']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.108']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

 

Because this is a single-node deployment, set the cluster replica count to 1 by appending to ceph.conf:

[root@master ceph]# cd /opt/ceph
[root@master ceph]# echo "osd pool default size = 1" >>  ceph.conf
[root@master ceph]# echo "osd pool default min size = 1" >>  ceph.conf

 

The final configuration file:

[root@master ceph]# more ceph.conf 
[global]
fsid = 3af028b9-0f59-4070-bd5d-413316ea81e1
mon_initial_members = master
mon_host = 192.168.1.108
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd pool default size = 1
osd pool default min size = 1

 

6. Install Ceph

yum install ceph ceph-mon ceph-mgr ceph-radosgw ceph-mds -y 

 

This fails with dependency errors:
Error: Package: 2:ceph-common-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liboath.so.0()(64bit)
Error: Package: 2:ceph-mgr-14.2.22-0.el7.x86_64 (Ceph)
           Requires: python-bcrypt
Error: Package: 2:ceph-common-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liboath.so.0(LIBOATH_1.2.0)(64bit)
Error: Package: 2:librgw2-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liboath.so.0()(64bit)
Error: Package: 2:librgw2-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liblttng-ust.so.0()(64bit)
Error: Package: 2:ceph-base-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liboath.so.0(LIBOATH_1.12.0)(64bit)
Error: Package: 2:librados2-14.2.22-0.el7.x86_64 (Ceph)
           Requires: liblttng-ust.so.0()(64bit)

 

Solution:

Install EPEL, which provides the missing liboath, python-bcrypt, and lttng-ust packages:

yum install epel-release   -y
rpm -Uvh epel-release*rpm
yum install lttng-ust -y
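With EPEL enabled, re-running the install from step 6 now resolves all of the missing dependencies:

# Retry the Ceph install
yum install ceph ceph-mon ceph-mgr ceph-radosgw ceph-mds -y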

 

7. Initialize the Monitor
## Initialize the monitor

[root@master ceph]# ceph-deploy mon create-initial

After it completes, the following files are generated:
[root@master ceph]# pwd
/opt/ceph
[root@master ceph]# ls -al
total 44
drwxr-xr-x  2 root root   244 Aug 12 14:28 .
drwxr-xr-x. 8 root root    83 Aug 12 11:30 ..
-rw-------  1 root root   113 Aug 12 14:28 ceph.bootstrap-mds.keyring
-rw-------  1 root root   113 Aug 12 14:28 ceph.bootstrap-mgr.keyring
-rw-------  1 root root   113 Aug 12 14:28 ceph.bootstrap-osd.keyring
-rw-------  1 root root   113 Aug 12 14:28 ceph.bootstrap-rgw.keyring
-rw-------  1 root root   151 Aug 12 14:28 ceph.client.admin.keyring
-rw-r--r--  1 root root   253 Aug 12 11:35 ceph.conf
-rw-r--r--  1 root root 15502 Aug 12 14:28 ceph-deploy-ceph.log
-rw-------  1 root root    73 Aug 12 11:33 ceph.mon.keyring

 

## Copy the configuration file and admin keyring to the admin and Ceph nodes
[root@master ceph]# ceph-deploy admin master

 

## Make sure the admin keyring is readable
[root@master ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring

[root@master ceph]# cp /opt/ceph/ceph* /etc/ceph/
[root@master ceph]# chmod +r /etc/ceph/ceph*

# With the monitor up, check the Ceph cluster

[root@master ceph]# ceph -s
  cluster:
    id:     3af028b9-0f59-4070-bd5d-413316ea81e1
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            mon master is low on available space
 
  services:
    mon: 1 daemons, quorum master (age 4m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs: 

 

One warning needs attention:
mon is allowing insecure global_id reclaim

Solution:

[root@master ceph]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@master ceph]# ceph -s
  cluster:
    id:     3af028b9-0f59-4070-bd5d-413316ea81e1
    health: HEALTH_WARN
            mon master is low on available space
 
  services:
    mon: 1 daemons, quorum master (age 7m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:   
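The remaining warning, mon master is low on available space, means the filesystem holding the monitor's data directory (/var/lib/ceph/mon) has dropped below the free-space threshold (mon_data_avail_warn, 30% by default). Freeing space on the root filesystem clears it; on a disposable test node the threshold can also be lowered, for example:

# Warn only when less than 15% of the mon's filesystem is free (test setups only)
ceph config set mon mon_data_avail_warn 15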

 

8. Deploy the mgr

Deploy the mgr from the ceph-deploy node (the mgr daemon monitors and manages the cluster nodes):

[root@master ceph]# ceph-deploy mgr create master


Run ceph -s to confirm that the mgr is now active:

[root@master ceph]# ceph -s
  cluster:
    id:     3af028b9-0f59-4070-bd5d-413316ea81e1
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 1
            mon master is low on available space
 
  services:
    mon: 1 daemons, quorum master (age 13m)
    mgr: master(active, since 44s)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs: 

 

 

9. Add an OSD Disk

Use lsblk to find the name of the free disk (sdb here):

[root@master ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   25G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   24G  0 part 
  ├─centos-root 253:0    0 21.5G  0 lvm  /
  └─centos-swap 253:1    0  2.5G  0 lvm  
sdb               8:16   0    2G  0 disk 
sr0              11:0    1 1024M  0 rom  

 

[root@master ~]# ceph-deploy osd create master --data /dev/sdb
This fails with:
[ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No such file or directory: 'ceph.conf'; has `ceph-deploy new` been run in this directory?
Solution:

The command must be run from the directory that contains ceph.conf, so cd there first:
[root@master ceph]# cd /opt/ceph
[root@master ceph]# ceph-deploy osd create master --data /dev/sdb
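To see how ceph-deploy prepared the disk, ceph-volume on the OSD node can list the resulting LVM-backed bluestore OSD:

# Show the OSD that was created on /dev/sdb
ceph-volume lvm list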

 

# Check the cluster status

[root@master ceph]# ceph -s
  cluster:
    id:     3af028b9-0f59-4070-bd5d-413316ea81e1
    health: HEALTH_WARN
            mon master is low on available space
 
  services:
    mon: 1 daemons, quorum master (age 10m)
    mgr: master(active, since 10m)
    osd: 1 osds: 1 up (since 2m), 1 in (since 2m)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 1019 MiB / 2.0 GiB avail
    pgs: 

 

10. List the OSDs with ceph osd tree

[root@master ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
-1       0.00189 root default                            
-3       0.00189     host master                         
 0   hdd 0.00189         osd.0       up  1.00000 1.00000 
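As a final smoke test (optional; the pool name test is arbitrary), create a pool, store an object, and read the listing back:

# Create a small pool and tag it with an application name
ceph osd pool create test 8
ceph osd pool application enable test rados
# Write one object and list the pool's contents
echo hello > /tmp/hello.txt
rados -p test put hello /tmp/hello.txt
rados -p test ls
# Cleanup: pool deletion must be explicitly allowed first
ceph config set mon mon_allow_pool_delete true
ceph osd pool delete test test --yes-i-really-really-mean-it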

 
