
Ceph Principles and Deployment

References
Chinese documentation: http://docs.ceph.org.cn/ (somewhat outdated)
English documentation: https://docs.ceph.com/en/latest/
 
Other blog posts:
https://llussy.github.io/2019/08/17/ceph-architecture/
https://cloud.tencent.com/developer/article/1174169
 
Classmates' blog posts:
https://www.cnblogs.com/cculin/articles/15150559.html
https://www.jianshu.com/p/007b0ffc874a
https://www.yuque.com/docs/share/aeae134a-ac41-440d-9d02-b5e7f070167a?# (Building a Ceph distributed storage cluster from scratch)
https://www.cnblogs.com/haozheyu/p/15149734.html
https://www.cnblogs.com/ty111/p/15145835.html
 
 
Ceph package mirrors:
Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/ceph
Alibaba Cloud mirror: https://mirrors.aliyun.com/ceph/

1. Introduction to Ceph

Ceph is an open-source, infinitely scalable distributed storage cluster built on RADOS (Reliable, Autonomic Distributed Object Store) that supports object storage, block devices, and a file system on the same cluster. At its core Ceph is an object store: data is split into one or more fixed-size objects (4 MB by default), and these objects are the atomic unit of reads and writes.

Compared with other storage systems, Ceph's advantage is that it makes full use of the compute power of the storage nodes: the location of each piece of data is calculated rather than looked up in a central table, which keeps data evenly distributed. Because the design relies on CRUSH and hash-ring style placement, there is no traditional single point of failure.
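
To see this calculated placement in action once the cluster from section 4 is running, you can ask Ceph where an object would be stored. The pool and object names below are placeholders, not part of the original walkthrough:

# Show which placement group and OSDs CRUSH maps an object to
# (assumes a pool named "mypool" exists; the object itself does not have to)
ceph osd map mypool myobject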

Ceph Strengths

  • High performance

    • Replaces the centralized metadata addressing of traditional storage with the CRUSH algorithm, giving balanced data distribution and a high degree of parallelism.
    • The distributed architecture takes disaster recovery and replica placement for different workloads into account, and can span data centers when the network is good enough.
  • High availability

    • The number of replicas can be adjusted flexibly.
    • Supports failure-domain separation with strong data consistency.
    • Automatically repairs and self-heals in many failure scenarios.
    • Storage nodes have no master/slave roles, so there is no single point of failure.
  • High scalability

    • Decentralized.
    • Flexible to expand.
    • Capacity and performance grow roughly linearly as nodes are added.
  • Rich feature set

    • Supports three storage interfaces: block, file, and object storage.
    • Supports custom interfaces and client drivers for many programming languages.

Ceph Weaknesses

To be added.

2. Ceph Cluster Components

OSD

Short for Object Storage Daemon. The OSD daemons store the actual data; the storage space consists of the disks in the storage servers, and any number of OSDs can be deployed.

Monitors

The Ceph monitor (ceph-mon) maintains the cluster maps; at least one must be deployed (three are typical for quorum).

Managers

The manager daemon (ceph-mgr) keeps track of runtime metrics and cluster state and hosts modules such as the dashboard; usually two are deployed, one active and one standby.

3. Ceph Read/Write Path

In short: an object name is hashed to a placement group (PG), CRUSH maps the PG to a set of OSDs, and the primary OSD of that set serves the I/O and replicates writes to the other OSDs.

4. Ceph Deployment

Ceph version: 16 (Pacific)

4.1 Host List

The 10.x addresses are on the public (external) network; the 192.168.x addresses are on the internal cluster network.

IP (Ubuntu 18.04, dual NIC)     Hostname      Role                   Notes
10.201.106.31/192.168.6.31      ceph-deploy   deployment node
10.201.106.32/192.168.6.32      ceph-mon1     mon (monitor) node
10.201.106.33/192.168.6.33      ceph-mon2     mon (monitor) node
10.201.106.34/192.168.6.34      ceph-mon3     mon (monitor) node
10.201.106.35/192.168.6.35      ceph-mgr1     mgr (manager) node
10.201.106.36/192.168.6.36      ceph-mgr2     mgr (manager) node
10.201.106.37/192.168.6.37      ceph-node1    OSD storage node       5 data disks
10.201.106.38/192.168.6.38      ceph-node2    OSD storage node       5 data disks
10.201.106.39/192.168.6.39      ceph-node3    OSD storage node       5 data disks
10.201.106.40/192.168.6.40      ceph-node4    OSD storage node       5 data disks

4.2 Environment Preparation

1. Time synchronization
2. Disable SELinux and the firewall
3. Set the hostnames
4. Configure name resolution (hosts entries in this lab; use DNS in production)
5. Configure the package repositories
6. Raise the system file-handle limits
7. Create a regular user with sudo rights, and set up passwordless SSH from the deploy node to the other nodes
8. Install Python 2 on every node

4.2.1 Time Synchronization

sudo apt update
sudo apt install chrony -y
sudo vim /etc/chrony/chrony.conf
# Point chrony at the Alibaba Cloud NTP servers
# Public servers
server ntp.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp1.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp2.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp3.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp4.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp5.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp6.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp7.aliyun.com minpoll 4 maxpoll 10 iburst
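
After editing the configuration, restart chrony and confirm a time source is selected (a quick verification step, not in the original notes):

sudo systemctl restart chrony
# The selected source is marked with an asterisk (*)
chronyc sources -v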

4.2.2 Disable SELinux and the Firewall
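
The original leaves this step empty. A minimal sketch for Ubuntu 18.04, which ships with AppArmor rather than SELinux, so only the firewall needs handling here (assuming ufw is the firewall in use):

# Check whether ufw is active
sudo ufw status
# Disable it for this lab setup (in production, open the required Ceph ports instead)
sudo ufw disable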

 

4.2.3 Set Hostnames

Set each hostname according to the host list in 4.1.

hostnamectl set-hostname <hostname>

4.2.4 Configure Name Resolution (hosts entries)

cat >> /etc/hosts << EOF
# ceph hosts (hostnames resolve to the public network addresses)
10.201.106.31 ceph-deploy
10.201.106.32 ceph-mon1
10.201.106.33 ceph-mon2
10.201.106.34 ceph-mon3
10.201.106.35 ceph-mgr1
10.201.106.36 ceph-mgr2
10.201.106.37 ceph-node1
10.201.106.38 ceph-node2
10.201.106.39 ceph-node3
10.201.106.40 ceph-node4
EOF

4.2.5 Configure the Ceph APT Repository

Replace the Ubuntu mirror:

sudo mv /etc/apt/{sources.list,sources.list.old}
sudo tee /etc/apt/sources.list > /dev/null <<EOF
# Source (deb-src) entries are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
EOF

Import the Ceph release key:

wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -

Configure the Ceph repository on every node:

echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main" | sudo tee -a /etc/apt/sources.list
 
# Refresh the package index
sudo apt update

# Verify the repository is recognized; if the package cannot be found, re-check the key import step
apt-cache madison ceph-deploy

4.2.6 Raise File Descriptor and Process Limits

sudo tee -a /etc/security/limits.conf > /dev/null <<EOF
* soft     nproc          102400
* hard     nproc          102400
* soft     nofile         102400
* hard     nofile         102400
 
root soft     nproc          102400
root hard     nproc          102400
root soft     nofile         102400
root hard     nofile         102400
EOF
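
To confirm the new limits, start a fresh login session and check (a simple sanity check, not in the original):

# Both should print 102400 after re-login
ulimit -n
ulimit -u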

4.2.7 Create a Regular User to Manage the Cluster

Since the Infernalis release, Ceph packages automatically create a ceph user to run the daemons, so do not create a user with that name.

1. Create the new user on every Ceph node.
groupadd -r -g 2001 cephops && useradd -r -m -s /bin/bash -u 2001 -g 2001 cephops && echo cephops:passwd1234 | chpasswd
 
2. Make sure the newly created user has passwordless sudo on every Ceph node
echo "cephops ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephops
sudo chmod 0440 /etc/sudoers.d/cephops
 
3. From the deploy node, set up key-based passwordless SSH from the cephops user to all nodes
3.1 Generate a key pair
cephops@ceph-deploy:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephops/.ssh/id_rsa):
Created directory '/home/cephops/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephops/.ssh/id_rsa.
Your public key has been saved in /home/cephops/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Oxkg2Q7npJ4oZeS+uNR2VLdGdkw5c14GZRI40IvT+F8 cephops@ceph-deploy
The key's randomart image is:
+---[RSA 2048]----+
|         .o.o++o |
|     o    oB .oo |
|  . + =. ++oB o  |
| o   O..++oo .   |
|  + ..o Soo      |
| +.o..  .+ .   E |
|..ooo.  +   . .  |
|.o...    .   .   |
|o..              |
+----[SHA256]-----+
 
3.2 Disable the SSH host-key confirmation prompt, then distribute the public key to every Ceph node
sudo sed -i '/ask/{s/#//;s/ask/no/}' /etc/ssh/ssh_config
ssh-copy-id cephops@10.201.106.X
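
Instead of copying the key host by host, a small loop over the hostnames from 4.1 works as well (you will be prompted for the cephops password once per node; this sketch assumes the hosts entries above are in place):

for host in ceph-mon{1,2,3} ceph-mgr{1,2} ceph-node{1,2,3,4}; do
  ssh-copy-id cephops@${host}
done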
 

4.2.8 Install Python 2.7

# The Ceph services depend on a Python 2 environment
sudo apt install python2.7 python-setuptools -y
sudo ln -sv /usr/bin/python2.7 /usr/bin/python2

4.3 Ceph Cluster Deployment Steps

4.3.1 On the deploy node, create a working directory for the configuration files and keys generated by ceph-deploy

mkdir ceph-deploy
cd ceph-deploy/

4.3.2 Install the ceph-deploy tool on the deploy node

sudo apt-get install ceph-deploy

4.3.3 Install ceph-common on every node for managing the cluster

sudo apt install ceph-common -y

4.3.4 Create a new Ceph cluster and generate the configuration and authentication files

ceph-deploy new --cluster-network 192.168.6.0/24 --public-network 10.201.106.0/24 ceph-mon1

[screenshot]

The working directory should now contain a Ceph configuration file, a monitor keyring, and a log file.

[screenshot]
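
Listing the working directory should show roughly the following (these are the default ceph-deploy file names; they may differ slightly in your environment):

cephops@ceph-deploy:~/ceph-deploy$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring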

4.3.5 Initialize the ceph-mon Node

# Install ceph-mon on the mon node
sudo apt install ceph-mon -y
 
# Initialize the mon from the deploy node
ceph-deploy mon create-initial
# Command output:
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephops/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f222849f640>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f222847dc50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon1 ...
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 18.04 bionic
[ceph-mon1][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] deploying mon to ceph-mon1
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] remote hostname: ceph-mon1
[ceph-mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon1][DEBUG ] create the mon path if it does not exist
[ceph-mon1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon1/done
[ceph-mon1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon1/done
[ceph-mon1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring
[ceph-mon1][DEBUG ] create the monitor keyring file
[ceph-mon1][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-mon1 --keyring /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring --setuser 64045 --setgroup 64045
[ceph-mon1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring
[ceph-mon1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon1][DEBUG ] create the init path if it does not exist
[ceph-mon1][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph-mon1][INFO  ] Running command: sudo systemctl enable ceph-mon@ceph-mon1
[ceph-mon1][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-mon1.service → /lib/systemd/system/ceph-mon@.service.
[ceph-mon1][INFO  ] Running command: sudo systemctl start ceph-mon@ceph-mon1
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph-mon1][DEBUG ] ********************************************************************************
[ceph-mon1][DEBUG ] status for monitor: mon.ceph-mon1
[ceph-mon1][DEBUG ] {
[ceph-mon1][DEBUG ]   "election_epoch": 3,
[ceph-mon1][DEBUG ]   "extra_probe_peers": [],
[ceph-mon1][DEBUG ]   "feature_map": {
[ceph-mon1][DEBUG ]     "mon": {
[ceph-mon1][DEBUG ]       "group": {
[ceph-mon1][DEBUG ]         "features": "0x3ffddff8eeacfffb",
[ceph-mon1][DEBUG ]         "num": 1,
[ceph-mon1][DEBUG ]         "release": "luminous"
[ceph-mon1][DEBUG ]       }
[ceph-mon1][DEBUG ]     }
[ceph-mon1][DEBUG ]   },
[ceph-mon1][DEBUG ]   "features": {
[ceph-mon1][DEBUG ]     "quorum_con": "4611087853746454523",
[ceph-mon1][DEBUG ]     "quorum_mon": [
[ceph-mon1][DEBUG ]       "kraken",
[ceph-mon1][DEBUG ]       "luminous"
[ceph-mon1][DEBUG ]     ],
[ceph-mon1][DEBUG ]     "required_con": "153140804152475648",
[ceph-mon1][DEBUG ]     "required_mon": [
[ceph-mon1][DEBUG ]       "kraken",
[ceph-mon1][DEBUG ]       "luminous"
[ceph-mon1][DEBUG ]     ]
[ceph-mon1][DEBUG ]   },
[ceph-mon1][DEBUG ]   "monmap": {
[ceph-mon1][DEBUG ]     "created": "2021-08-21 06:50:04.677407",
[ceph-mon1][DEBUG ]     "epoch": 1,
[ceph-mon1][DEBUG ]     "features": {
[ceph-mon1][DEBUG ]       "optional": [],
[ceph-mon1][DEBUG ]       "persistent": [
[ceph-mon1][DEBUG ]         "kraken",
[ceph-mon1][DEBUG ]         "luminous"
[ceph-mon1][DEBUG ]       ]
[ceph-mon1][DEBUG ]     },
[ceph-mon1][DEBUG ]     "fsid": "801d4260-20c6-4fcf-91d1-a583e7e6dcea",
[ceph-mon1][DEBUG ]     "modified": "2021-08-21 06:50:04.677407",
[ceph-mon1][DEBUG ]     "mons": [
[ceph-mon1][DEBUG ]       {
[ceph-mon1][DEBUG ]         "addr": "10.201.106.32:6789/0",
[ceph-mon1][DEBUG ]         "name": "ceph-mon1",
[ceph-mon1][DEBUG ]         "public_addr": "10.201.106.32:6789/0",
[ceph-mon1][DEBUG ]         "rank": 0
[ceph-mon1][DEBUG ]       }
[ceph-mon1][DEBUG ]     ]
[ceph-mon1][DEBUG ]   },
[ceph-mon1][DEBUG ]   "name": "ceph-mon1",
[ceph-mon1][DEBUG ]   "outside_quorum": [],
[ceph-mon1][DEBUG ]   "quorum": [
[ceph-mon1][DEBUG ]     0
[ceph-mon1][DEBUG ]   ],
[ceph-mon1][DEBUG ]   "rank": 0,
[ceph-mon1][DEBUG ]   "state": "leader",
[ceph-mon1][DEBUG ]   "sync_provider": []
[ceph-mon1][DEBUG ] }
[ceph-mon1][DEBUG ] ********************************************************************************
[ceph-mon1][INFO  ] monitor: mon.ceph-mon1 is running
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-mon1
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] find the location of an executable
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-mon1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpdHKPHy
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] fetch remote file
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.admin
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get-or-create client.admin osd allow * mds allow * mon allow * mgr allow *
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-mds
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-mgr
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-osd
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-rgw
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpdHKPHy

Check the process on the mon1 node:

[screenshot]
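
One way to verify on ceph-mon1 itself (a routine check, not part of the original):

# The ceph-mon systemd unit should be active
sudo systemctl status ceph-mon@ceph-mon1
# Or simply look for the running process
ps -ef | grep ceph-mon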

4.3.6 Distribute the Admin Key to the Node Servers

# From the ceph-deploy node, copy the configuration file and admin keyring to every node that needs to run ceph admin commands, so that later ceph commands do not have to specify the ceph-mon address and ceph.client.admin.keyring every time. The ceph-mon nodes also need the cluster configuration and authentication files synced to them.
 
ceph-deploy admin ceph-deploy ceph-node{1,2,3,4}
 
# For security, the keyring's owner and group default to root. To let the cephops user run ceph commands as well, grant it access:
ssh ceph-node1 'sudo apt install acl -y && sudo setfacl -m u:cephops:rw /etc/ceph/ceph.client.admin.keyring'
ssh ceph-node2 'sudo apt install acl -y && sudo setfacl -m u:cephops:rw /etc/ceph/ceph.client.admin.keyring'
ssh ceph-node3 'sudo apt install acl -y && sudo setfacl -m u:cephops:rw /etc/ceph/ceph.client.admin.keyring'
ssh ceph-node4 'sudo apt install acl -y && sudo setfacl -m u:cephops:rw /etc/ceph/ceph.client.admin.keyring'

[screenshot]

Check the cluster status on the ceph-mon node:

sudo ceph -s

[screenshot]

4.3.7 Initialize the ceph-mgr Node

# Install ceph-mgr on the manager node
sudo apt install ceph-mgr -y
 
# Initialize ceph-mgr1 from the deploy node
ceph-deploy mgr create ceph-mgr1


# Clear the HEALTH_WARN about insecure global_id reclaim
sudo ceph config set mon auth_allow_insecure_global_id_reclaim false

[screenshot]

4.3.8 Initialize the ceph-node Servers

# Install the Ceph packages on the OSD nodes
ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1 ceph-node2 ceph-node3 


# List the disks on a node
ceph-deploy disk list ceph-node1

[screenshot]

4.3.9 Wipe All Data Disks on the ceph-node Servers

ceph-deploy  disk zap ceph-node1 /dev/sd{b,c,d,e,f}
ceph-deploy  disk zap ceph-node2 /dev/sd{b,c,d,e,f}
ceph-deploy  disk zap ceph-node3 /dev/sd{b,c,d,e,f}

[screenshot]
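
Before creating the OSDs, it does no harm to confirm the disks are clean (a sanity check; device names follow the layout in 4.1):

ssh ceph-node1 'lsblk'
# sdb through sdf should show no partitions or LVM volumes after the zap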

4.3.10 Add OSDs

# How data is organized on each OSD:
# Data: the object data Ceph stores
# Block: the RocksDB data, i.e. metadata
# block-wal: the RocksDB write-ahead log
# OSD IDs are assigned sequentially starting from 0

# osd ID: 0-4
ceph-deploy osd create ceph-node1 --data /dev/sdb
ceph-deploy osd create ceph-node1 --data /dev/sdc
ceph-deploy osd create ceph-node1 --data /dev/sdd
ceph-deploy osd create ceph-node1 --data /dev/sde
ceph-deploy osd create ceph-node1 --data /dev/sdf
# osd ID: 5-9
ceph-deploy osd create ceph-node2 --data /dev/sdb
ceph-deploy osd create ceph-node2 --data /dev/sdc
ceph-deploy osd create ceph-node2 --data /dev/sdd
ceph-deploy osd create ceph-node2 --data /dev/sde
ceph-deploy osd create ceph-node2 --data /dev/sdf
# osd ID: 10-14
ceph-deploy osd create ceph-node3 --data /dev/sdb
ceph-deploy osd create ceph-node3 --data /dev/sdc
ceph-deploy osd create ceph-node3 --data /dev/sdd
ceph-deploy osd create ceph-node3 --data /dev/sde
ceph-deploy osd create ceph-node3 --data /dev/sdf

You can see 5 OSD processes on each node, numbered with the IDs that were generated when the OSDs were just added.

[screenshot]

[screenshot]

Check the Ceph cluster information again: 1 mon, 1 mgr, 15 OSDs, and no data yet.

[screenshot]
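
Besides ceph -s, the OSD-to-host layout can be inspected from any admin node (a standard command, output omitted here):

sudo ceph osd tree
# Lists each host bucket with its OSDs, their up/down status and weight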

5. Cluster High Availability

5.1 ceph-mon High Availability

# Install ceph-mon on the other two mon nodes
sudo apt install ceph-mon -y

# Add the mons from the deploy node
ceph-deploy mon add ceph-mon2
ceph-deploy mon add ceph-mon3 

If package configuration dialogs appear during installation, just keep pressing Enter to accept the defaults.

[screenshots]
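
To confirm that all three monitors have joined the quorum (a standard check, not part of the original):

sudo ceph mon stat
# Or, for full detail:
sudo ceph quorum_status --format json-pretty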

5.2 ceph-mgr High Availability

ceph-deploy mgr create ceph-mgr2

# Sync the configuration file to the ceph-mgr2 node
ceph-deploy admin ceph-mgr2 

Check the cluster status:

[screenshot]

5.3 Add Another Storage Node

# The admin key was already distributed to the node servers in step 4.3.6

# Install the Ceph packages on the new node
ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node4

# Wipe all data disks on the new node
ceph-deploy  disk zap ceph-node4 /dev/sd{b,c,d,e,f}

# Add the OSDs
# osd ID: 15-19
ceph-deploy osd create ceph-node4 --data /dev/sdb
ceph-deploy osd create ceph-node4 --data /dev/sdc
ceph-deploy osd create ceph-node4 --data /dev/sdd
ceph-deploy osd create ceph-node4 --data /dev/sde
ceph-deploy osd create ceph-node4 --data /dev/sdf

6. Cluster Tests

6.1 Reboot an OSD Node

# Reboot the first node server
ceph-node1:~# sudo reboot

# Only 15 of the 20 OSDs remain up
cephops@ceph-deploy:~$ ceph -s
  cluster:
    id:     72f08082-9af3-44c9-9e50-df42841d8fbc
    health: HEALTH_WARN
            5 osds down
            1 host (5 osds) down
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 33m)
    mgr: ceph-mgr1(active, since 59m), standbys: ceph-mgr2
    osd: 20 osds: 15 up (since 10s), 20 in (since 106s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   154 MiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     1 active+clean

# After the server comes back up, its OSDs are detected and rejoin the cluster automatically
cephops@ceph-deploy:~$ ceph -s
  cluster:
    id:     72f08082-9af3-44c9-9e50-df42841d8fbc
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 34m)
    mgr: ceph-mgr1(active, since 59m), standbys: ceph-mgr2
    osd: 20 osds: 20 up (since 12s), 20 in (since 2m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   148 MiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     1 active+clean

6.2 Remove an OSD
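
The original leaves this section empty. A rough sketch of the usual flow for removing a single OSD; osd ID 19 is only an example, and the steps should be checked against the official documentation before use on real data:

# Mark the OSD out and let the cluster migrate its data away
sudo ceph osd out 19
sudo ceph -s          # wait until all PGs are active+clean again

# Stop the daemon on the node that hosts it
ssh ceph-node4 'sudo systemctl stop ceph-osd@19'

# Remove the OSD from the CRUSH map, auth database and OSD map in one step (Luminous and later)
sudo ceph osd purge 19 --yes-i-really-mean-it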


7. Using Ceph RBD
