Storage Volumes and Data Persistence (3): RBD Volumes
Installing the Ceph Tools
Add the Ceph repository to the ceph-deploy admin node, then install ceph-deploy (this is done on the master/admin node). On CentOS, you can run:
[root@k8s-master01 ~]# sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.cn99.com
 * updates: mirrors.cn99.com
base                                          | 3.6 kB  00:00:00
docker-ce-stable                              | 3.5 kB  00:00:00
extras                                        | 2.9 kB  00:00:00
kubernetes                                    | 1.4 kB  00:00:00
nginx                                         | 2.9 kB  00:00:00
updates                                       | 2.9 kB  00:00:00
updates/7/x86_64/primary_db                   | 9.6 MB  00:00:08
Resolving Dependencies
--> Running transaction check
---> Package yum-utils.noarch 0:1.1.31-54.el7_8 will be installed
--> Processing Dependency: python-kitchen for package: yum-utils-1.1.31-54.el7_8.noarch
--> Processing Dependency: libxml2-python for package: yum-utils-1.1.31-54.el7_8.noarch
--> Running transaction check
---> Package libxml2-python.x86_64 0:2.9.1-6.el7.5 will be installed
---> Package python-kitchen.noarch 0:1.1.1-5.el7 will be installed
--> Processing Dependency: python-chardet for package: python-kitchen-1.1.1-5.el7.noarch
--> Running transaction check
---> Package python-chardet.noarch 0:2.2.1-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch      Version            Repository   Size
================================================================================
Installing:
 yum-utils          noarch    1.1.31-54.el7_8    base         122 k
Installing for dependencies:
 libxml2-python     x86_64    2.9.1-6.el7.5      base         247 k
 python-chardet     noarch    2.2.1-3.el7        base         227 k
 python-kitchen     noarch    1.1.1-5.el7        base         267 k

Transaction Summary
================================================================================
Install  1 Package (+3 Dependent packages)

Total download size: 863 k
Installed size: 4.3 M
Downloading packages:
(1/4): yum-utils-1.1.31-54.el7_8.noarch.rpm    | 122 kB  00:00:00
(2/4): libxml2-python-2.9.1-6.el7.5.x86_64.rpm | 247 kB  00:00:00
(3/4): python-chardet-2.2.1-3.el7.noarch.rpm   | 227 kB  00:00:00
(4/4): python-kitchen-1.1.1-5.el7.noarch.rpm   | 267 kB  00:00:01
--------------------------------------------------------------------------------
Total                               758 kB/s   | 863 kB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : python-chardet-2.2.1-3.el7.noarch        1/4
  Installing : python-kitchen-1.1.1-5.el7.noarch        2/4
  Installing : libxml2-python-2.9.1-6.el7.5.x86_64      3/4
  Installing : yum-utils-1.1.31-54.el7_8.noarch         4/4
  Verifying  : libxml2-python-2.9.1-6.el7.5.x86_64      1/4
  Verifying  : python-kitchen-1.1.1-5.el7.noarch        2/4
  Verifying  : yum-utils-1.1.31-54.el7_8.noarch         3/4
  Verifying  : python-chardet-2.2.1-3.el7.noarch        4/4

Installed:
  yum-utils.noarch 0:1.1.31-54.el7_8

Dependency Installed:
  libxml2-python.x86_64 0:2.9.1-6.el7.5   python-chardet.noarch 0:2.2.1-3.el7   python-kitchen.noarch 0:1.1.1-5.el7

Complete!
Loaded plugins: fastestmirror
adding repo from: https://dl.fedoraproject.org/pub/epel/7/x86_64/

[dl.fedoraproject.org_pub_epel_7_x86_64_]
name=added from: https://dl.fedoraproject.org/pub/epel/7/x86_64/
baseurl=https://dl.fedoraproject.org/pub/epel/7/x86_64/
enabled=1

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.cn99.com
 * updates: mirrors.cn99.com
dl.fedoraproject.org_pub_epel_7_x86_64_                   | 4.7 kB  00:00:00
(1/3): dl.fedoraproject.org_pub_epel_7_x86_64_/group_gz   |  96 kB  00:00:01
(2/3): dl.fedoraproject.org_pub_epel_7_x86_64_/updateinfo | 1.0 MB  00:00:03
(3/3): dl.fedoraproject.org_pub_epel_7_x86_64_/primary_db | 6.9 MB  00:00:07
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-13 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch    Version  Repository                                Size
================================================================================
Installing:
 epel-release   noarch  7-13     dl.fedoraproject.org_pub_epel_7_x86_64_   15 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 15 k
Installed size: 25 k
Downloading packages:
epel-release-7-13.noarch.rpm                              |  15 kB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : epel-release-7-13.noarch   1/1
  Verifying  : epel-release-7-13.noarch   1/1

Installed:
  epel-release.noarch 0:7-13

Complete!
Add the Ceph package source to your software repositories. Use a text editor to create a YUM (Yellowdog Updater, Modified) repo file at /etc/yum.repos.d/ceph.repo. For example:
sudo vim /etc/yum.repos.d/ceph.repo
Paste in the content below, replacing {ceph-stable-release} with the name of the latest stable Ceph release (e.g. firefly) and {distro} with your Linux distribution (e.g. el6 for CentOS 6, el7 for CentOS 7, rhel6 for Red Hat 6.5, rhel7 for Red Hat 7, fc19 for Fedora 19, fc20 for Fedora 20).
Here, replace {ceph-release} with the version number 15.2.8.
Finally, save the result to /etc/yum.repos.d/ceph.repo.
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
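As a filled-in sketch, using the 15.2.8 version number mentioned above and el7 for CentOS 7 (verify that this exact path exists under download.ceph.com before relying on it):

```ini
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-15.2.8/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
```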
Update the repositories and install ceph-deploy (on all nodes):
sudo yum update && sudo yum install ceph-deploy
Installing NTP
We recommend installing an NTP service on all Ceph nodes (especially the Ceph Monitor nodes) to avoid failures caused by clock drift.
sudo yum install ntp ntpdate ntp-doc
Make sure the NTP service is started on every Ceph node, and that all nodes use the same NTP server.
systemctl start ntpdate.service
systemctl enable ntpdate.service
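The same-server requirement can be met by pointing /etc/ntp.conf on every node at one common server; ntp.example.com below is a placeholder for whichever server you standardize on:

```conf
# /etc/ntp.conf (fragment) -- keep identical on every Ceph node.
# Comment out the default "server N.centos.pool.ntp.org iburst" lines
# and use one common server instead:
server ntp.example.com iburst
```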
Installing an SSH Server
[root@k8s-master01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:lFfzJHpxo8HWRlKSIxGyNoS6a3xmA+rojSBp0BZaApI root@k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
|o. .o o+O=*      |
|E .. +.o=%o.     |
|. o . * oooo.    |
| = .. o o .      |
|o o . S          |
|.o o             |
|+. o o           |
|+.+ + =          |
|o+.o + .         |
+----[SHA256]-----+
[root@k8s-master01 ~]# ssh-copy-id liutao@k8s-node01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
liutao@10.122.138.245's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'liutao@10.122.138.245'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master01 ~]# ssh-copy-id liutao@k8s-node02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
liutao@10.122.138.246's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'liutao@10.122.138.246'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master01 ~]# ssh-copy-id liutao@k8s-node03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
liutao@10.122.138.247's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'liutao@10.122.138.247'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master01 ~]#
(Recommended) Modify the ~/.ssh/config file on the ceph-deploy admin node so that ceph-deploy can log in to the Ceph nodes as the user you created, without having to pass --username {username} on every ceph-deploy invocation. This also simplifies ssh and scp usage. Replace {username} with the user you created:
[root@k8s-master01 ~]# cat ~/.ssh/config
Host k8s-node01
Hostname k8s-node01
User liutao
Host k8s-node02
Hostname k8s-node02
User liutao
Host k8s-node03
Hostname k8s-node03
User liutao
Also add an entry for the master node's IP/hostname.
Creating a User for Deploying Ceph
sudo useradd -d /home/liutao -m liutao
sudo passwd liutao
The password is lt941208.
Make sure the newly created user on each Ceph node has sudo privileges:
[root@k8s-master01 ~]# echo "liutao ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/liutao
liutao ALL = (root) NOPASSWD:ALL
[root@k8s-master01 ~]# sudo chmod 0440 /etc/sudoers.d/liutao
[root@k8s-master01 ~]#
Creating the Cluster
First, create a directory on the admin node to hold the configuration files and keys that ceph-deploy generates:
mkdir my-cluster
cd my-cluster
On the admin node, from the configuration directory you just created, run the following steps with ceph-deploy:
[root@k8s-master01 my-cluster]# ceph-deploy new k8s-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new k8s-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username          : None
[ceph_deploy.cli][INFO  ]  func              : <function new at 0x7fee0ff40de8>
[ceph_deploy.cli][INFO  ]  verbose           : False
[ceph_deploy.cli][INFO  ]  overwrite_conf    : False
[ceph_deploy.cli][INFO  ]  quiet             : False
[ceph_deploy.cli][INFO  ]  cd_conf           : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fee0f6b8758>
[ceph_deploy.cli][INFO  ]  cluster           : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey       : True
[ceph_deploy.cli][INFO  ]  mon               : ['k8s-node01']
[ceph_deploy.cli][INFO  ]  public_network    : None
[ceph_deploy.cli][INFO  ]  ceph_conf         : None
[ceph_deploy.cli][INFO  ]  cluster_network   : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  fsid              : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[k8s-node01][DEBUG ] connected to host: k8s-master01
[k8s-node01][INFO  ] Running command: ssh -CT -o BatchMode=yes k8s-node01
[k8s-node01][DEBUG ] connection detected need for sudo
[k8s-node01][DEBUG ] connected to host: k8s-node01
[k8s-node01][DEBUG ] detect platform information from remote host
[k8s-node01][DEBUG ] detect machine type
[k8s-node01][DEBUG ] find the location of an executable
[k8s-node01][INFO  ] Running command: sudo /usr/sbin/ip link show
[k8s-node01][INFO  ] Running command: sudo /usr/sbin/ip addr show
[k8s-node01][DEBUG ] IP addresses found: [u'10.122.138.245', u'172.17.0.1', u'10.244.2.1', u'10.244.2.0']
[ceph_deploy.new][DEBUG ] Resolving host k8s-node01
[ceph_deploy.new][DEBUG ] Monitor k8s-node01 at 10.122.138.245
[ceph_deploy.new][DEBUG ] Monitor initial members are ['k8s-node01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.122.138.245']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@k8s-master01 my-cluster]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
Inspect the ceph-deploy output with ls and cat in the current directory; you should see a Ceph configuration file, a monitor keyring, and a log file.
Install Ceph (once this step completes, the rbd command becomes available):
ceph-deploy install k8s-master01 k8s-node01 k8s-node02 k8s-node03
If the install fails with [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph', remove the stale release package and rerun the install:
# yum remove ceph-release
Deploy the initial monitor(s) and gather all the keys:
[root@k8s-master01 my-cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username          : None
[ceph_deploy.cli][INFO  ]  verbose           : False
[ceph_deploy.cli][INFO  ]  overwrite_conf    : False
[ceph_deploy.cli][INFO  ]  subcommand        : create-initial
[ceph_deploy.cli][INFO  ]  quiet             : False
[ceph_deploy.cli][INFO  ]  cd_conf           : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f629386d2d8>
[ceph_deploy.cli][INFO  ]  cluster           : ceph
[ceph_deploy.cli][INFO  ]  func              : <function mon at 0x7f6293acb410>
[ceph_deploy.cli][INFO  ]  ceph_conf         : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  keyrings          : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts k8s-node01
[ceph_deploy.mon][DEBUG ] detecting platform for host k8s-node01 ...
[k8s-node01][DEBUG ] connection detected need for sudo
[k8s-node01][DEBUG ] connected to host: k8s-node01
[k8s-node01][DEBUG ] detect platform information from remote host
[k8s-node01][DEBUG ] detect machine type
[k8s-node01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.9.2009 Core
[k8s-node01][DEBUG ] determining if provided host has same hostname in remote
[k8s-node01][DEBUG ] get remote short hostname
[k8s-node01][DEBUG ] deploying mon to k8s-node01
[k8s-node01][DEBUG ] get remote short hostname
[k8s-node01][DEBUG ] remote hostname: k8s-node01
[k8s-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8s-node01][DEBUG ] create the mon path if it does not exist
[k8s-node01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-k8s-node01/done
[k8s-node01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-k8s-node01/done
[k8s-node01][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-k8s-node01.mon.keyring
[k8s-node01][DEBUG ] create the monitor keyring file
[k8s-node01][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i k8s-node01 --keyring /var/lib/ceph/tmp/ceph-k8s-node01.mon.keyring --setuser 167 --setgroup 167
[k8s-node01][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-k8s-node01.mon.keyring
[k8s-node01][DEBUG ] create a done file to avoid re-doing the mon deployment
[k8s-node01][DEBUG ] create the init path if it does not exist
[k8s-node01][INFO  ] Running command: sudo systemctl enable ceph.target
[k8s-node01][INFO  ] Running command: sudo systemctl enable ceph-mon@k8s-node01
[k8s-node01][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@k8s-node01.service to /usr/lib/systemd/system/ceph-mon@.service.
[k8s-node01][INFO  ] Running command: sudo systemctl start ceph-mon@k8s-node01
[k8s-node01][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.k8s-node01.asok mon_status
[k8s-node01][DEBUG ] ********************************************************************************
[k8s-node01][DEBUG ] status for monitor: mon.k8s-node01
[k8s-node01][DEBUG ] {
[k8s-node01][DEBUG ]   "election_epoch": 3,
[k8s-node01][DEBUG ]   "extra_probe_peers": [],
[k8s-node01][DEBUG ]   "feature_map": {
[k8s-node01][DEBUG ]     "mon": [
[k8s-node01][DEBUG ]       {
[k8s-node01][DEBUG ]         "features": "0x3ffddff8ffacfffb",
[k8s-node01][DEBUG ]         "num": 1,
[k8s-node01][DEBUG ]         "release": "luminous"
[k8s-node01][DEBUG ]       }
[k8s-node01][DEBUG ]     ]
[k8s-node01][DEBUG ]   },
[k8s-node01][DEBUG ]   "features": {
[k8s-node01][DEBUG ]     "quorum_con": "4611087854031667195",
[k8s-node01][DEBUG ]     "quorum_mon": [
[k8s-node01][DEBUG ]       "kraken",
[k8s-node01][DEBUG ]       "luminous",
[k8s-node01][DEBUG ]       "mimic",
[k8s-node01][DEBUG ]       "osdmap-prune"
[k8s-node01][DEBUG ]     ],
[k8s-node01][DEBUG ]     "required_con": "144115738102218752",
[k8s-node01][DEBUG ]     "required_mon": [
[k8s-node01][DEBUG ]       "kraken",
[k8s-node01][DEBUG ]       "luminous",
[k8s-node01][DEBUG ]       "mimic",
[k8s-node01][DEBUG ]       "osdmap-prune"
[k8s-node01][DEBUG ]     ]
[k8s-node01][DEBUG ]   },
[k8s-node01][DEBUG ]   "monmap": {
[k8s-node01][DEBUG ]     "created": "2021-08-31 11:51:03.706580",
[k8s-node01][DEBUG ]     "epoch": 1,
[k8s-node01][DEBUG ]     "features": {
[k8s-node01][DEBUG ]       "optional": [],
[k8s-node01][DEBUG ]       "persistent": [
[k8s-node01][DEBUG ]         "kraken",
[k8s-node01][DEBUG ]         "luminous",
[k8s-node01][DEBUG ]         "mimic",
[k8s-node01][DEBUG ]         "osdmap-prune"
[k8s-node01][DEBUG ]       ]
[k8s-node01][DEBUG ]     },
[k8s-node01][DEBUG ]     "fsid": "df4e73f2-8a91-40e9-a53b-072470efe547",
[k8s-node01][DEBUG ]     "modified": "2021-08-31 11:51:03.706580",
[k8s-node01][DEBUG ]     "mons": [
[k8s-node01][DEBUG ]       {
[k8s-node01][DEBUG ]         "addr": "10.122.138.245:6789/0",
[k8s-node01][DEBUG ]         "name": "k8s-node01",
[k8s-node01][DEBUG ]         "public_addr": "10.122.138.245:6789/0",
[k8s-node01][DEBUG ]         "rank": 0
[k8s-node01][DEBUG ]       }
[k8s-node01][DEBUG ]     ]
[k8s-node01][DEBUG ]   },
[k8s-node01][DEBUG ]   "name": "k8s-node01",
[k8s-node01][DEBUG ]   "outside_quorum": [],
[k8s-node01][DEBUG ]   "quorum": [
[k8s-node01][DEBUG ]     0
[k8s-node01][DEBUG ]   ],
[k8s-node01][DEBUG ]   "rank": 0,
[k8s-node01][DEBUG ]   "state": "leader",
[k8s-node01][DEBUG ]   "sync_provider": []
[k8s-node01][DEBUG ] }
[k8s-node01][DEBUG ] ********************************************************************************
[k8s-node01][INFO  ] monitor: mon.k8s-node01 is running
[k8s-node01][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.k8s-node01.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.k8s-node01
[k8s-node01][DEBUG ] connection detected need for sudo
[k8s-node01][DEBUG ] connected to host: k8s-node01
[k8s-node01][DEBUG ] detect platform information from remote host
[k8s-node01][DEBUG ] detect machine type
[k8s-node01][DEBUG ] find the location of an executable
[k8s-node01][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.k8s-node01.asok mon_status
[ceph_deploy.mon][INFO  ] mon.k8s-node01 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpvc1_2l
[k8s-node01][DEBUG ] connection detected need for sudo
[k8s-node01][DEBUG ] connected to host: k8s-node01
[k8s-node01][DEBUG ] detect platform information from remote host
[k8s-node01][DEBUG ] detect machine type
[k8s-node01][DEBUG ] get remote short hostname
[k8s-node01][DEBUG ] fetch remote file
[k8s-node01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.k8s-node01.asok mon_status
[k8s-node01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8s-node01/keyring auth get client.admin
[k8s-node01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8s-node01/keyring auth get client.bootstrap-mds
[k8s-node01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8s-node01/keyring auth get client.bootstrap-mgr
[k8s-node01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8s-node01/keyring auth get client.bootstrap-osd
[k8s-node01][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8s-node01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpvc1_2l
[root@k8s-master01 my-cluster]#
After completing the steps above, the following keyrings should appear in the current directory:
{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring
Note: the bootstrap-rgw keyring is only created when installing Hammer or a later release.
[root@k8s-master01 my-cluster]# ll
total 372
-rw------- 1 root root    113 Aug 31 11:51 ceph.bootstrap-mds.keyring
-rw------- 1 root root    113 Aug 31 11:51 ceph.bootstrap-mgr.keyring
-rw------- 1 root root    113 Aug 31 11:51 ceph.bootstrap-osd.keyring
-rw------- 1 root root    113 Aug 31 11:51 ceph.bootstrap-rgw.keyring
-rw------- 1 root root    151 Aug 31 11:51 ceph.client.admin.keyring
-rw-r--r-- 1 root root    202 Aug 31 11:11 ceph.conf
-rw-r--r-- 1 root root 348626 Aug 31 11:51 ceph-deploy-ceph.log
-rw------- 1 root root     73 Aug 31 11:11 ceph.mon.keyring
With the Ceph cluster deployed, you can try out the management features and the rados object-store commands, then continue with the quick-start guides to learn about Ceph block devices, the Ceph file system, and the Ceph object gateway.
From the master node, copy the keyring to the other nodes:
ceph-deploy admin k8s-master01 && ceph-deploy admin k8s-node01 && ceph-deploy admin k8s-node02 && ceph-deploy admin k8s-node03
Make sure the keyring file is readable:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
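Once the keyring has been distributed and made readable, each node should be able to talk to the cluster directly. A quick sanity check (shown as a sketch; the actual output depends on your cluster's state):

```
# Confirm the admin keyring is in place and readable
ls -l /etc/ceph/ceph.client.admin.keyring

# Query overall cluster status; a healthy cluster reports HEALTH_OK
ceph -s
ceph health
```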
Ceph Command Operations
Create a pool named rbd:
ceph osd pool create rbd 128
Set the pool's replica count to 3:
ceph osd pool set rbd size 3
View the pool's replica count:
ceph osd pool get rbd size
Rename the pool:
ceph osd pool rename rbd rbd1
Delete the pool (the pool name must be given twice along with a confirmation flag, and the monitors must permit deletion via mon_allow_pool_delete):
ceph osd pool delete rbd1 rbd1 --yes-i-really-really-mean-it
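The 128 in ceph osd pool create rbd 128 is the pool's placement-group (PG) count. A common rule of thumb is (OSDs × 100) / replicas, rounded up to a power of two; the sketch below assumes 3 OSDs (an assumption for illustration) and 3 replicas, matching the size 3 set above, which lands on 128:

```shell
osds=3        # assumed OSD count -- adjust for your cluster
replicas=3    # pool size set above
target=100    # target PGs per OSD (rule of thumb)

raw=$(( osds * target / replicas ))   # 3 * 100 / 3 = 100
pgs=1
while [ "$pgs" -lt "$raw" ]; do       # round up to the next power of two
  pgs=$(( pgs * 2 ))
done
echo "$pgs"                           # prints 128
```

Recompute this before creating the pool whenever your OSD count changes; an undersized or oversized pg_num hurts data distribution.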