Ceph: manually deploying ceph-mds on CentOS 7.6.1810
1. Building the first MDS
1) Create a data directory for the MDS (metadata server). The directory name must follow the {cluster}-{id} convention, here ceph-ceph4, because the daemon looks for its data under /var/lib/ceph/mds/{cluster}-{id}.
mkdir -p /var/lib/ceph/mds/ceph-ceph4
2) Create a key for the bootstrap-mds client. Note: if the keyring below has already been generated in that directory, this step can be skipped.
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring --gen-key -n client.bootstrap-mds
creating /var/lib/ceph/bootstrap-mds/ceph.keyring

cat /var/lib/ceph/bootstrap-mds/ceph.keyring
[client.bootstrap-mds]
	key = AQCOe9pcEuCiIRAAHvGgC98ZGYKE5klCw4kAfA==
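As a quick sanity check (not part of the original sequence), ceph-authtool can also list every key and capability stored in a keyring:

# List the entries in the bootstrap-mds keyring (read-only check)
ceph-authtool -l /var/lib/ceph/bootstrap-mds/ceph.keyring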
3) Create the bootstrap-mds client in the ceph auth database, granting it caps and adding the key created above. Note: check the output of `ceph auth list`; if the client.bootstrap-mds user already exists, this step can be skipped.
ceph auth add client.bootstrap-mds mon 'allow profile bootstrap-mds' -i /var/lib/ceph/bootstrap-mds/ceph.keyring
added key for client.bootstrap-mds

ceph auth list | grep -A 2 client.bootstrap-mds
installed auth entries:
client.bootstrap-mds
	key: AQCOe9pcEuCiIRAAHvGgC98ZGYKE5klCw4kAfA==
	caps: [mon] allow profile bootstrap-mds
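Instead of grepping through `ceph auth list`, a single entity can also be fetched directly, which is less fragile:

# Print just the key and caps for one entity
ceph auth get client.bootstrap-mds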
4) Create a ceph.bootstrap-mds.keyring file in root's home directory.
touch /root/ceph.bootstrap-mds.keyring
5) Import the key from /var/lib/ceph/bootstrap-mds/ceph.keyring into the ceph.bootstrap-mds.keyring file in the home directory.
ceph-authtool --import-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring ceph.bootstrap-mds.keyring
importing contents of /var/lib/ceph/bootstrap-mds/ceph.keyring into ceph.bootstrap-mds.keyring

cat /root/ceph.bootstrap-mds.keyring
[client.bootstrap-mds]
	key = AQCOe9pcEuCiIRAAHvGgC98ZGYKE5klCw4kAfA==
6) Create the mds.ceph4 user in the ceph auth database, grant it caps and generate a key, saving the key to /var/lib/ceph/mds/ceph-ceph4/keyring.
ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph4 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-ceph4/keyring

cat /var/lib/ceph/mds/ceph-ceph4/keyring
[mds.ceph4]
	key = AQCTfNpcMlgDIhAAM72KJgwcH4w9pv1QllEoCg==

ceph auth list | grep -A 4 mds.ceph4
installed auth entries:
mds.ceph4
	key: AQCTfNpcMlgDIhAAM72KJgwcH4w9pv1QllEoCg==
	caps: [mds] allow
	caps: [mon] allow profile mds
	caps: [osd] allow rwx
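One precaution that is not in the original steps: the systemd unit starts the daemon with --setuser ceph --setgroup ceph, while the directory and keyring above were created as root. If startup fails with a permission error, handing the tree to the ceph user usually fixes it:

# Make the MDS data directory and keyring readable by the ceph user
chown -R ceph:ceph /var/lib/ceph/mds/ceph-ceph4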
7) Start the MDS and enable it at boot.
systemctl start ceph-mds@ceph4
systemctl enable ceph-mds@ceph4
systemctl status ceph-mds@ceph4
● ceph-mds@ceph4.service - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-05-14 16:31:38 CST; 14s ago
Main PID: 16055 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph4.service
└─16055 /usr/bin/ceph-mds -f --cluster ceph --id ceph4 --setuser ceph --setgroup ceph
May 14 16:31:38 ceph4 systemd[1]: Started Ceph metadata server daemon.
May 14 16:31:38 ceph4 ceph-mds[16055]: starting mds.ceph4 at -
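If the unit does not come up cleanly, the systemd journal is the first place to look:

# Show the most recent log entries for the MDS unit
journalctl -u ceph-mds@ceph4 -e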
2. Building the second and third MDS
1) Copy the key files to ceph5 and ceph6.
scp ceph.bootstrap-mds.keyring ceph5:/root/ceph.bootstrap-mds.keyring
scp /var/lib/ceph/bootstrap-mds/ceph.keyring ceph5:/var/lib/ceph/bootstrap-mds/ceph.keyring
scp ceph.bootstrap-mds.keyring ceph6:/root/ceph.bootstrap-mds.keyring
scp /var/lib/ceph/bootstrap-mds/ceph.keyring ceph6:/var/lib/ceph/bootstrap-mds/ceph.keyring
2) Create the MDS data directories on ceph5 and ceph6.
mkdir -p /var/lib/ceph/mds/ceph-ceph5
mkdir -p /var/lib/ceph/mds/ceph-ceph6
3) Create the mds.ceph5 and mds.ceph6 users in the ceph auth database, grant them caps and generate keys, saving each key to /var/lib/ceph/mds/ceph-ceph5/keyring and /var/lib/ceph/mds/ceph-ceph6/keyring respectively.
ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph5 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-ceph5/keyring
ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph6 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-ceph6/keyring
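As on ceph4, the keyrings just written are root-owned; if either daemon refuses to start over permissions, the same ownership fix applies on each node:

chown -R ceph:ceph /var/lib/ceph/mds/ceph-ceph5   # run on ceph5
chown -R ceph:ceph /var/lib/ceph/mds/ceph-ceph6   # run on ceph6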
4) Start the MDS daemons and enable them at boot.
systemctl start ceph-mds@ceph5
systemctl enable ceph-mds@ceph5
systemctl start ceph-mds@ceph6
systemctl enable ceph-mds@ceph6
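With no filesystem defined yet, all three daemons simply register as standby. A quick way to confirm they checked in (the exact output format varies by release):

ceph mds stat   # expect all three daemons reported as up:standby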
The steps above only bring the MDS daemons up; we are not done yet.
Next, create the pools CephFS needs and then the filesystem itself. In my cluster the PG budget was already exhausted, so I first had to delete an unused pool.
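Some background on why PGs can "run out": since Luminous, the monitors refuse to create a pool whenever the projected placement-group count per OSD would exceed mon_max_pg_per_osd. A rough estimate of the current load is the sum over all pools of pg_num × replica_size, divided by the number of OSDs; the actual per-OSD figure shows up in the PGS column of:

ceph osd df   # check the PGS column before creating new pools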
Deleting the pool fails with an error:
ceph osd pool rm volumes volumes --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
Fix:
On the mon node, edit the configuration file /etc/ceph/ceph.conf:
[mon]
mon allow pool delete = true
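Alternatively, the setting can be injected into the running monitor without a restart; the ceph.conf edit above is what makes it persistent:

ceph tell 'mon.*' injectargs '--mon_allow_pool_delete=true'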
After restarting ceph-mon (or injecting the setting at runtime), retry the deletion:
ceph osd pool rm volumes volumes --yes-i-really-really-mean-it
pool 'volumes' removed
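To confirm the pool is gone and see what remains:

ceph osd pool ls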
On the MDS node, proceed as follows:
ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
ceph osd pool create cephfs_metadata 32
pool 'cephfs_metadata' created
ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 10 and data pool 9
ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
ceph mds stat
cephfs-1/1/1 up {0=ceph6=up:active}, 2 up:standby
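`ceph mds stat` is terse; on Luminous and later, the mgr also provides a richer per-rank view (rank states, client counts, pool usage):

ceph fs status cephfs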
ceph -s
cluster:
id: 1f0490cd-f938-4e20-8ea5-d817d941a6e6
health: HEALTH_OK
services:
mon: 1 daemons, quorum ceph4
mgr: ceph4(active)
mds: cephfs-1/1/1 up {0=ceph6=up:active}, 2 up:standby
osd: 6 osds: 6 up, 6 in
rgw: 3 daemons active
data:
pools: 8 pools, 448 pgs
objects: 252 objects, 10.3MiB
usage: 6.29GiB used, 23.1GiB / 29.4GiB avail
pgs: 448 active+clean
Mounting CephFS on a client
yum -y install ceph-fuse
ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
chmod 600 admin.key
mount.ceph ceph4:6789:/ /mnt -o name=admin,secretfile=admin.key

df -Th
Filesystem               Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  xfs        17G  2.0G   16G  12% /
devtmpfs                 devtmpfs  475M     0  475M   0% /dev
tmpfs                    tmpfs     487M     0  487M   0% /dev/shm
tmpfs                    tmpfs     487M  7.8M  479M   2% /run
tmpfs                    tmpfs     487M     0  487M   0% /sys/fs/cgroup
/dev/sda1                xfs      1014M  133M  882M  14% /boot
/dev/sdc1                xfs        97M  5.4M   92M   6% /var/lib/ceph/osd/ceph-3
/dev/sdb1                xfs        97M  5.4M   92M   6% /var/lib/ceph/osd/ceph-0
tmpfs                    tmpfs      98M     0   98M   0% /run/user/0
10.1.1.24:6789:/         ceph      7.3G     0  7.3G   0% /mnt
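Two follow-ups worth noting. To make the mount survive a reboot, the usual route is an /etc/fstab entry (this assumes the secret file was saved as /root/admin.key; adjust to wherever you put it). And since ceph-fuse was installed above anyway, the FUSE client is a fallback when the kernel client is too old for the cluster:

# /etc/fstab entry for the kernel client
ceph4:6789:/  /mnt  ceph  name=admin,secretfile=/root/admin.key,noatime,_netdev  0  0

# FUSE alternative; picks up /etc/ceph/ceph.client.admin.keyring by default
ceph-fuse -m ceph4:6789 /mnt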
