CTDB-based nfs-ganesha + glusterfs
About nfs-ganesha: its documentation has a page dedicated to high availability. nfs-ganesha does not provide its own clustering support, but HA can be implemented with Linux HA tooling.
GlusterFS volumes are accessed through the GLUSTER FSAL (File System Abstraction Layer).
I. The Pacemaker-based nfs-ganesha + glusterfs approach applies only to GlusterFS 3.10
II. CTDB-based nfs-ganesha + glusterfs
Setting up HA for nfs-ganesha with CTDB
1. Install the storhaug package on all participating nodes. This pulls in all dependencies: ctdb, nfs-ganesha-gluster, glusterfs, and their related packages.
yum install storhaug-nfs
2. Configure passwordless SSH
On one of the participating nodes:
Create the directory:
mkdir -p /etc/sysconfig/storhaug.d/
Generate a key pair:
ssh-keygen -f /etc/sysconfig/storhaug.d/secret.pem
Copy the public key to the other nodes:
ssh-copy-id -i /etc/sysconfig/storhaug.d/secret.pem.pub root@nas5
Confirm that passwordless login works:
ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /etc/sysconfig/storhaug.d/secret.pem root@nas5
3. Populate the CTDB node and public-address files
Use the participating nodes' fixed IPs, one per line:
/etc/ctdb/nodes
10.1.1.14
10.1.1.15
Use the participating nodes' floating IPs (VIPs); these must differ from the fixed IPs listed in /etc/ctdb/nodes:
/etc/ctdb/public_addresses
10.1.1.114/24 ens33
10.1.1.115/24 ens33
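For convenience, both files can be written in one go; a minimal sketch using the addresses above (CTDB expects the same files on every participating node, so repeat or copy them everywhere):
cat > /etc/ctdb/nodes <<'EOF'
10.1.1.14
10.1.1.15
EOF
cat > /etc/ctdb/public_addresses <<'EOF'
10.1.1.114/24 ens33
10.1.1.115/24 ens33
EOF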
4. Configure CTDB's main configuration file
/etc/ctdb/ctdbd.conf
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_NFS=yes
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
# additional settings for nfs-ganesha
CTDB_NFS_CALLOUT=/etc/ctdb/nfs-ganesha-callout
CTDB_NFS_STATE_FS_TYPE=glusterfs
CTDB_NFS_STATE_MNT=/run/gluster/shared_storage
CTDB_NFS_SKIP_SHARE_CHECK=yes
NFS_HOSTNAME=localhost
5. Create /etc/ganesha/ganesha.conf; global configuration options can be added to it later:
touch /etc/ganesha/ganesha.conf
echo "### NFS-Ganesha.config" > /etc/ganesha/ganesha.conf
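As an illustration of what such global options look like, a sketch in ganesha.conf's block syntax (block and parameter names are from the documented configuration format; the values here are examples, not part of this setup):
NFS_CORE_PARAM {
    # serve both NFSv3 and NFSv4, matching the exports created later
    NFS_Protocols = 3, 4;
}
NFSV4 {
    # grace period after a restart/failover, in seconds
    Grace_Period = 60;
}
LOG {
    Default_Log_Level = EVENT;
}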
6. Create the trusted storage pool and start the gluster shared storage volume
On all participating nodes:
systemctl start glusterd
systemctl enable glusterd
On the bootstrap node, peer-probe the other nodes:
gluster peer probe nas5
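After probing, peer status should show the other node(s) connected; abbreviated output:
gluster peer status
Number of Peers: 1
Hostname: nas5
State: Peer in Cluster (Connected)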
Enable the gluster shared storage volume; this automatically creates and mounts a volume to hold the shared configuration:
gluster volume set all cluster.enable-shared-storage enable
volume set: success
Verify that gluster_shared_storage is mounted at /run/gluster/shared_storage:
gluster volume list
gluster_shared_storage
gv0
df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/centos-root        17G  1.6G   16G  10% /
devtmpfs                      475M     0  475M   0% /dev
tmpfs                         487M   38M  449M   8% /dev/shm
tmpfs                         487M  7.7M  479M   2% /run
tmpfs                         487M     0  487M   0% /sys/fs/cgroup
/dev/sda1                    1014M  133M  882M  14% /boot
/dev/mapper/datavg-lv1        1.8G   33M  1.8G   2% /data/brick1
tmpfs                          98M     0   98M   0% /run/user/0
nas4:/gv0                     1.8G   33M  1.8G   2% /mnt
nas4:/gv0                     1.8G   32M  1.8G   2% /root/test
nas4:/gluster_shared_storage   17G  1.6G   16G  10% /run/gluster/shared_storage
7. Start the ctdbd and ganesha.nfsd daemons
systemctl start nfs-ganesha
systemctl enable nfs-ganesha
systemctl start ctdb
systemctl enable ctdb
systemctl status ctdb
● ctdb.service - CTDB
Loaded: loaded (/usr/lib/systemd/system/ctdb.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-05-10 16:00:00 CST; 7min ago
Docs: man:ctdbd(1)
man:ctdb(7)
Main PID: 9581 (ctdbd)
CGroup: /system.slice/ctdb.service
├─9581 /usr/sbin/ctdbd --pidfile=/run/ctdb/ctdbd.pid --nlist=/etc/ctdb/nodes --public-addresses=/etc/ctdb/public_a...
├─9583 /usr/libexec/ctdb/ctdb_eventd -e /etc/ctdb/events.d -s /var/run/ctdb/eventd.sock -P 9581 -l file:/var/log/l...
└─9654 /usr/sbin/ctdbd --pidfile=/run/ctdb/ctdbd.pid --nlist=/etc/ctdb/nodes --public-addresses=/etc/ctdb/public_a...
May 10 15:59:57 nas4 systemd[1]: Starting CTDB...
May 10 15:59:57 nas4 ctdbd_wrapper[9575]: No recovery lock specified. Starting CTDB without split brain prevention.
May 10 16:00:00 nas4 systemd[1]: Started CTDB.
On the lead node:
storhaug setup
Setting up
nfs-ganesha is already running
Watch ctdb:
/var/log/log.ctdb
Watch ganesha:
/var/log/ganesha/ganesha.log
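Besides tailing the logs, ctdb's own CLI reports cluster health directly; two quick checks:
ctdb status     # per-node state (OK / UNHEALTHY / DISCONNECTED ...)
ctdb ip         # which node currently hosts each public address (VIP)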
8. Export a gluster volume
1) First set the disk up as LVM
pvcreate /dev/sdc
vgcreate bricks /dev/sdc
vgs
lvcreate -L 1.9G -T bricks/thinpool
Rounding up size to full physical extent 1.90 GiB
Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
Logical volume "thinpool" created.
Use thin provisioning (-T) for the LVs: the size you specify is only an upper bound on what may be used; no full-size volume is carved out up front, which avoids waste. bricks/thinpool is in fact a pool, and the actual LVs are created on top of it.
lvcreate -V 1.9G -T bricks/thinpool -n brick-1
Rounding up size to full physical extent 1.90 GiB
Logical volume "brick-1" created.
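To confirm the thin pool and the thin LV on top of it (and, later, to watch actual data usage grow), lvs can be used; output abbreviated:
lvs bricks
LV       VG     Attr       LSize Pool     Data%
brick-1  bricks Vwi-a-tz-- 1.90g thinpool 0.00
thinpool bricks twi-aotz-- 1.90g          0.00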
2) Format it
mkfs.xfs -i size=512 /dev/bricks/brick-1
meta-data=/dev/bricks/brick-1 isize=512 agcount=8, agsize=62336 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=498688, imaxpct=25
= sunit=16 swidth=16 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
3) Create the mount directory
mkdir -p /bricks/vol
4) Mount it manually
mount /dev/bricks/brick-1 /bricks/vol
5) Add an fstab entry so it mounts at boot
/etc/fstab
/dev/bricks/brick-1 /bricks/vol xfs defaults 0 0
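mount -a applies the new fstab entry (and doubles as a syntax check); then verify:
mount -a
df -h /bricks/vol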
6) Create a sub-directory under the mount point to serve as the brick
mkdir /bricks/vol/myvol
7) Create the gluster volume
gluster volume create myvol replica 2 nas4:/bricks/vol/myvol nas5:/bricks/vol/myvol
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume create: myvol: success: please start the volume to access data
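As the warning says, a two-way replica is prone to split-brain. With a third node available, an arbiter volume avoids it at little storage cost; a sketch (nas6 and its brick path are hypothetical, not part of this setup):
gluster volume create myvol replica 3 arbiter 1 nas4:/bricks/vol/myvol nas5:/bricks/vol/myvol nas6:/bricks/vol/myvol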
8) Start the gluster volume
gluster volume start myvol
volume start: myvol: success
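The volume's state, brick list, and options can then be confirmed with:
gluster volume info myvol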
9) Export the gluster volume from ganesha
storhaug export myvol
First attempt fails: the exports directory had not been created by hand
/usr/sbin/storhaug: line 247: /run/gluster/shared_storage/nfs-ganesha/exports/export.myvol.conf: No such file or directory
ls: cannot access /run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf: No such file or directory
sed: can't read /run/gluster/shared_storage/nfs-ganesha/exports/export.myvol.conf: No such file or directory
Fix: create the exports directory
mkdir -p /run/gluster/shared_storage/nfs-ganesha/exports/
Second attempt fails: before switching to automatic exports, a hand-written export had been given Export_Id = 1, but ganesha's automatic exports also number from 1, so the IDs collide:
Error org.freedesktop.DBus.Error.InvalidFileContent: Selected entries in /run/gluster/shared_storage/nfs-ganesha/exports/export.myvol.conf already active!!!
WARNING: Command failed on 10.1.1.14: dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/run/gluster/shared_storage/nfs-ganesha/exports/export.myvol.conf string:EXPORT\(Path=/myvol\)
Fix: change the hand-written export's Export_Id to 101, which resolves the Export_Id conflict.
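In other words, only one line of the hand-written export needs to change; a sketch (the rest of that EXPORT block stays as it was):
EXPORT {
    Export_Id = 101;    # was 1, which collided with the auto-assigned IDs starting at 1
    # ... remaining parameters unchanged ...
}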
Third attempt succeeds:
storhaug export myvol
export myvol
method return time=1557478537.439483 sender=:1.64 -> destination=:1.66 serial=51 reply_serial=2
string "1 exports added"
method return time=1557478538.752212 sender=:1.84 -> destination=:1.87 serial=21 reply_serial=2
string "1 exports added"
View the auto-generated export file:
cd /run/gluster/shared_storage/nfs-ganesha/exports/
cat export.myvol.conf
EXPORT {
Export_Id = 1;
Path = "/myvol";
Pseudo = "/myvol";
Access_Type = RW;
Squash = No_root_squash;
Disable_ACL = true;
Protocols = "3","4";
Transports = "UDP","TCP";
SecType = "sys";
FSAL {
Name = "GLUSTER";
Hostname = localhost;
Volume = "myvol";
}
}
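A quick client-side check that the export is reachable through a VIP (10.1.1.114 is one of the public addresses from step 3; the mount point is arbitrary):
mkdir -p /mnt/myvol
mount -t nfs -o vers=4 10.1.1.114:/myvol /mnt/myvol
When a node goes down, CTDB moves its VIP to a surviving node, so clients keep working through the same address.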
