NFS master-slave sync + keepalived

 
Role     OS        IP
master   CentOS 7  10.221.253.139
slave    CentOS 7  10.221.253.140
vip      -         10.221.253.141
I. Server environment preparation
1. Create the shared directory on both Master and Slave:
mkdir -p /data0/volumes_test

2. Disable the firewall and SELinux on both Master and Slave

# Stop the firewall
systemctl stop firewalld
# Disable it on boot
systemctl disable firewalld

# Switch SELinux to permissive mode immediately
setenforce 0
# Make it permanent: edit /etc/selinux/config and set
SELINUX=disabled
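A quick way to confirm the SELinux change: setenforce 0 takes effect right away, while the config file edit only applies after a reboot.

# Should print Permissive now, and Disabled after the next reboot
getenforce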

II. Install and configure NFS

1. Install the packages on both master and slave

yum -y install nfs-utils rpcbind
2. Configure the NFS export; run on both Master and Slave
echo '/data0/volumes_test 10.221.253.0/24(rw,sync,all_squash)' >> /etc/exports

3. Start the services and enable them at boot; run on both master and slave

# Start the services
systemctl start rpcbind && systemctl start nfs
# Enable them at boot
systemctl enable rpcbind && systemctl enable nfs
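To confirm the export is actually active on each node, a couple of standard NFS checks are enough (nothing here is specific to this setup):

# Re-read /etc/exports and list what is currently exported
exportfs -rv
showmount -e localhost
# Confirm the NFS services are registered with rpcbind
rpcinfo -p | grep -E 'nfs|mountd'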

III. Configure file synchronization

1. Configure the rsync daemon on Slave to receive data from master

# Install rsync
yum -y install rsync.x86_64
# Edit /etc/rsyncd.conf as follows; hosts allow should be the master IP
uid = root
gid = root
port = 873
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
lock file = /var/run/rsyncd.lock
use chroot = no
max connections = 200
read only = false
list = false
fake super = yes
ignore errors
[data]
path = /data0/volumes_test
auth users = rsyncuser
secrets file = /etc/rsync_slave.pass
hosts allow = 10.221.253.139

# Create the auth file
echo 'rsyncuser:rsyncuser123' > /etc/rsync_slave.pass
chmod 600 /etc/rsync_slave.pass
# Fix directory ownership
chown -R root:root /data0/volumes_test
# Start the daemon
rsync --daemon --config=/etc/rsyncd.conf
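A simple sanity check that the daemon started and is listening on its port:

# The rsync daemon should be listening on TCP 873
ss -tnlp | grep 873
# Connection attempts are logged to the log file set in rsyncd.conf
tail /var/log/rsyncd.log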

2. Test from the master

yum -y install rsync.x86_64
chown -R root:root /data0/volumes_test
echo "rsyncuser123" > /etc/rsync.pass
chmod 600 /etc/rsync.pass
# Create a test directory and push it to the slave
cd /data0/volumes_test
mkdir testdir_master
rsync -arv /data0/volumes_test/ rsyncuser@10.221.253.140::data --password-file=/etc/rsync.pass
# Check on the slave
ls /data0/volumes_test
# testdir_master should appear there

3. Configure automatic synchronization on the master

cd /usr/local/
wget https://dl.qiyuesuo.com/private/nfs/sersync2.5.4_64bit_binary_stable_final.tar.gz
tar xvf sersync2.5.4_64bit_binary_stable_final.tar.gz
mv GNU-Linux-x86/ sersync
cd sersync/
# Adjust confxml.xml with the sed commands below (each one replaces the default value with the new one)
sed -ri 's#<delete start="true"/>#<delete start="false"/>#g' confxml.xml
sed -ri '24s#<localpath watch="/opt/tongbu">#<localpath watch="/data0/volumes_test">#g' confxml.xml
sed -ri '25s#<remote ip="127.0.0.1" name="tongbu1"/>#<remote ip="10.221.253.140" name="data"/>#g' confxml.xml
sed -ri '30s#<commonParams params="-artuz"/>#<commonParams params="-az"/>#g' confxml.xml
sed -ri '31s#<auth start="false" users="root" passwordfile="/etc/rsync.pas"/>#<auth start="true" users="rsyncuser" passwordfile="/etc/rsync.pass"/>#g' confxml.xml
sed -ri '33s#<timeout start="false" time="100"/><!-- timeout=100 -->#<timeout start="true" time="100"/><!-- timeout=100 -->#g' confxml.xml
# Start sersync
/usr/local/sersync/sersync2 -dro /usr/local/sersync/confxml.xml
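Before the test in the next step, it is worth confirming that the sersync2 process is actually running as a daemon:

# sersync2 should show up here; if not, recheck confxml.xml
ps -ef | grep '[s]ersync2'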

4. Test automatic synchronization

# Create a file in /data0/volumes_test on the master
touch test_master
# Check whether the file shows up in /data0/volumes_test on the slave

This completes slave-from-master synchronization. However, if the master goes down and later comes back, it has no way to pick up files written on the slave in the meantime, so we also configure the master to sync from the slave.

5. Configure the rsync daemon on master to receive data from slave

# Edit /etc/rsyncd.conf as follows; hosts allow should be the slave IP
uid = root
gid = root
port = 873
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
lock file = /var/run/rsyncd.lock
use chroot = no
max connections = 200
read only = false
list = false
fake super = yes
ignore errors
[data]
path = /data0/volumes_test
auth users = rsyncuser
secrets file = /etc/rsync_master.pass
hosts allow = 10.221.253.140


# Create the auth file
echo 'rsyncuser:rsyncuser123' > /etc/rsync_master.pass
chmod 600 /etc/rsync_master.pass
# Fix directory ownership
chown -R root:root /data0/volumes_test
# Start the daemon
rsync --daemon --config=/etc/rsyncd.conf

6. Test from the slave

echo "rsyncuser123" > /etc/rsync.pass
chmod 600 /etc/rsync.pass
# Create a test file and push it to the master
cd /data0/volumes_test
echo "This is test file" > file.2.txt
rsync -arv /data0/volumes_test/ rsyncuser@10.221.253.139::data --password-file=/etc/rsync.pass
# Check on the master
ls /data0/volumes_test
# file.2.txt should appear there

7. Configure automatic synchronization on the slave

cd /usr/local/
wget https://dl.qiyuesuo.com/private/nfs/sersync2.5.4_64bit_binary_stable_final.tar.gz
tar xvf sersync2.5.4_64bit_binary_stable_final.tar.gz
mv GNU-Linux-x86/ sersync
cd sersync/
# Adjust confxml.xml with the sed commands below (each one replaces the default value with the new one)
sed -ri 's#<delete start="true"/>#<delete start="false"/>#g' confxml.xml
sed -ri '24s#<localpath watch="/opt/tongbu">#<localpath watch="/data0/volumes_test">#g' confxml.xml
sed -ri '25s#<remote ip="127.0.0.1" name="tongbu1"/>#<remote ip="10.221.253.139" name="data"/>#g' confxml.xml
sed -ri '30s#<commonParams params="-artuz"/>#<commonParams params="-az"/>#g' confxml.xml
sed -ri '31s#<auth start="false" users="root" passwordfile="/etc/rsync.pas"/>#<auth start="true" users="rsyncuser" passwordfile="/etc/rsync.pass"/>#g' confxml.xml
sed -ri '33s#<timeout start="false" time="100"/><!-- timeout=100 -->#<timeout start="true" time="100"/><!-- timeout=100 -->#g' confxml.xml
# Start sersync
/usr/local/sersync/sersync2 -dro /usr/local/sersync/confxml.xml
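sersync2 ships as a plain binary with no systemd unit, so it will not come back after a reboot by itself. One common workaround on CentOS 7 (not part of the original steps, shown here as a suggestion) is to append the start command to /etc/rc.d/rc.local on both nodes:

# rc.local is not executable by default on CentOS 7
chmod +x /etc/rc.d/rc.local
echo '/usr/local/sersync/sersync2 -dro /usr/local/sersync/confxml.xml' >> /etc/rc.d/rc.local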

 

At this point, bidirectional master-slave synchronization is in place.

IV. Install keepalived

1. On the Master

yum -y install keepalived.x86_64
# Put the following in /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id NFS-Master
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass keepalived123
    }
    virtual_ipaddress {
        10.221.253.141  
    }
}

# Start the service
systemctl start keepalived.service && systemctl enable keepalived.service
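If the VIP never shows up, the usual cause is that interface eth0 does not match the host's real NIC name. Assuming it does, the VRRP advertisements the master sends every second (advert_int 1) can be observed directly:

# Capture a few VRRP packets for virtual_router_id 51
tcpdump -i eth0 -nn vrrp -c 5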

2. On the Slave

yum -y install keepalived.x86_64
# Put the following in /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id NFS-Slave
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass keepalived123
    }
    virtual_ipaddress {
        10.221.253.141  
    }
}
# Start the service
systemctl start keepalived.service && systemctl enable keepalived.service

3. Check the virtual IP on the master

ip a | grep 10.221.253.141

4. Mount test and failover simulation

On a separate client machine, run:

mount -t nfs 10.221.253.141:/data0/volumes_test /mnt
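A quick check from the client that the mount works through the VIP and shows the data synchronized earlier:

df -h /mnt
ls /mnt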

Simulate a server failure and check whether the VIP fails over:

# Stop keepalived on the Master
systemctl stop keepalived.service
# On the master, ip a | grep 10.221.253.141 should now return nothing
# Then check on the Slave
ip a | grep 10.221.253.141

If the VIP shows up there, the failover worked.
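Since the master has the higher priority (150) and keepalived preempts by default, starting keepalived on the master again should pull the VIP back; this can be confirmed the same way:

# On the Master
systemctl start keepalived.service
# After a few seconds the VIP should be back on the master
ip a | grep 10.221.253.141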

V. Add a health-check script so keepalived follows the NFS service

1. Right now the VIP only fails over when keepalived itself dies. If the NFS service fails while keepalived keeps running, the VIP stays where it is, which is not what we want. The script below checks whether NFS is alive and, if it cannot be restarted, stops keepalived so that the VIP fails over.

#!/bin/sh
# If keepalived itself is not running, there is nothing to monitor.
/usr/bin/systemctl status keepalived &>/dev/null
if [ $? -ne 0 ]
then
    echo "keepalived is not running, nothing to check."
    exit 0
else
    # keepalived is up: check NFS, try one restart, and if that
    # still fails stop keepalived to force a VIP failover.
    /usr/bin/systemctl status nfs &>/dev/null
    if [ $? -ne 0 ]
    then
        /usr/bin/systemctl restart nfs
        /usr/bin/systemctl status nfs &>/dev/null
        if [ $? -ne 0 ]
        then
            /usr/bin/systemctl stop keepalived
        fi
    fi
fi
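Save the script as /root/check_nfs.sh (the same path used by the cron entry in the next step), presumably on both nodes, and give it a manual run first; it prints nothing when keepalived and nfs are both healthy:

chmod +x /root/check_nfs.sh
/bin/sh /root/check_nfs.sh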

 

2. Add a cron job

crontab -e
# Add this entry to run the check every minute
*/1 * * * * /bin/sh /root/check_nfs.sh > /root/check_nfs.log 2>&1
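To confirm the job is firing, check the log after a minute or two; it stays empty while both services are healthy, because the script only prints when keepalived itself is down:

ls -l /root/check_nfs.log
cat /root/check_nfs.log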

 

That completes the NFS master-slave + keepalived deployment.

 
