Deploying a Highly Available Load-Balancing Cluster on KVM Virtual Machines (1)

1. Overview

This post documents the process of deploying a highly available load-balancing cluster on KVM virtual machines.

High-availability software: keepalived. Load-balancing software: LVS.

LVS performs load-balanced scheduling of access to backend services, for example services on port 80, port 22, or port 443. keepalived provides high availability for the LVS nodes themselves, avoiding a single point of failure that would interrupt access to the service.

2. Deployment Process

This post uses two virtual machines, node13 and node14, as a hot-standby load-balancing cluster: together they provide the highly available load-balancing service. node15 and node16 act as the backend real servers, exposing the sshd service. node13 and node14 are required to load-balance access to port 22 on node15 and node16; a topology sketch follows.
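
A rough topology of the setup. The original post does not state node13's and node14's own addresses; 192.168.80.13 and 192.168.80.14 below are assumptions inferred from the host names:

                 clients
                    |
           VIP 192.168.80.188:22
           /                  \
  node13 (LVS MASTER)   node14 (LVS BACKUP)
  192.168.80.13*        192.168.80.14*
           \                  /
      LVS-DR, round-robin on port 22
           /                  \
  node15                  node16
  192.168.80.15:22        192.168.80.16:22

  (* addresses assumed from the host names)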

2.1 Configuring the Load-Balancer Nodes

According to the plan, node13 and node14 act as the load balancers, so ipvsadm and keepalived should be installed on them.

The following steps are performed on both node13 and node14. Install the packages:

yum -y install ipvsadm keepalived

On node13 or node14, run ipvsadm -ln; the result looks like this:

$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
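
The empty table above also confirms that the ip_vs kernel module is available. If ipvsadm instead complains that it cannot communicate with the kernel, a quick check (a sketch; the exact scheduler module names can vary by distribution) is:

lsmod | grep ip_vs    # should list ip_vs, plus ip_vs_rr once the rr scheduler is in use
modprobe ip_vs        # load the module manually if it is missing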

node13 acts as the load-balancing master node. Configure keepalived by editing /etc/keepalived/keepalived.conf (vim /etc/keepalived/keepalived.conf) with the following content:

global_defs {
   router_id LVS_MASTER         # identifier for this keepalived node
}

vrrp_instance VI_1 {
    state MASTER                # initial VRRP state
    interface eth0              # interface carrying VRRP traffic
    virtual_router_id 51        # must match on master and backup
    priority 100                # higher priority wins the VIP
    advert_int 1                # VRRP advertisement interval, in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.188          # the VIP that clients connect to
    }
}

virtual_server 192.168.80.188 22 {
    delay_loop 6                # health-check interval, in seconds
    lb_algo rr                  # round-robin scheduling
    lb_kind DR                  # direct-routing mode
    !persistence_timeout 50     # must stay disabled when load balancing port 22, or sessions stick to one real server
    protocol TCP

    real_server 192.168.80.15 22 {
        weight 1
        TCP_CHECK {             # health check: TCP connect to port 22
            connect_timeout 3
            delay_before_retry 3
            connect_port 22
        }
    }

    real_server 192.168.80.16 22 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            delay_before_retry 3
            connect_port 22
        }
    }
}

node14 is the load-balancing backup node; its keepalived.conf contains:

global_defs {
   router_id LVS_BACKUP
}

vrrp_instance VI_1 {
    state BACKUP                # starts as the standby node
    interface eth0
    virtual_router_id 51        # must match the master's value
    priority 90                 # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.188
    }
}

virtual_server 192.168.80.188 22 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    !persistence_timeout 50     # must stay disabled when load balancing port 22
    protocol TCP

    real_server 192.168.80.15 22 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            delay_before_retry 3
            connect_port 22
        }
    }

    real_server 192.168.80.16 22 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            delay_before_retry 3
            connect_port 22
        }
    }
}
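
Before starting the service, the configuration can be syntax-checked. This is a hedged suggestion: the -t/--config-test flag only exists in relatively recent keepalived releases:

keepalived -t -f /etc/keepalived/keepalived.conf    # exits non-zero on configuration errors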

Then run the following on both node13 and node14:

systemctl start keepalived 
systemctl enable keepalived
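
At this point node13, having the higher priority, should hold the VIP. A quick sanity check (192.168.80.188 is the VIP configured above):

ip addr show eth0 | grep 192.168.80.188    # listed on the master, absent on the backup
systemctl status keepalived                # the service should be active on both nodes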

On both node13 and node14, edit /etc/rc.local (vim /etc/rc.local) so that it contains:

touch /var/lock/subsys/local
echo "1" > /proc/sys/net/ipv4/ip_forward    # enable IPv4 forwarding on the load balancers
exit 0

Then make the file executable:

chmod +x /etc/rc.local
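
Writing to /proc from rc.local works, but the same setting can also be made persistent through the standard sysctl mechanism; a sketch of the equivalent approach:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p    # apply immediately, without a reboot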

2.2 Configuring the Real-Server Startup Script

The following steps are performed on the load-balanced real servers: node15 and node16.

Next, create the LVS real-server startup script by editing /etc/init.d/realserver (vim /etc/init.d/realserver) with the following content:

#!/bin/sh
# LVS-DR real-server control script: binds the VIP to lo:0 and
# suppresses ARP replies for it, so that only the director answers
# ARP requests for the VIP.
VIP=192.168.80.188
. /etc/rc.d/init.d/functions

case "$1" in
start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    # arp_ignore=1: reply only to ARP requests targeting the receiving interface
    # arp_announce=2: use the best local address as the ARP source, never the VIP
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    # bind the VIP to a loopback alias with a host (/32) mask
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server started successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP >/dev/null 2>&1
    # restore the default ARP behaviour
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" = "" -a "$isRoOn" = "" ]; then
        echo "LVS-DR real server is stopped."
        exit 3
    else
        echo "LVS-DR real server is running."
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0

Run: chmod +x /etc/init.d/realserver

Run: service realserver start
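
To confirm the script did its job, check that the VIP is bound to lo:0 and that the ARP sysctls took effect; for example:

/sbin/ifconfig lo:0                      # should show 192.168.80.188
sysctl net.ipv4.conf.all.arp_ignore      # should print 1
sysctl net.ipv4.conf.all.arp_announce    # should print 2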

3. Testing

On node13, ipvsadm -ln now shows the virtual service with both real servers:

[root@node13][~]
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.80.188:22 rr
  -> 192.168.80.15:22             Route   1      0          2         
  -> 192.168.80.16:22             Route   1      0          2   

Run ssh root@192.168.80.188 twice; one session logs in to node15 and the other to node16. The ActiveConn counters below reflect the two open sessions, and a scripted version of this check follows the output:

[root@node13][~]
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.80.188:22 rr
  -> 192.168.80.15:22             Route   1      1          2         
  -> 192.168.80.16:22             Route   1      1          2  
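
The same check can be scripted. A minimal sketch, assuming key-based root login to the real servers so no password prompt interrupts the loop:

for i in 1 2 3 4; do
    ssh root@192.168.80.188 hostname    # output should alternate between node15 and node16
done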

Run virsh destroy node13 (on the KVM host) to power off the master node; node14 takes over the service. The ssh connections to node15 and node16 are dropped at that moment. After reconnecting, run ipvsadm -ln on node14: the new connections are visible there.
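
To watch the failover itself, the VIP can be traced to its new owner; for example, on node14:

ip addr show eth0 | grep 192.168.80.188    # the VIP should have moved here
tcpdump -i eth0 -nn 'ip proto vrrp'        # node14 now sends the VRRP advertisements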
