keepalived+nginx

[Reposted from https://blog.csdn.net/chenshuai199533/article/details/124791176]

What is keepalived

keepalived is a service used in cluster management to keep a cluster highly available (HA). Its role is similar to heartbeat: it is used to prevent single points of failure.

keepalived is built on the VRRP protocol: when the backups stop receiving VRRP packets they assume the master is down, and a new master is
elected from the backups according to VRRP priority. In this way the cluster stays highly available.

VRRP itself is a protocol from the networking (data-communications) world.
keepalived is built on VRRP, which stands for Virtual Router Redundancy Protocol.
VRRP can be seen as a protocol for making routers highly available: N routers providing the same function are grouped into a router group
with one master and several backups. The master holds a VIP that serves the outside world (the other machines on the LAN use this VIP as
their default gateway) and sends multicast advertisements. When the backups stop receiving VRRP packets they assume the master is down,
and a new master is elected from the backups according to VRRP priority. This keeps the routing service highly available.
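
As a side note, the VRRP advertisements themselves can be observed on the heartbeat interface. A minimal, hypothetical check, assuming the interface is named ens32 as in the setup below and tcpdump is installed:

# on a healthy cluster only the current master sends advertisements (multicast 224.0.0.18, roughly once per advert_int second)
tcpdump -i ens32 vrrp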

keepalived has three main modules: core, check and vrrp. The core module is the heart of keepalived: it starts and maintains the main
process and loads and parses the global configuration file. The check module performs health checks and covers the common check types. The vrrp module implements the VRRP protocol.
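
Once keepalived is running (installation follows below), this split is normally visible in the process list: a parent keepalived process plus separate vrrp and checker child processes. A quick, hypothetical check (output varies by version):

ps -ef | grep '[k]eepalived'    # typically one parent process and two children: the vrrp and checker subprocesses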
HA (high availability)
               client
                  |
           192.168.161.16   VIP   (which node gets the VIP depends on which one is the master; a floating address like this is what we call a resource)
                  |

masterA --- heartbeat --- backupB
  DIP     heartbeat check   DIP

Main cause of failure: the network
Split-brain: the backup grabs the resources while the master still does not consider itself dead, and both nodes compete to serve the clients
Solution: STONITH ("shoot the other node in the head"), i.e. forcibly fence or power off the old master

How to tell which node is the master: look at the VIP; whichever node holds the VIP is the master
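
A quick way to check from either node, assuming the interface name ens32 and the VIP used in the setup below:

ip addr show ens32 | grep 192.168.161.16    # if the address is listed, this node holds the VIP and is the master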
        
What is split-brain?
Split-brain: in a high-availability (HA) system, when the two connected nodes lose contact with each other, a system that was originally
one whole splits into two independent nodes. The two nodes then start competing for the shared resources, which leads to system chaos and data corruption.

For the HA of a stateless service, split-brain hardly matters; but for the HA of a stateful service (such as MySQL), split-brain must be strictly prevented.
Whether a service is stateful or stateless is judged by one criterion: whether two requests from the same client have a contextual relationship on the server side.

Example:
   Buying an item in an online shop takes several steps: adding it to the cart, confirming the order, and paying. Because the HTTP protocol
itself is stateless, implementing a stateful service requires some extra mechanism. The most common one is the session: the items the user
has selected (the shopping cart) are stored in the session, and at payment time the item information is read back out of the cart.

1. Install nginx on both servers
[root@k8master opt]# yum -y install wget gcc zlib-devel openssl-devel pam-devel libselinux-devel make perl-core gcc-c++ pcre-devel
[root@k8master opt]# wget https://openresty.org/download/openresty-1.21.4.1.tar.gz
[root@k8master opt]# tar -xf openresty-1.21.4.1.tar.gz 
[root@k8master opt]# cd openresty-1.21.4.1/
[root@k8master opt]# ./configure --prefix=/usr/local/openresty
[root@k8master opt]# make
[root@k8master opt]# make install
[root@k8master opt]# /usr/local/openresty/nginx/sbin/nginx 
[root@k8master opt]# echo 'web-server-1' > /usr/local/openresty/nginx/html/index.html 
Test pages
Create a test page on each web server:
echo "web-server-1" > /usr/local/openresty/nginx/html/index.html
echo "web-server-2" > /usr/local/openresty/nginx/html/index.html
Check that the web servers can be reached:
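For example, from any host that can reach both servers (assuming the test pages above are in place):

curl http://192.168.161.131    # should return web-server-1
curl http://192.168.161.132    # should return web-server-2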
Load-balancer deployment (do this on both machines)
master:192.168.161.131
backup:192.168.161.132

Do the following on both master and backup:
vim /usr/local/openresty/nginx/conf/nginx.conf
Add the following inside the http block, below line 25:

    upstream xingdian {
        server 192.168.161.131:80;
        server 192.168.161.132:80;
    }

vim /usr/local/openresty/nginx/conf/default.conf and edit the following content:
    server {
        listen 80;
        server_name  localhost;
        location / {
            proxy_pass http://xingdian;
            proxy_redirect default;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header REMOTE-HOST $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
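
After editing, validate the syntax and reload nginx on both machines; a minimal sketch, assuming the OpenResty prefix used above:

/usr/local/openresty/nginx/sbin/nginx -t           # test the configuration
/usr/local/openresty/nginx/sbin/nginx -s reload    # reload if the test passes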

Make sure nginx load balancing works and that a client can reach it; test, for example:
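A hypothetical check from a client: repeated requests to one of the load balancers should return the two test pages in turn (assuming the round-robin upstream above is the server block answering on port 80):

for i in 1 2 3 4; do curl -s http://192.168.161.131; done    # expect web-server-1 and web-server-2 alternating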
Using keepalived for scheduler (load balancer) HA
1. Install the software on the master and backup schedulers (install keepalived)
[root@k8master opt]# yum install -y keepalived
[root@k8sworker opt]# yum install -y keepalived
[root@k8master opt]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak (omitted)
[root@k8master opt]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director1   # change to director2 on the backup
}

vrrp_instance VI_1 {
    state MASTER          # whether this node is master or backup
    interface ens32       # interface the VIP is bound to
    virtual_router_id 80  # must be the same across all schedulers in the cluster
    priority 100          # change to 50 on the backup; the higher priority wins
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.161.16/24
    }
}
    
[root@k8sworker opt]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@k8sworker opt]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director2
}

vrrp_instance VI_1 {      # instance name, must be identical on both machines
    state BACKUP          # set to BACKUP
    interface ens32       # heartbeat NIC
    nopreempt             # set on the backup only: do not preempt the resources
    virtual_router_id 80  # virtual router ID, must match on master and backup
    priority 50           # set to 50 on the backup
    advert_int 1          # advertisement interval, in seconds
    authentication {      # password authentication
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.161.16/24
    }
}
Start keepalived on the master and the backup
[root@k8master opt]# systemctl start keepalived
[root@k8master opt]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@k8master opt]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:2d:b0:fc brd ff:ff:ff:ff:ff:ff
    inet 192.168.161.131/24 brd 192.168.161.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet 192.168.161.16/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:0d:e3:46:d3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
10: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.100.125.192/32 brd 10.100.125.192 scope global tunl0
       valid_lft forever preferred_lft forever
At this point:
keepalived handles heartbeat (node) failures,
but it does not handle nginx service failures.
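
A simple failover test at this stage (hypothetical, using the hosts above): stop keepalived on the master and confirm the VIP moves to the backup.

[root@k8master opt]# systemctl stop keepalived                      # simulate a master failure
[root@k8sworker opt]# ip addr show ens32 | grep 192.168.161.16      # the VIP should now be on the backup
[root@k8master opt]# systemctl start keepalived                     # restore the master afterwards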
4. Extension: health-check nginx on the schedulers (optional; configure on both machines)
Idea:
Have keepalived run an external script at a fixed interval; when nginx has failed,
the script stops the local keepalived so the VIP fails over.
(1) script
[root@k8master opt]# cat /etc/keepalived/check_nginx_status.sh
#!/bin/bash
/usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ];then
#	/etc/init.d/keepalived stop
	systemctl stop keepalived
fi
[root@nginx-proxy-master ~]# chmod a+x /etc/keepalived/check_nginx_status.sh

(2) Use the script in keepalived
[root@k8master opt]#  cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id director1
}
vrrp_script check_nginx {     # health-check script definition
   script "/etc/keepalived/check_nginx_status.sh"  # path to the check script
   interval 5  # check interval, in seconds
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {  
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.161.16/24
    }
    track_script {      # run the health-check script in this instance
        check_nginx
    }
}

Note: nginx must be started before keepalived.
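
To confirm the check works end to end, a hypothetical test on the master (using the OpenResty path from the installation above and the 5-second interval):

[root@k8master opt]# /usr/local/openresty/nginx/sbin/nginx -s stop   # simulate an nginx failure
[root@k8master opt]# sleep 6; systemctl is-active keepalived         # should report inactive once the script has fired
[root@k8sworker opt]# ip addr show ens32 | grep 192.168.161.16       # the VIP should have moved to the backup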

Further study:
#!/bin/bash
#+ check whether the nginx process exists
counter=$(ps -C nginx --no-heading|wc -l)    # service name goes here
if [ "${counter}" = "0" ]; then
# try to start nginx once, wait 5 seconds, then check again
    service nginx start                                          # start the service
    sleep 5
    counter=$(ps -C nginx --no-heading|wc -l) # service name goes here
    if [ "${counter}" = "0" ]; then
# if the start failed, stop keepalived to trigger master/backup failover
        service keepalived stop
    fi
fi



#!/bin/bash
# haproxy health-check script
a=`ps -C haproxy --no-heading|wc -l`
if [ "$a" -eq "0" ];then
        systemctl start haproxy
        sleep 5
        a=`ps -C haproxy --no-heading|wc -l`
        if [ "$a" -eq "0"  ];then
                systemctl stop keepalived
        fi
fi