Achieving High Availability in a K8S Cluster

High availability for k8s mainly means making the Master nodes highly available. Let's look at how each component handles high availability.

Kubelet, Kube-proxy: these run only on their own Node, so no high availability is needed.
etcd: if etcd is hosted inside the cluster, then since kubeadm 1.5, in a multi-Master cluster every Master that joins automatically extends the etcd cluster. High availability is therefore automatic and needs no manual work.
kube-controller-manager: in a multi-Master cluster only one instance of this component is active; the others stay on standby, and one of them takes over only when the active node fails (a quick check of this leader election is sketched after this list). High availability is automatic and needs no manual work.
kube-scheduler: same situation as kube-controller-manager; high availability is automatic and needs no manual work.
kube-apiserver: each Master node's kube-apiserver is independent, with no contention between them; requests to any node have the same effect.
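
A quick way to watch this leader election in action (a sketch, assuming kubectl access and a cluster recent enough to record leaders as coordination.k8s.io Leases; older versions store the leader in an Endpoints annotation instead):

# Show which Master currently holds each component's leader lock
kubectl -n kube-system get lease kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get lease kube-scheduler -o yaml | grep holderIdentity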

In summary, only kube-apiserver needs manual work, and the solution is simple: since the APIServer exposes an HTTP API, an ordinary layer-4 or layer-7 proxy such as Nginx or HAProxy can provide high availability and load balancing. Going one step further, a VIP managed by Keepalived makes the proxy itself highly available. The cluster's access address is then changed from the APIServer's address to the VIP.
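
You can confirm that each apiserver answers independently before building anything, and repeat the same probe through the VIP afterwards. A sketch using the addresses from this article (-k skips certificate verification; depending on cluster settings /healthz may require credentials):

# Probe each Master's apiserver directly
curl -k https://192.168.0.113:6443/healthz
curl -k https://192.168.0.116:6443/healthz
# Once the VIP below is up, the same probe goes through the proxy
curl -k https://192.168.0.120:16443/healthz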

Install and configure Nginx on master01 and master02

[root@k8s-master01 ~]# yum install nginx -y
[root@k8s-master02 ~]# yum install nginx -y

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the apiserver on both Master nodes
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        # Master1 APISERVER IP:PORT
        server 192.168.0.113:6443;
        # Master2 APISERVER IP:PORT
        server 192.168.0.116:6443;
    }

    server {
        listen 16443;
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80 default_server;
        server_name _;
        location / {
        }
    }
}
EOF
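
The proxy listens on 16443 rather than 6443 because nginx runs on the same hosts as the apiservers, which already own 6443. Note that the stream block needs nginx's stream module; on CentOS/EPEL it is packaged separately (commonly as nginx-mod-stream — verify the package name on your distribution) and is loaded by the modules include line above. Validate the config before starting:

[root@k8s-master01 ~]# yum install nginx-mod-stream -y    # only if the stream module is missing
[root@k8s-master01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful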


Install and configure Keepalived on master01 and master02

1. Install keepalived
[root@k8s-master01 ~]# yum install keepalived.x86_64 -y
[root@k8s-master02 ~]# yum install keepalived.x86_64 -y

2. Configure keepalived
On the master node (k8s-master01):
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100    # priority; set to 90 on the backup server
    advert_int 1    # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.0.120/24
    }
    track_script {
        check_nginx
    }
}
EOF

# Create the nginx health-check script
cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
# Count running nginx processes, excluding the grep itself and this script's own PID
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
 
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

On the backup node (k8s-master02):
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.120/24
    }
    track_script {
        check_nginx
    }
}
EOF


cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
# Count running nginx processes, excluding the grep itself and this script's own PID
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
 
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
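
Before starting keepalived, the check script can be exercised by hand on either node; exit code 0 means nginx is considered alive, and a non-zero exit puts the keepalived instance into FAULT state so the VIP moves away:

/etc/keepalived/check_nginx.sh && echo "nginx alive" || echo "nginx down"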
3. Start the services
systemctl daemon-reload
systemctl restart nginx && systemctl enable nginx && systemctl status nginx
systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived
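
The VIP should now be bound on master01 (the higher-priority node), and stopping nginx there should move it to master02 within a few heartbeats. A quick failover drill, using the addresses assumed in this article:

[root@k8s-master01 ~]# ip addr show ens33 | grep 192.168.0.120   # VIP is bound here
[root@k8s-master01 ~]# systemctl stop nginx                      # simulate a failure
[root@k8s-master02 ~]# ip addr show ens33 | grep 192.168.0.120   # VIP should now appear here
[root@k8s-master01 ~]# systemctl start nginx                     # higher priority preempts the VIP back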

Point kubelet and kube-proxy on every Node at the api-server VIP

1. On each Node, point kubelet at the VIP (port 16443, the nginx listen port configured above)
[root@k8s-node-1 ~]# vim /etc/kubernetes/kubelet
KUBELET_API_SERVER="--api-servers=https://192.168.0.120:16443"

[root@k8s-node-2 ~]# vim /etc/kubernetes/kubelet
KUBELET_API_SERVER="--api-servers=https://192.168.0.120:16443"

2. On each Node, point kube-proxy at the VIP
[root@k8s-node-1 ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=https://192.168.0.120:16443"

[root@k8s-node-2 ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=https://192.168.0.120:16443"

systemctl restart kubelet.service kube-proxy.service
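
After restarting kubelet and kube-proxy on every node, a quick sanity check from any Master should show all nodes re-registered through the VIP and Ready:

kubectl get nodes -o wide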

