K8s High-Availability Architecture (Scaling Out to a Multi-Master Setup)

As a container cluster system, Kubernetes already provides application-level high availability on its own: health checks plus restart policies give Pods self-healing, the scheduler spreads Pods across Nodes and maintains the desired replica count, and when a Node fails its Pods are automatically brought up on other Nodes.

For the cluster itself, high availability involves two more layers: the etcd database and the Kubernetes Master components. We have already built a 3-node etcd cluster for high availability, so this section explains and implements high availability for the Master nodes.

The Master node is the control center of the cluster; it keeps the whole cluster in a healthy working state by continuously communicating with the kubelet on every worker node. If the Master node fails, the cluster can no longer be managed with kubectl or the API.

The Master runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through leader election, so Master HA mainly comes down to kube-apiserver. Since kube-apiserver exposes an HTTP API, making it highly available is much like any web service: put a load balancer in front of it, and it can also be scaled out horizontally.
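If you want to see which Master currently holds the controller-manager or scheduler lock, a minimal check (assuming the default leader-election lock, which records the holder in an annotation on an Endpoints object in kube-system):

# Which instance currently holds the leader lock (assumed default lock location)
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity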

Given the overall server plan and the limits of the test environment, this HA expansion adds three servers (k8s-master-2 and the two load-balancer nodes). The full layout is:

For the binary single-Master deployment, see: https://www.cnblogs.com/huanglingfa/p/13773234.html

Role            IP                                      Components
k8s-master-1    192.168.10.160                          kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master-2    192.168.10.166                          kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node-1      192.168.10.161                          kubelet, kube-proxy, docker, etcd
k8s-node-2      192.168.10.162                          kubelet, kube-proxy, docker, etcd
k8s-lb-master   192.168.10.164, 192.168.10.168 (VIP)    Nginx (L4), keepalived
k8s-lb-backup   192.168.10.165                          Nginx (L4), keepalived


Multi-Master architecture diagram:

Basic host preparation

1. Time synchronization
echo "#time sync by fage at 2020-7-22" >>/var/spool/cron/root
echo "*/5 * * * *  /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1" >>/var/spool/cron/root
systemctl restart crond.service

2. Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
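A quick check that both changes took effect (SELinux is Permissive now and will be disabled after the next reboot):

systemctl is-active firewalld
systemctl is-enabled firewalld
getenforce
grep ^SELINUX= /etc/selinux/config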

3. Set the hostnames (run on the matching new node; k8s-lb-A is the LB master, k8s-lb-B the LB backup)
hostname k8s-master-2
echo "k8s-master-2" >/etc/hostname

hostname k8s-lb-A
echo "k8s-lb-A" >/etc/hostname

hostname k8s-lb-B
echo "k8s-lb-B" >/etc/hostname

4. Update the hosts file
cat >/etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.160 k8s-master-1
192.168.10.161 k8s-node-1
192.168.10.162 k8s-node-2
192.168.10.163 k8s-node-3
192.168.10.164 k8s-lb-A
192.168.10.165 k8s-lb-B
192.168.10.166 k8s-master-2
EOF

5. Disable swap on the nodes (if you keep swap enabled, it must be declared in the kubelet configuration; see the sketch after the commands)
swapoff -a
sed -i "s@/dev/mapper/centos-swap swap@#/dev/mapper/centos-swap swap@g" /etc/fstab 
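If you choose to keep swap enabled instead, the kubelet must be explicitly told to tolerate it; a minimal sketch (the config file path is this deployment's layout and may differ in yours):

# Option A: kubelet startup flag
--fail-swap-on=false
# Option B: KubeletConfiguration field (e.g. in /opt/kubernetes/cfg/kubelet-config.yml)
failSwapOn: false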

6. Pass bridged IPv4 traffic to the iptables chains (note: not needed on the Nginx LB nodes):
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Set the timezone
\cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
systemctl restart crond.service

Tip: if possible, set up passwordless SSH from the Masters to the Nodes; it makes the rest of the work much more convenient.
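A minimal sketch run on master-1 (the host list is just this article's node IPs; adjust it to your environment):

# Generate a key pair (accept the defaults) and push the public key to each node
ssh-keygen -t rsa -b 2048
for host in 192.168.10.161 192.168.10.162 192.168.10.166; do
  ssh-copy-id root@$host
done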

 

I. Install Docker

# Option 1: copy the Docker binaries, unit file and config from master-1 to master-2 (run the scp commands on master-1)
cd /usr/bin/
scp -r /usr/lib/systemd/system/docker.service root@192.168.10.166:/usr/lib/systemd/system/
scp -r containerd containerd-shim docker dockerd docker-init docker-proxy runc root@192.168.10.166:/usr/bin/
scp -r /etc/docker root@192.168.10.166:/etc/
# Then on master-2:
systemctl daemon-reload && systemctl start docker && systemctl enable docker

# Option 2: install from the Aliyun yum repo instead
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7   
systemctl enable docker && systemctl start docker
docker --version
# Configure a Docker registry mirror (accelerator)
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker.service
docker info

 

II. Deploy the Master-2 Node (192.168.10.166)

Master-2 is set up exactly like the already-deployed Master-1, so we only need to copy all the K8s files from Master-1, change the server IP and hostname in the configs, and start the services.

1. Create the etcd certificate directory

On Master-2, create the etcd certificate directory:

mkdir -p /opt/etcd/ssl

2. Copy files (run on Master-1)

Copy all K8s files and the etcd certificates from Master-1 to Master-2:

scp -r /opt/kubernetes root@192.168.10.166:/opt
scp -r /opt/cni/ root@192.168.10.166:/opt
scp -r /opt/etcd/ssl root@192.168.10.166:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.10.166:/usr/lib/systemd/system
scp /usr/bin/kubectl  root@192.168.10.166:/usr/bin

3. Delete certificate files (run on Master-2)

Delete the kubelet certificate and kubeconfig; they were issued for Master-1's kubelet and will be regenerated when Master-2's kubelet registers:

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

4. Change the IP and hostname in the config files (run on Master-2)

Point the apiserver, kubelet and kube-proxy config files at the local IP and hostname:

vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.10.166 \
--advertise-address=192.168.10.166 \
...

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master-2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master-2
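Before starting the services it can help to grep for leftovers of Master-1's address. Hits in the *.kubeconfig files are expected for now, since they still point at Master-1's apiserver and are only switched to the VIP in part IV; kube-apiserver.conf itself should be clean:

grep -rn "192.168.10.160" /opt/kubernetes/cfg/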

5. Start the services and enable them at boot (run on Master-2)

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy

6. Check the cluster status (on k8s-master-2)

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

7. Approve the kubelet certificate request (run on Master-1)

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU   12m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU

kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    <none>   34h   v1.18.6
k8s-master2   Ready    <none>   13m   v1.18.6
k8s-node1     Ready    <none>   33h   v1.18.6
k8s-node2     Ready    <none>   33h   v1.18.6

 

III. Deploy the Nginx Load Balancer

kube-apiserver high-availability architecture diagram:

 

  • Nginx is a mainstream web and reverse-proxy server; here it performs Layer-4 (TCP) load balancing for the apiserver.
  • Keepalived is a mainstream high-availability tool that provides active/standby failover by binding a VIP. In the topology above, Keepalived decides whether to fail over (move the VIP) based on Nginx's state: if the Nginx master node dies, the VIP is automatically bound on the Nginx backup node, so the VIP stays reachable and Nginx itself remains highly available.

1. Install the packages (on both LB nodes)

 yum install epel-release -y
 yum install nginx keepalived -y

2. Nginx configuration file (identical on both LB nodes)

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
# Layer-4 load balancing for the kube-apiserver on both Masters
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;
    upstream k8s-apiserver {
       server 192.168.10.160:6443;   # Master1 APISERVER IP:PORT
       server 192.168.10.166:6443;   # Master2 APISERVER IP:PORT
    }
    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
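Before starting Nginx it is worth validating the syntax. The stream {} block needs the ngx_stream module, which the include of /usr/share/nginx/modules/*.conf at the top of the file is expected to load:

# Validate the configuration; if this reports an unknown "stream" directive,
# the stream module is missing (on CentOS/EPEL it ships as the nginx-mod-stream package)
nginx -t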

3. keepalived configuration file (k8s-lb-master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51 # VRRP route ID; unique per VRRP instance
    priority 100    # Priority; set 90 on the backup server
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.10.168/24
    }
    track_script {
        check_nginx
    }
}
EOF
  • vrrp_script: specifies the script that checks Nginx's state (its result decides whether to fail over)
  • virtual_ipaddress: the virtual IP (VIP)

Nginx health-check script:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

4. keepalived configuration file (k8s-lb-backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51 # VRRP route ID; must match the value on the master
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.168/24
    }
    track_script {
        check_nginx
    }
}
EOF

The same Nginx health-check script referenced by the configuration above:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived decides whether to fail over based on the script's exit code (0 = healthy, non-zero = unhealthy).
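A quick way to sanity-check the script on either LB node (it should print 0 while Nginx is running and 1 after Nginx is stopped):

bash /etc/keepalived/check_nginx.sh; echo "exit code: $?"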

5. Start the services and enable them at boot (run on both LB nodes)

systemctl daemon-reload
systemctl start nginx && systemctl enable nginx && systemctl status nginx
systemctl start keepalived && systemctl enable keepalived && systemctl status keepalived

6. Check that keepalived is working

ip a | grep 192
    inet 192.168.10.164/24 brd 192.168.10.255 scope global noprefixroute eth0
    inet 192.168.10.168/24 scope global secondary eth0

As shown, the virtual IP 192.168.10.168 is bound to the eth0 interface, so keepalived is working correctly.

7. Nginx + Keepalived failover test

Stop Nginx on the master LB node and verify that the VIP drifts to the backup node (a test sketch follows).

On the Nginx master, run: pkill nginx
On the Nginx backup, run ip addr and confirm the VIP is now bound there.
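A minimal test sequence under this article's addresses:

# On k8s-lb-master: kill Nginx; the check script starts failing and keepalived releases the VIP
pkill nginx
# On k8s-lb-backup: the VIP should now appear here
ip addr show eth0 | grep 192.168.10.168
# On k8s-lb-master: bring Nginx back; the higher priority (100 > 90) should reclaim the VIP
systemctl start nginx
ip addr show eth0 | grep 192.168.10.168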

8. Test access through the load balancer

From any node in the K8s cluster, query the K8s version with curl via the VIP:

curl -k https://192.168.10.168:6443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.6",
  "gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
  "gitTreeState": "clean",
  "buildDate": "2020-05-20T12:43:34Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The K8s version information comes back correctly, so the load balancer is working. The request path is: curl -> VIP (Nginx) -> apiserver.

The Nginx access log also shows which apiserver each request was forwarded to:

tail -f /var/log/nginx/k8s-access.log 
192.168.10.164 192.168.10.160:6443 - [30/May/2020:11:15:10 +0800] 200 422
192.168.10.164 192.168.10.166:6443 - [30/May/2020:11:15:26 +0800] 200 422
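To watch the round-robin live, a quick sketch: send a few requests through the VIP from any cluster node, then check the tail of the log on the active LB node.

# On any cluster node: send a handful of requests through the VIP
for i in 1 2 3 4; do curl -sk https://192.168.10.168:6443/version >/dev/null; done
# On the active LB node: the upstream address should alternate between the two Masters
tail -n 4 /var/log/nginx/k8s-access.log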

That is not the end of it; the most critical step is still ahead.

 

IV. Point All Worker Nodes at the LB VIP

Think about it: although we added Master-2 and a load balancer, we scaled out from a single-Master architecture, so every Node component is still connecting to Master-1. If they are not switched to the VIP behind the load balancer, the Master remains a single point of failure.

So the next step is to change every Node component's config files from the original 192.168.10.160 to 192.168.10.168 (the VIP):

Role          IP
k8s-master1   192.168.10.160
k8s-master2   192.168.10.166
k8s-node1     192.168.10.161
k8s-node2     192.168.10.162


That is, the nodes listed by kubectl get node.

Run the following on all of the Worker Nodes above:

sed -i 's#192.168.10.160:6443#192.168.10.168:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
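After the restart you can confirm on each node that the kubeconfig files now point at the VIP (paths as in this deployment):

# Every server: entry should now reference https://192.168.10.168:6443
grep -n "server:" /opt/kubernetes/cfg/*.kubeconfig
# This should return nothing (no stale references to Master-1)
grep -rn "192.168.10.160" /opt/kubernetes/cfg/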

Check the node status:

kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
k8s-master    Ready    <none>   34h    v1.18.6
k8s-master2   Ready    <none>   18m    v1.18.6
k8s-node1     Ready    <none>   33h    v1.18.6
k8s-node2     Ready    <none>   33h    v1.18.6

With that, a complete highly available Kubernetes cluster has been deployed!

PS: On a public cloud, keepalived is generally not supported; instead, use the provider's load-balancer product (an internal one is enough, and often free). The architecture is the same as above: just load-balance across the multiple Masters' kube-apiservers.
