Deploying a Highly Available Kubernetes Cluster with keepalived + haproxy (containerd)

Deployment Plan

First prepare a few servers. The plan is to deploy three masters and use keepalived and haproxy for high availability. To save servers, I run keepalived and haproxy on the master nodes themselves.

Environment:

CentOS 7.9
Kubernetes 1.24
containerd 1.6.4

Server plan:

IP              Hostname
192.168.100.220 k8s-master1
192.168.100.221 k8s-master2
192.168.100.222 k8s-master3
192.168.100.253 k8svip (VIP)

 

I. Environment Preparation

1. Upgrade the system kernel

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-5.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
grub2-set-default 0
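
The new kernel only takes effect after a reboot; a quick check afterwards:

reboot
uname -r    # should now report the newly installed kernel-lt version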

2. Set the hostnames (run the matching command on each node)

hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-master2
hostnamectl set-hostname k8s-master3


3. Add the cluster entries to the hosts file on all nodes (one way to do this is sketched after the list)

192.168.100.220 k8s-master1
192.168.100.221 k8s-master2
192.168.100.222 k8s-master3
192.168.100.253 k8svip
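
A minimal sketch of appending these entries to /etc/hosts on every node:

cat >> /etc/hosts << EOF
192.168.100.220 k8s-master1
192.168.100.221 k8s-master2
192.168.100.222 k8s-master3
192.168.100.253 k8svip
EOF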


4. Disable the firewall, swap, and SELinux

systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
swapoff -a  && sed -i 's/.*swap.*/#&/g' /etc/fstab
setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config


5. Load the IPVS modules
yum -y install ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
EOF

kernel_version=$(uname -r | cut -d- -f1)
echo $kernel_version

if [ `expr $kernel_version \> 4.19` -eq 1 ]
    then
        modprobe -- nf_conntrack
    else
        modprobe -- nf_conntrack_ipv4
fi

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack


## Kubernetes 1.20+ requires the br_netfilter module
sudo modprobe overlay
sudo modprobe br_netfilter


6. Configure kernel parameters so bridged IPv4 traffic is passed to the iptables chains
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system

7. Synchronize the system time (required on all machines)
yum install ntpdate -y && timedatectl set-timezone Asia/Shanghai  && ntpdate time.windows.com

8. Adjust ulimit
ulimit -n 65535
cat > /etc/security/limits.conf <<EOF
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
* soft memlock unlimited
* hard memlock unlimited
EOF

 

II. Deploy containerd (on all three nodes)

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

## Add the repository and install containerd
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum list |grep containerd
yum -y install containerd.io-1.6.4-3.1.el7.x86_64
mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

## Change the cgroup driver to systemd
sed -ri 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml

## Change the sandbox_image to an accessible mirror
sed -ri 's#k8s.gcr.io\/pause:3.6#registry.aliyuncs.com\/google_containers\/pause:3.7#' /etc/containerd/config.toml

## Apply the configuration and start containerd
systemctl daemon-reload && systemctl enable containerd --now
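
An optional sanity check that containerd is running and the configuration changes were applied:

containerd --version
systemctl is-active containerd
grep SystemdCgroup /etc/containerd/config.toml    # should print: SystemdCgroup = true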

 

III. Install keepalived and haproxy on all master nodes

1. Install keepalived and haproxy

yum install -y keepalived haproxy

2. Configure keepalived

cat <<EOF > /etc/keepalived/keepalived.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"   
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state  ${STATE} 
    interface ${INTERFACE}
    virtual_router_id  ${ROUTER_ID}
    priority ${PRIORITY}
    authentication {
        auth_type PASS
        auth_pass ${AUTH_PASS}
    }
    virtual_ipaddress {
        ${APISERVER_VIP}
    }
    track_script {
        check_apiserver
    }
}
EOF

## Replace the placeholders in the file above with your own values. Because the heredoc delimiter is not quoted, the shell substitutes the ${...} variables when the file is written, so either export them beforehand (see the sketch after this list) or edit the generated file by hand:
/etc/keepalived/check_apiserver.sh   path of the health check script
${STATE}                             MASTER on the primary node, BACKUP on the others. I use k8s-master1 as MASTER; k8s-master2 and k8s-master3 are BACKUP.
${INTERFACE}                         the network interface (the server's NIC); all of my servers use eth0.
${ROUTER_ID}                         only has to be identical across the keepalived cluster; I use the default value 51.
${PRIORITY}                          the priority; it just has to be higher on the MASTER than on the backups. I set 100 on the master and 50 on the backups.
${AUTH_PASS}                         only has to be identical across the keepalived cluster.
${APISERVER_VIP}                     the VIP address; mine is 192.168.100.253.
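
A minimal sketch of the exports for k8s-master1, using the values from the plan above (AUTH_PASS here is just an example string; choose your own, identical on all three nodes). Run these before the heredoc so the placeholders are filled in:

export STATE=MASTER                    # BACKUP on k8s-master2 / k8s-master3
export INTERFACE=eth0
export ROUTER_ID=51
export PRIORITY=100                    # e.g. 50 on the backup nodes
export AUTH_PASS=k8s_vip_pass          # any shared secret
export APISERVER_VIP=192.168.100.253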

3. Configure the keepalived health check script

#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

APISERVER_VIP=192.168.100.253
APISERVER_DEST_PORT=8443

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi

${APISERVER_VIP} is the VIP address, 192.168.100.253.
${APISERVER_DEST_PORT} is the port used for load-balanced access to the API server, i.e. the front-end port that HAProxy binds to. Because HAProxy runs on the same hosts as the kube-apiserver, I use 8443 to keep it separate from the apiserver's own 6443.
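
The script must be saved at the path referenced in keepalived.conf and be made executable, roughly:

vi /etc/keepalived/check_apiserver.sh       # paste the script above
chmod +x /etc/keepalived/check_apiserver.sh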

4. Configure haproxy

cat <<EOF > /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:8443   ## note: this is the haproxy front-end port, needed later when nodes join the cluster
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server k8s-master1 192.168.100.220:6443 check
        server k8s-master2 192.168.100.221:6443 check
        server k8s-master3 192.168.100.222:6443 check
EOF

5. Start the services and enable them at boot

systemctl enable haproxy --now
systemctl enable keepalived --now
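
An optional sanity check (assuming eth0 and the values used above): the VIP should be attached to whichever node is currently MASTER, and haproxy should be listening on 8443 on every master.

ip addr show eth0 | grep 192.168.100.253
ss -lntp | grep 8443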

IV. Deploy Kubernetes

## Configure the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

## Install kubeadm, kubectl, and kubelet
# kubeadm: the command used to bootstrap the cluster.
# kubelet: runs on every node in the cluster and starts pods and containers.
# kubectl: the command-line tool used to talk to the cluster.

yum -y install kubeadm-1.24.0-0 kubelet-1.24.0-0 kubectl-1.24.0-0
systemctl enable --now kubelet

## Configure crictl
cat << EOF >> /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10 
debug: false
EOF
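
A quick check that crictl can reach containerd through the configured socket:

crictl info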

 

V. Initialize the master cluster

## Generate a default init configuration file
kubeadm config print init-defaults > kubeadm-init.yaml

## Edit the configuration file as follows
cat > kubeadm-init.yaml << EOF

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.220  # IP of the current host
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master   # node name; usually set to this node's hostname (k8s-master1 here)
  taints:
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: k8svip:8443  ## Important! This is our custom domain name for the VIP, and the port is the haproxy port; for a single-master setup this parameter can be commented out
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

EOF


## List the required images
kubeadm config images list --config kubeadm-init.yaml

## Pre-pull the images
kubeadm config images pull --config kubeadm-init.yaml

## After the pull finishes, check that the images are present locally
crictl images

# Start the initialization
kubeadm init --config=kubeadm-init.yaml | tee kubeadm-init.log
Note: in a multi-master setup, the other master nodes only need to finish the steps up to the image pre-pull above; do not run kubeadm init on them.

Save the following output:

# Command that the other master nodes run to join the cluster
  kubeadm join k8svip:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:364c1b1c7fbbd6a680ea5616670a608531da25c5f62b4c5ed2bdce3321ccb83e \
        --control-plane 
# Command that worker nodes run to join the cluster
  kubeadm join k8svip:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:364c1b1c7fbbd6a680ea5616670a608531da25c5f62b4c5ed2bdce3321ccb83e 

Configure the kubeconfig needed to access the API server

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
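
kubectl should now be able to talk to the new control plane; the node will report NotReady until the network plug-in is installed in section VII.

kubectl get nodes
kubectl get pods -n kube-system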

 

VI. Join master2 and master3 to the cluster

## Copy the certificates and related files from master1 to the other masters
## (the heredoc delimiter is quoted so that ${...} is written into the script literally)

cat << 'EOF' > /root/cpkey.sh
#!/bin/bash
CONTROL_PLANE_IPS="192.168.100.221 192.168.100.222"   # IPs of the other master nodes

for host in ${CONTROL_PLANE_IPS}; do
ssh root@${host} mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@${host}:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd
done

EOF
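
Then run the script from k8s-master1 (this assumes root SSH access from master1 to the other masters):

bash /root/cpkey.sh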

## On master2 and master3, run the join command printed by kubeadm init on master1; the `--control-plane` flag makes the node join as a control-plane (master) node
kubeadm join k8svip:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:364c1b1c7fbbd6a680ea5616670a608531da25c5f62b4c5ed2bdce3321ccb83e \
        --control-plane 
        
        
## Expected output after the command finishes
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

VII. Install the network plug-in

curl https://docs.projectcalico.org/manifests/calico.yaml -O
# Swap the images for copies on a reachable registry mirror
sed -i 's#docker.io/calico/cni:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/cni:v3.22.2#' calico.yaml
sed -i 's#docker.io/calico/pod2daemon-flexvol:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/pod2daemon-flexvol:v3.22.2#' calico.yaml
sed -i 's#docker.io/calico/node:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/node:v3.22.2#' calico.yaml
sed -i 's#docker.io/calico/kube-controllers:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/kube-controllers:v3.22.2#' calico.yaml
kubectl apply -f calico.yaml
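
Once the manifest is applied, the calico pods should reach Running and the nodes should turn Ready after a few minutes:

kubectl get pods -n kube-system
kubectl get nodes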

## If the plug-in reports an error like: network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/n
rm -rf /etc/cni/net.d/*
rm -rf /var/lib/cni/calico
rm -rf /var/lib/calico
systemctl  restart kubelet
# then delete the affected resources so they are recreated

 

VIII. Add kubectl tab completion

# bash environment
yum -y install bash-completion
echo 'source  <(kubectl  completion  bash)' >> ~/.bashrc
## reload the shell for the change to take effect

# zsh environment
source <(kubectl completion zsh)

# set an alias
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc

 

IX. Regenerating the token after it expires

# kubeadm token create --print-join-command
kubeadm join --token 5d2dc8.3e93f8449167639b 10.0.2.66:6443 --discovery-token-ca-cert-hash sha256:44a68d4a2c2a86e05cc0d4ee8c9c6b64352c54e450021331c483561e45b34388

After the token is generated, run the printed command on the new node to join it to the cluster.

Note that a token generated this way is valid for 24 hours; if you do not want it to expire, add the --ttl=0 parameter.

After generating a token, you can inspect it with kubeadm token list.
