Deploying a Dual-Master Highly Available K8s Cluster
How the HA works: keepalived's VRRP protocol creates a VIP; kubeadm init is run against port 6443 on that VIP, and the second master then joins the cluster directly in the master role.
You can also put haproxy or nginx in the middle as a load balancer across the apiservers.
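For reference, a minimal haproxy sketch of that variant (assumptions, not part of this deployment: haproxy runs on both masters and listens on 16443, since the apiservers already own 6443 on the node IPs; keepalived would then float the VIP in front of haproxy):

```
# /etc/haproxy/haproxy.cfg -- minimal sketch; port 16443 and the backend list are assumptions
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend k8s-apiserver
    bind *:16443
    default_backend k8s-masters

backend k8s-masters
    balance roundrobin
    option tcp-check
    server master01 192.168.197.130:6443 check
    server master02 192.168.197.131:6443 check
```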
Environment preparation
Software and OS versions (docker 20.10.12 is what the install scripts below pin):

| Software | Version |
| --- | --- |
| k8s | 1.23.3 |
| docker | 20.10.12 |
| linux | centos7.8 |
Four servers:

| Host | IP |
| --- | --- |
| master01 | 192.168.197.130 |
| master02 | 192.168.197.131 |
| node01 | 192.168.197.132 |
| node02 | 192.168.197.133 |
Basic system optimization: omitted.
Installation
1. keepalived
Run on both master nodes:
```
yum install keepalived -y
```
master01 configuration
The VIP is defined as 192.168.197.200.

```
[root@master01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    router_id k-master
    notification_email {            # mail settings, optional
        wangsiyu@123.com
    }
    notification_email_from wangsiyu@123.com    # optional
    smtp_server smtp.pinuc.com                  # optional
    smtp_connect_timeout 30                     # optional
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"   # health-check script
    interval 3      # run the check every 3 seconds
    weight -51      # if the script exits non-zero, lower this node's priority by 51
}

vrrp_instance VI-k-master {     # instance name
    state MASTER                # this node is the MASTER; must be uppercase
    interface ens33             # your NIC name
    virtual_router_id 51        # custom id; must be identical on master and backup or you get split brain
    priority 100                # higher value = higher priority
    advert_int 3                # advertisement interval for peer checks
    authentication {
        auth_type PASS          # auth type, currently PASS or AH
        auth_pass 1234          # shared password
    }
    virtual_ipaddress {
        192.168.197.200         # the VIP; if there is only one NIC, pick it from that subnet
    }
    track_script {
        check_apiserver         # run the script defined above
    }
}
```
master02 configuration
```
[root@master02 keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id k-backup
    notification_email {
        wangsiyu@pinuc.com
    }
    notification_email_from wangsiyu@pinuc.com
    smtp_server smtp.pinuc.com
    smtp_connect_timeout 30
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -51
}

vrrp_instance VI-k-master {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 50
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.197.200
    }
    track_script {
        check_apiserver
    }
}
```
Health-check script
```
[root@master01 keepalived]# vim check-apiserver.sh
#!/bin/bash
errorExit() {
    echo "$*" 1>&2
    exit 1
}

# the apiserver must answer on the local port ...
curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
# ... and, if this node currently holds the VIP, on the VIP as well
if ip addr | grep -q 192.168.197.200; then
    curl --silent --max-time 2 --insecure https://192.168.197.200:6443/ -o /dev/null || errorExit "Error GET https://192.168.197.200:6443/"
fi
```
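The script must be executable, and both keepalived configs point at the same path, so put a copy on master02 as well (a small extra step; assumes root SSH between the masters):

```
chmod +x /etc/keepalived/check-apiserver.sh
scp /etc/keepalived/check-apiserver.sh master02:/etc/keepalived/
ssh master02 chmod +x /etc/keepalived/check-apiserver.sh
# note: before kubeadm init nothing listens on 6443 yet, so the check failing at this point is expected
```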
Start the service
```
systemctl start keepalived
systemctl enable keepalived
```
Check that the service and the VIP are up
```
[root@master01 ~]# ip addr show ens33 | grep 192.168.197.200
    inet 192.168.197.200/32 scope global ens33
[root@master01 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-11-17 15:20:43 CST; 9min ago
  Process: 52236 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 52237 (keepalived)
    Tasks: 3
   Memory: 2.7M
   CGroup: /system.slice/keepalived.service
           ├─52237 /usr/sbin/keepalived -D
           ├─52238 /usr/sbin/keepalived -D
           └─52239 /usr/sbin/keepalived -D

Nov 17 15:20:51 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: VRRP_Instance(VI-k-master) Sending/queueing gratuitous ARPs on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
```
If you see output like this, the service is healthy.
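A quick failover sanity check you can run at this point (optional sketch):

```
# on master01: simulate a failure
systemctl stop keepalived

# on master02: the BACKUP instance should pick up the VIP within a few seconds
ip addr show ens33 | grep 192.168.197.200

# back on master01: restart keepalived; priority 100 beats 50, so the VIP preempts back
systemctl start keepalived
```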
2. k8s
The hosts file on all four machines
```
# /etc/hosts
192.168.197.130 master01
192.168.197.131 master02
192.168.197.132 node01
192.168.197.133 node02
```
Run this script on both master nodes:
```
[root@master01 ~]# cat install_docker_k8s.sh
# k8s and docker yum repos (Aliyun mirrors)
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install kubeadm-1.23.3 kubectl-1.23.3 kubelet-1.23.3 docker-ce-20.10.12 -y

# make docker use the systemd cgroup driver, same as the kubelet
cp /etc/docker/daemon.json{,.bak_$(date +%F)} 2>/dev/null
cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
# docker version

# "trick" install of the control-plane images: pull from the Aliyun mirror, retag as k8s.gcr.io
# * to see which component versions kubeadm wants: kubeadm config images list
images=(`kubeadm config images list | awk -F '/' '{print $2}' | head -6`)
for img in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$img
done
# coredns lives under k8s.gcr.io/coredns/coredns, so it needs its own retag
corednstag=`kubeadm config images list | awk -F 'io' '{print $2}' | tail -1`
coredns=`kubeadm config images list | awk -F '/' '{print $3}' | tail -1`
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns k8s.gcr.io$corednstag
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns

cat <<EOF >/etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
systemctl enable kubelet

# kernel params and swap
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >>/etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1' >>/etc/sysctl.conf
sysctl -p
swapoff -a
```
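Before moving on, a quick check that the retag trick worked (plain docker/kubeadm commands):

```
kubeadm config images list            # the versions kubeadm expects
docker images | grep '^k8s.gcr.io'    # every image from the list should now exist locally under k8s.gcr.io
```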
Run this on both worker nodes (the node script only skips kubectl, everything else is the same; alternatively you can skip the image-pull step and reuse the master's images via docker save, scp, and docker load — see the sketch after the script below):
```
[root@node01 ~]# cat install_k8s_node.sh
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install kubeadm-1.23.3 kubelet-1.23.3 -y
yum install docker-ce-20.10.12 -y

cp /etc/docker/daemon.json{,.bak_$(date +%F)} 2>/dev/null
cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
# docker version

# same image trick as on the masters (skip this block if you copy the images over instead)
images=(`kubeadm config images list | awk -F '/' '{print $2}' | head -6`)
for img in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$img
done
corednstag=`kubeadm config images list | awk -F 'io' '{print $2}' | tail -1`
coredns=`kubeadm config images list | awk -F '/' '{print $3}' | tail -1`
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns k8s.gcr.io$corednstag
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns

cat <<EOF >/etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
systemctl enable kubelet

echo 'net.bridge.bridge-nf-call-ip6tables = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >>/etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1' >>/etc/sysctl.conf
sysctl -p
swapoff -a
```
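A sketch of the docker save / scp / docker load shortcut mentioned above (the tarball path and node name are placeholders):

```
# on master01: bundle all retagged k8s.gcr.io images into one tarball
docker save -o /tmp/k8s-images.tar $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^k8s.gcr.io/')

# ship it to a node and load it there
scp /tmp/k8s-images.tar node01:/tmp/
ssh node01 docker load -i /tmp/k8s-images.tar
```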
3. Initialize the cluster
kubeadm-init.yaml (save it on master01):

```
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.197.200   # the VIP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:                           # add these two lines so the serving cert covers the VIP
  - 192.168.197.200
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.197.200:6443   # stable endpoint (the VIP); kubeadm requires it before another master can join as a control plane
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.3            # kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12         # keep the default
  podSubnet: 10.244.0.0/16            # add the pod subnet (must match flannel's Network)
scheduler: {}
```
Change the VIP above to your own.
Run on the primary master:
```
kubeadm init --config kubeadm-init.yaml --upload-certs
```
Save the important output:

```
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb15c00d4 \
        --control-plane --certificate-key ef0f358d41e5ead3cc74e183aa2201b1773b605926932170141bd60605c44735

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb16c00d4
```
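Before running kubectl on master01, set up the admin kubeconfig (the standard step kubeadm prints just above this output):

```
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```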
There are two join commands above: if the node is going to be a master (control plane), use the first one; otherwise use the second.
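The token is valid for 24h and the certificate key for only 2h. If they expire before every node has joined, regenerate them on master01:

```
# print a fresh worker join command (creates a new token)
kubeadm token create --print-join-command

# re-upload the control-plane certs and print a fresh certificate key
kubeadm init phase upload-certs --upload-certs
```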
Run on the other master:
```
kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb15c00d4 \
    --control-plane --certificate-key ef0f358d41e5ead3cc74e183aa2201b1773b605926932170141bd60605c44735
```
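Once the join finishes, master02 can run kubectl too after the same kubeconfig step (a control-plane join writes /etc/kubernetes/admin.conf on the new master):

```
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes
```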
Run on both worker nodes:
```
kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb16c00d4
```
4. Deploy the network
It only needs to be applied on master01.
The network plugin is flannel; I took the manifest from GitHub (flannel v0.16.1, rancher-mirrored images). Save it as flannel.yaml:
```
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources: ["nodes/status"]
  verbs: ["patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
```
```
kubectl apply -f flannel.yaml
```
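Then watch the DaemonSet roll out on all four nodes (names match the manifest above):

```
kubectl -n kube-system get ds kube-flannel-ds
kubectl -n kube-system get pods -l app=flannel -o wide
```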
If pulling the images is too slow, no problem, I've packaged them up and shared them:
Link: https://pan.baidu.com/s/1Hb-DU5gAKHfkVDbTOde0nQ (extraction code: 1212)
Usage: after downloading, upload the tarballs to the servers with rz, then:
```
docker load < cni.tar
docker load < cni-flannel.tar
```
PS: every node needs the network plugin images, so run the load on all of them!
Then check on master01:
```
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   163m   v1.23.3
master02   Ready    control-plane,master   161m   v1.23.3
node01     Ready    <none>                 156m   v1.23.3
node02     Ready    <none>                 159m   v1.23.3
[root@master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-65c54cc984-6cwrx            1/1     Running   0          171m
kube-system   coredns-65c54cc984-bb5fn            1/1     Running   0          171m
kube-system   etcd-master01                       1/1     Running   0          171m
kube-system   etcd-master02                       1/1     Running   0          170m
kube-system   kube-apiserver-master01             1/1     Running   0          171m
kube-system   kube-apiserver-master02             1/1     Running   0          170m
kube-system   kube-controller-manager-master01    1/1     Running   0          171m
kube-system   kube-controller-manager-master02    1/1     Running   0          170m
kube-system   kube-proxy-7xkc2                    1/1     Running   0          171m
kube-system   kube-proxy-bk82v                    1/1     Running   0          165m
kube-system   kube-proxy-kvf2p                    1/1     Running   0          167m
kube-system   kube-proxy-wwlk5                    1/1     Running   0          170m
kube-system   kube-scheduler-master01             1/1     Running   0          171m
kube-system   kube-scheduler-master02             1/1     Running   0          170m
```
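As a final HA smoke test (optional sketch): take keepalived down on master01 and confirm the API keeps answering through the VIP:

```
# on master01 (or power the VM off entirely for a fuller test):
systemctl stop keepalived

# on master02: the VIP should move here, and kubectl still reaches the apiserver through it
ip addr show ens33 | grep 192.168.197.200
kubectl get nodes

# on master01, when done:
systemctl start keepalived
```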
For a dashboard you can later add Rancher, KubeSphere, or kubernetes-dashboard; pick whichever you like.
