K8S Master High-Availability Cluster Architecture

Environment preparation before deployment

1.1 Host preparation

OS version: Ubuntu 22.04
10.0.0.115 master115
10.0.0.116 worker116
10.0.0.117 worker117

1. Upload the certificates

[root@master115 ~]# kubeadm init phase upload-certs --upload-certs 
I0925 16:43:00.383868    9902 version.go:256] remote version is much newer: v1.31.0; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
59403bc4bdcd4458b988d9e57e292b71316f236a12843183115d6bdd2ef3b680 # this certificate key is used when the other nodes join the K8S cluster as control-plane members
[root@master115 ~]# 
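
Note: kubeadm only keeps the uploaded certificates in the kubeadm-certs Secret for a limited time (two hours by default), so if the other control-plane nodes join later than that, re-run the upload on master115 and use the newly printed certificate key:

kubeadm init phase upload-certs --upload-certs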

2. Reset the configuration on node worker116

[root@worker116 ~]# kubeadm reset -f
[preflight] Running pre-flight checks
W0925 16:43:53.183660    7905 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@worker116 ~]# 
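
As the reset output itself points out, kubeadm does not clean up CNI configuration, iptables/IPVS rules, or kubeconfig files. A hedged cleanup sketch for a node that really is being rebuilt (the ipvsadm line only applies if kube-proxy ran in IPVS mode):

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear
rm -f $HOME/.kube/config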

3. Install and configure keepalived on all master nodes [can be skipped]

[root@master115 ~]# apt-get -y install keepalived
[root@master115 ~]#
[root@master115 ~]# cat > /etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.115
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.115
    nopreempt
    authentication {
        auth_type PASS
        auth_pass nolen_k8s
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF
systemctl daemon-reload
systemctl enable --now keepalived
systemctl status keepalived
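
Two notes on the config above. First, it references a track_script named chk_nginx, but no matching vrrp_script block is defined here; either add one or remove the track_script section, otherwise keepalived will complain about the missing script. Second, since keepalived is meant to run on all master nodes, the other masters need a similar file, typically with state BACKUP and a lower priority. A sketch for the node at 10.0.0.116, assuming the same eth0 interface and VIP:

cat > /etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.116
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 251
    priority 90
    advert_int 1
    mcast_src_ip 10.0.0.116
    nopreempt
    authentication {
        auth_type PASS
        auth_pass nolen_k8s
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF
systemctl enable --now keepalived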


[root@master115 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:5d:1d:b4 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 10.0.0.115/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.240/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5d:1db4/64 scope link 
       valid_lft forever preferred_lft forever
...
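
The VIP 10.0.0.240 is now bound on master115. Before repointing the cluster at it, it is worth confirming the other nodes can reach it, e.g. from a worker:

ping -c 3 10.0.0.240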

4. Change the control-plane endpoint address [this is really the masters' VIP address]

[root@master115 ~]#  kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint 
[root@master115 ~]# 
[root@master115 ~]# kubectl -n kube-system edit cm kubeadm-config 
...
  ClusterConfiguration: |   # cluster configuration
    ...
    # Set controlPlaneEndpoint to the masters' VIP (10.0.0.240:6443). If the VIP is
    # not in place yet, the old master address can be used first as a transition:
    controlPlaneEndpoint: 10.0.0.115:6443
...
[root@master115 ~]# kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint 
    controlPlaneEndpoint: 10.0.0.240:6443
[root@master115 ~]# 
[root@master115 ~]# ss -ntl | grep 6443
LISTEN 0      4096               *:6443             *:*          
[root@master115 ~]# 
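
With controlPlaneEndpoint now pointing at the VIP, confirm that the apiserver actually answers on 10.0.0.240:6443. A quick check (-k skips certificate verification, since the serving certificate may not list the VIP in its SANs; this also assumes the default anonymous access to /healthz has not been disabled):

curl -k https://10.0.0.240:6443/healthz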

5. Get the token for joining the cluster

[root@master115 ~]# kubeadm token create  nolen --print-join-command --ttl 0
kubeadm join 10.0.0.115:6443 --token nolen --discovery-token-ca-cert-hash sha256:c919a67ea1d3ce32091ee7132a1dd1e20cee3442ef5a7e54e1e09779dcb4c4c4 
[root@master115 ~]# 
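
Note that when a custom token is passed to kubeadm token create, it must match the format [a-z0-9]{6}.[a-z0-9]{16} (for example abcdef.0123456789abcdef); a short value like the sanitized one above would be rejected, so either use a value in that format or omit the argument and let kubeadm generate one. The active tokens can be listed with:

kubeadm token list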

6. Join the other nodes to the cluster and mark them as masters [this step needs a proxy to download the images, or the images can be pulled manually; see the sketch below]
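
If the images cannot be pulled directly, they can be pre-pulled on each joining node before running kubeadm join. A sketch, assuming Kubernetes v1.23.17 (use --image-repository to point at a mirror registry such as registry.aliyuncs.com/google_containers if needed, and retag the images afterwards):

kubeadm config images list --kubernetes-version v1.23.17
kubeadm config images pull --kubernetes-version v1.23.17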

6.1 Join worker116 to the cluster as a master

[root@worker116 ~]# kubeadm join 10.0.0.115:6443 --token nolen --discovery-token-ca-cert-hash sha256:c919a67ea1d3ce32091ee7132a1dd1e20cee3442ef5a7e54e1e09779dcb4c4c4 --control-plane --certificate-key 59403bc4bdcd4458b988d9e57e292b71316f236a12843183115d6bdd2ef3b680  # the certificate key is the one uploaded in step 1; copy it from there

... 
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@worker116 ~]# 

6.2 Join worker117 to the cluster as a master

[root@worker117 ~]# kubeadm reset -f
[root@worker117 ~]# 
[root@worker117 ~]# kubeadm join 10.0.0.115:6443 --token nolen --discovery-token-ca-cert-hash sha256:c919a67ea1d3ce32091ee7132a1dd1e20cee3442ef5a7e54e1e09779dcb4c4c4  --control-plane  --certificate-key 59403bc4bdcd4458b988d9e57e292b71316f236a12843183115d6bdd2ef3b680
...

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@worker117 ~]#

7. Verify K8S high availability

[root@worker117 ~]# mkdir -p $HOME/.kube
[root@worker117 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@worker117 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@worker117 ~]#
[root@worker117 ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master115   Ready    control-plane,master   58d   v1.23.17
worker116   Ready    control-plane,master   58d   v1.23.17
worker117   Ready    control-plane,master   58d   v1.23.17
[root@worker117 ~]# 
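
The listing above only shows that all three nodes are control planes. To actually exercise the failover path, a hedged test is to stop keepalived on the node currently holding the VIP and confirm that the VIP moves to another master while the API keeps answering:

# on master115, the current VIP holder
systemctl stop keepalived
# on worker116 or worker117: the VIP should have failed over to one of them
ip a | grep 10.0.0.240
# the apiserver should still answer through the VIP
curl -k https://10.0.0.240:6443/healthz
# restore master115 afterwards
systemctl start keepalived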