23 - Scaling a k8s Cluster Down and Up, and Managing kubeadm Tokens

I. Scaling down the cluster

1. Workflow for scaling down a k8s cluster

- 1.1 Evict the Pods already scheduled to the node;
- 1.2 Take the drained node's kubelet offline;
- 1.3 Back up and migrate the node's data, then reset it and reinstall the operating system (to avoid data leakage, also format any extra disks);
- 1.4 Remove the drained node on the master node.

2. Hands-on example: draining a node

2.1 Evict the Pods already scheduled to the node

[root@master231 scheduler]# kubectl drain worker233 --ignore-daemonsets 
node/worker233 cordoned
WARNING: ignoring DaemonSet-managed Pods: calico-system/calico-node-d4554, calico-system/csi-node-driver-8vj74, kube-system/kube-proxy-mbdf6, metallb-system/speaker-cpt7s
evicting pod default/scheduler-resources-6d6785785-wz9xq
evicting pod default/scheduler-resources-6d6785785-hmghm
evicting pod calico-system/calico-typha-595f8c6fcb-n7ffv
evicting pod default/scheduler-resources-6d6785785-l5nns
evicting pod default/scheduler-resources-6d6785785-vrch5
pod/calico-typha-595f8c6fcb-n7ffv evicted
pod/scheduler-resources-6d6785785-l5nns evicted
pod/scheduler-resources-6d6785785-vrch5 evicted
pod/scheduler-resources-6d6785785-hmghm evicted
pod/scheduler-resources-6d6785785-wz9xq evicted
node/worker233 drained

[root@master231 scheduler]# kubectl get nodes
NAME        STATUS                     ROLES                  AGE    VERSION
master231   Ready                      control-plane,master   3d1h   v1.23.17
worker232   Ready                      <none>                 3d     v1.23.17
worker233   Ready,SchedulingDisabled   <none>                 3d     v1.23.17
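
If the drain is blocked by Pods that use emptyDir volumes, or if the scale-down needs to be cancelled, the following standard kubectl commands apply (a sketch, not part of the original run):

# Pods with emptyDir volumes block a plain drain; this flag acknowledges that their local data will be lost
kubectl drain worker233 --ignore-daemonsets --delete-emptydir-data

# To cancel the scale-down instead, make the node schedulable again
kubectl uncordon worker233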

2.2 Take the drained node's kubelet offline

[root@worker233 ~]# systemctl disable --now kubelet.service 
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.

2.3 Back up and migrate the drained node's data, then reset it and reinstall the operating system (to avoid data leakage, also format any extra disks)

[root@worker233 ~]# kubeadm reset -f
[preflight] Running pre-flight checks
W0410 12:03:58.005183  211122 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
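
The reset output above lists what it does not clean up. A minimal follow-up cleanup on the drained node, based on those hints (adjust to your CNI plugin and proxy mode):

# Remove the CNI configuration left behind by the network plugin
rm -rf /etc/cni/net.d

# Flush iptables rules; if the cluster used IPVS, clear the IPVS tables as well
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear

# Remove any leftover kubeconfig files
rm -rf $HOME/.kube/config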

2.4 Remove the drained node on the master node

[root@master231 scheduler]# kubectl get nodes
NAME        STATUS                        ROLES                  AGE    VERSION
master231   Ready                         control-plane,master   3d1h   v1.23.17
worker232   Ready                         <none>                 3d     v1.23.17
worker233   NotReady,SchedulingDisabled   <none>                 3d     v1.23.17
[root@master231 scheduler]# 
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl delete nodes worker233 
node "worker233" deleted
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get nodes
NAME        STATUS   ROLES                  AGE    VERSION
master231   Ready    control-plane,master   3d1h   v1.23.17
worker232   Ready    <none>                 3d     v1.23.17

II. Managing tokens with kubeadm

1. Create a token

[root@master231 scheduler]# kubeadm token create
7y9a6t.c4n40ljuec10tk6k

2. List tokens

[root@master231 scheduler]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
7y9a6t.c4n40ljuec10tk6k   23h         2025-04-11T06:44:44Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
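
By default a token created this way is valid for 24 hours, which is why the list shows a TTL of 23h a little while after creation. The TTL and a description can be set at creation time, for example (flags as documented for kubeadm token create):

# Create a token that expires after 2 hours, with a human-readable description
kubeadm token create --ttl 2h --description "temporary join token"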

3. Delete a token

[root@master231 scheduler]# kubeadm token delete 7y9a6t
bootstrap token "7y9a6t" deleted
[root@master231 scheduler]# 
[root@master231 scheduler]# kubeadm token list
[root@master231 scheduler]# 
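
Under the hood, every bootstrap token is stored as a Secret named bootstrap-token-<token-id> in the kube-system namespace, so the deletion can also be confirmed there (a sketch):

# Bootstrap tokens live as Secrets of type bootstrap.kubernetes.io/token in kube-system
kubectl -n kube-system get secrets | grep bootstrap-token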

4. Generate a token string and print it to the terminal without creating it

[root@master231 scheduler]# kubeadm token generate
ux5xbj.tw0p1t82022577ko
[root@master231 scheduler]# 
[root@master231 scheduler]# kubeadm token generate
eaeebd.qnzwh0kb1ccuct8v
[root@master231 scheduler]# 
[root@master231 scheduler]# kubeadm token generate
p3k7bd.oo8brc14atcvjshs
[root@master231 scheduler]# 
[root@master231 scheduler]# kubeadm token list
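
Because kubeadm token generate only prints a candidate value without touching the cluster, a generated string must be passed to kubeadm token create to actually register it, for example (a sketch reusing one of the values generated above):

# Register a previously generated token value in the cluster
kubeadm token create ux5xbj.tw0p1t82022577ko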

5. Create a token with a custom value

[root@master231 scheduler]# kubeadm token create  oldboy.yinzhengjiejason --print-join-command --ttl 0
kubeadm join 10.0.0.231:6443 --token oldboy.yinzhengjiejason --discovery-token-ca-cert-hash sha256:2617b95e2ce0c94389031841fab9801e5724ed544cd9a60dcd285a9fa7a1b10b

[root@master231 scheduler]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                EXTRA GROUPS
oldboy.yinzhengjiejason   <forever>   <never>   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
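
Since a token created with --ttl 0 never expires, it is worth deleting once the new node has joined (a suggested follow-up, not part of the original run):

# Remove the never-expiring token after it has served its purpose
kubeadm token delete oldboy.yinzhengjiejason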

III. Scaling up the cluster

1. Workflow for adding a node

- 1.1 Install docker|containerd, kubeadm, kubectl, and kubelet on the node to be added, and apply the base tuning (at least 2 CPU cores, kernel tuning, swap disabled), etc.;
- 1.2 Create a token on the control plane; the worker joining the cluster authenticates with this token;
- 1.3 Configure kubelet to start on boot;
- 1.4 Join the cluster with kubeadm join (the bootstrap phase);
- 1.5 Check and verify on the management node.

2. Hands-on example: adding a node

2.1 Install docker|containerd, kubeadm, kubectl, and kubelet on the node to be added, and apply the base tuning (at least 2 CPU cores, kernel tuning, swap disabled), etc.
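
The original notes do not spell out the commands for this step; a minimal sketch of the base tuning on an Ubuntu worker (package installation omitted, file paths are the conventional ones):

# kubelet refuses to start with swap enabled by default: turn it off now and at boot
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Load the bridge netfilter module and apply the usual kernel parameters
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system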

2.2 Create a token on the control plane; the worker joining the cluster authenticates with this token

[root@master231 scheduler]# kubeadm token create  oldboy.yinzhengjiejason --print-join-command --ttl 0
kubeadm join 10.0.0.231:6443 --token oldboy.yinzhengjiejason --discovery-token-ca-cert-hash sha256:2617b95e2ce0c94389031841fab9801e5724ed544cd9a60dcd285a9fa7a1b10b 
[root@master231 scheduler]# 
[root@master231 scheduler]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                EXTRA GROUPS
oldboy.yinzhengjiejason   <forever>   <never>   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
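
The hash in the printed join command is the SHA-256 digest of the cluster CA public key. If the join command is lost, the hash can be recomputed on the control plane with the standard openssl pipeline from the kubeadm documentation:

# Recompute the discovery-token-ca-cert-hash from the cluster CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'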

2.3 Configure kubelet to start on boot

[root@worker233 ~]# systemctl is-enabled kubelet
disabled
[root@worker233 ~]# 
[root@worker233 ~]# systemctl enable --now kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[root@worker233 ~]# 
[root@worker233 ~]# systemctl is-enabled kubelet
enabled

2.4 Join the cluster with kubeadm join (the bootstrap phase)

[root@worker233 ~]# kubeadm join 10.0.0.231:6443 --token oldboy.yinzhengjiejason --discovery-token-ca-cert-hash sha256:2617b95e2ce0c94389031841fab9801e5724ed544cd9a60dcd285a9fa7a1b10b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0410 14:50:44.610565  213085 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@worker233 ~]# 

2.5 Check and verify on the management node

[root@master231 scheduler]# kubectl get nodes 
NAME        STATUS   ROLES                  AGE    VERSION
master231   Ready    control-plane,master   3d3h   v1.23.17
worker232   Ready    <none>                 3d3h   v1.23.17
worker233   Ready    <none>                 30s    v1.23.17
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get nodes -o wide
NAME        STATUS   ROLES                  AGE    VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
master231   Ready    control-plane,master   3d3h   v1.23.17   10.0.0.231    <none>        Ubuntu 22.04.4 LTS   5.15.0-136-generic   docker://20.10.24
worker232   Ready    <none>                 3d3h   v1.23.17   10.0.0.232    <none>        Ubuntu 22.04.4 LTS   5.15.0-119-generic   docker://20.10.24
worker233   Ready    <none>                 31s    v1.23.17   10.0.0.233    <none>        Ubuntu 22.04.4 LTS   5.15.0-119-generic   docker://20.10.24
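
Optionally, the empty ROLES column for the new worker can be filled in by adding the conventional node-role label (an extra step, not from the original run):

# Give worker233 a visible "worker" role in the output of kubectl get nodes
kubectl label node worker233 node-role.kubernetes.io/worker=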