Scaling down a k8s cluster (removing a node)

1. Workflow for scaling down a k8s cluster

	A. Evict the Pods already scheduled onto the node being decommissioned to other nodes;
	B. Stop the kubelet process so it no longer reports status to the apiServer in real time;
	C. If the cluster was deployed from binaries, the kube-proxy component can be stopped as well; [optional]
	D. Reset the environment on the node being decommissioned and reinstall its operating system (to prevent data leakage), then repurpose the server;
	E. Remove the decommissioned node from the cluster on the master node;

2.1 Evict the Pods already scheduled onto the node
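
The eviction command itself is not shown in the session below; a minimal sketch of this step, assuming the usual kubectl drain workflow (the flags are common choices, not taken from the original session):

kubectl drain worker233 --ignore-daemonsets --delete-emptydir-data

drain first cordons the node (which is why worker233 shows SchedulingDisabled below) and then evicts its Pods so they are rescheduled onto the remaining nodes.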


[root@master231 18-scheduler]# kubectl get nodes
NAME        STATUS                     ROLES                  AGE   VERSION
master231   Ready                      control-plane,master   13d   v1.23.17
worker232   Ready                      <none>                 13d   v1.23.17
worker233   Ready,SchedulingDisabled   <none>                 13d   v1.23.17

2.2 Stop the kubelet process
systemctl stop kubelet.service
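
Optionally, also disable the unit so kubelet does not come back after a reboot, and confirm it is stopped (the disable step is an extra precaution, not shown in the original session):

systemctl disable kubelet.service
systemctl is-active kubelet.service    # expected output: inactive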
2.3 Reinstall the operating system
2.4 Remove all k8s cluster data from the node

[root@worker233 ~]# kubeadm reset -f
[preflight] Running pre-flight checks
W0722 16:51:14.020921  335567 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@worker233 ~]# 
[root@worker233 ~]# ll /var/lib/kubelet
total 8
drwxr-xr-x  2 root root 4096 Jul 22 16:51 ./
drwxr-xr-x 65 root root 4096 Jul 15 15:06 ../
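
As the reset output above notes, kubeadm does not clean up the CNI configuration, iptables/IPVS rules, or kubeconfig files. A minimal cleanup sketch on worker233 following those hints (run only on the node being wiped):

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear                 # only if the cluster used IPVS mode
rm -rf $HOME/.kube/config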

2.5 Remove the decommissioned node from the cluster

[root@master231 18-scheduler]# kubectl get nodes
NAME        STATUS                        ROLES                  AGE   VERSION
master231   Ready                         control-plane,master   13d   v1.23.17
worker232   Ready                         <none>                 13d   v1.23.17
worker233   NotReady,SchedulingDisabled   <none>                 13d   v1.23.17
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl delete nodes worker233 
node "worker233" deleted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master231   Ready    control-plane,master   13d   v1.23.17
worker232   Ready    <none>                 13d   v1.23.17
[root@master231 18-scheduler]# 
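
To confirm the scale-down, it can be worth checking that all workloads now run on the remaining nodes (a minimal verification sketch, not part of the original session):

kubectl get pods -A -o wide | grep worker233    # should return nothing
kubectl get pods -A -o wide                     # Pods should only reference master231/worker232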