k8s: Changing the master node IP

Environment:

OS: CentOS 7

k8s: v1.28.13

 

The master's old IP is 192.168.1.108; it needs to be changed to 192.168.1.109.

 

1. Edit the hosts file on every node

[root@master ~]# more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.109 master
192.168.1.105 node1
192.168.1.106 node2
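The same edit can be scripted with sed instead of editing by hand. The sketch below runs against a throwaway copy in /tmp purely to illustrate; on a real node the target file would be /etc/hosts:

```shell
# Build a throwaway stand-in for /etc/hosts (illustrative content)
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1   localhost
192.168.1.108 master
192.168.1.105 node1
192.168.1.106 node2
EOF

# Swap the old master IP for the new one in place
sed -i 's/192\.168\.1\.108/192.168.1.109/g' /tmp/hosts.demo

grep master /tmp/hosts.demo   # → 192.168.1.109 master
```

Run the equivalent sed against /etc/hosts on the master and on every worker node.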

 

2. Back up the /etc/kubernetes directory

cp -Rf /etc/kubernetes/ /etc/kubernetes_bak

 

3. Replace the API server address in every config file under /etc/kubernetes

[root@k8s-master ~]# cd /etc/kubernetes
[root@k8s-master kubernetes]# find . -type f | xargs sed -i "s/192.168.1.108/192.168.1.109/g"
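It is worth confirming afterwards that no file still mentions the old address. The sketch below replays the same find | xargs sed pattern on a scratch directory (hypothetical file and content) and then checks with grep:

```shell
# Scratch directory standing in for /etc/kubernetes
mkdir -p /tmp/kubeconf.demo
echo "server: https://192.168.1.108:6443" > /tmp/kubeconf.demo/admin.conf

cd /tmp/kubeconf.demo
find . -type f | xargs sed -i "s/192.168.1.108/192.168.1.109/g"

# No file should still mention the old IP
grep -r "192.168.1.108" . || echo "old IP fully replaced"
# → old IP fully replaced
```

On the real master, run the same `grep -r "192.168.1.108" /etc/kubernetes` check after the replacement.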

 

4. Replace the old IP with the new IP in $HOME/.kube/config

[root@k8s-master kubernetes]# cd $HOME/.kube/
[root@k8s-master .kube]# find . -type f | xargs sed -i "s/192.168.1.108/192.168.1.109/g"

 

5. Rename the directory under $HOME/.kube/cache/discovery/ to use the new IP

This must be done on every node.

[root@k8s-master .kube]# cd $HOME/.kube/cache/discovery/
[root@k8s-master discovery]# pwd
/root/.kube/cache/discovery
[root@k8s-master discovery]# ls
192.168.1.108_6443
[root@k8s-master discovery]# mv 192.168.1.108_6443 192.168.1.109_6443

 

 

6. Identify the certificates in /etc/kubernetes/pki that carry the old IP address as an alt name (SAN)

cd /etc/kubernetes/pki

for i in $(find /etc/kubernetes/pki -type f -name "*.crt"); do
   echo ${i}
   openssl x509 -in ${i} -text | grep -A1 'Address'
done
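On OpenSSL 1.1.1 and later, `-noout -ext subjectAltName` prints just the SAN extension and is less fragile than grepping the full text dump. The sketch below generates a throwaway self-signed certificate (hypothetical names) only to show the output format:

```shell
# Create a throwaway cert with an IP SAN (requires OpenSSL 1.1.1+ for -addext)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 1 -subj "/CN=master" \
  -addext "subjectAltName=DNS:master,IP:192.168.1.108"

# Print only the SAN extension of the certificate
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

Against a real cluster, point the second command at each .crt under /etc/kubernetes/pki and look for the old IP in the output.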

The output looks like this:

/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/apiserver-kubelet-client.crt
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-client.crt
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/server.crt
                DNS:localhost, DNS:master, IP Address:192.168.1.108, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1
    Signature Algorithm: sha256WithRSAEncryption
/etc/kubernetes/pki/etcd/healthcheck-client.crt
/etc/kubernetes/pki/etcd/peer.crt
                DNS:localhost, DNS:master, IP Address:192.168.1.108, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1
    Signature Algorithm: sha256WithRSAEncryption
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/pki/apiserver.crt
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:master, IP Address:10.96.0.1, IP Address:192.168.1.108
    Signature Algorithm: sha256WithRSAEncryption
[root@master pki]# 

 

The certificates that embed the IP address are these files:

/etc/kubernetes/pki/apiserver.crt
/etc/kubernetes/pki/etcd/server.crt
/etc/kubernetes/pki/etcd/peer.crt

 

7. Delete the certificates and private keys identified in step 6, then regenerate them

cd /etc/kubernetes/pki
rm /etc/kubernetes/pki/apiserver.crt
rm /etc/kubernetes/pki/apiserver.key


rm /etc/kubernetes/pki/etcd/peer.crt
rm /etc/kubernetes/pki/etcd/peer.key

rm /etc/kubernetes/pki/etcd/server.crt
rm /etc/kubernetes/pki/etcd/server.key

Regenerate the certificates (kubeadm only creates certificates that are missing, which is why the old ones had to be deleted first):

kubeadm init phase certs all

 

8. Update the ConfigMaps that still reference the old IP (note that cluster-info lives in kube-public, not kube-system)

[root@k8s-master pki]# kubectl -n kube-system edit cm kubeadm-config
Edit cancelled, no changes made.
[root@k8s-master pki]# kubectl -n kube-system edit cm kube-proxy   # this one needs the IP updated
configmap/kube-proxy edited
[root@k8s-master pki]# kubectl edit cm -n kube-system coredns
Edit cancelled, no changes made.
[root@k8s-master pki]# kubectl edit cm -n kube-public cluster-info   # this one needs the IP updated
configmap/cluster-info edited
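For scripted (non-interactive) updates, a ConfigMap can be exported with `kubectl get cm ... -o yaml`, rewritten with sed, and piped back into `kubectl apply -f -`. The sketch below shows just the rewrite step on a sample manifest (illustrative content), since it needs no live cluster:

```shell
# Sample of the relevant kube-proxy ConfigMap field (illustrative)
cat > /tmp/cm.demo.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  kubeconfig.conf: |
    server: https://192.168.1.108:6443
EOF

# Rewrite the API server address; on a real cluster, pipe this to `kubectl apply -f -`
sed 's/192\.168\.1\.108/192.168.1.109/g' /tmp/cm.demo.yaml
```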

 

9. Reboot the master node, then create a join token for the cluster

[root@k8s-master ~]# reboot
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.109:6443 --token m5uzo2.g53abkii0n1nbh6g --discovery-token-ca-cert-hash sha256:5b8d363df9f94737c6f7072ecf1dbfa74e0a6770d95fdd7419e6aa6c840f4da9 
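If the join command output is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate: it is the SHA-256 digest of the DER-encoded public key. A sketch using a throwaway cert; on the master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Throwaway CA cert just for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -days 1 -subj "/CN=kubernetes"

# sha256 of the DER-encoded public key == kubeadm's discovery-token-ca-cert-hash
openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```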

 

 

10. Steps to run on the worker nodes

10.1 Start the worker nodes

 

10.2 Rename the directory under $HOME/.kube/cache/discovery/ to use the new IP

[root@node1 ~]# cd $HOME/.kube/cache/discovery/
[root@node1 discovery]# ls
192.168.1.108_6443
[root@node1 discovery]# 
[root@node1 discovery]# mv 192.168.1.108_6443 192.168.1.109_6443

[root@node2 ~]# cd $HOME/.kube/cache/discovery/
[root@node2 discovery]# ls
192.168.1.108_6443
[root@node2 discovery]# mv 192.168.1.108_6443 192.168.1.109_6443
[root@node2 discovery]# 
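The rename can also be done defensively in a script, only touching the directory if it exists. Sketched here on a scratch tree; on a node the base would be $HOME/.kube/cache/discovery:

```shell
# Scratch tree standing in for $HOME/.kube/cache/discovery on a node
base=/tmp/discovery.demo
mkdir -p "$base/192.168.1.108_6443"

# Rename only if the old directory is actually present
[ -d "$base/192.168.1.108_6443" ] && mv "$base/192.168.1.108_6443" "$base/192.168.1.109_6443"

ls "$base"   # → 192.168.1.109_6443
```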

 

10.3 Reboot

reboot

 

10.4 Rejoin the cluster

[root@k8s-node01 ~]# kubeadm reset
[root@k8s-node01 ~]# kubeadm join 192.168.1.109:6443 --token m5uzo2.g53abkii0n1nbh6g --discovery-token-ca-cert-hash sha256:5b8d363df9f94737c6f7072ecf1dbfa74e0a6770d95fdd7419e6aa6c840f4da9 

 

[root@node2 ~]# kubeadm  reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0903 16:33:28.492657    2777 reset.go:120] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://192.168.1.108:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.1.108:6443: connect: no route to host
W0903 16:33:28.498345    2777 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0903 16:33:30.352136    2777 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0903 16:33:51.173963    2777 cleanupnode.go:99] [reset] Failed to remove containers: [failed to stop running pod dbe00aeea558adcb617ed619424ded31e4f576a804cc246f3c454cec95f1b3d2: output: E0903 16:33:40.707950    2879 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="dbe00aeea558adcb617ed619424ded31e4f576a804cc246f3c454cec95f1b3d2"
time="2025-09-03T16:33:40+08:00" level=fatal msg="stopping the pod sandbox \"dbe00aeea558adcb617ed619424ded31e4f576a804cc246f3c454cec95f1b3d2\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
: exit status 1, failed to stop running pod 29f0cc21fee69c7d43cb992b075e702daf3fa91f39d1aaabae3a4c55f431176d: output: E0903 16:33:51.171678    3012 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="29f0cc21fee69c7d43cb992b075e702daf3fa91f39d1aaabae3a4c55f431176d"
time="2025-09-03T16:33:51+08:00" level=fatal msg="stopping the pod sandbox \"29f0cc21fee69c7d43cb992b075e702daf3fa91f39d1aaabae3a4c55f431176d\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
: exit status 1]
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.


[root@node2 ~]# kubeadm join 192.168.1.109:6443 --token m5uzo2.g53abkii0n1nbh6g --discovery-token-ca-cert-hash sha256:5b8d363df9f94737c6f7072ecf1dbfa74e0a6770d95fdd7419e6aa6c840f4da9 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

11. Copy the admin.conf file from the master node to the worker nodes

scp /etc/kubernetes/admin.conf root@192.168.1.105:/etc/kubernetes/
scp /etc/kubernetes/admin.conf root@192.168.1.106:/etc/kubernetes/

 

Then run the following on each worker node:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF

source /root/.bashrc

 

Posted @ 2025-09-03 17:44 by slnngk