Handling expired k8s certificates (Unable to connect to the server: x509: certificate has expired or is not yet valid)

Problem: the k8s cluster certificates have expired

[root@nb001 ~]# kubectl get pod -A
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2025-05-14T15:52:59+08:00 is after 2025-05-14T02:23:24Z
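
When kubectl itself is locked out like this, the expiry can be confirmed straight from the apiserver's serving certificate. A minimal sketch, assuming the apiserver listens on the default 127.0.0.1:6443:

# Pull the live serving certificate and print its NotAfter date
echo | openssl s_client -connect 127.0.0.1:6443 2>/dev/null | openssl x509 -noout -enddate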

Solution

Renew the certificates

Summary of the steps

# Back up the kubernetes configuration
cp -r /etc/kubernetes /etc/kubernetes_bak
# Check whether any certificates have expired
kubeadm certs check-expiration
# Renew all certificates
kubeadm certs renew all
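
If kubeadm is unavailable, or you want a second opinion, openssl can read the dates straight from the certificate files. A sketch assuming the default kubeadm PKI layout under /etc/kubernetes/pki:

# Print NotAfter for every certificate in the default kubeadm PKI directories
for c in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
  printf '%s: ' "$c"; openssl x509 -noout -enddate -in "$c"
done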

Step-by-step details

root@master:~# date
Wed May 14 15:55:44 CST 2025
root@master:~# cp -r /etc/kubernetes  /etc/kubernetes_bak
root@master:/etc# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 May 14, 2025 02:23 UTC   <invalid>       ca                      no
apiserver                  May 14, 2025 02:23 UTC   <invalid>       ca                      no
apiserver-etcd-client      May 14, 2025 02:23 UTC   <invalid>       etcd-ca                 no
apiserver-kubelet-client   May 14, 2025 02:23 UTC   <invalid>       ca                      no
controller-manager.conf    May 14, 2025 02:23 UTC   <invalid>       ca                      no
etcd-healthcheck-client    May 14, 2025 02:23 UTC   <invalid>       etcd-ca                 no
etcd-peer                  May 14, 2025 02:23 UTC   <invalid>       etcd-ca                 no
etcd-server                May 14, 2025 02:23 UTC   <invalid>       etcd-ca                 no
front-proxy-client         May 14, 2025 02:23 UTC   <invalid>       front-proxy-ca          no
scheduler.conf             May 14, 2025 02:23 UTC   <invalid>       ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 12, 2034 02:23 UTC   8y              no
etcd-ca                 May 12, 2034 02:23 UTC   8y              no
front-proxy-ca          May 12, 2034 02:23 UTC   8y              no

As shown above, most of the certificates are in the <invalid> state, so renew them next:

root@master:/etc# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
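
Renewing everything at once is the simplest path, but kubeadm can also renew certificates one at a time if only some have expired. A sketch; the names match the CERTIFICATE column of kubeadm certs check-expiration:

# Renew individual certificates instead of all of them
kubeadm certs renew apiserver
kubeadm certs renew admin.conf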

After renewing, run kubeadm certs check-expiration again: the expiration dates have been pushed out by 365 days.

root@master:/etc# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0514 16:07:02.140012   22462 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 May 14, 2026 08:06 UTC   364d            ca                      no
apiserver                  May 14, 2026 08:06 UTC   364d            ca                      no
apiserver-etcd-client      May 14, 2026 08:06 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   May 14, 2026 08:06 UTC   364d            ca                      no
controller-manager.conf    May 14, 2026 08:06 UTC   364d            ca                      no
etcd-healthcheck-client    May 14, 2026 08:06 UTC   364d            etcd-ca                 no
etcd-peer                  May 14, 2026 08:06 UTC   364d            etcd-ca                 no
etcd-server                May 14, 2026 08:06 UTC   364d            etcd-ca                 no
front-proxy-client         May 14, 2026 08:06 UTC   364d            front-proxy-ca          no
scheduler.conf             May 14, 2026 08:06 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 12, 2034 02:23 UTC   8y              no
etcd-ca                 May 12, 2034 02:23 UTC   8y              no
front-proxy-ca          May 12, 2034 02:23 UTC   8y              no

Follow-up problem ① and its solution: error: You must be logged in to the server (Unauthorized)

After the certificates were renewed, kubectl get pod -A still fails:

root@master:~# kubectl  get pod -A
error: You must be logged in to the server (Unauthorized)

Back up the current kubeconfig with cp -rp $HOME/.kube/config $HOME/.kube/config.bak, then install the freshly renewed admin config with sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config.
Run kubectl get pod -A to verify the result:

root@master:~# kubectl get pod -A
error: You must be logged in to the server (Unauthorized)
root@master:~# cp -rp $HOME/.kube/config $HOME/.kube/config.bak
root@master:~#
root@master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y
root@master:~#
root@master:~# kubectl  get pod -A
NAMESPACE     NAME                                                        READY   STATUS    RESTARTS         AGE
default       keycloak-deployment-54bf744955-v9chg                        1/1     Running   0                331d
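
To double-check that the new kubeconfig really carries a renewed client certificate, decode the embedded certificate and print its expiry. A sketch, assuming the certificate is inlined as client-certificate-data (the kubeadm default):

# Extract the base64-encoded client cert from the kubeconfig and show its expiry
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate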

Follow-up problem ② and its solution

With the Unauthorized error resolved, kubectl apply and kubectl create run without complaint, but they no longer actually change any resources.

In other words, the command executes but nothing takes effect. For example: you bump the image version in service-user.yaml and want to redeploy the user service. kubectl apply -f service-user.yaml returns normally, yet the running pod is still the one from the previous deployment and is never recreated. Other silent no-ops behave similarly.
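
One way to see the symptom concretely is to compare what the Deployment spec requests with what the pod is actually running; the names below (a Deployment called user with label app=user) are hypothetical:

# Image the Deployment spec asks for (hypothetical Deployment name 'user')
kubectl get deployment user -o jsonpath='{.spec.template.spec.containers[0].image}'; echo
# Image the live pod is actually running (hypothetical label 'app=user')
kubectl get pod -l app=user -o jsonpath='{.items[0].spec.containers[0].image}'; echo
# If these differ and no rollout is in progress, the controllers are not acting on changes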

Solution:

  1. Restart the kubelet
systemctl restart kubelet
  2. Restart kube-apiserver, kube-controller-manager, and kube-scheduler
# If docker is the container runtime, the commands below work; other runtimes
# are similar (see the containerd sketch after this list)
docker ps |grep kube-apiserver|grep -v pause|awk '{print $1}'|xargs -i docker restart {}
docker ps |grep kube-controller-manager|grep -v pause|awk '{print $1}'|xargs -i docker restart {}
docker ps |grep kube-scheduler|grep -v pause|awk '{print $1}'|xargs -i docker restart {}
  3. Redeploy the user service
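
If the cluster runs containerd instead of docker, a similar restart can be done with crictl: stopping a static-pod container is enough, because the kubelet recreates it from /etc/kubernetes/manifests. A sketch (the renew output above also names etcd, which can be restarted the same way):

# containerd equivalent (sketch): stop the control-plane containers and let
# the kubelet recreate them from the static pod manifests
crictl ps | grep kube-apiserver | awk '{print $1}' | xargs -r crictl stop
crictl ps | grep kube-controller-manager | awk '{print $1}' | xargs -r crictl stop
crictl ps | grep kube-scheduler | awk '{print $1}' | xargs -r crictl stop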

At this point, every problem caused by the expired k8s certificates is fully resolved.

Reference: https://blog.csdn.net/wdy_2099/article/details/128262621
