k8s certificate renewal, based on Kubernetes v1.19.3
Study notes:
In a cluster installed with kubeadm, the certificates are valid for one year. Once they expire, the API server becomes unusable and requests fail with the error: x509: certificate has expired or is not yet valid.
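Besides kubeadm's check-expiration command shown below, the expiry date of an individual certificate can also be read with openssl; a minimal sketch, assuming the default kubeadm certificate path:

# Print the expiry date of the API server certificate (default kubeadm path assumed)
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate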
Current options for renewing the certificates:
1. Officially recommended: upgrade the cluster with kubeadm upgrade at least once a year, which renews the certificates.
2. Build Kubernetes from source with a longer certificate validity period.
3. Renew the certificates manually within the year.
4. Enable automatic rotation of the kubelet certificates.
This note focuses on methods 3 and 4.
On the master, check the certificate expiry times with the following command:
[root@master]# kubeadm alpha certs check-expiration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Oct 19, 2021 09:53 UTC   334d                                    no
apiserver                  Oct 19, 2021 09:52 UTC   334d            ca                      no
apiserver-etcd-client      Oct 19, 2021 09:53 UTC   334d            etcd-ca                 no
apiserver-kubelet-client   Oct 19, 2021 09:52 UTC   334d            ca                      no
controller-manager.conf    Oct 19, 2021 09:53 UTC   334d                                    no
etcd-healthcheck-client    Oct 19, 2021 09:53 UTC   334d            etcd-ca                 no
etcd-peer                  Oct 19, 2021 09:53 UTC   334d            etcd-ca                 no
etcd-server                Oct 19, 2021 09:53 UTC   334d            etcd-ca                 no
front-proxy-client         Oct 19, 2021 09:52 UTC   334d            front-proxy-ca          no
scheduler.conf             Oct 19, 2021 09:53 UTC   334d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 17, 2030 09:52 UTC   9y              no
etcd-ca                 Oct 17, 2030 09:53 UTC   9y              no
front-proxy-ca          Oct 17, 2030 09:52 UTC   9y              no
I. Renewing the certificates manually (while the certificates have not yet expired)
1. While the cluster is still reachable, run kubeadm config view > kube-config.yaml to dump the cluster configuration to a YAML file. Prepare this file ahead of time, so you are not left without it once the cluster goes down (an alternative way to fetch the same configuration is sketched after the YAML output below).
[root@master ~]# cd /etc/kubernetes/manifests/
[root@master manifests]# kubeadm config view > kube-config.yaml
[root@master manifests]# cat kube-config.yaml
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
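Note that kubeadm config view was removed in later kubeadm releases; the same ClusterConfiguration is also stored in the kubeadm-config ConfigMap and can be pulled from a running cluster as a fallback. A sketch:

# Dump the ClusterConfiguration kubeadm stores in the cluster; redirect it to a file if needed
kubectl -n kube-system get cm kubeadm-config -o yaml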
2. Back up the existing certificate files
[root@master manifests]# cp -r /etc/kubernetes/pki/ /etc/kubernetes/pki_backup
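The renewal in the next step also refreshes the client certificates embedded in the kubeconfig files (admin.conf, controller-manager.conf, scheduler.conf), so it may be worth backing those up as well; a sketch, with an arbitrary backup directory name:

# Back up the kubeconfig files that embed client certificates (directory name is arbitrary)
mkdir -p /etc/kubernetes/conf_backup
cp /etc/kubernetes/*.conf /etc/kubernetes/conf_backup/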
3. Renew the certificates
[root@master manifests]# kubeadm alpha certs renew all --config=kube-config.yaml
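After the renewal, the kubeconfig that kubectl uses on the master (usually a copy of admin.conf in ~/.kube/config) still holds the old client certificate; assuming the standard kubeadm layout, it can be refreshed like this:

# Point kubectl at the renewed admin credentials (standard kubeadm layout assumed)
cp /etc/kubernetes/admin.conf ~/.kube/config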
4. When done, restart the four containers on the master: kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so they pick up the new certificates. If there are multiple masters, copy the certificates generated on the first master to the remaining ones (a copy sketch follows the restart output below).
[root@master pki]# docker restart `docker ps | grep etcd | awk '{print $1}'`
8b09bcb64cd0
eb63e6c341e4
[root@master pki]# docker restart `docker ps | grep kube-apiserver | awk '{print $1}'`
6d8afc50d03a
84261c9cb25f
[root@master pki]# docker restart `docker ps | grep kube-controller | awk '{print $1}'`
ba3cc2a57987
[root@master pki]# docker restart `docker ps | grep kube-scheduler | awk '{print $1}'`
5fd115b29da1
8011162e1cc8
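For the additional masters, the renewed files can simply be copied over and the same four containers restarted there; a sketch in which the hostname master2 is only a placeholder for your actual control-plane node:

# Copy the renewed certificates and kubeconfig files to another master (hostname is a placeholder)
scp -r /etc/kubernetes/pki root@master2:/etc/kubernetes/
scp /etc/kubernetes/*.conf root@master2:/etc/kubernetes/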
5. Check the pod status across the cluster and confirm that the components just restarted are back to Running (this usually takes around 2 minutes).
kubectl get pods --all-namespaces -o wide
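To double-check from the client side that the API server is really serving the renewed certificate, its validity can be read off the TLS endpoint directly; a sketch, assuming the default API server port 6443 on the master:

# Read the expiry date of the certificate currently served by the API server (default port assumed)
echo | openssl s_client -connect 127.0.0.1:6443 2>/dev/null | openssl x509 -noout -enddate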
6. Check the cluster's certificate expiry times again; RESIDUAL TIME is now 364d, i.e. the certificates have been renewed for another year.
[root@master pki]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Nov 19, 2021 08:47 UTC   364d                                    no
apiserver                  Nov 19, 2021 08:47 UTC   364d            ca                      no
apiserver-etcd-client      Nov 19, 2021 08:47 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Nov 19, 2021 08:47 UTC   364d            ca                      no
controller-manager.conf    Nov 19, 2021 08:47 UTC   364d                                    no
etcd-healthcheck-client    Nov 19, 2021 08:47 UTC   364d            etcd-ca                 no
etcd-peer                  Nov 19, 2021 08:47 UTC   364d            etcd-ca                 no
etcd-server                Nov 19, 2021 08:47 UTC   364d            etcd-ca                 no
front-proxy-client         Nov 19, 2021 08:47 UTC   364d            front-proxy-ca          no
scheduler.conf             Nov 19, 2021 08:47 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 17, 2030 09:52 UTC   9y              no
etcd-ca                 Oct 17, 2030 09:53 UTC   9y              no
front-proxy-ca          Oct 17, 2030 09:52 UTC   9y              no
# Manual certificate renewal is now complete.
Enabling automatic rotation of the kubelet certificates
The kubelet has two kinds of certificates, server and client. Since Kubernetes 1.9, automatic rotation of the client certificate is enabled by default, while rotation of the server certificate still has to be turned on manually. (The certificate locations are shown below; a sketch of inspecting the kubelet's own certificates follows the listing.)
[root@master ~]# cd /etc/kubernetes/pki
[root@master pki]# ls
apiserver.crt              apiserver-etcd-client.key  apiserver-kubelet-client.crt  ca.crt   etcd                front-proxy-ca.key      front-proxy-client.key  sa.pub
apiserver-etcd-client.crt  apiserver.key              apiserver-kubelet-client.key  ca.key   front-proxy-ca.crt  front-proxy-client.crt  sa.key
[root@master pki]# cd etcd/
[root@master etcd]# ls
ca.crt  ca.key  healthcheck-client.crt  healthcheck-client.key  peer.crt  peer.key  server.crt  server.key
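The kubelet's own certificates live under /var/lib/kubelet/pki on every node; a minimal sketch for inspecting them, assuming the default kubelet paths:

# List the kubelet's certificates and print the validity window of its serving certificate
ls /var/lib/kubelet/pki/
openssl x509 -in /var/lib/kubelet/pki/kubelet.crt -noout -dates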
1. Add the kubelet parameters
[root@master]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--feature-gates=RotateKubeletServerCertificate=true --rotate-server-certificates=true
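Once the kubelet has been restarted in step 5, a quick way to confirm that both flags were actually picked up is to inspect its command line; a sketch:

# Check that the rotation flags are present on the running kubelet's command line
ps aux | grep '[k]ubelet' | tr ' ' '\n' | grep -E 'rotate-server-certificates|RotateKubeletServerCertificate'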
2. Add the kube-controller-manager.yaml parameters
[root@master]# cd /etc/kubernetes/manifests/
[root@master manifests]# vim kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --experimental-cluster-signing-duration=87600h0m0s   # extend the signing duration for issued certificates
    - --feature-gates=RotateKubeletServerCertificate=true  # enable kubelet server certificate signing
    - --allocate-node-cidrs=true
    …………………………
    …………………………
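kube-controller-manager.yaml is a static pod manifest, so the kubelet recreates the pod automatically once the file is saved; a sketch of confirming that the new flags made it into the running pod:

# The kubelet restarts the static pod when the manifest changes; verify the new flags are in place
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep -E 'cluster-signing-duration|RotateKubeletServerCertificate'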
3. Create the RBAC objects (RBAC is Kubernetes' authentication and authorization mechanism) that allow nodes to rotate their kubelet server certificates
cat > ca-update.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/selfnodeserver
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:node-autoapprove-certificate-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
EOF
4. Apply the YAML file to enable automatic rotation of the server certificates
[root@master manifests]# kubectl apply -f ca-update.yaml
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:node-autoapprove-certificate-server created
5. Restart the kubelet
[root@master ~]# systemctl restart kubelet.service
6. Check the pod status
[root@master ~]# kubectl get pods --all-namespaces -o wide
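With server certificate rotation enabled, the kubelet requests its serving certificate through a CSR, which the binding created above allows it to submit. After the restart, the request should show up and the issued certificate should land on the node; a sketch of checking both (exact file names may vary):

# The kubelet's serving-certificate request should appear here once it has been submitted and approved
kubectl get csr
# The issued serving certificate is typically stored by the kubelet under /var/lib/kubelet/pki
ls /var/lib/kubelet/pki/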