Kubernetes upgrade (1.14)
Upgrading kubeadm clusters from v1.13 to v1.14
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.13.x to version 1.14.x, and from version 1.14.x to 1.14.y (where y > x).
The upgrade workflow at a high level is the following:
- Upgrade the primary control plane node.
- Upgrade additional control plane nodes.
- Upgrade worker nodes.
Note: With the release of Kubernetes v1.14, the kubeadm instructions for upgrading both HA and single control plane clusters are merged into a single document.
- Before you begin
- Determine which version to upgrade to
- Upgrade the first control plane node
- Upgrade additional control plane nodes
- Upgrade worker nodes
- Verify the status of the cluster
- Recovering from a failure state
- How it works
Before you begin
- You need to have a kubeadm Kubernetes cluster running version 1.13.0 or later.
- Swap must be disabled.
- The cluster should use a static control plane and etcd pods.
- Make sure you read the release notes carefully.
- Make sure to back up any important components, such as app-level state stored in a database.
kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
Additional information
- All containers are restarted after upgrade, because the container spec hash value is changed.
- You can only upgrade from one MINOR version to the next MINOR version, or between PATCH versions of the same MINOR version. That is, you cannot skip MINOR versions when you upgrade. For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.
Determine which version to upgrade to
- Find the latest stable 1.14 version:
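For example, using your distribution's package manager (a sketch assuming the official Kubernetes package repository is already configured on the node):

# Ubuntu, Debian or HypriotOS
apt update
apt-cache madison kubeadm
# find the latest 1.14 version in the list; it should look like 1.14.x-00

# CentOS, RHEL or Fedora
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# find the latest 1.14 version in the list; it should look like 1.14.x-0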
Upgrade the first control plane node
- On your first control plane node, upgrade kubeadm:
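For example, using the package manager (a sketch under the same repository assumption as above; replace x with the patch version you picked):

# Ubuntu, Debian or HypriotOS
apt-get update && apt-get install -y kubeadm=1.14.x-00

# CentOS, RHEL or Fedora
yum install -y kubeadm-1.14.x-0 --disableexcludes=kubernetes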
- Verify that the download works and has the expected version:
kubeadm version
- On the control plane node, run:
sudo kubeadm upgrade plan
You should see output similar to this:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.14.0

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.13.3   v1.14.0

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.3   v1.14.0
Controller Manager   v1.13.3   v1.14.0
Scheduler            v1.13.3   v1.14.0
Kube Proxy           v1.13.3   v1.14.0
CoreDNS              1.2.6     1.3.1
Etcd                 3.2.24    3.3.10

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.14.0

_____________________________________________________________________

This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
- Choose a version to upgrade to, and run the appropriate command. For example:
sudo kubeadm upgrade apply v1.14.x
Replace x with the patch version you picked for this upgrade.
You should see output similar to this:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.0"
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.14.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.0"...
Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400
Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2
Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239
Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239
Static pod: etcd-myhost hash: 64a28f011070816f4beb07a9c96d73b6
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests043818770"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400
Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400
Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400
Static pod: kube-apiserver-myhost hash: b8a6533e241a8c6dab84d32bb708b8a1
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2
Static pod: kube-controller-manager-myhost hash: 6f77d441d2488efd9fc2d9a9987ad30b
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4
Static pod: kube-scheduler-myhost hash: a24773c92bb69c3748fcce5e540b7574
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
- Manually upgrade your CNI provider plugin.
Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see whether additional upgrade steps are required.
- Upgrade the kubelet and kubectl on the control plane node:
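For example (a sketch under the same repository assumption as above; replace x with the patch version you picked):

# Ubuntu, Debian or HypriotOS
apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00

# CentOS, RHEL or Fedora
yum install -y kubelet-1.14.x-0 kubectl-1.14.x-0 --disableexcludes=kubernetes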
- Restart the kubelet:
sudo systemctl restart kubelet
Upgrade additional control plane nodes
- Same as the first control plane node but use: (Note: the upgrade steps on the other master nodes are the same as above, except that the upgrade command is replaced with the one below)
sudo kubeadm upgrade node experimental-control-plane
instead of:
sudo kubeadm upgrade apply
Also, sudo kubeadm upgrade plan is not needed on these nodes.
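Putting the steps together, a minimal sketch of the sequence on each additional control plane node (assuming the kubeadm package was upgraded there first, the same way as on the first node):

# on each additional control plane node, after upgrading the kubeadm package
sudo kubeadm upgrade node experimental-control-plane
# then upgrade the kubelet and kubectl packages as on the first node, and restart the kubelet
sudo systemctl restart kubelet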
Upgrade worker nodes
The upgrade procedure on worker nodes should be executed one node at a time, or a few nodes at a time, without compromising the minimum required capacity for running your workloads.
Upgrade kubeadm
- Upgrade kubeadm on all worker nodes:
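For example (a sketch under the same repository assumption as above; replace x with the patch version you picked):

# Ubuntu, Debian or HypriotOS
apt-get update && apt-get install -y kubeadm=1.14.x-00

# CentOS, RHEL or Fedora
yum install -y kubeadm-1.14.x-0 --disableexcludes=kubernetes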
Cordon the node
- Prepare the node for maintenance by marking it unschedulable and evicting the workloads. Run:
kubectl drain $NODE --ignore-daemonsets
You should see output similar to this:
node/ip-172-31-85-18 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx
node/ip-172-31-85-18 drained
Upgrade the kubelet config
- Upgrade the kubelet config:
sudo kubeadm upgrade node config --kubelet-version v1.14.x
Replace x with the patch version you picked for this upgrade.
Upgrade kubelet and kubectl
- Upgrade the Kubernetes package version by running the Linux package manager for your distribution:
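For example (a sketch under the same repository assumption as above; replace x with the patch version you picked):

# Ubuntu, Debian or HypriotOS
apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00

# CentOS, RHEL or Fedora
yum install -y kubelet-1.14.x-0 kubectl-1.14.x-0 --disableexcludes=kubernetes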
- Restart the kubelet:
sudo systemctl restart kubelet
Uncordon the node
- Bring the node back online by marking it schedulable:
kubectl uncordon $NODE
Verify the status of the cluster
After the kubelet is upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
kubectl get nodes
The STATUS column should show Ready for all your nodes, and the version number should be updated.
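For illustration, with two hypothetical nodes the output after a successful upgrade might look like this:

NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    master   45d   v1.14.0
worker-1   Ready    <none>   45d   v1.14.0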
Recovering from a failure state
If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
How it works
kubeadm upgrade apply does the following:
- Checks that your cluster is in an upgradeable state:
- The API server is reachable
- All nodes are in the Ready state
- The control plane is healthy
- Enforces the version skew policies.
- Makes sure the control plane images are available or available to pull to the machine.
- Upgrades the control plane components, or rolls them back if any of them fails to come up.
- Applies the new kube-dns and kube-proxy manifests and makes sure that all necessary RBAC rules are created.
- Creates new certificate and key files of the API server and backs up old files if they are about to expire in 180 days.
kubeadm upgrade node experimental-control-plane does the following on additional control plane nodes:
- Fetches the kubeadm ClusterConfiguration from the cluster.
- Optionally backs up the kube-apiserver certificate.
- Upgrades the static Pod manifests for the control plane components.