
k8s (1) - Installing Kubernetes with kubeadm

How kubeadm works

When Kubernetes is deployed, each of its components is a separate binary that has to be run.
It was proposed to run all of these components as containers, but this ran into trouble with the kubelet:
to configure container networking and manage container volumes, the kubelet must operate directly on the host.
kubeadm therefore chose a compromise: install the kubelet directly on the host, and run the other components as containers.

This project reduces deploying a k8s cluster to two steps:

  1. Create the Master node
    kubeadm init
  2. Join a Node to the cluster
    kubeadm join <Master node IP and port>

Machine preparation

Node planning

Role IP hostname OS
Control node 172.16.30.10 master CentOS Linux release 7.9
Worker node 172.16.30.11 node1 CentOS Linux release 7.9

Open ports

Control node

Protocol Direction Port range Purpose Used by
TCP Inbound 6443 kube-apiserver All components
TCP Inbound 2379-2380 etcd client API kube-apiserver, etcd
TCP Inbound 10250 kubelet API kubelet, control plane
TCP Inbound 10251 kube-scheduler self
TCP Inbound 10252 kube-controller-manager self

Worker node

Protocol Direction Port range Purpose Used by
TCP Inbound 10250 kubelet API kubelet, control plane
TCP Inbound 30000-32767 NodePort services All components

If you use Weave Net, make sure the firewall does not block the following ports: TCP 6783 and UDP 6783/6784.
If you use flannel, you need to open 8472/udp.

Control node

firewall-cmd --add-port=6443/tcp --permanent
firewall-cmd --add-port=2379-2380/tcp --permanent
firewall-cmd --add-port=10250-10252/tcp --permanent 
firewall-cmd --add-port=6783/tcp --permanent
firewall-cmd --add-port=6783-6784/udp --permanent
firewall-cmd --reload

Worker node

firewall-cmd --add-port=10250/tcp --permanent
firewall-cmd --add-port=30000-32767/tcp --permanent
firewall-cmd --add-port=6783/tcp --permanent
firewall-cmd --add-port=6783-6784/udp --permanent
firewall-cmd --reload
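
To double-check that the rules are active after the reload, the opened ports can be listed:

firewall-cmd --list-ports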

Disable SELinux

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
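
getenforce can be used to verify the change; it should report Permissive right after setenforce 0, and Disabled after a reboot:

getenforce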

Disable the firewall

Here we simply disable the firewall; the required ports can be opened again later.

systemctl stop firewalld
systemctl disable firewalld

Disable swap

Comment out the swap mount line in /etc/fstab (a one-liner sketch follows below).
To disable swap temporarily:
swapoff -a
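
A minimal sketch of the permanent step above, assuming the swap entry in /etc/fstab is an uncommented line containing the word swap surrounded by whitespace:

sed -ri '/\sswap\s/ s/^/#/' /etc/fstab
free -h    # the Swap line should show 0B once swap is off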

Change the hostname

Run on the control node

sudo hostnamectl set-hostname master

Run on the worker node

sudo hostnamectl set-hostname node1

Configure the hosts file

cat <<EOF | sudo tee -a /etc/hosts
172.16.30.10 master
172.16.30.11 node1
EOF

Configure iptables to allow bridged traffic to be inspected, and enable IP forwarding

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# vm.swappiness=0
EOF
sudo modprobe br_netfilter
sudo sysctl --system
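
To confirm that the module is loaded and the settings took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward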

Install a time service

sudo yum install -y chrony
sudo systemctl enable --now chronyd
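
Once chronyd is running, chronyc can confirm that time sources are reachable and the clock is synchronized:

chronyc sources -v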

Install a container runtime

Docker is used as the container runtime here; install it on every node.

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker

Configure the Docker daemon to manage the containers' cgroups with systemd

Using systemd as the cgroup driver for both the container runtime and the kubelet makes the system more stable.

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
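
After the restart, docker info should report systemd as the cgroup driver:

docker info 2>/dev/null | grep -i "cgroup driver"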

Install kubeadm, kubelet, and kubectl

Run this on every machine.
Because the Google repository is blocked from mainland China, the Aliyun mirror is used here.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

The install reported an error here: [Errno -1] repomd.xml signature could not be verified for kubernetes.
Changing repo_gpgcheck=1 to repo_gpgcheck=0 resolved it; this appears to be an issue with the Aliyun mirror.
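
Once the packages are installed, the versions can be checked to make sure kubeadm, kubelet, and kubectl match:

kubeadm version -o short
kubelet --version
kubectl version --client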

Start the kubelet

The kubelet will now restart every few seconds, stuck in a crash loop waiting for instructions from kubeadm.
sudo systemctl enable --now kubelet

Install shell auto-completion

sudo yum install -y bash-completion
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl

Creating a cluster with kubeadm

Prepare the required images

Because of the GFW, the images the cluster needs cannot be pulled from k8s.gcr.io; an Aliyun image repository can be specified instead.

[root@master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

Initializing the cluster pulls all of the required images automatically, but they can also be pulled before running init:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
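
The same --image-repository flag also works with the list command, which shows the exact image names that init will pull from the mirror:

kubeadm config images list --image-repository registry.aliyuncs.com/google_containers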

If an image still cannot be pulled, re-tagging a pullable copy works as well.
For example, if the coredns image cannot be pulled:

docker pull coredns/coredns:1.8.0
docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
docker rmi coredns/coredns:1.8.0

Initialize the cluster

View the default init configuration

[root@master ~]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

We can either write a configuration file based on these defaults and initialize with kubeadm init --config <xxx.yaml>,
or override the defaults with command-line flags; here, for example, the image repository flag is added. A minimal config-file sketch follows below.
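
A minimal sketch of the config-file approach (the file name kubeadm-init.yaml and the values shown are illustrative; adjust them to your environment):

cat <<EOF > kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.30.10   # this control node's IP
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
EOF
kubeadm init --config kubeadm-init.yaml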

Run on the control node:
kubeadm init --image-repository registry.aliyuncs.com/google_containers

The log output is as follows (an earlier attempt failed the CRI pre-flight check; the run after it succeeded):

[root@master ~]# kubeadm init --image-repository  registry.aliyuncs.com/google_containers --cri-socket=unix:///var/run/containerd/containerd.sock --apiserver-advertise-address=192.168.1.190 --kubernetes-version=v1.26.3 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2022-04-02T20:43:06+08:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@master ~]# kubeadm init --image-repository  registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 172.16.30.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [172.16.30.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [172.16.30.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.503214 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: e52qux.qs7r5rb03pda6ilk
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.30.10:6443 --token e52qux.qs7r5rb03pda6ilk \
        --discovery-token-ca-cert-hash sha256:83a96fbadd2fc61b870c95ed81d69e93cec52a69121611aecd82c421d4d927f4
  • Remember the kubeadm join command printed at the end.

Initialization roughly performs the following steps:

  1. pre-flight checks:
    Checks the machine's kernel version, cgroups, and hostname, whether the kubelet and kubeadm versions match, whether a container runtime is installed, whether the ip and mount commands exist, and so on.
    Downloads the images the cluster needs; on a slow network this can take a long time.

  2. Generates the certificates needed to serve external requests
    The certificates are placed in /etc/kubernetes/pki by default; this can be changed with the --cert-dir flag.
    After the certificates are generated, kubeadm creates configuration files that the other components use to reach the kube-apiserver; these files live under /etc/kubernetes/xxx.conf.

  3. Writes kubeconfig files into the /etc/kubernetes/ directory so that the kubelet, the controller manager, and the scheduler can connect to the API server, each with its own identity; it also generates a standalone kubeconfig file named admin.conf for administrative use.

  4. Generates static Pod manifests for the API server, the controller manager, and the scheduler. If no external etcd service is provided, an additional static Pod manifest is generated for etcd.
    The static Pod manifests are written to /etc/kubernetes/manifests; the kubelet watches this directory so the Pods are created when the system starts.
    Once the control-plane Pods are up and running, the kubeadm init workflow continues.

  5. Applies labels and taints to the control-plane node so that no other workloads run on it.

  6. Generates a token that other nodes can later use to register themselves with the control plane. The user can optionally provide a token via --token.

  7. To let nodes join the cluster using the mechanisms described in the Bootstrap Tokens and TLS Bootstrapping documents, kubeadm performs all the necessary configuration:

    • Creates a ConfigMap providing the information needed to add nodes to the cluster, and sets up the related RBAC access rules for it.
    • Allows bootstrap tokens to access the CSR signing API.
    • Configures automatic approval of new CSR requests.
  8. Installs the DNS server (CoreDNS) and the kube-proxy addon via the API server.
    Note that although the DNS server is deployed, it will not be scheduled until a CNI is installed.

Declare the kubeconfig

kubectl uses the /etc/kubernetes/admin.conf file generated by kubeadm to talk to the apiserver;
by default it looks for this config at $HOME/.kube/config.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf

Test kubectl

[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
controller-manager   Healthy   ok

Install a Pod network plugin

Reference: https://kubernetes.io/zh/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model
You must deploy a Container Network Interface (CNI) based Pod network plugin so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start until a network is installed.

Common network plugins include flannel and Weave; Weave is used here.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Check the installation:
Every Node should run one weave Pod, and all Pods should be Running and 2/2 READY.
kubectl -n kube-system get pods --watch
When coredns changes from Pending to Running and the weave-net Pods are Running, the network plugin is installed.

Join the cluster

After a node joins the cluster, two Pods run on it: weave-net, which handles networking, and kube-proxy.

The format of the join command:
kubeadm join <control-plane-host>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Get the token

kubeadm token list

Renew the token

By default the token expires after 24 hours; create a new one with:
kubeadm token create
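
kubeadm can also print a complete join command for a freshly created token, which saves assembling it by hand:

kubeadm token create --print-join-command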

Get the certificate hash

If you do not have the value for --discovery-token-ca-cert-hash, you can obtain it by running the following command chain on the control-plane node:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Here we use the join command recorded in the init log above:

[root@node1 ~]# kubeadm join 172.16.30.10:6443 --token e52qux.qs7r5rb03pda6ilk --discovery-token-ca-cert-hash sha256:83a96fbadd2fc61b870c95ed81d69e93cec52a69121611aecd82c421d4d927f4

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the nodes

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   75m   v1.23.5
node1    Ready    <none>                 15m   v1.23.5

Schedule workloads on the master node (optional)

By default, for security reasons, the cluster does not schedule Pods on control-plane nodes. If you want to be able to schedule Pods on the control-plane node, for example for a single-machine development cluster, run:

[root@master ~]# kubectl describe node master | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@master ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/master untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found

Clean up the control plane and remove nodes

Remove a node

# Talking to the control-plane node with the appropriate credentials, run:
kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
# The following command makes a best effort to revert the changes made by kubeadm init or kubeadm join.
kubeadm reset
# The reset process does not reset or clean up iptables rules or IPVS tables. To reset iptables, do it manually:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# To reset the IPVS tables, run:
ipvsadm -C
# Delete the node
kubectl delete node <node name>

Clean up the control plane

Run kubeadm reset on the control-plane host to trigger a best-effort cleanup.
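
kubeadm reset intentionally leaves some state behind; as a rough sketch (paths may vary with your CNI and setup), the CNI configuration and the kubeconfig copied earlier can be removed manually:

sudo rm -rf /etc/cni/net.d
rm -f $HOME/.kube/config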
