K8s 1.23 Deployment with the Docker Runtime and the Calico CNI
1. Environment Preparation
1.1 Host preparation
OS: Ubuntu 22.04
10.0.0.119 master
10.0.0.120 worker120
10.0.0.121 worker121
1.2 Disable the swap partition
Use systemctl --type swap to list the current swap units.
systemctl mask dev-sda3.swap  # replace dev-sda3 with your swap device; prevents the swap unit from being activated automatically
swapoff -a && sysctl -w vm.swappiness=0  # disable swap for the current boot
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab  # comment out the swap entry so it stays off after a reboot
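Once the commands above have run, it is worth confirming that swap really is off on every node. A small sketch (the check_swap_off helper is illustrative, not part of any standard tooling):

```shell
#!/bin/sh
# check_swap_off: succeeds when the given /proc/swaps content lists no
# active swap devices. /proc/swaps always prints a header line, so
# anything beyond one line means swap is still active.
check_swap_off() {
    [ "$(printf '%s\n' "$1" | grep -c .)" -le 1 ]
}

# On a real node you would run: check_swap_off "$(cat /proc/swaps)"
# Sample inputs for demonstration:
header_only="Filename                Type        Size    Used    Priority"
with_swap="$header_only
/dev/sda3               partition   2097148 0       -2"

check_swap_off "$header_only" && echo "swap is off"
check_swap_off "$with_swap"   || echo "swap is still active"
```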
1.3 Ensure the MAC address and product_uuid are unique on every node
apt update && apt install -y net-tools
ifconfig eth0 | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid
Note: physical hardware comes with unique addresses, but virtual machines may not; a cloned VM ends up with the same MAC address as its source and needs a new one generated. Kubernetes uses these values to uniquely identify nodes in the cluster, so duplicate values can cause the installation to fail.
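After collecting the MAC addresses and product_uuids from every node, duplicates are easy to spot with sort | uniq -d. A sketch (the find_duplicates helper and the sample UUID values are hypothetical):

```shell
#!/bin/sh
# find_duplicates: prints any value that appears more than once in the
# newline-separated input; empty output means all values are unique.
find_duplicates() {
    printf '%s\n' "$1" | sort | uniq -d
}

# Hypothetical product_uuids gathered from three nodes; the last two are
# identical, as would happen with a cloned VM:
uuids="4C4C4544-0042-3010-8050-B4C04F4D3232
9A1B2C3D-0001-4000-8000-AAAAAAAAAAAA
9A1B2C3D-0001-4000-8000-AAAAAAAAAAAA"

find_duplicates "$uuids"   # prints the duplicated UUID once
```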
1.4 Verify network connectivity
In short, check that the k8s cluster nodes can reach each other (and, for pulling packages, the internet); ping is enough for a quick test.
ping jd.com -c 10
ping 10.0.0.120 -c 3  # from the master, for example
1.5 Set the time zone on all nodes
[root@master119 ~]# date -R
Mon, 09 Sep 2024 14:58:34 +0800
[root@master119 ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 33 Aug 30 15:27 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai
[root@worker120 ~]# date -R
Mon, 09 Sep 2024 14:59:22 +0800
[root@worker120 ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 33 Aug 30 15:27 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai
[root@worker121 ~]# date -R
Mon, 09 Sep 2024 14:59:35 +0800
[root@worker121 ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 33 Aug 30 15:27 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai
If the time zone differs on any node, set it with:
timedatectl set-timezone Asia/Shanghai
1.6 Allow iptables to inspect bridged traffic
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
What these settings do (the explanations must not appear inside the sysctl file itself, or sysctl will reject the lines):
net.bridge.bridge-nf-call-ip6tables = 1 makes the bridge pass bridged IPv6 traffic to ip6tables. It affects container-to-container networking; without it, connectivity problems can occur.
net.bridge.bridge-nf-call-iptables = 1 makes the bridge pass bridged IPv4 traffic to iptables, so traffic between containers and between the host and containers is subject to iptables rules. Without it, iptables rules often fail to take effect and containers cannot reach the outside.
net.ipv4.ip_forward = 1 enables IP forwarding, which the Pods in the cluster rely on to communicate with each other.
sysctl --system
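After sysctl --system, it is worth confirming that all three keys are really set to 1. A sketch that parses sysctl output (the verify_sysctl helper is illustrative):

```shell
#!/bin/sh
# verify_sysctl: given sysctl output of "key = value" lines, succeeds
# only if every line ends in "= 1".
verify_sysctl() {
    ! printf '%s\n' "$1" | grep -qv '= 1$'
}

# On a node you would run:
# verify_sysctl "$(sysctl net.bridge.bridge-nf-call-iptables \
#     net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward)"
good="net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1"
bad="net.ipv4.ip_forward = 0"

verify_sysctl "$good" && echo "bridge/forwarding sysctls OK"
verify_sysctl "$bad"  || echo "a sysctl is still disabled"
```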
1.7 Disable ufw & AppArmor on all nodes
systemctl disable --now ufw
# Ubuntu's ufw plays the role that firewalld does on CentOS
systemctl disable --now apparmor
# Ubuntu's AppArmor is the rough equivalent of SELinux on CentOS
1.8 Install Docker (use systemd as the cgroup driver on all nodes)
1) Ubuntu: the default cgroup driver is already systemd
tar xf Nolen_H-docker_v20.10.24.tar
./install-docker.sh i
[root@master119 ~]# docker info | grep "Cgroup Driver:"
Cgroup Driver: systemd
[root@master119 ~]#
[root@worker120 ~]# docker info | grep "Cgroup Driver:"
Cgroup Driver: systemd
[root@worker120 ~]#
[root@worker121 ~]# docker info | grep "Cgroup Driver:"
Cgroup Driver: systemd
[root@worker121 ~]#
2) CentOS: change the cgroup driver to systemd
[root@master119 ~]# docker info | grep cgroup
Cgroup Driver: cgroupfs
[root@master119 ~]#
[root@master119 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn/","https://hub-mirror.c.163.com/","https://reg-mirror.qiniu.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master119 ~]#
[root@master119 ~]# systemctl restart docker
[root@master119 ~]#
[root@master119 ~]# docker info | grep "Cgroup Driver"
Cgroup Driver: systemd
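Before moving on to kubeadm, a quick guard that every node reports systemd as the cgroup driver can save a failed init. A sketch that parses docker info output (the cgroup_is_systemd helper name is illustrative):

```shell
#!/bin/sh
# cgroup_is_systemd: succeeds when the given `docker info` output
# reports systemd as the cgroup driver.
cgroup_is_systemd() {
    printf '%s\n' "$1" | grep -q 'Cgroup Driver: systemd'
}

# On a node you would run: cgroup_is_systemd "$(docker info 2>/dev/null)"
# Sample outputs for demonstration:
info_ok="Server Version: 20.10.24
 Cgroup Driver: systemd
 Cgroup Version: 2"
info_bad=" Cgroup Driver: cgroupfs"

cgroup_is_systemd "$info_ok"  && echo "cgroup driver is systemd"
cgroup_is_systemd "$info_bad" || echo "fix /etc/docker/daemon.json first"
```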
2. K8s Cluster Deployment: Install kubeadm, kubelet, and kubectl on All Nodes
2.1 Configure the package repository
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
2.2 List the K8s versions available from the repository
[root@master119 ~]# apt-cache madison kubeadm
kubeadm | 1.28.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.28.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.28.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
...
kubeadm | 1.23.17-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.23.16-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.23.15-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.23.14-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
...
2.3 Install kubelet, kubeadm, and kubectl on all nodes
apt-get -y install kubelet=1.23.17-00 kubeadm=1.23.17-00 kubectl=1.23.17-00
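After installing a pinned version, it is common practice to hold the packages so that a routine apt-get upgrade cannot move the cluster to an unintended release (a sketch; run on every node):

```shell
# Prevent apt from upgrading the Kubernetes components unexpectedly:
apt-mark hold kubelet kubeadm kubectl

# Verify the hold took effect (prints the held package names):
apt-mark showhold
```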
2.4 Verify the component versions
[root@master119 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:33:14Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
[root@master119 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:34:27Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
(The connection-refused message is expected at this point: the cluster has not been initialized yet, so kubectl cannot report a server version.)
[root@master119 ~]# kubelet --version
Kubernetes v1.23.17
2.5 Import the K8s 1.23 images (skip this step if the servers have unrestricted internet access and can pull from k8s.gcr.io directly)
[root@master119 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.23.17 62bc5d8258d6 22 months ago 130MB
k8s.gcr.io/kube-controller-manager v1.23.17 1dab4fc7b6e0 22 months ago 120MB
k8s.gcr.io/kube-proxy v1.23.17 f21c8d21558c 22 months ago 111MB
k8s.gcr.io/kube-scheduler v1.23.17 bc6794cb54ac 22 months ago 51.9MB
k8s.gcr.io/etcd 3.5.6-0 fce326961ae2 2 years ago 299MB
k8s.gcr.io/coredns v1.8.6 a4ca41631cc7 3 years ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 3 years ago 683kB
3. Initialize the K8s Master Components with kubeadm
3.1 Initialize the master node with kubeadm
[root@master119 ~]# kubeadm init --kubernetes-version=v1.23.17 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=nolenlinux.cn
[init] Using Kubernetes version: v1.23.17
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.119:6443 --token 80cco5.pgz7d4a79ob6f3aa \
--discovery-token-ca-cert-hash sha256:0a3f25a7364afbf3fcabb3a89e96365edda8a2b0c9991cd323734166de9ccc95
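The token printed by kubeadm init is only valid for 24 hours by default. If a worker needs to join later, a fresh join command can be generated on the master:

```shell
# Prints a complete `kubeadm join ...` line with a new token and the
# current CA certificate hash:
kubeadm token create --print-join-command
```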
3.2 Copy the kubeconfig file used to manage the K8s cluster
[root@master119 ~]# mkdir -p $HOME/.kube
[root@master119 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master119 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
3.3 Check the cluster status and nodes
[root@master119 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
[root@master119 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master119 NotReady control-plane,master 3m42s v1.23.17
3.4 Join the worker nodes to the K8s cluster
1. Run the join command (with the token generated on the master) on each worker node
[root@worker120 ~]# kubeadm join 10.0.0.119:6443 --token 80cco5.pgz7d4a79ob6f3aa \
> --discovery-token-ca-cert-hash sha256:0a3f25a7364afbf3fcabb3a89e96365edda8a2b0c9991cd323734166de9ccc95
[root@worker121 ~]# kubeadm join 10.0.0.119:6443 --token 80cco5.pgz7d4a79ob6f3aa \
> --discovery-token-ca-cert-hash sha256:0a3f25a7364afbf3fcabb3a89e96365edda8a2b0c9991cd323734166de9ccc95
2. Check the K8s cluster status
[root@master119 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master119 NotReady control-plane,master 6m52s v1.23.17
worker120 NotReady <none> 29s v1.23.17
worker121 NotReady <none> 17s v1.23.17
4. Install the Calico Network Plugin (the VMs need unrestricted internet access to pull the manifests and images; otherwise import the required images on all nodes beforehand)
4.1 Download the Calico operator manifest
[root@master119 ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.2/manifests/tigera-operator.yaml
4.2 Download Calico's custom resources, which define the Pod network
[root@master119 ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.2/manifests/custom-resources.yaml
4.3 Deploy the Tigera operator
[root@master119 ~]# kubectl create -f tigera-operator.yaml
4.4 Change the IP pool to the Pod CIDR used at kubeadm init (10.100.0.0/16) and create the resources
[root@master119 ~]# grep ipPools: custom-resources.yaml -A 2
ipPools:
- blockSize: 26
cidr: 192.168.0.0/16
[root@master119 ~]#
[root@master119 ~]# sed -i '/cidr/s#192.168#10.100#' custom-resources.yaml
[root@master119 ~]#
[root@master119 ~]# grep ipPools: custom-resources.yaml -A 2
ipPools:
- blockSize: 26
cidr: 10.100.0.0/16
[root@master119 ~]#
[root@master119 ~]# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
[root@master119 ~]#
4.5 Confirm that all Pods are running
[root@master119:~ /calico]# kubectl get pods -o wide -n calico-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-76d5c7cfc-pkgmw 1/1 Running 0 9m35s 10.100.209.131 master119 <none> <none>
calico-node-6464f 1/1 Running 0 9m36s 10.0.0.119 master119 <none> <none>
calico-node-fz2tq 1/1 Running 0 9m36s 10.0.0.121 worker121 <none> <none>
calico-node-jsn2k 1/1 Running 0 9m36s 10.0.0.120 worker120 <none> <none>
calico-typha-8496cdf4c5-4srf9 1/1 Running 0 9m27s 10.0.0.121 worker121 <none> <none>
calico-typha-8496cdf4c5-x2fp5 1/1 Running 0 9m36s 10.0.0.120 worker120 <none> <none>
csi-node-driver-7rxzb 2/2 Running 0 9m36s 10.100.162.1 worker121 <none> <none>
csi-node-driver-j4prf 2/2 Running 0 9m36s 10.100.209.129 master119 <none> <none>
csi-node-driver-mz8wv 2/2 Running 0 9m36s 10.100.148.129 worker120 <none> <none>
[root@master119:~ /calico]# kubectl get pods -n calico-apiserver -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver-697679ccc7-bbt5f 1/1 Running 0 18m 10.100.148.130 worker120 <none> <none>
calico-apiserver-697679ccc7-fzkgg 1/1 Running 0 18m 10.100.162.8 worker121 <none> <none>
4.6 Verify that the cluster is ready
[root@master119:~ /calico]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master119 Ready control-plane,master 60m v1.23.17
worker120 Ready <none> 53m v1.23.17
worker121 Ready <none> 53m v1.23.17
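With all nodes Ready, a quick smoke test confirms that Pods receive addresses from the 10.100.0.0/16 pool and that cluster DNS works under the nolenlinux.cn service domain set at init. A sketch (the net-test Pod name is arbitrary, and ping may need a root-capable image such as alpine):

```shell
# Launch a throwaway Pod and check that it receives a Pod-network IP:
kubectl run net-test --image=alpine --restart=Never -- sleep 3600
kubectl get pod net-test -o wide   # IP should fall inside 10.100.0.0/16

# From inside the Pod, verify cluster DNS and node reachability:
kubectl exec net-test -- nslookup kubernetes.default.svc.nolenlinux.cn
kubectl exec net-test -- ping -c 3 10.0.0.119

# Clean up:
kubectl delete pod net-test
```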
