K8S Architecture Overview and Deploying a K8S Cluster with kubeadm
I. K8S cluster architecture
A cluster consists of master nodes and worker nodes.
Master node components:
- controller manager
Maintains the cluster state (runs the control loops).
- Scheduler
Schedules Pods onto nodes.
- etcd (https)
Stores all cluster data.
- api-server
The single entry point for accessing the K8S cluster.
Worker node components:
- kube-proxy
Handles Pod load balancing and service discovery.
- kubelet
Manages the Pod lifecycle and reports status to the apiserver.
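Once the cluster from the later sections is up, a quick way to see where these components actually live (a minimal sketch; run on the master after section III):
# api-server, controller-manager, scheduler and etcd run as static Pods, and kube-proxy as a DaemonSet, all in kube-system:
kubectl -n kube-system get pods -o wide
# kubelet is the only component that runs directly as a systemd service on each node:
systemctl status kubelet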
II. K8S cluster deployment: preparing the environment on each node
The deployment is container-based (Docker as the container runtime).
1. Preparing the cluster environment
Reference:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
2. Choosing the Kubernetes version
We use K8S 1.23.17. The 1.23 release line came out at the end of 2021, and 1.23.17 was its final patch release, published in early 2023 when the line reached end of life.
3. Deploying the K8S cluster with kubeadm
1. Disable the swap partition
swapoff -a && sysctl -w vm.swappiness=0 # disable swap for the current boot
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab # disable it in the config file
2. Permanently disable the swap partition
cat /etc/fstab
/dev/disk/by-uuid/a40b9242-c53b-4040-8cd5-a3094216bb09 none swap sw,noauto 0 0
Find the entry above, add noauto after sw, and then reboot.
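A quick verification that swap is really off (run on every node; a minimal check, not part of the original procedure):
free -h | grep -i swap # the Swap line should show 0B
swapon --show # no output means no active swap devices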
3. Ensure each node's MAC address and product_uuid are unique
ifconfig eth0 | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid
Tip:
Generally speaking, physical hardware has unique addresses, but some virtual machines (e.g. clones) may share them.
Kubernetes uses these values to uniquely identify the nodes in the cluster. If they are not unique on every node, the installation may fail.
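If password-less SSH between the nodes is already set up, the values can be compared in one pass; a sketch that assumes the hostnames master231/worker232/worker233 resolve and that the NIC is eth0:
for host in master231 worker232 worker233; do
  echo "== $host =="
  ssh $host "cat /sys/class/dmi/id/product_uuid; ip link show eth0 | awk '/ether/ {print \$2}'"
done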
4. Check network connectivity between nodes
In short, verify that the nodes of your K8S cluster can reach each other (and that they have outbound access for downloading packages); the ping command is enough for a quick test.
ping baidu.com -c 10
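To test connectivity between the cluster nodes themselves, also ping each node's IP (10.0.0.231-233 are the addresses used later in this document; substitute your own):
for ip in 10.0.0.231 10.0.0.232 10.0.0.233; do ping -c 3 $ip; done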
5. Let iptables see bridged traffic
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
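The k8s.conf modules file only takes effect at the next boot, so load the module now and confirm the settings; a quick verification sketch:
modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward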
6. Check that the required ports are free
Reference: https://kubernetes.io/zh-cn/docs/reference/networking/ports-and-protocols/
Check whether the component ports on the master and worker nodes are already occupied.
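A quick check with ss, using the port numbers from the reference above (6443, 2379-2380, 10250, 10257, 10259 on the master; 10250 plus the NodePort range 30000-32767 on workers); no output means the port is free:
# on the master node
ss -tlnp | grep -E ':(6443|2379|2380|10250|10257|10259) '
# on the worker nodes
ss -tlnp | grep ':10250 '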
7. On all nodes, change the cgroup driver to systemd
Install the Docker environment.
Download the Docker package:
autoinstall-docker-docker-compose.tar.gz
tar xf autoinstall-docker-docker-compose.tar.gz
./install-docker.sh i
Run the following command on all nodes to verify:
[root@master231 ~]# docker info | grep "Cgroup Driver:"
Cgroup Driver: systemd
Tip:
On CentOS, if you do not change the cgroup driver to systemd, the default is cgroupfs and the master node initialization will fail!
The example below is from a CentOS system; Ubuntu users can skip this step.
[root@master231 ~]# docker info | grep cgroup
Cgroup Driver: cgroupfs
[root@master231 ~]#
[root@master231 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn/","https://hub-mirror.c.163.com/","https://reg-mirror.qiniu.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master231 ~]#
[root@master231 ~]# systemctl restart docker
[root@master231 ~]#
[root@master231 ~]# docker info | grep "Cgroup Driver"
Cgroup Driver: systemd
[root@master231 ~]#
8. Install kubeadm, kubelet, and kubectl on all nodes
8.1 Package overview
You need to install the following packages on every machine:
kubeadm:
The tool used to bootstrap the K8S cluster.
kubelet:
Runs on every node of the cluster and is responsible for starting Pods and containers.
kubectl:
The command-line tool used to talk to the K8S cluster.
8.2 Configure the package repository on all K8S nodes
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
8.3 Check which K8S versions the repository offers
[root@master231 ~]# apt-cache madison kubeadm
kubeadm | 1.28.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.28.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.28.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
...
kubeadm | 1.23.17-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.23.16-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.23.15-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.23.14-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
...
8.4 Install kubelet, kubeadm, and kubectl
apt-get -y install kubelet=1.23.17-00 kubeadm=1.23.17-00 kubectl=1.23.17-00
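Optionally, pin the versions so a routine apt-get upgrade does not bump the cluster components unexpectedly (a common practice, not part of the original procedure):
apt-mark hold kubelet kubeadm kubectl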
8.5 Check the component versions
[root@worker232 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:33:14Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
[root@worker232 ~]#
[root@worker232 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:34:27Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@worker232 ~]#
[root@worker232 ~]# kubelet --version
Kubernetes v1.23.17
[root@worker232 ~]#
Tip:
Check the other two nodes as well so that every node runs the same version shown here. The "connection to the server localhost:8080 was refused" message from kubectl version is expected at this point, because no kubeconfig has been configured yet.
Reference:
https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/
9. Check the time zone (on every node)
[root@master231 ~]# date -R
Mon, 09 Sep 2024 14:58:34 +0800
[root@master231 ~]#
[root@master231 ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 33 Aug 30 15:27 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai
[root@master231 ~]#
10. Shut down and take a snapshot
This makes it easy to roll back if a later step goes wrong.
III. Initializing the K8S master components with kubeadm
Import the images and set the time zone.
# Import the images
[root@master231 ~]# wget http://192.168.11.253/Image/Kubernetes/images/K8S%20Cluster/oldboyedu-master-1.23.17.tar.gz
[root@master231 ~]# docker load -i oldboyedu-master-1.23.17.tar.gz
# Set the time zone
[root@master231 ~]# ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
'/etc/localtime' -> '/usr/share/zoneinfo/Asia/Shanghai'
[root@master231 ~]#
[root@worker233 ~]# ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
'/etc/localtime' -> '/usr/share/zoneinfo/Asia/Shanghai'
[root@worker233 ~]#
[root@worker232 ~]# ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
'/etc/localtime' -> '/usr/share/zoneinfo/Asia/Shanghai'
[root@worker232 ~]#
1. Initialize the master node with kubeadm
[root@master231 ~]# kubeadm init --kubernetes-version=v1.23.17 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=oldboyedu.com
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.231:6443 --token iiviy9.z6w6a4b27amj4iui \
--discovery-token-ca-cert-hash sha256:93e8350a3d73276a0bc298ec8ba5343d2611cfe99abfd529c00f58a27ef7c92c
[root@master231 ~]#
Tip:
Your token will differ from mine; save it. By default the token is valid for 24 hours, so your worker nodes must join the cluster within that window.
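If the token expires before the workers join, there is no need to re-initialize; a new token and join command can be generated on the master (standard kubeadm subcommand):
kubeadm token create --print-join-command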
Parameter notes:
--kubernetes-version:
Version of the K8S control-plane components.
--image-repository:
Image registry from which the control-plane images are pulled.
--pod-network-cidr:
CIDR of the Pod network.
--service-cidr:
CIDR of the Service network.
--service-dns-domain:
DNS domain for Services; defaults to "cluster.local" if not specified.
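The same settings can also be kept in a file and passed with kubeadm init --config, which is easier to review and version-control. A rough equivalent of the flags above, assuming the kubeadm.k8s.io/v1beta3 config API used by 1.23 (a sketch, not a drop-in file):
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.17
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.100.0.0/16
  serviceSubnet: 10.200.0.0/16
  dnsDomain: oldboyedu.com
EOF
# kubeadm init --config kubeadm-config.yaml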
When kubeadm initializes the cluster, you will see phase output like the following:
[init]
Prints the K8S version being initialized.
[preflight]
Pre-flight work for installing the cluster, such as pulling images; the time this takes depends on your network speed.
[certs]
Generates the certificate files, stored under "/etc/kubernetes/pki" by default.
[kubeconfig]
Generates the cluster's default kubeconfig files, stored under "/etc/kubernetes" by default.
[kubelet-start]
Starts the kubelet;
environment variables are written to "/var/lib/kubelet/kubeadm-flags.env",
the configuration file is written to "/var/lib/kubelet/config.yaml".
[control-plane]
Uses the static Pod directory; the manifests are stored in "/etc/kubernetes/manifests" by default.
This step creates the static Pods "kube-apiserver", "kube-controller-manager", and "kube-scheduler".
[etcd]
Creates the etcd static Pod; its manifest is also stored in "/etc/kubernetes/manifests".
[wait-control-plane]
Waits for the kubelet to start the static Pods from the manifest directory "/etc/kubernetes/manifests".
[apiclient]
Waits for all master components to be up and running.
[upload-config]
Creates a ConfigMap named "kubeadm-config" in the "kube-system" namespace.
[kubelet]
Creates a ConfigMap named "kubelet-config-1.23" in the "kube-system" namespace, containing the kubelet configuration for the cluster.
[upload-certs]
Skipped on this node; see "--upload-certs" for details.
[mark-control-plane]
Marks the control plane by adding labels and taints, identifying the node as a master.
[bootstrap-token]
Creates the bootstrap token, e.g. "kbkgsa.fc97518diw8bdqid".
This token is used later when joining nodes to the cluster, and it is also useful for RBAC.
[kubelet-finalize]
Updates the kubelet's certificate information.
[addons]
Installs add-ons, such as "CoreDNS" and "kube-proxy".
2. Copy the admin kubeconfig, used to manage the K8S cluster
[root@master231 ~]# mkdir -p $HOME/.kube
[root@master231 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master231 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. View the cluster nodes
[root@master231 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
[root@master231 ~]#
[root@master231 ~]#
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 117s v1.23.17
[root@master231 ~]#
Troubleshooting a failed master initialization
Possible causes:
- swap was not disabled, so initialization cannot complete;
- the node has fewer than 2 CPU cores (kubeadm's preflight check requires at least 2), so initialization cannot complete;
- the images were not imported manually;
Solutions:
- 1. Check whether any of the situations above apply
free -h
lscpu
- 2. Reset the current node environment
[root@master231 ~]# kubeadm reset -f
- 3. Try initializing the master node again (if the cause is still unclear, see the log-checking sketch below)
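When the checks above do not reveal the cause, the kubelet and container logs usually point at the failing component; a couple of generic diagnostic commands (not specific to this environment):
journalctl -u kubelet --no-pager -n 50 # recent kubelet logs
docker ps -a | grep -E 'kube|etcd' # see which control-plane containers exited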
IV. Deploying the worker components with kubeadm
0. Optionally import the images in advance
[root@worker232 ~]# wget http://192.168.11.253/Image/Kubernetes/images/K8S%20Cluster/oldboyedu-slave-1.23.17.tar.gz
[root@worker233 ~]# wget http://192.168.11.253/Image/Kubernetes/images/K8S%20Cluster/oldboyedu-slave-1.23.17.tar.gz
[root@worker232 ~]# docker load -i oldboyedu-slave-1.23.17.tar.gz
[root@worker233 ~]# docker load -i oldboyedu-slave-1.23.17.tar.gz
1. Run the join command on the worker nodes. [Note: do not copy mine verbatim, it will not work for you; use your own token.]
[root@worker232 ~]# kubeadm join 10.0.0.231:6443 --token iiviy9.z6w6a4b27amj4iui \
--discovery-token-ca-cert-hash sha256:93e8350a3d73276a0bc298ec8ba5343d2611cfe99abfd529c00f58a27ef7c92c
[root@worker233 ~]# kubeadm join 10.0.0.231:6443 --token iiviy9.z6w6a4b27amj4iui \
--discovery-token-ca-cert-hash sha256:93e8350a3d73276a0bc298ec8ba5343d2611cfe99abfd529c00f58a27ef7c92c
Tip:
The command above joins the worker node to the cluster that was initialized on the master. Use the token generated in the previous step.
2. On the master node, check the cluster's worker node list
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 32m v1.23.17
worker232 NotReady <none> 36s v1.23.17
worker233 NotReady <none> 25s v1.23.17
[root@master231 ~]#
Tip:
At this point the K8S components are deployed, but the container network is not ready yet, so all nodes remain in the "NotReady" state.
V. Deploying the flannel CNI plugin
List of compatible CNI plugins to choose from:
https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/addons/
flannel project page:
https://github.com/flannel-io/flannel#deploying-flannel-manually
1. Manually import the images on all nodes
Because Docker Hub cannot be reached directly from this environment, the VMs would need to go through a proxy to pull the images.
If you cannot use a proxy, import the images manually on all K8S nodes as follows:
[root@master231 ~]# wget http://192.168.11.253/Image/Kubernetes/images/K8S%20Cluster/oldboyedu-cni-v1.5.1-flannel-v0.25.6.tar.gz
[root@master231 ~]# docker load -i oldboyedu-cni-v1.5.1-flannel-v0.25.6.tar.gz
[root@worker232 ~]# wget http://192.168.11.253/Image/Kubernetes/images/K8S%20Cluster/oldboyedu-cni-v1.5.1-flannel-v0.25.6.tar.gz
[root@worker232 ~]# docker load -i oldboyedu-cni-v1.5.1-flannel-v0.25.6.tar.gz
[root@worker233 ~]# wget http://192.168.11.253/Image/Kubernetes/images/K8S%20Cluster/oldboyedu-cni-v1.5.1-flannel-v0.25.6.tar.gz
[root@worker233 ~]# docker load -i oldboyedu-cni-v1.5.1-flannel-v0.25.6.tar.gz
2. Download the Flannel manifest
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Edit the Pod network segment in the manifest so that it matches the --pod-network-cidr used during kubeadm init (10.100.0.0/16 here); see the sketch below.
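The segment lives in the kube-flannel-cfg ConfigMap inside kube-flannel.yml (key net-conf.json); the upstream manifest defaults to 10.244.0.0/16, so change it to the Pod CIDR chosen earlier. A minimal sketch:
grep -A2 '"Network"' kube-flannel.yml # show the current Pod network setting
sed -i 's#10.244.0.0/16#10.100.0.0/16#' kube-flannel.yml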
3. Install the Flannel components
[root@master231 ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master231 ~]#
4. Check that the flannel components were installed successfully
[root@master231 ~]# kubectl get pod -o wide -n kube-flannel
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-7mchd 1/1 Running 0 20s 10.0.0.232 worker232 <none> <none>
kube-flannel-ds-ccwl7 1/1 Running 0 20s 10.0.0.231 master231 <none> <none>
kube-flannel-ds-wzzq9 1/1 Running 0 20s 10.0.0.233 worker233 <none> <none>
[root@master231 ~]#
5. Check that all nodes are now Ready
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 58m v1.23.17
worker232 Ready <none> 26m v1.23.17
worker233 Ready <none> 25m v1.23.17
[root@master231 ~]#
6. Check that the flannel.1 interface exists (check on all nodes)
[root@master231 ~]# ifconfig
cni0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.100.0.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::7498:efff:fe86:d4bc prefixlen 64 scopeid 0x20<link>
ether 3a:28:99:ca:1f:85 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 164 (164.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::d88c:e5ff:fe0f:b4ba prefixlen 64 scopeid 0x20<link>
ether da:8c:e5:0f:b4:ba txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 33 overruns 0 carrier 0 collisions 0
...
[root@master231 ~]#
Troubleshooting: the cni0 bridge is missing while flannel.1 exists
1. Problem description
Some nodes have only the flannel.1 device and no cni0 network device; in that case the cni0 bridge has to be created manually.
2. Solution
If a node is missing the cni0 interface, create the bridge device by hand, making sure its subnet matches that node's Pod subnet (a way to look the subnet up is sketched after the commands below).
- Create the cni0 bridge manually
---> Assuming master231's flannel.1 is on the 10.100.0.0 subnet:
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.0.1/24 dev cni0
---> Assuming worker232's flannel.1 is on the 10.100.1.0 subnet:
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.1.1/24 dev cni0
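The authoritative subnet for each node is the podCIDR recorded in the API server, so it can be looked up instead of guessed from flannel.1; a sketch (run on the master):
kubectl get node worker232 -o jsonpath='{.spec.podCIDR}' # e.g. 10.100.1.0/24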
VI. Verifying that the Pod CNI network works
1. Write the Pod manifests
[root@master231 ~]# cat > oldboyedu-network-cni-test.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v1
spec:
  nodeName: worker232
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    name: xiuxian
---
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v2
spec:
  nodeName: worker233
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
    name: xiuxian
EOF
2. Create the Pod resources
[root@master231 ~]# kubectl apply -f oldboyedu-network-cni-test.yaml
pod/xiuxian-v1 created
pod/xiuxian-v2 created
[root@master231 ~]#
3. View the Pod list
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-v1 1/1 Running 0 11s 10.100.1.2 worker232 <none> <none>
xiuxian-v2 1/1 Running 0 11s 10.100.2.4 worker233 <none> <none>
[root@master231 ~]#
4. Access the Pod services on worker232 and worker233
[root@master231 ~]# curl 10.100.1.2
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>yinzhengjie apps v1</title>
<style>
div img {
width: 900px;
height: 600px;
margin: 0;
}
</style>
</head>
<body>
<h1 style="color: green">凡人修仙传 v1 </h1>
<div>
<img src="1.jpg">
<div>
</body>
</html>
[root@master231 ~]#
[root@master231 ~]# curl 10.100.2.4
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>yinzhengjie apps v2</title>
<style>
div img {
width: 900px;
height: 600px;
margin: 0;
}
</style>
</head>
<body>
<h1 style="color: red">凡人修仙传 v2 </h1>
<div>
<img src="2.jpg">
<div>
</body>
</html>
[root@master231 ~]#
[root@master231 ~]# kubectl delete -f oldboyedu-network-cni-test.yaml
pod "xiuxian-v1" deleted
pod "xiuxian-v2" deleted
[root@master231 ~]#
[root@master231 ~]# kubectl get pods
No resources found in default namespace.
[root@master231 ~]#
VII. Enabling kubectl auto-completion
1. Add the completion script to the shell environment
[root@master231 ~]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master231 ~]#
[root@master231 ~]# echo source '$HOME/.kube/completion.bash.inc' >> ~/.bashrc
[root@master231 ~]#
[root@master231 ~]# source ~/.bashrc
[root@master231 ~]#
2. Verify auto-completion
[root@master231 ~]# kubectl # press Tab twice and check that the subcommands appear
alpha auth cordon diff get patch run version
annotate autoscale cp drain help plugin scale wait
api-resources certificate create edit kustomize port-forward set
api-versions cluster-info debug exec label proxy taint
apply completion delete explain logs replace top
attach config describe expose options rollout uncordon
[root@master231 ~]#
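If you also use a short alias for kubectl, the completion function can be attached to it as well (optional; this is the standard bash-completion hook for kubectl, and the alias name k is just an example):
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc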
