Setting up a Kubernetes 1.23 Cluster

1. Prerequisites

  1. Several Linux machines running CentOS 7
  2. At least 2 GB of RAM and 2 or more CPUs per machine
  3. Full network connectivity between the machines, with outbound internet access
  4. Firewall disabled, swap disabled
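The requirements above can be verified before starting; a quick sketch (reads standard procfs paths):

```shell
# Quick pre-flight check for the requirements above (a sketch)
cpus=$(nproc)
mem_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
echo "CPUs: $cpus (need >= 2)"
echo "RAM:  ${mem_mb} MiB (need >= 2048)"
# after swap is disabled, this should print nothing
tail -n +2 /proc/swaps
```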

2. Prepare the environment

node         IP
k8s-master   10.0.0.111
k8s-node1    10.0.0.112
k8s-node2    10.0.0.113

(The /etc/hosts entries and shell prompts later in this post use the names k8s111/k8s112/k8s113. Whichever naming you choose, keep the hostname, the /etc/hosts entries, and the kubeadm arguments consistent.)

3. Run the following commands on all three hosts

Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent (takes effect after reboot)
setenforce 0 # temporary (immediate)
[root@k8s111 ~]# getenforce
Disabled

Disable swap

The swap partition is virtual memory: once physical RAM is exhausted, disk space is used as if it were RAM, which hurts performance. kubelet also refuses to start while swap is enabled (by default), so swap must be turned off here. If it cannot be disabled, the cluster configuration has to be changed to tolerate it.

swapoff -a # temporary
sed -i '/ swap /s/^/#/' /etc/fstab # permanent
cat /etc/fstab # verify

Bridge settings

So that iptables on each host can see bridged traffic, enable bridge filtering and IP forwarding.

Create the modules-load.d/k8s.conf file

[root@k8s-master ~]# cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
[root@k8s-master ~]#

Create the sysctl.d/k8s.conf file

[root@k8s-master ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master ~]#

Apply the configuration

[root@k8s-master ~]# sysctl --system

Load the br_netfilter bridge-filtering module and the overlay network module

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# modprobe overlay

Verify that the modules loaded successfully

[root@k8s-master ~]# lsmod | grep -e br_netfilter -e overlay
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
overlay                91659  0 
[root@k8s-master ~]#

Configure IPVS

kube-proxy supports two proxy modes for Services: iptables-based and ipvs-based. ipvs performs better, but its kernel modules must be loaded manually before it can be used.

Install ipset and ipvsadm

[root@k8s-master ~]# yum install ipset ipvsadm

Create the script /etc/sysconfig/modules/ipvs.modules with the following content

[root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
 
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@k8s-master ~]#

Make the script executable, then run it

[root@k8s-master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]#

Verify that the modules loaded successfully

[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  2 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@k8s-master ~]#

Set the hostname (a different one on each machine)

hostnamectl set-hostname <hostname>

Add hosts entries

cat >> /etc/hosts << EOF
10.0.0.111 k8s111
10.0.0.112 k8s112
10.0.0.113 k8s113
EOF
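It is worth confirming that every entry resolves on every machine; a small sketch (hostnames taken from the hosts block above):

```shell
# Verify each cluster hostname resolves; run on every machine
for h in k8s111 k8s112 k8s113; do
    getent hosts "$h" || echo "$h does not resolve - check /etc/hosts"
done
```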

Install Docker

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum list docker-ce --showduplicates
yum -y install docker-ce-20.10.7 docker-ce-cli-20.10.7
yum -y install bash-completion  # command completion
source /usr/share/bash-completion/bash_completion
mkdir -pv /etc/docker && cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["k8s111:5000"], 
  "registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com"], 
  "exec-opts": ["native.cgroupdriver=systemd"]  
}
EOF
systemctl enable docker && systemctl start docker
docker run -d --network host --restart always --name oldboyedu-registry registry:2
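kubeadm expects docker and kubelet to agree on the systemd cgroup driver, so before relying on the daemon it is worth sanity-checking the daemon.json written above; a sketch:

```shell
# Sanity-check the daemon config written above (a sketch)
conf=/etc/docker/daemon.json
if grep -q 'native.cgroupdriver=systemd' "$conf" 2>/dev/null; then
    echo "cgroup driver set to systemd in $conf"
else
    echo "warning: systemd cgroup driver not configured in $conf"
fi
```

With the daemon running, `docker info --format '{{.CgroupDriver}}'` should print `systemd`.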

Install kubeadm, kubelet, and kubectl

Add the Aliyun yum repository

[root@k8s-master ~]# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]#

Install the packages, then enable and start kubelet

[root@k8s-master ~]# yum install -y --setopt=obsoletes=0 kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
[root@k8s-master ~]# systemctl enable kubelet --now 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-master ~]#

Notes:

obsoletes=1 (the default) means an update also removes the rpm it obsoletes; obsoletes=0 keeps the old package
Once kubelet is running, journalctl -f -u kubelet shows its detailed logs
Under kubeadm, kubelet uses systemd as its cgroup driver by default (matching the docker configuration above)
From this point kubelet restarts every few seconds: it is in a crash loop, waiting for instructions from kubeadm
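The driver kubelet will actually use can be inspected once kubeadm has written its config; a sketch (/var/lib/kubelet/config.yaml is the kubeadm default path):

```shell
# Check the cgroup driver kubelet will use once kubeadm has written its config
grep -i cgroupDriver /var/lib/kubelet/config.yaml 2>/dev/null \
    || echo "no kubelet config yet - kubeadm init/join has not run"
```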

Pull the images each machine needs

Check which image versions the cluster needs

[root@k8s111 ~]# kubeadm config images list
I0322 15:31:42.997877   12558 version.go:255] remote version is much newer: v1.26.3; falling back to: stable-1.23
k8s.gcr.io/kube-apiserver:v1.23.17
k8s.gcr.io/kube-controller-manager:v1.23.17
k8s.gcr.io/kube-scheduler:v1.23.17
k8s.gcr.io/kube-proxy:v1.23.17
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
[root@k8s-master ~]#

Create the image download script images.sh, then run it. Worker nodes only need kube-proxy and pause. Note that kubeadm config images list above fell back to the newest 1.23 patch release (v1.23.17); the kube-* image tags must match the --kubernetes-version actually passed to kubeadm init below (v1.23.6), so adjust them accordingly.

[root@k8s-master ~]# tee ./images.sh <<'EOF'
#!/bin/bash

images=(
kube-apiserver:v1.23.6
kube-controller-manager:v1.23.6
kube-scheduler:v1.23.6
kube-proxy:v1.23.6
pause:3.6
etcd:3.5.1-0
coredns:v1.8.6
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF
[root@k8s-master ~]#
[root@k8s-master ~]# chmod +x ./images.sh && ./images.sh

4. Initialize the cluster

Run the following on the master only. --apiserver-advertise-address is the master's IP address, and --control-plane-endpoint must be a hostname that resolves via /etc/hosts (here k8s111). Do not put comments after the line-continuation backslashes: anything after a trailing \ breaks the command.

[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=10.0.0.111 \
--control-plane-endpoint=k8s111 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
......output omitted......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token 0qc9py.n6az0o2jy1tryg2b \
	--discovery-token-ca-cert-hash sha256:f049a62946af45c27d9a387468d598906fd68e6d918d925ce699cb4f2a32e111 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token 0qc9py.n6az0o2jy1tryg2b \
	--discovery-token-ca-cert-hash sha256:f049a62946af45c27d9a387468d598906fd68e6d918d925ce699cb4f2a32e111 
[root@k8s-master ~]#

Notes:

Add --v=6 or --v=10 etc. for more verbose logs
None of the network ranges may overlap. For example, 192.168.0.0/16 overlaps with 192.168.2.0/24
--pod-network-cidr: the IP range for the pod network; the value above can be used as-is
--service-cidr: the IP range for Service virtual IPs; the default is 10.96.0.0/12, and the value above can be used as-is
--apiserver-advertise-address: the IP address the API server listens on
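The no-overlap rule can be checked mechanically; a pure-shell sketch (the helper names are ours, not part of kubeadm):

```shell
# Sketch: check that two CIDR blocks do not overlap (helper names are hypothetical)
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }
overlap() {
    # overlap A.B.C.D/P W.X.Y.Z/Q -> exit 0 when the two ranges overlap
    local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
    local m=$(( p1 < p2 ? p1 : p2 ))
    local mask=$(( (0xFFFFFFFF << (32 - m)) & 0xFFFFFFFF ))
    [ $(( $(ip2int "$n1") & mask )) -eq $(( $(ip2int "$n2") & mask )) ]
}
overlap 10.96.0.0/16 10.244.0.0/16 && echo "overlap" || echo "ok"   # the values used above; prints: ok
```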

An alternative way to run kubeadm init

# Print the default configuration
[root@k8s-master ~]# kubeadm config print init-defaults --component-configs KubeletConfiguration
# Save the output as kubeadm-config.yaml and edit it (serviceSubnet and podSubnet sit at the same level, under networking), then pull the images
[root@k8s-master ~]# kubeadm config images pull --config kubeadm-config.yaml
# Initialize
[root@k8s-master ~]# kubeadm init --config kubeadm-config.yaml
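A minimal sketch of what the edited kubeadm-config.yaml might contain, assuming the same values as the command-line init above (only commonly changed fields shown):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
controlPlaneEndpoint: "k8s111:6443"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  # serviceSubnet and podSubnet sit at the same level, under networking
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "10.244.0.0/16"
```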

If init fails, roll back with the following commands

[root@k8s-master ~]# kubeadm reset -f
[root@k8s-master ~]# 
[root@k8s-master ~]# rm -rf /etc/kubernetes
[root@k8s-master ~]# rm -rf /var/lib/etcd/
[root@k8s-master ~]# rm -rf $HOME/.kube

5. Set up .kube/config (master only)

kubectl reads this configuration file

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

6. Install the flannel network plugin (master only)

The plugin is deployed as a DaemonSet controller, so it runs on every node

According to the current README.md on GitHub, it supports Kubernetes 1.17+

If the deployment fails because the images cannot be downloaded, first replace the image registry in the yaml file with a mirror you can reach

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8s-master ~]# 

This pulls two images: rancher/mirrored-flannelcni-flannel:v0.17.0 and rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1

Now check the state of the master

[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                 READY   STATUS    RESTARTS      AGE
kube-system   coredns-65c54cc984-lqcxl             1/1     Running   0             49m
kube-system   coredns-65c54cc984-q2n72             1/1     Running   0             49m
kube-system   etcd-k8s-master                      1/1     Running   2 (14m ago)   49m
kube-system   kube-apiserver-k8s-master            1/1     Running   2 (14m ago)   49m
kube-system   kube-controller-manager-k8s-master   1/1     Running   2 (14m ago)   49m
kube-system   kube-flannel-ds-6v9jg                1/1     Running   0             9m15s
kube-system   kube-proxy-6dz9x                     1/1     Running   2 (14m ago)   49m
kube-system   kube-scheduler-k8s-master            1/1     Running   2 (14m ago)   49m
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   49m   v1.23.6
[root@k8s-master ~]# 

7. Join the worker nodes (nodes only)

The join command comes from the successful kubeadm init output above

[root@k8s-node1 ~]# kubeadm join k8s-master:6443 --token 0qc9py.n6az0o2jy1tryg2b \
	--discovery-token-ca-cert-hash sha256:f049a62946af45c27d9a387468d598906fd68e6d918d925ce699cb4f2a32e111
[preflight] Running pre-flight checks
......output omitted......
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node1 ~]#

The token is valid for 24 hours; generate a new join command on the master with

[root@k8s-master ~]# kubeadm token create --print-join-command

Then watch progress on the master with watch -n 3 kubectl get pods -A and kubectl get nodes

8. Letting worker nodes run kubectl

On the master, copy $HOME/.kube to each node's $HOME directory

[root@k8s-master ~]# scp -r $HOME/.kube k8s-node1:$HOME

Errors encountered

Error 1: kubelet kept failing because it could not resolve the IP address and hostname we had pasted in earlier, i.e. the hostname passed to kubeadm did not match the /etc/hosts entries.

View the error

 journalctl -xeu kubelet

Fix: re-run the init with a correct, resolvable address and hostname

[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=10.0.0.111 \
--control-plane-endpoint=k8s111 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16

Error 2: files already exist

kubeadm reset
This command reverts the changes made by kubeadm init or kubeadm join
If an external etcd is in use, kubeadm reset does not delete any etcd data
It is meant for wiping a lab environment between experiments; never run it against a production cluster.
posted @ 2023-03-23 09:30 猛踢瘸子nei条好腿