Deploying a highly available Kubernetes 1.26 cluster with kubeadm (containerd)

kubeadm summary

kubeadm deployment
            Two major steps:
                First node of the control plane:
                    kubeadm init
                        command-line options
                        config file: --config

                        Pod Network CIDR:
                            Flannel: 10.244.0.0/16
                            Calico: 192.168.0.0/16
                            Cilium: 10.0.0.0/8
                            ...
                        Service Network CIDR: 10.96.0.0/12
                        Kubernetes version
                        IP address of the current node
                        API Server endpoint
                        Image repository
                            v1.24+: registry.k8s.io
                            v1.23 and earlier: k8s.gcr.io

                            registry.aliyuncs.com/google_containers
                        ...

                        When bringing up the control plane:
                            preflight checks: swap disabled, ...
                            generate the various certificates
                            ...
                            deploy the add-ons: kube-proxy, CoreDNS

                        Generate temporary credentials:
                            bootstrap token
                                ttl

                Add the other control-plane nodes:
                    kubeadm join
                        fetch the certificates from the first control-plane node and verify their integrity against the CA cert hash

                Add the worker nodes:
                    kubeadm join
                        kubelet
                        kube-proxy
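
For reference, the first control-plane node can also be initialized with flags instead of a --config file. This is only a sketch of the flag-based equivalent of the kubeadm.yaml used later in this document (values taken from the environment planned below; adapt them to your own network):

# Sketch: flag-based equivalent of kubeadm init --config kubeadm.yaml
kubeadm init \
  --kubernetes-version=v1.26.0 \
  --apiserver-advertise-address=192.168.19.180 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --cri-socket=unix:///run/containerd/containerd.sock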

k8s environment planning

podSubnet (pod network): 10.244.0.0/16
serviceSubnet (service network): 10.96.0.0/12

Lab environment:
OS: CentOS 7.6
Specs: 4 GiB RAM / 4 vCPU / 60 GB disk
Network: NAT mode

 

Cluster role        IP               Hostname     Installed components
Control-plane node  192.168.19.180   xksmaster1   apiserver, controller-manager, scheduler, kubelet, etcd, kube-proxy, container runtime, calico
Worker node         192.168.19.181   xksnode1     kube-proxy, calico, coredns, container runtime, kubelet
Worker node         192.168.19.182   xksnode2     kube-proxy, calico, coredns, container runtime, kubelet

 

 

 

1.1 Initialize the lab environment for the k8s cluster

Run the following on every node to update the yum repos and the OS:

yum update -y

Install the base software packages on every machine:

yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack telnet ipvsadm

1.1.2 Disable SELinux (run on all k8s machines)

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

#The SELinux change only becomes permanent after a reboot. After rebooting, log back in and run:

getenforce

#If it prints Disabled, SELinux is off

1.1.3 Set the hostnames

On 192.168.19.180 run:

hostnamectl set-hostname xksmaster1 && bash

On 192.168.19.181 run:

hostnamectl set-hostname xksnode1 && bash

1.1.4 Configure /etc/hosts so the machines can reach each other by hostname

Append the following to /etc/hosts on every machine:

192.168.19.180   xksmaster1

192.168.19.181   xksnode1

The file then looks like this:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.19.180   xksmaster1
192.168.19.181   xksnode1
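
Since every node needs the same entries, you can also append them once and copy the file out; a minimal sketch, assuming the password-less SSH from section 1.1.5 (or typing the password when prompted):

# Sketch: append the entries, then distribute /etc/hosts to the other nodes
cat >> /etc/hosts <<'EOF'
192.168.19.180   xksmaster1
192.168.19.181   xksnode1
EOF
for host in xksnode1; do
  scp /etc/hosts root@$host:/etc/hosts
done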

1.1.5 Configure password-less SSH between the hosts

Configure password-less login from xksmaster1 to the other machines

[root@xksmaster1 ~]# ssh-keygen  #press Enter through every prompt, leave the passphrase empty

Copy the locally generated public key to the remote host

[root@xksmaster1 ~]# ssh-copy-id xksnode1

1.1.6 Turn off the swap partition for better performance, then reboot the servers

#Permanently disable swap: comment out the swap line in /etc/fstab

[root@xksmaster1 ~]# swapoff -a
[root@xksmaster1 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap      swap    defaults        0 0
[root@xksnode1 ~]# swapoff -a
[root@xksnode1 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap      swap    defaults        0 0

Question 1: why must the swap partition be disabled?

Swap is the swap partition: when the machine runs low on memory it spills to swap, but swap is slow, so Kubernetes disallows it by default for performance reasons. kubeadm checks during init whether swap is off and fails if it is not. If you really want to keep swap, pass --ignore-preflight-errors=Swap when installing k8s.
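
Instead of editing /etc/fstab by hand, the same change can be scripted; a minimal sketch (the sed pattern assumes the swap line looks like the commented example above):

# Sketch: turn swap off now and comment out any swap entry in /etc/fstab
swapoff -a
sed -ri 's/^([^#].*\sswap\s+swap\s.*)$/#\1/' /etc/fstab
free -m   # the Swap line should now show 0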

1.1.7 Tune kernel parameters

[root@xksmaster1 ~]# modprobe br_netfilter
[root@xksmaster1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@xksmaster1 ~]# sysctl -p /etc/sysctl.d/k8s.conf

[root@xksnode1 ~]# modprobe br_netfilter
[root@xksnode1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@xksnode1 ~]# sysctl -p /etc/sysctl.d/k8s.conf

Question 1: what does sysctl do?

It configures kernel parameters at runtime.

  -p   load settings from the given file; without a file it loads /etc/sysctl.conf

Question 2: why run modprobe br_netfilter?

/etc/sysctl.d/k8s.conf adds these three parameters:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

Without the module loaded, sysctl -p /etc/sysctl.d/k8s.conf fails with:

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

Fix:

modprobe br_netfilter

Question 3: why enable the net.bridge.bridge-nf-call-iptables kernel parameter?

After installing docker on CentOS, docker info shows these warnings:

WARNING: bridge-nf-call-iptables is disabled

WARNING: bridge-nf-call-ip6tables is disabled

Fix:

vim  /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

Question 4: why enable net.ipv4.ip_forward = 1?

If kubeadm init reports an error related to ip_forward,

it means packet forwarding is not enabled and must be turned on.

net.ipv4.ip_forward controls packet forwarding:

For security reasons, Linux disables packet forwarding by default. Forwarding means that when a host has more than one network interface and one of them receives a packet destined for another network, the packet is sent out through another interface according to the routing table - which is normally a router's job.

To let a Linux system forward packets, set the kernel parameter net.ipv4.ip_forward. It reflects the current forwarding state: 0 means IP forwarding is disabled, 1 means it is enabled.
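
Note that modprobe br_netfilter does not survive a reboot. To have the module loaded automatically at boot, you can register it with systemd-modules-load; a minimal sketch:

# Sketch: load br_netfilter automatically at every boot
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF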

1.1.8 Stop the firewalld firewall

[root@xksmaster1 ~]# systemctl stop firewalld ; systemctl disable firewalld
[root@xksnode1 ~]# systemctl stop firewalld ; systemctl disable firewalld

1.1.9 Configure the Aliyun repo

#Configure the Aliyun repo used to install docker and containerd

[root@xksmaster1 ~]#yum install yum-utils -y
[root@xksnode1 ~]#yum install yum-utils -y
[root@xksmaster1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@xksnode1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.1.10 Configure the Aliyun repo needed for the k8s components

[root@xksmaster1 ~]#cat >  /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

[root@xksnode1 ~]#cat >  /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

1.1.11 Configure time synchronization

On xksmaster1 run:

#Install the ntpdate command

[root@xksmaster1 ~]# yum install ntpdate -y

#Sync with an internet time source

[root@xksmaster1 ~]# ntpdate cn.pool.ntp.org

#Turn the sync into a cron job

[root@xksmaster1 ~]# crontab -e

* * * * * /usr/sbin/ntpdate   cn.pool.ntp.org

#Restart the crond service

[root@xksmaster1 ~]#service crond restart

On xksnode1 run:

#Install the ntpdate command

[root@xksnode1 ~]# yum install ntpdate -y

#Sync with an internet time source

[root@xksnode1 ~]#ntpdate cn.pool.ntp.org

#Turn the sync into a cron job

[root@xksnode1 ~]#crontab -e
* * * * * /usr/sbin/ntpdate   cn.pool.ntp.org

#Restart the crond service

[root@xksnode1 ~]#service crond restart
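
ntpdate is deprecated on CentOS 7; if you prefer, the same result can be achieved with chrony instead of a cron job. A minimal sketch, assuming the default pool servers in /etc/chrony.conf are reachable:

# Sketch: continuous time sync with chronyd instead of an ntpdate cron job
yum install chrony -y
systemctl enable chronyd --now
chronyc sources   # verify that at least one time source is reachable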

1.1.12 Install the base software packages

[root@xksmaster1 ~]# yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack telnet ipvsadm
[root@xksnode1 ~]# yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack telnet ipvsadm

1.2 Install the containerd service

1.2.1 Install containerd

yum list | grep containerd
[root@xksmaster1 ~]#yum install  containerd.io-1.6.6 -y

If the install fails with the dependency error "Requires: container-selinux >= 2:2.74", add the Aliyun CentOS base repo and install container-selinux first:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y container-selinux

Next, generate containerd's configuration file:

[root@xksmaster1 ~]#mkdir -p /etc/containerd
[root@xksmaster1 ~]#containerd config default > /etc/containerd/config.toml

Edit the configuration file:

vim /etc/containerd/config.toml
Change SystemdCgroup = false to SystemdCgroup = true
Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
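
The two edits can also be made non-interactively; a minimal sketch (the patterns assume the default config.toml generated above):

# Sketch: apply the two config.toml changes with sed instead of vim
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml   # verify both changes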

Enable containerd at boot and start it:

[root@xksmaster1 ~]#systemctl enable containerd  --now
[root@xksnode1 ~]#yum install  containerd.io-1.6.6 -y

As on the master, if the install on xksnode1 fails with "Requires: container-selinux >= 2:2.74", add the repo and install container-selinux:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y container-selinux

Then generate containerd's configuration file:

[root@xksnode1 ~]#mkdir -p /etc/containerd
[root@xksnode1 ~]#containerd config default > /etc/containerd/config.toml

Edit the configuration file:

vim /etc/containerd/config.toml

Change SystemdCgroup = false to SystemdCgroup = true

Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"

Enable containerd at boot and start it:

[root@xksnode1 ~]#systemctl enable containerd  --now

Create /etc/crictl.yaml:

[root@xksmaster1 ~]#cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@xksmaster1 ~]#systemctl restart  containerd

[root@xksnode1 ~]#cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@xksnode1 ~]#systemctl restart  containerd

Configure a registry mirror for containerd; apply the following on every k8s node:

Edit /etc/containerd/config.toml

Find config_path = "" and change it to:

config_path = "/etc/containerd/certs.d"
mkdir /etc/containerd/certs.d/docker.io/ -p
vim /etc/containerd/certs.d/docker.io/hosts.toml
#write the following:
[host."https://qryj5zfu.mirror.aliyuncs.com",host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
Restart containerd:
systemctl restart containerd

Note: docker is installed as well; docker and containerd do not conflict. Docker is only needed here so that images can be built from a Dockerfile.

[root@xksmaster1 ~]#yum install  docker-ce  -y
[root@xksnode1 ~]#yum install  docker-ce  -y
[root@xksmaster1 ~]#systemctl enable docker --now
[root@xksnode1 ~]#systemctl enable docker --now

Configure a registry mirror for docker; apply the following on every k8s node:

vim /etc/docker/daemon.json
Write the following:
{
 "registry-mirrors":["https://qryj5zfu.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"]
}

Restart docker:

systemctl restart docker
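
To confirm the daemon picked up the mirrors, you can check docker info; a quick sketch:

# Sketch: the configured mirrors should be listed under "Registry Mirrors"
docker info | grep -A 5 "Registry Mirrors"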

1.3 Install the packages needed to initialize k8s

[root@xksmaster1 ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@xksmaster1 ~]# systemctl enable kubelet

[root@xksnode1 ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@xksnode1 ~]# systemctl enable kubelet

Note: what each package does

kubeadm:  the tool used to initialize the k8s cluster

kubelet:  installed on every node; it starts Pods. In a kubeadm-installed cluster the control-plane and worker components all run as pods, and any node that runs pods needs a kubelet

kubectl:  used to deploy and manage applications, inspect resources, and create, delete and update components

1.4 Initialize the k8s cluster with kubeadm

#Set the container runtime endpoint

[root@xksmaster1 ~]# crictl config runtime-endpoint unix:///run/containerd/containerd.sock
[root@xksnode1 ~]# crictl config runtime-endpoint unix:///run/containerd/containerd.sock

#Use kubeadm to initialize the k8s cluster

[root@xksmaster1 ~]# kubeadm config print init-defaults > kubeadm.yaml

Adjust the generated file to our needs: change imageRepository, set the kube-proxy mode to ipvs, and - because containerd is the runtime - set cgroupDriver to systemd.

The kubeadm.yaml configuration file looks like this:

apiVersion: kubeadm.k8s.io/v1beta3
...
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.19.180 #control-plane node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock  #use the containerd runtime
  imagePullPolicy: IfNotPresent
  name:  xksmaster1 #control-plane hostname
  taints: null
---
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
# use the Aliyun image registry
kind: ClusterConfiguration
kubernetesVersion: 1.26.0 #k8s version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 #pod network, add this line
  serviceSubnet: 10.96.0.0/12 #Service network
scheduler: {}
#append the following at the end of the file (copy it including the ---):
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
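
Before running the real init, the config can be exercised with kubeadm's dry-run mode, which runs the preflight checks and prints what would be created without changing the node; a minimal sketch:

# Sketch: validate kubeadm.yaml without touching the node
kubeadm init --config=kubeadm.yaml --dry-run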

#Initialize the k8s cluster based on kubeadm.yaml

List the images a given version needs:
kubeadm config images list --kubernetes-version v1.24.3
kubeadm config images list --kubernetes-version v1.25.x
kubeadm config images list --kubernetes-version v1.26.1

#With a proxy you can pull them directly; without one, switch to the Aliyun registry
registry.cn-hangzhou.aliyuncs.com/google_containers/...

Try pre-pulling the images (the sample output below is from a v1.25.0 config):

[root@ca-k8s-master01 kubeadm]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

Alternatively:

#e.g. for v1.28.2, pull the images yourself with registry.k8s.io replaced by registry.lank8s.cn
crictl pull registry.lank8s.cn/kube-apiserver:v1.28.2                 
crictl pull registry.lank8s.cn/kube-controller-manager:v1.28.2        
crictl pull registry.lank8s.cn/kube-scheduler:v1.28.2                 
crictl pull registry.lank8s.cn/kube-proxy:v1.28.2                     
crictl pull registry.lank8s.cn/pause:3.9                              
crictl pull registry.lank8s.cn/etcd:3.5.9-0                           
crictl pull registry.lank8s.cn/coredns/coredns:v1.10.1

#crictl images
IMAGE                                        TAG                 IMAGE ID            SIZE
registry.lank8s.cn/coredns/coredns           v1.10.1             ead0a4a53df89       16.2MB
registry.lank8s.cn/etcd                      3.5.9-0             73deb9a3f7025       103MB
registry.lank8s.cn/kube-apiserver            v1.28.2             cdcab12b2dd16       34.7MB
registry.lank8s.cn/kube-controller-manager   v1.28.2             55f13c92defb1       33.4MB
registry.lank8s.cn/kube-proxy                v1.28.2             c120fed2beb84       24.6MB
registry.lank8s.cn/kube-scheduler            v1.28.2             7a5d9d67a13f6       18.8MB
registry.lank8s.cn/pause                     3.9                 e6f1816883972       322kB

#re-tag them, because kubeadm pulls from registry.k8s.io by default (you could also point it at the Aliyun registry instead)
ctr -n k8s.io images tag registry.lank8s.cn/kube-controller-manager:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2        
ctr -n k8s.io images tag registry.lank8s.cn/kube-scheduler:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2                 
ctr -n k8s.io images tag registry.lank8s.cn/kube-proxy:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2                     
ctr -n k8s.io images tag registry.lank8s.cn/pause:3.9 registry.k8s.io/pause:3.9                              
ctr -n k8s.io images tag registry.lank8s.cn/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0                          
ctr -n k8s.io images tag registry.lank8s.cn/coredns/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1
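
The pull-and-retag steps can be collapsed into a loop; a minimal sketch for the same v1.28.2 image list (kube-apiserver is included here even though it is missing from the retag commands above):

# Sketch: pull from the mirror and re-tag to the names kubeadm expects
for img in kube-apiserver:v1.28.2 kube-controller-manager:v1.28.2 kube-scheduler:v1.28.2 \
           kube-proxy:v1.28.2 pause:3.9 etcd:3.5.9-0 coredns/coredns:v1.10.1; do
  crictl pull registry.lank8s.cn/$img
  ctr -n k8s.io images tag registry.lank8s.cn/$img registry.k8s.io/$img
done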
[root@xksmaster1 ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
[root@xksnode1 ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
Note: where does k8s_1.26.0.tar.gz come from?
This tarball bundles all the images needed to install k8s. It was produced during my first 1.26.0 install by exporting the pulled images with ctr images export into k8s_1.26.0.tar.gz. If you install a different version you do not need this archive - just pull the images from the network.

ctr is the CLI that ships with containerd. It is namespace-aware, and k8s-related images live in the k8s.io namespace by default, so the import must specify that namespace.

#import the images into the k8s.io namespace with ctr

ctr -n=k8s.io images import k8s_1.26.0.tar.gz

#list the images - they should now show up

crictl images

#init
kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=Swap

Output like the one shown further below means the installation succeeded.
Note: --image-repository registry.aliyuncs.com/google_containers overrides kubeadm's default registry (registry.k8s.io) so that images are not pulled from sites outside China.
Since we imported the offline images locally, the local images are used first.

mode: ipvs sets the kube-proxy proxy mode to ipvs. Without it kube-proxy defaults to iptables, which is less efficient, so ipvs is recommended in production; managed Kubernetes on Alibaba Cloud and Huawei Cloud also offers an ipvs mode.


#Set up the kubeconfig for kubectl; this authorizes kubectl to manage the k8s cluster with the admin certificate
[root@xksmaster1 ~]# mkdir -p $HOME/.kube
[root@xksmaster1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@xksmaster1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@xksmaster1 ~]# kubectl get nodes
NAME              STATUS         ROLES                  AGE     VERSION
xksmaster1   NotReady       control-plane         2m25s   v1.26.0
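
The node shows NotReady because no CNI plugin is installed yet (that happens in section 1.6). You can also confirm that the control-plane pods themselves came up; a quick sketch:

# Sketch: the control-plane pods should be Running; coredns stays Pending until the CNI is installed
kubectl get pods -n kube-system -o wide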

1.5 Scale out the k8s cluster - add the first worker node

On xksmaster1, print the join command:

[root@xksmaster1 ~]# kubeadm token create --print-join-command

It prints: kubeadm join 192.168.19.180:6443 --token ktmg75.h756tay5fp1pw2ot --discovery-token-ca-cert-hash sha256:3eb76bbb8ebe90d7b3477757d97bd6efea4db49b4160cc1dc0a978bf8f01c055

Join xksnode1 to the k8s cluster:

[root@xksnode1 ~]# kubeadm join 192.168.19.180:6443 --token ktmg75.h756tay5fp1pw2ot --discovery-token-ca-cert-hash sha256:3eb76bbb8ebe90d7b3477757d97bd6efea4db49b4160cc1dc0a978bf8f01c055 --ignore-preflight-errors=SystemVerification

#The output above shows that xksnode1 has joined the cluster as a worker node

#Check the cluster nodes on xksmaster1:

[root@xksmaster1 ~]# kubectl get nodes
NAME              STATUS     ROLES               AGE   VERSION
xksmaster1      Ready   control-plane,master      49m   v1.26.0
xksnode1         NotReady   <none>                    39s   v1.26.0

#Optionally label xksnode1 so it shows a "work" role

[root@xksmaster1 ~]# kubectl label nodes xksnode1 node-role.kubernetes.io/work=work
[root@xksmaster1 ~]# kubectl get nodes
NAME              STATUS     ROLES           AGE     VERSION
xksmaster1   NotReady   control-plane   10m     v1.26.0
xksnode1     NotReady   work           27s     v1.26.0

1.6 Install the Kubernetes network add-on - Calico

Copy calico.tar.gz (the images calico needs) to xksmaster1 and xksnode1 and import it manually:

[root@xksmaster1 ~]# ctr -n=k8s.io images import calico.tar.gz
[root@xksnode1 ~]# ctr -n=k8s.io images import calico.tar.gz
Upload calico.yaml to xksmaster1 and install the calico network plugin from that manifest.
[root@xksmaster1 ~]# kubectl apply -f  calico.yaml
unpacking docker.io/calico/node:v3.18.0 (sha256:486d1fc89d07fb7ee9b5947814218950d41bfdb02b5a827ee919503dc635fdd8)...done
unpacking docker.io/calico/pod2daemon-flexvol:v3.18.0 (sha256:2e710069e4a7d6b312d323645140198a4881173e2da0f5d1ac869e0d9072c073)...done
unpacking docker.io/calico/kube-controllers:v3.18.0 (sha256:e7955af25219e2855aaca3e742fe79b89f14a4759aed52b5bc7ffea25f0696b7)...done
unpacking docker.io/calico/cni:v3.18.0 (sha256:b522c4c863576b449a026b242d89bcc75a84d79a6f0997ba75bd2d6a6b3fba39)...done

Note: the manifest can also be downloaded online from https://docs.projectcalico.org/manifests/calico.yaml

 

[root@xksmaster1 ~]# kubectl get node
NAME              STATUS   ROLES           AGE   VERSION
xksmaster1   Ready    control-plane   36m   v1.26.0
xksnode1     Ready    work            21m   v1.26.0

Notes on the calico manifest

1. DaemonSet configuration

……

      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: docker.io/calico/node:v3.18.0
……
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # pod network CIDR
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"

IP_AUTODETECTION_METHOD controls how the node IP is detected. By default the IP of the first network interface is used; on nodes with multiple NICs you can select the right one with a regular expression, e.g. "interface=eth.*" picks the interface whose name starts with eth.

            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"

1.7 Test that a pod created in k8s can reach the network

#Upload busybox-1-28.tar.gz to xksnode1 and import it manually

[root@xksnode1 ~]# ctr -n k8s.io images import busybox-1-28.tar.gz
[root@xksmaster1 ~]# kubectl run busybox --image docker.io/library/busybox:1.28 --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh
/ # ping www.baidu.com
PING www.baidu.com (39.156.66.18): 56 data bytes
64 bytes from 39.156.66.18: seq=0 ttl=127 time=39.3 ms

#The pod can reach the network, which shows the calico network plugin is installed and working
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # exit #leave the pod

10.96.0.10 is the cluster IP of CoreDNS, so CoreDNS is configured correctly.

Internal Service names are resolved through CoreDNS.

#Note:

Use the pinned busybox 1.28 image, not the latest version; with the latest busybox, nslookup fails to resolve the DNS name and IP

1.8 Differences between ctr and crictl

Background: while deploying k8s you constantly work with images (pull, delete, list, ...)

Question: ctr and crictl overlap in many functions but also differ - where exactly is the difference?

Explanation:

1. ctr is the CLI that ships with containerd; crictl is the client for the CRI (Container Runtime Interface), which is what k8s uses to talk to containerd.

[root@xksnode1 ~]# cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
pull-image-on-create: false
disable-pull-on-run: false

systemctl restart  containerd

2. The concrete command differences can be explored with --help on either tool. crictl lacks some image-management features, probably because at the k8s level image management is left to the user: a pod's containers can be pointed at a unified registry, and image management can be handled by tooling such as Harbor.
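
As an illustration of the overlap, here are a few common operations side by side; this is only a sketch - consult each tool's --help for the full list:

# Sketch: common ctr vs crictl operations
crictl images                                               # CRI view - what kubelet sees
ctr -n k8s.io images ls                                     # containerd view of the same namespace
crictl pull docker.io/library/busybox:1.28                  # pull via CRI
ctr -n k8s.io images pull docker.io/library/busybox:1.28    # pull via containerd directly
crictl ps -a                                                # CRI containers created by kubelet
ctr -n k8s.io containers ls                                 # raw containerd containers
crictl pods                                                 # pod sandboxes - ctr has no pod concept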

1.9 Scale out the k8s cluster - add the second worker node (same steps as xksnode1)

Bring up a new machine, xksnode2

1.1.1 Give the machine a static IP

vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.19.182
NETMASK=255.255.255.0
GATEWAY=192.168.19.2
DNS1=192.168.19.2
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes

#After changing the file, restart the network service for the change to take effect:

service network restart

Notes on the fields in /etc/sysconfig/network-scripts/ifcfg-ens33:

NAME=ens33    #interface name, keep it the same as DEVICE

DEVICE=ens33   #device name; check yours with ip addr - it may differ per machine

BOOTPROTO=static   #static means a static IP address

ONBOOT=yes    #bring the interface up at boot; must be yes

IPADDR=192.168.19.182   #IP address, must be in the same subnet as your host

NETMASK=255.255.255.0  #netmask, must match your subnet

GATEWAY=192.168.19.2   #gateway; on your own PC run ipconfig /all in cmd to find it

DNS1=192.168.19.2    #DNS; on your own PC run ipconfig /all in cmd to find it

 

Update the yum repos and the OS:

yum update -y

Install the base software packages:

yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack telnet ipvsadm

1.1.2 Disable SELinux (run on all k8s machines)

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

#The SELinux change only becomes permanent after a reboot. After rebooting, log back in and run:

getenforce

#If it prints Disabled, SELinux is off

1.1.3 Set the hostname

On 192.168.19.182 run:

hostnamectl set-hostname xksnode2 && bash

1.1.4 Configure /etc/hosts so the machines can reach each other by hostname

Append the following to /etc/hosts on every machine:

192.168.19.180   xksmaster1

192.168.19.181   xksnode1

192.168.19.182   xksnode2

The file then looks like this:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.19.180   xksmaster1

192.168.19.181   xksnode1 

192.168.19.182   xksnode2

1.1.5 Configure password-less login from xksmaster1 to xksnode2

Configure password-less login from xksmaster1 to the other machines

Copy the locally generated public key to the remote host

[root@xksmaster1 ~]# ssh-copy-id xksnode2

1.1.6 Turn off the swap partition for better performance

[root@xksnode2 ~]# swapoff -a
[root@xksnode2 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3932         174        3564          11         193        3515
Swap:             0           0           0
[root@xksnode2 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap      swap    defaults        0 0

1.1.7 Tune kernel parameters

[root@xksnode2 ~]# modprobe br_netfilter
[root@xksnode2 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@xksnode2 ~]# sysctl -p /etc/sysctl.d/k8s.conf

1.1.8 Stop the firewalld firewall

[root@xksnode2 ~]# systemctl stop firewalld ; systemctl disable firewalld

1.1.9 Configure the Aliyun repo

[root@xksnode2 ~]#yum install yum-utils -y

[root@xksnode2 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.1.10 Configure the Aliyun repo needed for the k8s components

[root@xksnode2 ~]#cat >  /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

1.1.11 Configure time synchronization

On xksnode2 run:

#Install the ntpdate command

[root@xksnode2 ~]# yum install ntpdate -y

#Sync with an internet time source

[root@xksnode2 ~]#ntpdate cn.pool.ntp.org

#Turn the sync into a cron job

[root@xksnode2 ~]#crontab -e

* * * * * /usr/sbin/ntpdate   cn.pool.ntp.org

1.2.1 Install containerd

[root@xksnode2 ~]#yum install  containerd.io-1.6.6 -y

If the install fails with "Requires: container-selinux >= 2:2.74":

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y container-selinux

Next, generate containerd's configuration file:

[root@xksnode2 ~]#mkdir -p /etc/containerd
[root@xksnode2 ~]#containerd config default > /etc/containerd/config.toml

Edit the configuration file:

Open /etc/containerd/config.toml

Change SystemdCgroup = false to SystemdCgroup = true

Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"

Enable containerd at boot and start it:

[root@xksnode2 ~]#systemctl enable containerd  --now

Create /etc/crictl.yaml:

[root@xksnode2 ~]#cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@xksnode2 ~]#systemctl restart  containerd

 

Configure a registry mirror for containerd as follows:

Edit /etc/containerd/config.toml

Find config_path = "" and change it to:

config_path = "/etc/containerd/certs.d"

#save and quit

mkdir /etc/containerd/certs.d/docker.io/ -p

vim /etc/containerd/certs.d/docker.io/hosts.toml

#write the following:

[host."https://vh3bm52y.mirror.aliyuncs.com",host."https://registry.docker-cn.com"]
  capabilities = ["pull","push"]

Restart containerd:

systemctl restart containerd

Note: docker is installed as well; docker and containerd do not conflict. Docker is only needed here so that images can be built from a Dockerfile.

[root@xksnode2 ~]#yum install  docker-ce  -y

[root@xksnode2 ~]#systemctl enable docker --now

Configure a registry mirror for docker

vim /etc/docker/daemon.json

Write the following:

{
 "registry-mirrors":["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"]
}

Restart docker:

systemctl restart docker

1.3 Install the packages needed to initialize k8s

[root@xksnode2 ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@xksnode2 ~]# systemctl enable kubelet

Note: what each package does

kubeadm:  the tool used to initialize the k8s cluster

kubelet:  installed on every node; it starts Pods

kubectl:  used to deploy and manage applications, inspect resources, and create, delete and update components

 

Join xksnode2 to the k8s cluster:

On xksmaster1, print the join command:

[root@xksmaster1 ~]# kubeadm token create --print-join-command
It prints:

kubeadm join 192.168.19.180:6443 --token ktmg75.h756tay5fp1pw2ot --discovery-token-ca-cert-hash sha256:3eb76bbb8ebe90d7b3477757d97bd6efea4db49b4160cc1dc0a978bf8f01c055

Join xksnode2 to the cluster:
[root@xksnode2 ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
[root@xksnode2 ~]# ctr -n=k8s.io images import calico.tar.gz
[root@xksnode2 ~]# kubeadm join 192.168.19.180:6443 --token ktmg75.h756tay5fp1pw2ot --discovery-token-ca-cert-hash sha256:3eb76bbb8ebe90d7b3477757d97bd6efea4db49b4160cc1dc0a978bf8f01c055 --ignore-preflight-errors=SystemVerification

#Check the cluster nodes on xksmaster1:

[root@xksmaster1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
xksmaster1   Ready    control-plane,master   49m   v1.26.0
xksnode1     Ready    <none>                 39s   v1.26.0
xksnode2     Ready    <none>                 39s   v1.26.0

 

 

=== Appendix: errors encountered during installation

[root@xksmaster1 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
        [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
        [WARNING Hostname]: hostname "node" could not be reached
        [WARNING Hostname]: hostname "node": lookup node on 192.168.19.2:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node] and IPs [10.96.0.1 192.168.19.180]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node] and IPs [192.168.19.180 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node] and IPs [192.168.19.180 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
You have new mail in /var/spool/mail/root


Check kubelet with journalctl -xeu kubelet - it shows that swap must be disabled:

Mar 01 11:05:42 xksmaster1 kubelet[30650]: E0301 11:05:42.823085 30650 run.go:74] "command failed" err="failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on
Mar 01 11:05:42 xksmaster1 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Mar 01 11:05:42 xksmaster1 systemd[1]: Unit kubelet.service entered failed state.
Mar 01 11:05:42 xksmaster1 systemd[1]: kubelet.service failed.

 

Re-initializing without removing the previous manifests (rm -rf /etc/kubernetes/manifests) fails with:

[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Hostname]: hostname "node" could not be reached
[WARNING Hostname]: hostname "node": lookup node on 192.168.19.2:53: server misbehaving
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

 

Because of the repeated install attempts, some components were still running and had to be force-stopped:

[root@xksmaster1 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "node" could not be reached
[WARNING Hostname]: hostname "node": lookup node on 192.168.19.2:53: server misbehaving
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10250]: Port 10250 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
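
A cleaner way to recover from a half-finished init than deleting files and killing processes by hand is kubeadm reset; a minimal sketch (it tears down the static-pod manifests and etcd data, but leaves CNI and iptables/ipvs state for you to clean up):

# Sketch: wipe the partial control plane before retrying kubeadm init
kubeadm reset -f
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -X
ipvsadm --clear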

 Installation succeeded:

[root@xksmaster1 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "node" could not be reached
        [WARNING Hostname]: hostname "node": lookup node on 192.168.19.2:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.002138 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.19.180:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:3eb76bbb8ebe90d7b3477757d97bd6efea4db49b4160cc1dc0a978bf8f01c055

 

 

 

Joining the other nodes

[root@xksnode1 ~]# kubeadm join 192.168.19.180:6443 --token 0ivs5a.zjlp2h9jd60cqryw --discovery-token-ca-cert-hash sha256:3eb76bbb8ebe90d7b3477757d97bd6efea4db49b4160cc1dc0a978bf8f01c055 --ignore-preflight-errors=SystemVerification
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 
