K8s Active-Standby Master Installation (Containerd)

This is an original document and was not easy to write; please do not repost it without permission. If you have questions about it, you can reach me by email: yinwanit@163.com

This article walks through building a multi-master Kubernetes cluster with kubeadm on CentOS 7.8, using Containerd as the container runtime.

Prerequisite: the servers must have internet access.

Environment

Nodes

Hostname IP address Operating system Role
master01.cc 192.168.100.151 CentOS Linux release 7.8 Master node
master02.cc 192.168.100.152 CentOS Linux release 7.8 Master node
master03.cc 192.168.100.153 CentOS Linux release 7.8 Master node

Software environment

Software Version Role
CentOS Linux release 7.8 Operating system
containerd.io 1.6.14-3.1 Container runtime
cri-tools 1.25.0-0 CRI client (crictl) for containerd
nerdctl 1.1.0 containerd client
cni-plugins v1.1.1 CNI plugins
metrics-server v0.6.2 Cluster performance-monitoring add-on
kubectl 1.26.0-0 Kubernetes command-line tool
kubeadm 1.26.0-0 Cluster bootstrap tool
kubelet 1.26.0-0 Node agent
calico v3.24.5 Kubernetes network plugin
pause 3.7 Sandbox (infra) container
keepalived 1.3.5-19 High-availability (VIP) software

 

IP address plan

IP address Role
192.168.100.151 k8s Master node 1
192.168.100.152 k8s Master node 2
192.168.100.153 k8s Master node 3
192.168.100.150 Master VIP
10.244.0.0/16 Pod network CIDR

 

Step overview

  1. Operating system configuration: set IP addresses and hostnames, disable the firewall, SELinux, and swap, edit /etc/hosts, configure yum repositories
  2. Install and configure containerd
  3. Adjust kernel parameters
  4. Install the kubelet packages
  5. Initialize the K8s cluster
  6. Join the remaining control plane nodes to the cluster
  7. Install the network plugin
  8. Install metrics-server to collect cluster performance data

Procedure

1. Operating system configuration

All operations in this section must be performed on every node.

Set hostnames

Set each node's hostname to the name from the plan above.

# hostnamectl set-hostname <planned-hostname>
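
For example, matching the plan above:

# hostnamectl set-hostname master01.cc   # on 192.168.100.151
# hostnamectl set-hostname master02.cc   # on 192.168.100.152
# hostnamectl set-hostname master03.cc   # on 192.168.100.153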

Configure IP addresses

Configure each node's IP address according to the plan above.

# systemctl stop NetworkManager;systemctl disable NetworkManager;systemctl mask NetworkManager
# cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-<your-NIC-name>
TYPE=Ethernet
BOOTPROTO=none
NAME=<your-NIC-name>
DEVICE=<your-NIC-name>
ONBOOT=yes
IPADDR=<the-IP-address-to-set>
NETMASK=255.255.255.0
GATEWAY=<the-gateway-for-that-IP>
DNS1=<the-DNS-server-to-use>
EOF
# systemctl restart network
# ip a
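
As a filled-in sketch for master01, assuming the NIC is ens33 (the interface this guide later uses in the keepalived and calico configs); the gateway and DNS values here are hypothetical placeholders, substitute your own:

# cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.100.151
NETMASK=255.255.255.0
# GATEWAY and DNS1 below are hypothetical examples; use your real values
GATEWAY=192.168.100.2
DNS1=192.168.100.2
EOF
# systemctl restart network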

Disable SELinux

Set SELINUX=disabled in /etc/selinux/config.

# sed  -i 's/SELINUX=.*/SELINUX=disabled/g'  /etc/selinux/config
# setenforce  0

Disable swap

Delete the swap mount line from /etc/fstab and run swapoff -a to turn swap off immediately.

# swapoff  -a
# sed -i '/swap/d'  /etc/fstab
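
Verify that no swap remains active (swapon should print nothing and free should report 0B of swap):

# swapon --show
# free -h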

Disable the firewall

Stop the firewall and prevent it from starting at boot.

# systemctl stop firewalld;systemctl disable firewalld;systemctl mask firewalld
# systemctl status firewalld 

Set up /etc/hosts

Add entries for all nodes to /etc/hosts.

# cat >> /etc/hosts <<EOF
192.168.100.150 master00.cc  master00
192.168.100.151 master01.cc  master01
192.168.100.152 master02.cc  master02
192.168.100.153 master03.cc  master03
EOF

Configure yum repositories

Back up the current yum repo files and create new ones.

# mkdir -p /etc/yum.repos.d/bak/
# mv /etc/yum.repos.d/CentOS* /etc/yum.repos.d/bak/
# Create four repo files in /etc/yum.repos.d/: k8s.repo, epel.repo, docker.repo, and CentOS-Base.repo, with the contents shown below.
# yum clean all
# yum repolist 

 

k8s.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

epel.repo

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

docker.repo

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

CentOS-Base.repo

[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

Install common tools

Install the commonly used tools vim, net-tools, bash-completion, and wget.

# yum install vim net-tools bash-completion  wget -y 
# source  /etc/profile.d/bash_completion.sh
# echo 'set paste'  >> ~/.vimrc

2. Install and configure containerd

All operations in this section must be performed on every node.

Install containerd

# yum install containerd.io-1.6.14 cri-tools -y
# crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
# systemctl restart containerd ; systemctl enable containerd
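
As a quick sanity check that crictl can reach containerd over the socket configured above:

# crictl version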

Configure a containerd registry mirror

Configure a registry mirror for the containerd runtime: below the [plugins."io.containerd.grpc.v1.cri".registry.mirrors] line, add [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"], then on the next line endpoint = ["https://sdmy9bft.mirror.aliyuncs.com"], as shown below (substitute your own mirror address).

# containerd config default > /etc/containerd/config.toml
# vim /etc/containerd/config.toml
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://sdmy9bft.mirror.aliyuncs.com"]

Set the cgroup driver and the sandbox_image address

In /etc/containerd/config.toml, set SystemdCgroup = true under the runc.options section, and point sandbox_image at the Aliyun mirror.

# sed   -i    's/SystemdCgroup.*/SystemdCgroup\ =\ true/g' /etc/containerd/config.toml 
# sed   -i    's#sandbox_image.*#sandbox_image\ =\ \"registry.aliyuncs.com/google_containers/pause:3.7\"#g'   /etc/containerd/config.toml
# systemctl restart containerd
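
To confirm both edits landed, the first grep should print SystemdCgroup = true and the second should show the Aliyun pause:3.7 image:

# grep SystemdCgroup /etc/containerd/config.toml
# grep sandbox_image /etc/containerd/config.toml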

3. Install container tools

All operations in this section must be performed on every node.

Install the CNI plugins

Download the package from https://github.com/containernetworking/plugins/releases/.

# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
# mkdir -p /opt/cni/bin/  
# tar -zxf   cni-plugins-linux-amd64-v1.1.1.tgz  -C  /opt/cni/bin/
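
The archive should unpack the standard plugin binaries (bridge, host-local, loopback, portmap, and so on), which calico relies on later:

# ls /opt/cni/bin/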

Install nerdctl

nerdctl is a containerd client whose command set mirrors docker's, so containerd can be operated with familiar docker-style commands.

Download the package from https://github.com/containerd/nerdctl/releases; the minimal build (the one without "full" in its name) is sufficient.

# wget https://github.com/containerd/nerdctl/releases/download/v1.1.0/nerdctl-1.1.0-linux-amd64.tar.gz
# tar -zxvf nerdctl-1.1.0-linux-amd64.tar.gz
# mv nerdctl /bin/
# echo 'source <(nerdctl completion bash)' >> /etc/profile
# echo 'export CONTAINERD_NAMESPACE=k8s.io' >> /etc/profile
# source /etc/profile
# nerdctl images 

4. Adjust kernel parameters

All operations in this section must be performed on every node.

Enable packet forwarding and load the kernel modules Kubernetes requires.

# cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# modprobe overlay
# modprobe br_netfilter
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl -p  /etc/sysctl.d/k8s.conf
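
Confirm that the modules are loaded and the parameters took effect:

# lsmod | grep -e overlay -e br_netfilter
# sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables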

5. Install the kubelet packages

All operations in this section must be performed on every node.

# yum install -y kubelet-1.26.0-0 kubeadm-1.26.0-0 kubectl-1.26.0-0 --disableexcludes=kubernetes
# systemctl restart kubelet ; systemctl enable kubelet

6. Set up passwordless SSH

The three master nodes need passwordless SSH between them. Configure passwordless login from master01 to master02 and master03.

Generate a key pair

Run this step on master01 only.

# ssh-keygen -t rsa

Distribute the key

Copy the key generated on master01 to master02 and master03.

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@master02
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@master03

Test passwordless login

From master01, verify that passwordless login works.

# ssh master02
# ssh 192.168.100.153

7. Install and configure keepalived

Run the following on all three nodes.

Install keepalived

# yum install keepalived -y

Configure keepalived

keepalived.conf on master01:

# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id master01
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.150/24
    }
}

keepalived.conf on master02 (BACKUP state with a lower priority, so master01 holds the VIP by default):

# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id master02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.150/24
    }
}

keepalived.conf on master03 (BACKUP state with the lowest priority):

# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id master03
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.150/24
    }
}

Start keepalived

Run this on all three master nodes.

Start keepalived and enable it at boot.

# systemctl restart keepalived
# systemctl enable keepalived

Verify failover

Find the node that currently holds the VIP, then shut down or reboot that node and check that the VIP floats to another node.
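
A minimal check, assuming the VRRP interface is ens33 as configured above; the VIP should appear on exactly one node at a time:

# ip -4 addr show ens33 | grep 192.168.100.150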

8. Initialize the K8s cluster

Prepare the configuration file

# more kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.26.0
apiServer:
  certSANs:    # list every kube-apiserver hostname, IP, and the VIP
  - master01.cc
  - master02.cc
  - master03.cc
  - 192.168.100.151
  - 192.168.100.152
  - 192.168.100.153
  - 192.168.100.150
controlPlaneEndpoint: "192.168.100.150:6443"
imageRepository: "registry.aliyuncs.com/google_containers"
networking:
  podSubnet: "10.244.0.0/16"
# 
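
Optionally, the control-plane images can be pre-pulled on each master (with kubeadm-config.yaml copied there first) so that kubeadm init and the later joins do not wait on downloads:

# kubeadm config images pull --config kubeadm-config.yaml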

Initialize the cluster

Initialize the cluster with the configuration file, and save the command output once it finishes (the join commands below come from it).

# kubeadm init --config=kubeadm-config.yaml
# After initialization completes, run the commands from its output:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
................................................................
.....(output of the cluster initialization command).....
................................................................

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.100.150:6443 --token tindrd.t4ko4frmtnd0lkr4 \
        --discovery-token-ca-cert-hash sha256:551883587799689a9692e76f30096e5254ced8889b9af91ed83c68e8c3a94f24 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.150:6443 --token tindrd.t4ko4frmtnd0lkr4 \
        --discovery-token-ca-cert-hash sha256:551883587799689a9692e76f30096e5254ced8889b9af91ed83c68e8c3a94f24
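
The bootstrap token above is valid for 24 hours by default. If it expires before the other nodes join, a fresh worker join command can be generated on master01:

# kubeadm token create --print-join-command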

9. Install the calico network plugin

Run this section on master01 only.

Download the calico manifest

Download calico's YAML manifest with wget.

# wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate

Edit the calico manifest

In calico.yaml, uncomment the CALICO_IPV4POOL_CIDR line and the line below it (remove the leading #) and change the CIDR to the Pod network from the plan. Then add an IP_AUTODETECTION_METHOD entry naming ens33 as the communication interface (required on multi-NIC hosts; pick the interface that matches your environment), aligned with CALICO_IPV4POOL_CIDR as shown below.

# vim calico.yaml
 - name: CALICO_IPV4POOL_CIDR
   value: "10.244.0.0/16"
 - name: IP_AUTODETECTION_METHOD
   value: "interface=ens33"

Pre-download the calico images (optional)

Run the downloads on master01 only.

Manually download the calico images and copy them to every node. Running kubectl apply -f calico.yaml will pull them automatically, but slowly; pre-loading them on all nodes speeds this up.

# grep image calico.yaml  | grep -i calico | awk -F'/' '{print $2"/"$3 }' | uniq
# nerdctl pull  calico/cni:v3.24.5
# nerdctl pull  calico/node:v3.24.5
# nerdctl pull  calico/kube-controllers:v3.24.5
# nerdctl save -o calico_image_3_24_5.tar.gz   calico/cni:v3.24.5   calico/node:v3.24.5  calico/kube-controllers:v3.24.5
# scp calico_image_3_24_5.tar.gz  root@master02:/root/
# scp calico_image_3_24_5.tar.gz  root@master03:/root/

Load the images into containerd (run this on every node):

# nerdctl load -i /root/calico_image_3_24_5.tar.gz
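
Because this guide installs nerdctl rather than docker as the containerd client, nerdctl handles the pull/save/load here; with CONTAINERD_NAMESPACE=k8s.io exported earlier, the images land in the namespace kubelet pulls from. A quick check on each node:

# nerdctl images | grep calico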

Install the calico plugin

Run this step on master01 only.

# kubectl apply -f calico.yaml
# kubectl get nodes -o wide
# kubectl get pod -o wide -A

10. Join the other control plane nodes

Distribute certificates

Run this step on master01 only.

Copy the certificates on master01 to master02 and master03.

# Distribute certificates to master02 (192.168.100.152)
# ssh 192.168.100.152 'mkdir -p /etc/kubernetes/pki/etcd/'
# scp /etc/kubernetes/pki/ca.crt root@192.168.100.152:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/ca.key root@192.168.100.152:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/sa.key root@192.168.100.152:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/sa.pub root@192.168.100.152:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/front-proxy-ca.crt root@192.168.100.152:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/front-proxy-ca.key root@192.168.100.152:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/etcd/ca.crt root@192.168.100.152:/etc/kubernetes/pki/etcd/
# scp /etc/kubernetes/pki/etcd/ca.key root@192.168.100.152:/etc/kubernetes/pki/etcd/
# Distribute certificates to master03 (192.168.100.153)
# ssh 192.168.100.153 'mkdir -p /etc/kubernetes/pki/etcd/'
# scp /etc/kubernetes/pki/ca.crt root@192.168.100.153:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/ca.key root@192.168.100.153:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/sa.key root@192.168.100.153:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/sa.pub root@192.168.100.153:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/front-proxy-ca.crt root@192.168.100.153:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/front-proxy-ca.key root@192.168.100.153:/etc/kubernetes/pki/
# scp /etc/kubernetes/pki/etcd/ca.crt root@192.168.100.153:/etc/kubernetes/pki/etcd/
# scp /etc/kubernetes/pki/etcd/ca.key root@192.168.100.153:/etc/kubernetes/pki/etcd/
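
As an alternative sketch to copying the files by hand, stock kubeadm can distribute these certificates itself: it uploads them as a short-lived encrypted secret and prints a certificate key for the joining masters (this flow is not what this walkthrough uses, but it is standard kubeadm):

# kubeadm init phase upload-certs --upload-certs
# The command prints a certificate key; the other masters then join with:
# kubeadm join 192.168.100.150:6443 --token <token> --discovery-token-ca-cert-hash <hash> --control-plane --certificate-key <key-printed-above>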

Join the cluster

Run this on both master02 and master03.

Run the control-plane join command produced by the initialization on master01.

# kubeadm join 192.168.100.150:6443 --token tindrd.t4ko4frmtnd0lkr4 \
        --discovery-token-ca-cert-hash sha256:551883587799689a9692e76f30096e5254ced8889b9af91ed83c68e8c3a94f24 \
        --control-plane
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config 

Configure kubectl auto-completion

Run this on every node.

# echo 'source <(kubectl completion bash)' >>  /etc/profile
# source /etc/profile

Confirm cluster status

Confirm that all nodes are Ready and all pods are running normally.

# kubectl  get nodes -o wide 
# kubectl  get pod -A
