Appendix 031. Kubernetes v1.20.4 High-Availability Deployment, Architecture 2

Introduction to kubeadm

kubeadm Overview

See Appendix 003: Deploying Kubernetes with kubeadm.

kubeadm Features

See Appendix 003: Deploying Kubernetes with kubeadm.

Solution Description

  • This solution deploys Kubernetes 1.20.4 with kubeadm;
  • etcd is stacked, i.e. co-located with the control-plane components;
  • Keepalived provides a highly available VIP;
  • HAProxy runs as a systemd service and reverse-proxies to port 6443 on the three masters (a hypothetical config sketch follows this list);
  • Other major components deployed include:
    • Metrics: resource metrics;
    • Dashboard: the Kubernetes web UI;
    • Helm: the Kubernetes package manager;
    • Ingress: Kubernetes service exposure;
    • Longhorn: dynamic storage for Kubernetes.
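
As a reference, the HAProxy layout this design implies looks roughly like the sketch below: a TCP listener on the VIP port 16443 forwarding to the three masters' port 6443. This is a hypothetical illustration only; the actual file is generated later by the hakek8s.sh script.

global
    daemon

defaults
    mode    tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend k8s-apiserver
    bind *:16443                                # VIP port used by controlPlaneEndpoint
    default_backend k8s-masters

backend k8s-masters
    balance roundrobin
    server master01 172.16.10.11:6443 check
    server master02 172.16.10.12:6443 check
    server master03 172.16.10.13:6443 check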

Deployment Plan

Node Plan

Hostname IP Role Services
master01 172.16.10.11 Kubernetes master node containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico
master02 172.16.10.12 Kubernetes master node containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico
master03 172.16.10.13 Kubernetes master node containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico
worker01 172.16.10.21 Kubernetes worker node containerd, kubelet, proxy, calico
worker02 172.16.10.22 Kubernetes worker node containerd, kubelet, proxy, calico
worker03 172.16.10.23 Kubernetes worker node containerd, kubelet, proxy, calico

High availability in Kubernetes mainly means high availability of the control plane: multiple sets of master components and etcd members, with the worker nodes reaching the masters through a load balancer.
Architecture Diagram

Characteristics of the stacked topology, where etcd runs on the same hosts as the master components:

  • Requires fewer machines
  • Simple to deploy and easy to manage
  • Easy to scale horizontally
  • Higher risk: if one host goes down, a master and an etcd member are lost together, so cluster redundancy takes a noticeably bigger hit

Note: this lab implements Kubernetes high availability with a Keepalived + HAProxy architecture.

Initial Preparation

[root@master01 ~]# hostnamectl set-hostname master01	    # repeat on the other nodes with their own hostnames
[root@master01 ~]# cat >> /etc/hosts << EOF
172.16.10.11 master01
172.16.10.12 master02
172.16.10.13 master03
172.16.10.21 worker01
172.16.10.22 worker02
172.16.10.23 worker03
EOF

[root@master01 ~]# wget http://down.linuxsb.com/k8sconinit.sh

Note: this step only needs to be performed on master01.
Some features may require a kernel upgrade; for the procedure see "018. Upgrading the Linux Kernel". On kernel 4.19 and later, nf_conntrack_ipv4 has been renamed to nf_conntrack.
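
A quick way to confirm the kernel version and the conntrack module name on a node:

[root@master01 ~]# uname -r                                 # 4.19+ ships nf_conntrack instead of nf_conntrack_ipv4
[root@master01 ~]# lsmod | grep -E 'nf_conntrack(_ipv4)?'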

SSH Trust Configuration

To make it easier to distribute files and run commands remotely, this lab configures SSH trust from master01 to the other nodes; an equivalent loop form is sketched after these commands.

[root@master01 ~]# ssh-keygen -f ~/.ssh/id_rsa -N ''
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master03
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker03

Note: this step only needs to be performed on master01.
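
The six ssh-copy-id commands above can equally be written as a loop, matching the style used elsewhere in this document:

[root@master01 ~]# for host in master01 master02 master03 worker01 worker02 worker03
  do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
  done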

Other Preparation

[root@master01 ~]# vi environment.sh

#!/bin/sh
#****************************************************************#
# ScriptName: environment.sh
# Author: xhy
# Create Date: 2020-05-30 16:30
# Modify Author: xhy
# Modify Date: 2020-05-30 16:30
# Version: 
#***************************************************************#
# cluster MASTER node IP array
export MASTER_IPS=(172.16.10.11 172.16.10.12 172.16.10.13)

# hostnames corresponding to the MASTER IPs
export MASTER_NAMES=(master01 master02 master03)

# cluster NODE (worker) IP array
export NODE_IPS=(172.16.10.21 172.16.10.22 172.16.10.23)

# hostnames corresponding to the NODE IPs
export NODE_NAMES=(worker01 worker02 worker03)

# all cluster node IPs
export ALL_IPS=(172.16.10.11 172.16.10.12 172.16.10.13 172.16.10.21 172.16.10.22 172.16.10.23)

# hostnames corresponding to all cluster IPs
export ALL_NAMES=(master01 master02 master03 worker01 worker02 worker03)
[root@master01 ~]# source environment.sh
[root@master01 ~]# chmod +x *.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp -rp /etc/hosts root@${all_ip}:/etc/hosts
    scp -rp k8sconinit.sh root@${all_ip}:/root/
    ssh root@${all_ip} "bash /root/k8sconinit.sh"
  done

Note: the newest containerd release compatible with Kubernetes 1.20.4 is 1.4.3.

Cluster Deployment

Required Packages

The following packages must be installed on every machine:

  • kubeadm: the command used to bootstrap the cluster;
  • kubelet: runs on every node of the cluster and starts pods and containers;
  • kubectl: the command-line tool for talking to the cluster.

kubeadm does not install or manage kubelet or kubectl for you, so you must ensure their versions match the Kubernetes control plane that kubeadm will install. Version mismatches can lead to unexpected errors and problems.
For installing these components, see Appendix 001: Introduction to and Usage of kubectl.

Note: for the component versions compatible with Kubernetes 1.20.4, see: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md.

Installation

[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF"
    ssh root@${all_ip} "yum install -y kubeadm-1.20.4-0.x86_64 kubelet-1.20.4-0.x86_64 kubectl-1.20.4-0.x86_64 --disableexcludes=kubernetes"
    ssh root@${all_ip} "systemctl enable kubelet"
done

[root@master01 ~]# yum list kubelet --showduplicates             # list available versions

Note: as above, this only needs to be run on master01 to install the packages on all nodes automatically. Do not start kubelet at this point; it is started automatically during initialization. If you start it now it will report errors, which can be ignored.

Note: three dependencies are installed alongside: cri-tools, kubernetes-cni and socat:
socat: a dependency of kubelet;
cri-tools: the command-line tools for CRI (Container Runtime Interface).

Deploying the HA Components

Installing HAProxy

[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel wget openssh-clients systemd-devel zlib-devel pcre-devel libnl3-devel"
    ssh root@${master_ip} "wget http://down.linuxsb.com/software/haproxy-2.3.5.tar.gz"
    ssh root@${master_ip} "tar -zxvf haproxy-2.3.5.tar.gz"
    ssh root@${master_ip} "cd haproxy-2.3.5/ && make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_ZLIB=1 USE_SYSTEMD=1 PREFIX=/usr/local/haprpxy && make install PREFIX=/usr/local/haproxy"
    ssh root@${master_ip} "cp /usr/local/haproxy/sbin/haproxy /usr/sbin/"
    ssh root@${master_ip} "useradd -r haproxy && usermod -G haproxy haproxy"
    ssh root@${master_ip} "mkdir -p /etc/haproxy && cp -r /root/haproxy-2.3.5/examples/errorfiles/ /usr/local/haproxy/"
  done

Installing Keepalived

[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "yum -y install curl gcc gcc-c++ make libnl libnl-devel libnl3-devel libnfnetlink-devel openssl-devel"
    ssh root@${master_ip} "wget http://down.linuxsb.com/software/keepalived-2.2.1.tar.gz"
    ssh root@${master_ip} "tar -zxvf keepalived-2.2.1.tar.gz"
    ssh root@${master_ip} "cd keepalived-2.2.1/ && LDFLAGS=\"$LDFAGS -L /usr/local/openssl/lib/\" ./configure --sysconf=/etc --prefix=/usr/local/keepalived && make && make install"
    ssh root@${master_ip} "systemctl enable keepalived && systemctl start keepalived"
  done

Note: as above, this only needs to be run on master01 to install on all masters automatically. If you hit the error: undefined reference to 'OPENSSL_init_ssl', add the openssl lib path:

LDFLAGS="$LDFLAGS -L /usr/local/openssl/lib/" ./configure --sysconf=/etc --prefix=/usr/local/keepalived

Creating the Configuration Files

[root@master01 ~]# wget http://down.linuxsb.com/hakek8s.sh				# fetch the automated deployment script
[root@master01 ~]# chmod u+x hakek8s.sh

[root@master01 ~]# vi hakek8s.sh

#!/bin/sh
#****************************************************************#
# ScriptName: hakek8s.sh
# Author: xhy
# Create Date: 2020-06-08 20:00
# Modify Author: xhy
# Modify Date: 2020-06-15 18:15
# Version: v2
#***************************************************************#

####################
# set variables below to create the config files, all files will create at ./config directory
####################

# master keepalived virtual ip address
export K8SHA_VIP=172.16.10.254

# master01 ip address
export K8SHA_IP1=172.16.10.11

# master02 ip address
export K8SHA_IP2=172.16.10.12

# master03 ip address
export K8SHA_IP3=172.16.10.13

# master01 hostname
export K8SHA_HOST1=master01

# master02 hostname
export K8SHA_HOST2=master02

# master03 hostname
export K8SHA_HOST3=master03

# master01 network interface name
export K8SHA_NETINF1=eth0

# master02 network interface name
export K8SHA_NETINF2=eth0

# master03 network interface name
export K8SHA_NETINF3=eth0

# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d

# kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0

# kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0

[root@master01 ~]# ./hakek8s.sh


Explanation: as above, run only on master01. Running hakek8s.sh generates the following configuration files:

  • kubeadm-config.yaml: the kubeadm initialization configuration, in the current directory
  • keepalived: the keepalived configuration, under /etc/keepalived on each master node
  • haproxy: the haproxy configuration, under /etc/haproxy/ on each master node
  • calico.yaml: the calico network manifest, under config/calico/
[root@master01 ~]# cat kubeadm-config.yaml		# review the cluster initialization configuration
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.20.0.0/16"			     	#设置svc网段
  podSubnet: "10.10.0.0/16"                     #设置Pod网段
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.20.4"			    	#设置安装版本
controlPlaneEndpoint: "172.16.10.254:16443"		#设置相关API VIP地址
apiServer:
  certSANs:
  - master01
  - master02
  - master03
  - 127.0.0.1
  - 172.16.10.11
  - 172.16.10.12
  - 172.16.10.13
  - 172.16.10.254
  timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "k8s.gcr.io"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs


Note: as above, only needed on master01. More on the config file format: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2.
More on the kubeadm init configuration: https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2?tab=doc.
A default kubeadm configuration can be generated with kubeadm config print init-defaults > config.yaml.

Starting the Services

[root@master01 ~]# cat /etc/keepalived/keepalived.conf
[root@master01 ~]# cat /etc/keepalived/check_apiserver.sh	# review the generated Keepalived configuration
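
Both files above are generated by hakek8s.sh. As a rough sketch of what such a health-check script typically contains (the generated script may differ), it exits non-zero whenever the local apiserver, or the VIP endpoint while the VIP is held locally, stops answering, so keepalived lowers its priority and fails the VIP over:

#!/bin/sh
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}
# the local apiserver must answer
curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null \
    || errorExit "Error GET https://localhost:6443/"
# if this node currently holds the VIP, the VIP endpoint must answer too
if ip addr | grep -q 172.16.10.254; then
    curl --silent --max-time 2 --insecure https://172.16.10.254:16443/ -o /dev/null \
        || errorExit "Error GET https://172.16.10.254:16443/"
fi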

[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl start haproxy.service && systemctl enable haproxy.service"
    ssh root@${master_ip} "systemctl start keepalived.service && systemctl enable keepalived.service"
    ssh root@${master_ip} "systemctl status keepalived.service | grep Active"
    ssh root@${master_ip} "systemctl status haproxy.service | grep Active"
  done

[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "ping -c1 172.16.10.254"
  done								# wait ~10s, then run this check

Note: as above, only needed on master01 to start the services on all masters automatically.

Initializing the Cluster: Masters

Pulling the Images

[root@master01 ~]# kubeadm --kubernetes-version=v1.20.4 config images list     	# list the required images

[root@master01 ~]# cat config/conloadimage.sh			                        # confirm versions and pre-pull the images
#!/bin/sh
#****************************************************************#
# ScriptName: conloadimage.sh
# Author: xhy
# Create Date: 2021-02-25 14:03
# Modify Author: xhy
# Modify Date: 2021-02-25 14:03
# Version: 
#***************************************************************#

KUBE_VERSION=v1.20.4
CALICO_VERSION=v3.17.1
CALICO_URL='crictl.io/calico'
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.13-0
CORE_DNS_VERSION=1.7.0
GCR_URL=k8s.gcr.io
METRICS_SERVER_VERSION=v0.4.1
INGRESS_VERSION=v0.41.2
CSI_PROVISIONER_VERSION=v1.4.0
CSI_NODE_DRIVER_VERSION=v1.2.0
CSI_ATTACHER_VERSION=v2.0.0
CSI_RESIZER_VERSION=v0.3.0 
DEFAULTBACKENDVERSION=1.5
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
UCLOUD_URL=uhub.service.ucloud.cn/uxhy
QUAY_URL=quay.io

mkdir -p conimages/

# nodes to distribute the images to (all except master01)
export ALL_NAMES=(master02 master03 worker01 worker02 worker03)

kubeimages=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
pause-amd64:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION}
)

for kubeimageName in ${kubeimages[@]} ; do
echo ${kubeimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${kubeimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${kubeimageName} ${GCR_URL}/${kubeimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${kubeimageName}
ctr -n k8s.io images export conimages/${kubeimageName}\.tar ${GCR_URL}/${kubeimageName}
done

metricsimages=(metrics-server:${METRICS_SERVER_VERSION})

for metricsimageName in ${metricsimages[@]} ; do
echo ${metricsimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${metricsimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${metricsimageName} ${GCR_URL}/metrics-server/${metricsimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${metricsimageName}
ctr -n k8s.io images export conimages/${metricsimageName}\.tar ${GCR_URL}/metrics-server/${metricsimageName}
done

calimages=(cni:${CALICO_VERSION}
pod2daemon-flexvol:${CALICO_VERSION}
node:${CALICO_VERSION}
kube-controllers:${CALICO_VERSION})

for calimageName in ${calimages[@]} ; do
echo ${calimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${calimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${calimageName} ${CALICO_URL}/${calimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${calimageName}
ctr -n k8s.io images export conimages/${calimageName}\.tar ${CALICO_URL}/${calimageName}
done

ingressimages=(controller:${INGRESS_VERSION})

for ingressimageName in ${ingressimages[@]} ; do
echo ${ingressimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${ingressimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${ingressimageName} ${GCR_URL}/ingress-nginx/${ingressimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${ingressimageName}
ctr -n k8s.io images export conimages/${ingressimageName}\.tar ${GCR_URL}/ingress-nginx/${ingressimageName}
done

csiimages=(csi-provisioner:${CSI_PROVISIONER_VERSION}
csi-node-driver-registrar:${CSI_NODE_DRIVER_VERSION}
csi-attacher:${CSI_ATTACHER_VERSION}
csi-resizer:${CSI_RESIZER_VERSION}
)

for csiimageName in ${csiimages[@]} ; do
echo ${csiimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${csiimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${csiimageName} ${QUAY_URL}/k8scsi/${csiimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${csiimageName}
ctr -n k8s.io images export conimages/${csiimageName}\.tar ${QUAY_URL}/k8scsi/${csiimageName}
done

otherimages=(defaultbackend-amd64:${DEFAULTBACKENDVERSION})

for otherimagesName in ${otherimages[@]} ; do
echo ${otherimagesName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${otherimagesName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${otherimagesName} ${GCR_URL}/${otherimagesName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${otherimagesName}
ctr -n k8s.io images export conimages/${otherimagesName}\.tar ${GCR_URL}/${otherimagesName}
done

allimages=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION}
metrics-server:${METRICS_SERVER_VERSION}
cni:${CALICO_VERSION}
pod2daemon-flexvol:${CALICO_VERSION}
node:${CALICO_VERSION}
kube-controllers:${CALICO_VERSION}
controller:${INGRESS_VERSION}
csi-provisioner:${CSI_PROVISIONER_VERSION}
csi-node-driver-registrar:${CSI_NODE_DRIVER_VERSION}
csi-attacher:${CSI_ATTACHER_VERSION}
csi-resizer:${CSI_RESIZER_VERSION}
defaultbackend-amd64:${DEFAULTBACKENDVERSION}
)

for all_name in ${ALL_NAMES[@]}
  do  
    echo ">>> ${all_name}"
    ssh root@${all_name} "mkdir /root/conimages"
    scp -rp conimages/* root@${all_name}:/root/conimages/
  done

for allimageName in ${allimages[@]}
  do
  for all_name in ${ALL_NAMES[@]}
    do
    echo "${allimageName} copy to ${all_name}"
    ssh root@${all_name} "ctr -n k8s.io images import conimages/${allimageName}\.tar"
    done
  done
  
[root@master01 ~]# bash config/conloadimage.sh

Note: as above, only needed on master01; this distributes the images to every node.

[root@master01 ~]# ctr -n k8s.io images ls        	# verify
[root@master01 ~]# crictl images

Initialization on master01

[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs                 # keep the following commands for adding nodes later:
You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.16.10.254:16443 --token 4f772m.kiql0dnan4lx5qoj \
    --discovery-token-ca-cert-hash sha256:e066a9a190ea7fa2619250ce4e2bd0d0fd403afb7abdea8acbab4733584ee8c0 \
    --control-plane --certificate-key d3d695b2fcad2de4f1f8054cef94655a61aa615b696e07a1d5a84203a63777a2

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.10.254:16443 --token 4f772m.kiql0dnan4lx5qoj \
    --discovery-token-ca-cert-hash sha256:e066a9a190ea7fa2619250ce4e2bd0d0fd403afb7abdea8acbab4733584ee8c0


Note: the token above is valid for 24 hours by default; the token and hash can be retrieved with:
kubeadm token list
If the token has expired, a new one can be generated with the following commands:

kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
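
Alternatively, a complete worker join command (token plus hash) can be printed in one step:

kubeadm token create --print-join-command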
[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@master01 ~]# cat << EOF >> ~/.bashrc
export KUBECONFIG=$HOME/.kube/config
EOF							# set the KUBECONFIG environment variable

[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@master01 ~]# source ~/.bashrc

Aside: initialization roughly performs the following steps:

  1. [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
  2. [certificates] generates the various certificates
  3. [kubeconfig] generates the related kubeconfig files
  4. [bootstraptoken] generates the bootstrap token; record it, as it is needed later when adding nodes with kubeadm join

Note: initialization only needs to run on master01; if it fails, reset with kubeadm reset && rm -rf $HOME/.kube.

Adding the Other Master Nodes

[root@master02 ~]# kubeadm join 172.16.10.254:16443 --token 4f772m.kiql0dnan4lx5qoj \
    --discovery-token-ca-cert-hash sha256:e066a9a190ea7fa2619250ce4e2bd0d0fd403afb7abdea8acbab4733584ee8c0 \
    --control-plane --certificate-key d3d695b2fcad2de4f1f8054cef94655a61aa615b696e07a1d5a84203a63777a2
[root@master02 ~]# mkdir -p $HOME/.kube
[root@master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master02 ~]# cat << EOF >> ~/.bashrc
export KUBECONFIG=$HOME/.kube/config
EOF						               	# set the KUBECONFIG environment variable
[root@master02 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@master02 ~]# source ~/.bashrc

Note: run the same steps on master03 to join it to the control plane.
Note: if joining fails, reset with kubeadm reset && rm -rf $HOME/.kube.

Installing the CNI Plugin

CNI Plugin Overview

  • Calico is a secure L3 network and network-policy provider.
  • Canal combines Flannel and Calico, providing networking and network policy.
  • Cilium is an L3 network and network-policy plugin that can transparently enforce HTTP/API/L7 policies; it supports both routing and overlay/encapsulation modes.
  • Contiv provides configurable networking for a variety of use cases (native L3 with BGP, overlay with vxlan, classic L2, and Cisco-SDN/ACI) along with a rich policy framework. The Contiv project is fully open source, and its installer offers both kubeadm and non-kubeadm options.
  • Flannel is an overlay network provider that can be used with Kubernetes.
  • Romana is a layer-3 solution for pod networks that supports the NetworkPolicy API; kubeadm add-on installation details are documented upstream.
  • Weave Net provides networking and network policy that keep working on both sides of a network partition, without requiring an external database.
  • CNI-Genie lets Kubernetes seamlessly connect to one of several CNI plugins, such as Flannel, Calico, Canal, Romana or Weave.

Note: this solution uses the Calico plugin.

Setting Labels

[root@master01 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-         # allow workloads on the masters

Note: after the internal components are deployed, the masters can be restored to master-only scheduling with kubectl taint node master01 node-role.kubernetes.io/master="":NoSchedule.

Deploying Calico

[root@master01 ~]# cat config/calico/calico.yaml | grep -A1 -E 'CALICO_IPV4POOL_CIDR|IP_AUTODETECTION_METHOD|veth_mtu:'	# review the configuration
……
            - name: CALICO_IPV4POOL_CIDR
              value: "10.10.0.0/16"		         	                # confirm the pod subnet
……
  veth_mtu: "1400"                                                  # calico recommends host MTU minus 50
……
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth.*"		     	                # confirm the inter-node interface
# Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
……

[root@master01 ~]# kubectl apply -f config/calico/calico.yaml
[root@master01 ~]# kubectl get pods --all-namespaces -o wide		# watch the rollout
[root@master01 ~]# kubectl get nodes


Modifying the NodePort Range

[root@master01 ~]# vi /etc/kubernetes/manifests/kube-apiserver.yaml
……
    - --service-node-port-range=1-65535
……

Note: this must be done on all master nodes.
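
kube-apiserver runs as a static pod, so kubelet restarts it automatically once the manifest changes; the new flag can be confirmed as follows:

[root@master01 ~]# grep service-node-port-range /etc/kubernetes/manifests/kube-apiserver.yaml
[root@master01 ~]# kubectl -n kube-system get pods -l component=kube-apiserver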

Enabling the Insecure Ports

root@master01:~# vi /etc/kubernetes/manifests/kube-scheduler.yaml
……
#     - --port=0						# delete or comment out this line to re-enable the insecure port
……
root@master01:~# vi /etc/kubernetes/manifests/kube-controller-manager.yaml
……
#     - --port=0					    # delete or comment out this line to re-enable the insecure port
……

Note: this must be done on all master nodes.
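
Once --port=0 has been removed on every master and the static pods have restarted, scheduler and controller-manager should both report Healthy in the component status check:

[root@master01 ~]# kubectl get cs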

Adding Worker Nodes

Adding Worker Nodes

[root@master01 ~]# source environment.sh

[root@master01 ~]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "kubeadm join 172.16.10.254:16443 --token 4f772m.kiql0dnan4lx5qoj \
    --discovery-token-ca-cert-hash sha256:e066a9a190ea7fa2619250ce4e2bd0d0fd403afb7abdea8acbab4733584ee8c0"
    ssh root@${node_ip} "systemctl enable kubelet.service"
  done

Note: as above, only needed on master01 to join all workers to the cluster. If a join fails, the node can be reset as follows:

[root@worker01 ~]# kubeadm reset
[root@worker01 ~]# ifconfig cni0 down
[root@worker01 ~]# ip link delete cni0
[root@worker01 ~]# ifconfig flannel.1 down
[root@worker01 ~]# ip link delete flannel.1
[root@worker01 ~]# rm -rf /var/lib/cni/

Verification

[root@master01 ~]# kubectl get nodes			         	# node status
[root@master01 ~]# kubectl get cs			             	# component status
[root@master01 ~]# kubectl get serviceaccount		     	# service accounts
[root@master01 ~]# kubectl cluster-info			         	# cluster info
[root@master01 ~]# kubectl get pod -n kube-system -o wide	# status of all system pods


Note: for more on kubectl, see: https://kubernetes.io/docs/reference/kubectl/kubectl/
https://kubernetes.io/docs/reference/kubectl/overview/
For more on kubeadm, see: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Deploying Metrics

Metrics Overview

Early Kubernetes versions relied on Heapster for performance data collection and monitoring. From version 1.8 onwards, performance data is exposed through the standardized Metrics API, and from 1.10 Heapster was replaced by Metrics Server. In the new monitoring architecture, Metrics Server provides the core metrics, namely CPU and memory usage of nodes and pods.
Monitoring of other, custom metrics is handled by components such as Prometheus.

Enabling the Aggregation Layer

For background on the aggregation layer, see: https://blog.csdn.net/liukuan73/article/details/81352637
Clusters deployed with kubeadm have it enabled by default.
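
To double-check, the aggregation layer shows up as a set of requestheader flags on the apiserver:

[root@master01 ~]# grep requestheader /etc/kubernetes/manifests/kube-apiserver.yaml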

Fetching the Manifest

[root@master01 ~]# mkdir metrics
[root@master01 ~]# cd metrics/
[root@master01 metrics]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
[root@master01 metrics]# vi components.yaml
……
apiVersion: apps/v1
kind: Deployment
……
spec:
  replicas: 3						        	# adjust the replica count to the cluster size
……
    spec:
      hostNetwork: true
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls				# add this arg
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname,InternalDNS,ExternalDNS          # add this arg
        - --kubelet-use-node-status-port
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
        imagePullPolicy: IfNotPresent
……

Deployment

[root@master01 metrics]# kubectl apply -f components.yaml
[root@master01 metrics]# kubectl -n kube-system get pods -l k8s-app=metrics-server
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-5454499f5b-7h7nr   1/1     Running   0          4m35s
metrics-server-5454499f5b-nrdjv   1/1     Running   0          4m35s
metrics-server-5454499f5b-nskmm   1/1     Running   0          4m35s

Viewing Resource Metrics

[root@master01 ~]# kubectl top nodes
[root@master01 ~]# kubectl top pods --all-namespaces


Note: the data provided by Metrics Server can also feed the HPA controller, enabling pod autoscaling based on CPU utilization or memory usage; see the sketch below.
Deployment reference: https://linux48.com/container/2019-11-13-metrics-server.html
More on deploying metrics:
https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/
Enabling API Aggregation:
https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
Introduction to API Aggregation:
https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/
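
As a quick illustration of the HPA point above: once metrics-server is serving CPU metrics, a deployment can be autoscaled directly (demo is a hypothetical deployment name; the target pods must define CPU requests for utilization-based scaling to work):

[root@master01 ~]# kubectl autoscale deployment demo --cpu-percent=50 --min=1 --max=5
[root@master01 ~]# kubectl get hpa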

Deploying Nginx Ingress

See Appendix 020: Deploying and Using Nginx-ingress; the community edition is recommended.

Deploying the Dashboard

Setting Labels

[root@master01 ~]# kubectl label nodes master01 dashboard=yes
[root@master01 ~]# kubectl label nodes master02 dashboard=yes
[root@master01 ~]# kubectl label nodes master03 dashboard=yes

Creating Certificates

This lab uses a free one-year certificate; free certificates can be obtained from, e.g., https://freessl.cn.

[root@master01 ~]# mkdir -p /root/dashboard/certs
[root@master01 ~]# cd /root/dashboard/certs
[root@master01 certs]# mv dashboard.odocker.com.crt tls.crt
[root@master01 certs]# mv dashboard.odocker.com.key tls.key
[root@master01 certs]# ll
total 8.0K
-rw-r--r-- 1 root root 1.9K Jun  8 11:46 tls.crt
-rw-r--r-- 1 root root 1.7K Jun  8 11:46 tls.key

Note: a self-signed certificate can also be created manually:

[root@master01 ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/C=CN/ST=ZheJiang/L=HangZhou/O=Xianghy/OU=Xianghy/CN=dashboard.odocker.com"

Creating the Secret Manually

[root@master01 ~]# kubectl create ns kubernetes-dashboard	                                        # dashboard v2 uses its own namespace
[root@master01 ~]# kubectl create secret generic kubernetes-dashboard-certs --from-file=/root/dashboard/certs/ -n kubernetes-dashboard
[root@master01 ~]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard -o yaml	# inspect the new certificate secret

Downloading the YAML

[root@master01 ~]# cd /root/dashboard
[root@master01 dashboard]# wget http://down.linuxsb.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

Note: the upstream yaml is at: https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml.

Modifying the YAML

[root@master01 dashboard]# vi recommended.yaml
……
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort			                	# added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001			            	# added
  selector:
    k8s-app: kubernetes-dashboard
---
……						                        # comment out the entire Secret block below
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
……
kind: Deployment
……
  replicas: 3					                # adjust to 3 replicas as appropriate
……
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.1.0
          imagePullPolicy: IfNotPresent         # change the image pull policy
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --tls-key-file=tls.key
            - --tls-cert-file=tls.crt
            - --token-ttl=3600	       		    # add the args above
……
      nodeSelector:
        "kubernetes.io/os": linux
        "dashboard": "yes"	        		    #部署在master节点
……
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  type: NodePort	             			    # added
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30000		         		    # added
  selector:                                                                                  
    k8s-app: dashboard-metrics-scraper
……
  replicas: 3			            		    # adjust to 3 replicas as appropriate
……
      nodeSelector:
        "beta.kubernetes.io/os": linux
        "dashboard": "yes"	        		    #部署在master节点
……

Deployment

[root@master01 dashboard]# kubectl apply -f recommended.yaml
[root@master01 dashboard]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
[root@master01 dashboard]# kubectl get services -n kubernetes-dashboard
[root@master01 dashboard]# kubectl get pods -o wide -n kubernetes-dashboard


Note: NodePort 30001/TCP on master01 maps to port 443 of the dashboard pod.

Creating an Admin Account

Note: dashboard v2 no longer creates an account with admin privileges by default; one can be created as follows.

[root@master01 dashboard]# vi dashboard-admin.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@master01 dashboard]# kubectl apply -f dashboard-admin.yaml
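
The bearer token of the account just created can be printed as follows; it is also reused for the kubeconfig file later in this section:

[root@master01 dashboard]# kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')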

Exposing the Dashboard via Ingress

Creating the Ingress TLS Secret

[root@master01 ~]# cd /root/dashboard/certs
[root@master01 certs]# kubectl -n kubernetes-dashboard create secret tls kubernetes-dashboard-tls --cert=tls.crt --key=tls.key
[root@master01 certs]# kubectl -n kubernetes-dashboard describe secrets kubernetes-dashboard-tls


Creating the Ingress Rule

[root@master01 ~]# cd /root/dashboard/
[root@master01 dashboard]# vi dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    #nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_session_reuse off;
spec:
  rules:
  - host: dashboard.odocker.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - dashboard.odocker.com
    secretName: kubernetes-dashboard-tls
[root@master01 dashboard]# kubectl apply -f dashboard-ingress.yaml
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get ingress


Accessing the Dashboard

Importing the Certificate

Import the dashboard.odocker.com certificate into the browser and mark it as trusted (steps omitted).

Creating a kubeconfig File

Using a raw token is relatively cumbersome; instead, the token can be embedded in a kubeconfig file, which is then used to log in to the dashboard.

[root@master01 dashboard]# ADMIN_SECRET=$(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') 
[root@master01 dashboard]# DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kubernetes-dashboard ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}') 
[root@master01 dashboard]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://172.16.10.254:16443 \
  --kubeconfig=ucloud-ngkeconk8s-dashboard-admin.kubeconfig		# set the cluster parameters
[root@master01 dashboard]# kubectl config set-credentials dashboard_user \
  --token=${DASHBOARD_LOGIN_TOKEN} \
  --kubeconfig=ucloud-ngkeconk8s-dashboard-admin.kubeconfig		# set the client credentials, using the token created above
[root@master01 dashboard]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=dashboard_user \
  --kubeconfig=ucloud-ngkeconk8s-dashboard-admin.kubeconfig		# set the context
[root@master01 dashboard]# kubectl config use-context default --kubeconfig=ucloud-ngkeconk8s-dashboard-admin.kubeconfig		# switch to the default context

Import the ucloud-ngkeconk8s-dashboard-admin.kubeconfig file into the browser so it can be used to log in.

Testing Dashboard Access

This lab accesses the dashboard through the ingress-exposed domain https://dashboard.odocker.com, logging in with the ucloud-ngkeconk8s-dashboard-admin.kubeconfig file.

Notes:
For more dashboard access methods and authentication options, see Appendix 004: Introduction to and Usage of the Kubernetes Dashboard.
The complete dashboard login flow is described at: https://www.cnadn.net/post/2613.html

Deploying Longhorn Storage

Longhorn Overview

Longhorn is an open-source distributed block storage system for Kubernetes.
Note: see https://github.com/longhorn/longhorn for more details.

Installing Longhorn

[root@master01 ~]# source environment.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "yum -y install iscsi-initiator-utils &"
  done

Note: this must be installed on every node.

[root@master01 ~]# mkdir longhorn
[root@master01 ~]# cd longhorn/
[root@master01 longhorn]# wget \
https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
[root@master01 longhorn]# vi longhorn.yaml
#……
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: longhorn-ui
  name: longhorn-frontend
  namespace: longhorn-system
spec:
  type: NodePort			# change to NodePort
  selector:
    app: longhorn-ui
  ports:
  - port: 80
    targetPort: 8000
    nodePort: 30002
---
……
kind: DaemonSet
……
        imagePullPolicy: IfNotPresent
……
#……
[root@master01 longhorn]# kubectl apply -f longhorn.yaml
[root@master01 longhorn]# kubectl -n longhorn-system get pods -o wide


Note: if the deployment misbehaves, it can be deleted and recreated. If the longhorn-system namespace gets stuck and cannot be deleted, it can be cleaned up as follows:

wget https://raw.githubusercontent.com/longhorn/longhorn/master/uninstall/uninstall.yaml
kubectl apply -f uninstall.yaml                 # run the uninstall job and wait for it to complete
kubectl delete -f longhorn.yaml
kubectl delete -f uninstall.yaml
rm -rf /var/lib/longhorn/

Creating a Dynamic StorageClass

Note: the Longhorn deployment already creates a StorageClass by default; one can also be written by hand as follows.

[root@master01 longhorn]# kubectl get sc
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
……
longhorn               driver.longhorn.io      Delete          Immediate              true                   15m
[root@master01 longhorn]# vi longhornsc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhornsc
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fromBackup: "" 

[root@master01 longhorn]# kubectl create -f longhornsc.yaml
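
If the new class should serve as the cluster default, it can be annotated accordingly:

[root@master01 longhorn]# kubectl patch storageclass longhornsc \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'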

Testing PV and PVC

[root@master01 longhorn]# vi longhornpod.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: longhorn-pod
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-pvc
[root@master01 longhorn]# kubectl apply -f longhornpod.yaml
[root@master01 longhorn]# kubectl get pods
[root@master01 longhorn]# kubectl get pvc
[root@master01 longhorn]# kubectl get pv
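
To confirm the Longhorn volume is actually writable, a quick write-and-read inside the pod:

[root@master01 longhorn]# kubectl exec longhorn-pod -- sh -c 'echo ok > /data/test && cat /data/test'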


Exposing Longhorn via Ingress

[root@master01 longhorn]# yum -y install httpd-tools
[root@master01 longhorn]# htpasswd -c auth xhy			# create the username and password

Note: the same can also be done with:

USER=xhy; PASSWORD=x120952576; echo "${USER}:$(openssl passwd -stdin -apr1 <<< ${PASSWORD})" >> auth
[root@master01 longhorn]# kubectl -n longhorn-system create secret generic longhorn-basic-auth --from-file=auth

[root@master01 longhorn]# vi longhorn-ingress.yaml		# create the ingress rule
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: longhorn-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required '
spec:
  rules:
  - host: longhorn.odocker.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port: 
              number: 80
[root@master01 longhorn]# kubectl apply -f longhorn-ingress.yaml
[root@master01 longhorn]# kubectl -n longhorn-system get ingress
NAME               CLASS    HOSTS                  ADDRESS                                  PORTS   AGE
longhorn-ingress   <none>   longhorn.odocker.com   172.16.10.21,172.16.10.22,172.16.10.23   80      45s

Verification

Browse to longhorn.odocker.com and enter the username and password.

After logging in, inspect the Longhorn UI.

Deploying Helm

See 053. Cluster Management: the Helm Tool.

Author: 木二

Source: http://www.cnblogs.com/itzgr/

About the author: cloud computing, virtualization, Linux; always happy to exchange ideas!

Copyright of this article belongs to the author. Reposting is welcome, but this statement must be retained without the author's consent, and a clear link to the original must be given in a prominent place on the article page. For any other questions, contact the author by email (xhy@itzgr.com).
