Building a Single-Master Kubernetes (K8s) Cluster with kubeadm (this article is for testing and learning only)

A single-master K8s cluster is suitable only for development and test environments, because the control-plane node has no high availability!


0. Host planning

Host OS: CentOS Linux release 7.6.1810 (Core)
Kubernetes version: Kubernetes-1.23.0
Kubernetes/Docker compatibility: Docker v20.10.7+ and v20.10.12+ are reported incompatible with this release, hence the older version below
Docker version: docker-ce-19.03.0
Hardware: each machine in the cluster needs at least 2 GB of RAM and at least 2 CPUs

Host name    Host address      Host role                      Services
k8s-master   192.168.124.129   control-plane node (master)    kube-apiserver, etcd, kube-scheduler, kube-controller-manager, docker, kubelet
k8s-node01   192.168.124.132   worker node (node)             kubelet, kube-proxy, docker

1. Check and configure the host environment

1.1. Verify the MAC address and product_uuid are unique on every host

On all hosts:

[root@localhost ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:40:e3:9f brd ff:ff:ff:ff:ff:ff
[root@localhost ~]# cat /sys/class/dmi/id/product_uuid
B70F4D56-1F69-3997-AD55-83725A40E39F

1.2. Check that the ports required by Kubernetes are free

Role                     Protocol   Direction   Service: port range
Master (control plane)   TCP        Inbound     Kubernetes API server: 6443
                                                etcd server client API: 2379-2380
                                                Kubelet API: 10250
                                                kube-scheduler: 10259
                                                kube-controller-manager: 10257
Node (worker node)       TCP        Inbound     Kubelet API: 10250
                                                NodePort Services†: 30000-32767

† Default port range for NodePort Services.

On the master host:

[root@localhost ~]# ss -alnupt |grep -E '6443|10250|10259|10257|2379|2380'

On the node host:

[root@localhost ~]# ss -alnupt |grep -E '10250|3[0-2][0-9]{3}'  # second pattern roughly matches the NodePort range 30000-32767

1.3. Set the hostnames

k8s-master:

[root@localhost ~]# echo "k8s-master" >/etc/hostname
[root@localhost ~]# cat /etc/hostname | xargs hostname
[root@localhost ~]# bash
[root@k8s-master ~]# 

k8s-node01:

[root@localhost ~]# echo "k8s-node01" >/etc/hostname
[root@localhost ~]# cat /etc/hostname | xargs hostname
[root@localhost ~]# bash
[root@k8s-node01 ~]# 

1.4. Add hosts entries for name resolution

On all hosts:

[root@k8s-master ~]# cat >> /etc/hosts << EOF
192.168.124.129 k8s-master
192.168.124.132 k8s-node01
EOF

1.5. Synchronize time between hosts

k8s-master:
Prefer syncing time from the public time server cn.ntp.org.cn.

# Install the NTP service and NTP client
[root@k8s-master ~]# yum -y install epel-release.noarch
[root@k8s-master ~]# yum -y install ntp ntpdate
# Use the NTP client to sync local time from an external public NTP server
[root@k8s-master ~]# ntpdate cn.ntp.org.cn
# Configure the NTP service
[root@k8s-master ~]# vim /etc/ntp.conf
# Access control
# Allow external clients to sync time from this host, but not to modify it
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1

# Actively sync time from external servers
# If no external server is reachable, fall back to the local clock
server 127.127.1.0
fudge 127.127.1.0 stratum 10

server cn.ntp.org.cn prefer iburst minpoll 4 maxpoll 10
server ntp.aliyun.com iburst minpoll 4 maxpoll 10
server ntp.tuna.tsinghua.edu.cn iburst minpoll 4 maxpoll 10
server time.ustc.edu.cn iburst minpoll 4 maxpoll 10
# Start the NTP service and enable it at boot
[root@k8s-master ~]# systemctl start ntpd
[root@k8s-master ~]# systemctl enable ntpd
[root@k8s-master ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-21 02:59:43 EDT; 4min 52s ago
  Process: 27106 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
[root@k8s-master ~]# ntpstat
synchronised to NTP server (120.25.108.11) at stratum 3
   time correct to within 70 ms
   polling server every 16 s

The node hosts prefer syncing time from the k8s-master host:

# Install the NTP service and NTP client
[root@k8s-node01 ~]# yum -y install epel-release.noarch
[root@k8s-node01 ~]# yum -y install ntp ntpdate
# Use the NTP client to sync local time from the NTP server
[root@k8s-node01 ~]# ntpdate 192.168.124.129
# Configure the NTP service
[root@k8s-node01 ~]# vim /etc/ntp.conf
# Actively sync time from the NTP server set up above
# If the NTP server is unreachable, fall back to the local clock
server 127.127.1.0
fudge 127.127.1.0 stratum 10

server 192.168.124.129 prefer iburst minpoll 4 maxpoll 10
# Start the NTP service and enable it at boot
[root@k8s-node01 ~]# systemctl start ntpd
[root@k8s-node01 ~]# systemctl enable ntpd
[root@k8s-node01 ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-21 02:59:43 EDT; 4min 52s ago
  Process: 27106 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
[root@k8s-node01 ~]# ntpstat
synchronised to NTP server (192.168.124.129) at stratum 3
   time correct to within 70 ms
   polling server every 16 s

1.6. Disable swap

Swap can degrade container performance.
On all hosts:

[root@k8s-master ~]# swapoff -a  # disable temporarily
[root@k8s-master ~]# free -mh
              total        used        free      shared  buff/cache   available
Mem:           1.8G        133M        1.4G        9.5M        216M        1.5G
Swap:            0B          0B          0B
[root@k8s-master ~]# vim /etc/fstab  # disable permanently (comment out the swap line)
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
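
Equivalently, the swap line can be commented out non-interactively; a one-liner alternative to the manual edit above (assumes a whitespace-separated fstab, as in the sample line):

[root@k8s-master ~]# sed -ri 's/^[^#].*\sswap\s.*/#&/' /etc/fstab  # prefix any active swap entry with #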

1.7. Disable firewalld

Kubernetes' kube-proxy component uses iptables or IPVS to implement Service objects. CentOS 7 runs the firewalld service by default; disable and stop it to avoid conflicts.
On all hosts:

[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

1.8. Disable SELinux

On all hosts:

[root@k8s-master ~]# setenforce 0  # disable temporarily
[root@k8s-master ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux  # disable permanently

1.9. Enable bridge-nf

This makes iptables work transparently on bridges, i.e. layer-2 bridged traffic is also subject to iptables rules.
If the "br_netfilter" module is not loaded at boot, load it manually.
On all hosts:

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
[root@k8s-master ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
[root@k8s-master ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master ~]# sysctl --system

1.10. Install and enable IPVS

kube-proxy supports three modes for forwarding traffic to Pods: userspace, iptables, and ipvs.
To use ipvs mode, IPVS must be installed; switching kube-proxy to ipvs mode is sketched after the commands below.
On all hosts:

[root@k8s-master ~]# yum -y install kernel-devel
[root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]# lsmod |grep ip_vs
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
[root@k8s-master ~]# yum -y install ipset ipvsadm
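
Loading the modules alone does not change kube-proxy's mode. Once the cluster is up, the mode is set in kube-proxy's ConfigMap; a minimal sketch (mode is a standard field of the kube-proxy configuration):

# Change mode: "" to mode: "ipvs" in the kube-proxy configuration
[root@k8s-master ~]# kubectl edit configmap kube-proxy -n kube-system
# Recreate the kube-proxy Pods so they pick up the new mode
[root@k8s-master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system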

2. Install the container runtime - Docker

The container runtime hosts and manages running container applications.

2.1. Install a specific Docker version

On all hosts:

[root@k8s-master ~]# yum -y install epel-release.noarch yum-utils
[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum -y install device-mapper-persistent-data  lvm2
[root@k8s-master ~]# yum list docker-ce --showduplicates | sort -r
[root@k8s-master ~]# yum -y install docker-ce-19.03.0
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker

2.2. Configure Docker and a domestic registry mirror

Point Docker at domestic registry mirrors; the officially recommended cgroup driver is "systemd".
On all hosts:

[root@k8s-master ~]# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
        "https://7mimmp7p.mirror.aliyuncs.com",
        "https://registry.docker-cn.com",
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn"
        ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-21 06:26:38 EDT; 4s ago
[root@k8s-master ~]# docker info | grep Cgroup
 Cgroup Driver: systemd

3. Install kubeadm, kubelet, and kubectl

kubeadm: the tool that bootstraps the cluster.
kubelet: the component that runs on every machine in the cluster and manages Pods and containers.
kubectl: the command-line client for operating the cluster.

3.1. Install kubeadm, kubelet, and kubectl

The Kubernetes YUM repository here is provided by Alibaba Cloud.
On all hosts:

[root@k8s-master ~]# cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]# yum install -y kubelet-1.23.0 kubectl-1.23.0 kubeadm-1.23.0 --disableexcludes=kubernetes --nogpgcheck
[root@k8s-master ~]# systemctl enable kubelet
[root@k8s-master ~]# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
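
A quick sanity check that the expected versions landed (expected output shown):

[root@k8s-master ~]# kubeadm version -o short
v1.23.0
[root@k8s-master ~]# kubelet --version
Kubernetes v1.23.0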

3.2. Build kubeadm from source to keep the Kubernetes certificates from expiring (validity changed to 100 years)

In Kubernetes, clients talk to the API server using X.509 certificates, and the components authenticate to each other with certificates as well. The certificates created by stock kubeadm are valid for only one year; once they expire the cluster can become unusable, which is serious.
So here we patch the Kubernetes source, build kubeadm ourselves, and use the rebuilt binary to initialize the control-plane node; the Kubernetes certificates generated during initialization are then valid for 100 years!
Note: rename the YUM-installed kubeadm to kubeadm-yum and keep it for configuring the kubelet later, because the source-built kubeadm cannot correctly configure the YUM-installed kubelet.
On all node hosts:

[root@k8s-master ~]# which kubeadm
/usr/bin/kubeadm
[root@k8s-master ~]# mv /usr/bin/kubeadm /usr/bin/kubeadm-yum

k8s-master - install Go:

[root@k8s-master ~]# wget https://go.dev/dl/go1.17.8.linux-amd64.tar.gz
[root@k8s-master ~]# tar xzvf go1.17.8.linux-amd64.tar.gz -C /usr/local
[root@k8s-master ~]# vim /etc/profile
export PATH=$PATH:/usr/local/go/bin
export GO111MODULE=auto
export GOPROXY=https://goproxy.cn
[root@k8s-master ~]# source /etc/profile
[root@k8s-master ~]# go version
go version go1.17.8 linux/amd64

k8s-master - clone the official source from GitHub:

[root@k8s-master ~]# yum -y install git
[root@k8s-master ~]# git clone https://github.91chi.fun/https://github.com/kubernetes/kubernetes.git
[root@k8s-master ~]# cd kubernetes
[root@k8s-master kubernetes]# git tag -l
...
v1.23.0
...
[root@k8s-master kubernetes]# git checkout -b v1.23.0 v1.23.0

k8s-master - modify the certificate-validity code:

[root@k8s-master kubernetes]# vim cmd/kubeadm/app/constants/constants.go
const (
...
        // CertificateValidity defines the validity for all the signed certificates generated by kubeadm
        // CertificateValidity = time.Hour * 24 * 365
        CertificateValidity = time.Hour * 24 * 365 * 100
...
}
[root@k8s-master kubernetes]# vim staging/src/k8s.io/client-go/util/cert/cert.go
...
// NewSelfSignedCACert creates a CA certificate
func NewSelfSignedCACert(cfg Config, key crypto.Signer) (*x509.Certificate, error) {
        now := time.Now()
        tmpl := x509.Certificate{
                SerialNumber: new(big.Int).SetInt64(0),
                Subject: pkix.Name{
                        CommonName:   cfg.CommonName,
                        Organization: cfg.Organization,
                },
                DNSNames:              []string{cfg.CommonName},
                NotBefore:             now.UTC(),
                //NotAfter:              now.Add(duration365d * 10).UTC(),
                NotAfter:              now.Add(duration365d * 100).UTC(),
                KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
                BasicConstraintsValid: true,
                IsCA:                  true,
        }
}
...

k8s-master - build the new kubeadm binary; it is written to _output/bin/:

[root@k8s-master kubernetes]# make WHAT=cmd/kubeadm GOFLAGS=-v

k8s-master - copy kubeadm into /usr/bin on every node host:

[root@k8s-master kubernetes]# cd _output/bin/ && cp -rf kubeadm /usr/bin/kubeadm
[root@k8s-master bin]# scp kubeadm root@k8s-node01:/usr/bin/kubeadm
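
Optionally confirm the rebuilt binary before distributing it (a build from a locally modified tree may report a -dirty suffix):

[root@k8s-master bin]# ./kubeadm version -o short
v1.23.0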

4. Deploy and build the Kubernetes cluster

4.1. Prepare the images

The commands below show the image list that kubeadm-v1.23.0 needs to deploy kubernetes-v1.23.0, and the default image source.
On all hosts:

[root@k8s-master ~]# kubeadm config print init-defaults |grep imageRepository
imageRepository: k8s.gcr.io
[root@k8s-master ~]# kubeadm config images list --kubernetes-version 1.23.0
k8s.gcr.io/kube-apiserver:v1.23.0
k8s.gcr.io/kube-controller-manager:v1.23.0
k8s.gcr.io/kube-scheduler:v1.23.0
k8s.gcr.io/kube-proxy:v1.23.0
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

Access to k8s.gcr.io may be blocked in China, so the images can be pulled from a domestic registry instead (for example Alibaba Cloud's proxy registry: registry.aliyuncs.com/google_containers).
If you need them on many more hosts, consider running a private registry with Harbor or Docker Registry.
On all hosts - pull the images from the registry:

[root@k8s-master ~]# kubeadm config images pull --kubernetes-version=v1.23.0 --image-repository=registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

On all hosts - list the local images:

[root@k8s-master ~]# docker images |grep 'registry.aliyuncs.com/google_containers'
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.0   e6bf5ddd4098 4 months ago  
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.0   37c6aeb3663b 4 months ago  
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.0   e03484a90585 4 months ago  
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.0   56c5af1d00b5 4 months ago  
registry.aliyuncs.com/google_containers/etcd                      3.5.1-0   25f8c7f3da61 5 months ago  
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7 6 months ago  
registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12 7 months ago  
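
4.2. Initialize the control plane

With the images in place, the control plane can be initialized and the worker joined. A sketch of a typical invocation consistent with the version, image repository, and addresses above (not necessarily the exact command used here; the Pod CIDR 192.168.0.0/16 is Calico's default and an assumption, and the API server listens on its default port 6443):

[root@k8s-master ~]# kubeadm init \
    --kubernetes-version=v1.23.0 \
    --image-repository=registry.aliyuncs.com/google_containers \
    --apiserver-advertise-address=192.168.124.129 \
    --pod-network-cidr=192.168.0.0/16
# Configure kubectl for the current user, as the kubeadm output suggests
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# On the worker, run the join command printed by kubeadm init, for example:
[root@k8s-node01 ~]# kubeadm join 192.168.124.129:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>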

5. Install the Pod network plugin - Calico

Calico is an open-source virtual networking solution that provides basic Pod networking as well as network policy.
Kubernetes has a resource type "NetworkPolicy" for describing Pod network policy; using it requires a Pod network plugin that supports the network-policy feature.
On the master host:

5.1. Configure NetworkManager

If the hosts use NetworkManager to manage networking, configure it to let Calico manage its interfaces.
NetworkManager manipulates the routing table of interfaces in the default network namespace, which can interfere with Calico's ability to route correctly.
On all hosts:

[root@k8s-master ~]# cat > /etc/NetworkManager/conf.d/calico.conf <<EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:wireguard.cali
EOF

5.2. Download calico.yaml

[root@k8s-master ~]# wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate

5.3. Edit calico.yaml

The default Calico manifest pulls its images from docker.io, a foreign registry. Since Docker registry mirrors were configured above, strip the docker.io prefix so the images are pulled through the domestic mirror. A note on the Pod network CIDR follows the commands below.

[root@k8s-master ~]# cat calico.yaml |grep 'image:'
          image: docker.io/calico/cni:v3.23.0
          image: docker.io/calico/cni:v3.23.0
          image: docker.io/calico/node:v3.23.0
          image: docker.io/calico/kube-controllers:v3.23.0
[root@k8s-master ~]# sed -i 's#docker.io/##g' calico.yaml
[root@k8s-master ~]# cat calico.yaml |grep 'image:'
          image: calico/cni:v3.23.0
          image: calico/cni:v3.23.0
          image: calico/node:v3.23.0
          image: calico/kube-controllers:v3.23.0
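
One more setting worth checking (an assumption based on the stock v3.23 manifest, where it ships commented out): if the cluster was initialized with a --pod-network-cidr other than Calico's default 192.168.0.0/16, uncomment and adjust CALICO_IPV4POOL_CIDR in calico.yaml to match:

            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"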

5.4. Apply calico.yaml

[root@k8s-master ~]# kubectl apply -f calico.yaml

The Calico Pods are created and come up in the "kube-system" namespace:

[root@k8s-master ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-77d9858799-c267f   1/1     Running   0              92s
calico-node-6jw5q                          1/1     Running   0              92s
calico-node-krrn6                          1/1     Running   0              92s
calico-node-mgk2g                          1/1     Running   0              92s
calico-node-wr2pv                          1/1     Running   0              92s

6. Install a core add-on - the Ingress controller ingress-nginx

Ingress is one of Kubernetes' standard resource types. It describes a layer-7 counterpart to a Service, providing HTTP reverse proxying, which web projects use all the time.
The "Ingress" functionality is implemented by an Ingress controller (add-on); ingress-nginx is a commonly used one.
References:
https://github.com/kubernetes/ingress-nginx
https://kubernetes.github.io/ingress-nginx/deploy/

6.2. Check version compatibility

Ingress-NGINX version	k8s supported version	        Alpine Version	Nginx Version
v1.1.3	                1.23, 1.22, 1.21, 1.20, 1.19	3.14.4	        1.19.10†
v1.1.2	                1.23, 1.22, 1.21, 1.20, 1.19	3.14.2	        1.19.9†
v1.1.1	                1.23, 1.22, 1.21, 1.20, 1.19	3.14.2	        1.19.9†

6.3. Find domestic mirror images

Note: the image sources must be switched to domestic clone mirrors here, otherwise the images may fail to download.
Search Docker Hub for mirrored copies of the matching versions.

6.4. Install Ingress-Nginx-Controller

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml -O ingress-nginx.yaml
[root@k8s-master ~]# vim ingress-nginx.yaml
#image: k8s.gcr.io/ingress-nginx/controllerv1.1.2@...
image: willdockerhub/ingress-nginx-controller:v1.1.2
#image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@...
image: liangjw/kube-webhook-certgen:v1.1.1
[root@k8s-master ~]# kubectl apply -f ingress-nginx.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

6.5. Check the running state

[root@k8s-master ~]# kubectl get pods --namespace=ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-6xk5t        0/1     Completed   0          11m
ingress-nginx-admission-patch-sp6w2         0/1     Completed   0          11m
ingress-nginx-controller-7bc7476f95-gdxkz   1/1     Running     0          11m

6.6. Front the Ingress controller with an external load balancer

External hosts reach the Ingress controller Pods through a Service. By default, installing ingress-nginx-controller from the .yaml creates a LoadBalancer-type Service, for an external load balancer to target and forward requests to the Ingress controller.
A LoadBalancer Service builds on the NodePort type: it likewise opens a mapped port on every node host, which an external load balancer can be pointed at.

[root@k8s-master ~]# kubectl get service --namespace=ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.103.77.111   <pending>     80:30408/TCP,443:32686/TCP   20m
ingress-nginx-controller-admission   ClusterIP      10.98.133.60    <none>        443/TCP                      20m
[root@k8s-master ~]# netstat -lnupt  |grep -E '30408|32686'
tcp        1      0 0.0.0.0:30408           0.0.0.0:*               LISTEN      41631/kube-proxy    
tcp        0      0 0.0.0.0:32686           0.0.0.0:*               LISTEN      41631/kube-proxy    
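
With the controller running, a minimal Ingress resource can verify end-to-end HTTP routing. A sketch (the backend Service my-svc and host demo.example.com are hypothetical placeholders, not from this setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx        # the IngressClass created by the manifest above
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-svc         # hypothetical backend Service
            port:
              number: 80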

7. Install a common add-on - Metrics Server

Metrics Server is a common Kubernetes add-on. Much like the top command, it shows the CPU and memory usage of the Nodes and Pods in the cluster.
Metrics Server collects metrics every 15 seconds from every node in the cluster, and scales to clusters of up to 5,000 nodes.
Since version 0.5, Metrics Server by default requests 100m of CPU and 200MiB of memory, which keeps performance good on clusters of 100+ nodes.
Reference: https://github.com/kubernetes-sigs/metrics-server

7.1. Check compatibility with Kubernetes

Metrics Server	Metrics API group/version	Supported Kubernetes version
0.6.x	       metrics.k8s.io/v1beta1	        1.19+
0.5.x	       metrics.k8s.io/v1beta1	        *1.8+
0.4.x	       metrics.k8s.io/v1beta1	        *1.8+
0.3.x	       metrics.k8s.io/v1beta1	        1.8-1.21

7.2. Find domestic clone images

The official installation manifest components.yaml pulls from the k8s.gcr.io registry by default; without a proxy, the Pod may be unable to pull the Metrics Server image.

7.3. Install Metrics Server

On startup, Metrics Server by default verifies the CA certificate presented by the kubelet, which can make it fail to start; add the "--kubelet-insecure-tls" flag to disable this certificate verification.

[root@k8s-master ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server.yaml
[root@k8s-master ~]# vim metrics-server.yaml
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: bitnami/metrics-server:0.6.1
[root@k8s-master ~]# kubectl apply -f metrics-server.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@k8s-master ~]# kubectl get pods --namespace=kube-system |grep -E 'NAME|metrics-server'
NAME                                       READY   STATUS    RESTARTS       AGE
metrics-server-599b4c96ff-njg8b            1/1     Running   0              76s
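
You can also confirm that the metrics API is registered and available (AVAILABLE should read True; output is indicative):

[root@k8s-master ~]# kubectl get apiservice v1beta1.metrics.k8s.io
NAME                     SERVICE                      AVAILABLE   AGE
v1beta1.metrics.k8s.io   kube-system/metrics-server   True        2m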

7.4. View node resource usage in the cluster

[root@k8s-master ~]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master     331m         8%     1177Mi          68%       
k8s-node01     246m         6%     997Mi           57%   

7.5. View Pod resource usage in a given namespace

[root@k8s-master ~]# kubectl top pod --namespace=kube-system
NAME                                       CPU(cores)   MEMORY(bytes)   
calico-kube-controllers-56fcbf9d6b-phf49   5m           29Mi            
calico-node-8frvw                          98m          120Mi                   
...

8. Install the Dashboard add-on

Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It lets users manage and troubleshoot the applications running in the cluster, as well as the cluster itself.
Dashboard is a Kubernetes add-on, reachable through a URL served by the API server: /api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy
Of course, you can also reach the Dashboard directly through its Service!
References:
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/README.md#login-not-available

8.1. Install the Dashboard

Installing the Dashboard from the manifest creates a ClusterIP-type Service, which can only be reached from hosts inside the cluster, so a small change is needed: switch the Service to the NodePort type so external hosts can reach it too.

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml -O dashboard.yaml
[root@k8s-master ~]# vim dashboard.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
[root@k8s-master ~]# kubectl apply -f dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master ~]# kubectl get pod --namespace=kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-799d786dbf-xx9j7   1/1     Running   0          3m16s
kubernetes-dashboard-fb8648fd9-rgc2z         1/1     Running   0          3m17s

8.2. Access the Dashboard

[root@k8s-master ~]# kubectl get service --namespace=kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.97.23.158    <none>        8000/TCP        4m6s
kubernetes-dashboard        NodePort    10.103.40.153   <none>        443:32358/TCP   4m7s
[root@k8s-master ~]# netstat -lnupt |grep 32358
tcp        0      0 0.0.0.0:32358           0.0.0.0:*               LISTEN      41631/kube-proxy    

Open in a browser: https://<any node IP>:32358/#/login (32358 is the NodePort assigned above)

8.3. Choose how to authenticate when logging in to the Dashboard

Logging in to the Dashboard requires authentication; Token and Kubeconfig are the two currently supported methods.
The Dashboard runs in a Pod; for that Pod to access cluster information, a ServiceAccount is needed to establish its identity.
Token
Create a service account "dashboard-admin" bound to the cluster role "cluster-admin", then log in with dashboard-admin's Token. You can also create a cluster role carrying only the permissions you need and bind it to a service account, to manage just the specified resources.

# Create a service account "dashboard-admin" dedicated to the Dashboard
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
serviceaccount/dashboard-admin created
# Bind the service account "dashboard-admin" to the cluster role "cluster-admin", which has superuser rights
# dashboard-admin then has superuser rights itself
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
# Creating the service account automatically generates a Token, stored as a Secret object
# The following retrieves the Token of service account "dashboard-admin" for Dashboard authentication
[root@k8s-master ~]# kubectl get secrets -n kubernetes-dashboard |grep dashboard-admin-token
dashboard-admin-token-2bxfl        kubernetes.io/service-account-token   3      66s
[root@k8s-master ~]# kubectl describe secrets/dashboard-admin-token-2bxfl -n kubernetes-dashboard
Name:         dashboard-admin-token-2bxfl
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 492a031e-db41-4a65-a8d4-af0e240e7f9d

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1103 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImFXTzZFUElaS2RoTUpScHFwNzJSNUN5eU1lcFNSZEZqNWNNbi1VbFV2Zk0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tMmJ4ZmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDkyYTAzMWUtZGI0MS00YTY1LWE4ZDQtYWYwZTI0MGU3ZjlkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.l5VEIPd9nIsJuXMh86rjFHhkIoZmg5nlDw7Bixn0b3-KT1r6o7WRegq8DJyVk_iiIfRnrrz5jjuOOkCKwXwvI1NCfVdsuBKXFwFZ1Crc-BwHjIxWbGuZfEGxSbN8du4T4xcUuNU-7HuZQcGDY23uy68aPqWSm8UoIcOFwUgVcYkKlOuW76tIXxG_upxWpWZz74aMDUIkjar7sdWXzMr1m5G43TLE9Z_lKCgoV-hc4Fo9_Er-TIAPqDG6-sfZZZ9Raldvn3j380QDYahUKaGKabnOFDXbODKOQ1VKRizgiRTOqt-z9YRPTcyxQzfheKC8DTb2X8D-E4x6azulenNgqw

Kubeconfig
The Token is a long, unwieldy string, so authenticating with it is inconvenient; the Dashboard therefore also supports logging in with a Kubeconfig file.
Using the service account created for the Token above, build a Kubeconfig file.

# View cluster information
[root@k8s-master ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.124.129:9443
# Create the kubeconfig file and set the cluster entry
[root@k8s-master ~]# kubectl config set-cluster kubernetes --embed-certs=true --server="https://192.168.124.129:9443" --certificate-authority=/etc/kubernetes/pki/ca.crt --kubeconfig=dashboard-admin.kubeconfig
# Add the credentials to the kubeconfig file
# The service account's Token is stored base64-encoded; decode it with "base64 -d"
# before writing it into the kubeconfig
[root@k8s-master ~]# Token=$(kubectl get secrets/dashboard-admin-token-2bxfl -n kubernetes-dashboard -o jsonpath={.data.token} |base64 -d)
[root@k8s-master ~]# kubectl config set-credentials dashboard-admin --token=${Token} --kubeconfig=./dashboard-admin.kubeconfig 
# Add the context entry to the kubeconfig file
[root@k8s-master ~]# kubectl config set-context dashboard-admin --cluster=kubernetes  --user=dashboard-admin --kubeconfig=./dashboard-admin.kubeconfig 
# Set the current context in the kubeconfig file
[root@k8s-master ~]# kubectl config use-context dashboard-admin --cluster=kubernetes  --user=dashboard-admin --kubeconfig=./dashboard-admin.kubeconfig
# The result is the following file
[root@k8s-master ~]# cat dashboard-admin.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBRENDQWVpZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJeU1EUXhNVEEwTXpnME1Gb1lEekl4TWpJd016RTRNRFF6T0RRd1dqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjR0RDRmU2ZmcHU1WS9KUGJrQWgvdG0xS1lSeWQ5YU9MVk9xTDQyc1M5YmxiZGh0WU9QSHYvWEpVb1k1ZSs5MXgKUE9NbnZnWmhiR29uditHQWVFazRKeUl4MTNiTm1XUk1DZ1QyYnJIWlhvcm5QeGE0ZlVsNHg5K2swVEc5ejdIMAo0cjF5MFkzWXNXaGJIeHBwL0hvQzNRR2JVWVJyMm03NVgxTWUvdFFCL25FcUNybUZxNkRveEU3REIxMkRnemE4CjBrM3FwZllGZHBOcnZBakdIcUlSZ0ZxT24ybDVkb0c3bGVhbkIrY2wxQWltUnZCMDdQdlVKdVhrK1N5NUhmdnMKNzYyYXJRYklNMUlISkJ0ZXBaQzVjYi9pNGZhcWNrTXJaeTZvanlnN2JPcjBuMlpQcHV5SnR5QjhLMnJDZCtYZApTeXlrZG44S0MxRlRSR0p6dkdpaVRRSURBUUFCbzFrd1Z6QU9CZ05WSFE4QkFmOEVCQU1DQXFRd0R3WURWUjBUCkFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVucEhxdGJzZ01CcSt4Q1MzTVErWnk4akFpeFV3RlFZRFZSMFIKQkE0d0RJSUthM1ZpWlhKdVpYUmxjekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBRHhpR3c2bk5NV1hRMnVlRgppK2R2Nittc1FUT0JCWWJENkhIblVETUtnK0loaEcwclA5MkFRRjFWcUZaN1ZDSTgyWmJ5VnVWSmVvMjlkdjZpClBDeFJzWERxdHl0TG1CMkFPRUxXOFdqSCtheTZ5a3JYbGIwK1NIZ1Q3Q1NJRHhRdG9TeE8rK094WjhBb1JGMmUKTy94U1YxM0E0eG45RytmUEJETkVnWUJHbWd6L1RjSjZhYnljZnNNaGNwZ1kwKzJKZlJDemZBeFNiMld6TzBqaApucFRONUg2dG1ST3RlQ2h3anRWVDYrUXBUSzdkN0hjNmZlZ0w0S1pQZDEwZ0hyRFV1eWtpY01UNkpWNXNJSjArCmw5eWt2V1R2M2hEN0NJSmpJWnUySjdod0FGeW1hSmxzekZuZEpNZUFEL21pcDBMQk40OUdER2M2UFROdUw0WHEKeUxrYUhRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.124.129:9443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin
current-context: dashboard-admin
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImFXTzZFUElaS2RoTUpScHFwNzJSNUN5eU1lcFNSZEZqNWNNbi1VbFV2Zk0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tMmJ4ZmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDkyYTAzMWUtZGI0MS00YTY1LWE4ZDQtYWYwZTI0MGU3ZjlkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.l5VEIPd9nIsJuXMh86rjFHhkIoZmg5nlDw7Bixn0b3-KT1r6o7WRegq8DJyVk_iiIfRnrrz5jjuOOkCKwXwvI1NCfVdsuBKXFwFZ1Crc-BwHjIxWbGuZfEGxSbN8du4T4xcUuNU-7HuZQcGDY23uy68aPqWSm8UoIcOFwUgVcYkKlOuW76tIXxG_upxWpWZz74aMDUIkjar7sdWXzMr1m5G43TLE9Z_lKCgoV-hc4Fo9_Er-TIAPqDG6-sfZZZ9Raldvn3j380QDYahUKaGKabnOFDXbODKOQ1VKRizgiRTOqt-z9YRPTcyxQzfheKC8DTb2X8D-E4x6azulenNgqw

8.4. Select the Kubeconfig file to log in to the Dashboard


9. Basic usage - run an Nginx application

9.1. Create a Deployment

Creating a Deployment (a Pod controller) automatically creates Pods and schedules them to run on Nodes.
After a Pod is created, Kubernetes automatically starts a container from the image on Docker.
The Pod here runs an Nginx container.

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx  # create the Deployment
[root@k8s-master ~]# kubectl get deployment -o wide  # view the Deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
nginx   1/1     1            1           24m   nginx        nginx    app=nginx
[root@k8s-master ~]# kubectl get pod -o wide  # view the Pod
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE
nginx-554b9c67f9-7bzhw   1/1     Running   0          24m   172.17.0.2   k8s-node01 

9.2. Create a Service

Create a Service to map the port the container exposes to a host port; the host port is assigned at random.

[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort  # create the Service
[root@k8s-master ~]# kubectl get service nginx -o wide  # view the Service
NAME    TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE    SELECTOR
nginx   NodePort   10.96.89.7   <none>        80:32756/TCP   2m4s   app=nginx
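
A quick check from any host that can reach a node (32756 is the NodePort assigned above; expected output shown):

[root@k8s-master ~]# curl -s http://192.168.124.129:32756 | grep '<title>'
<title>Welcome to nginx!</title>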

9.3. Access it from a browser

URL: http://<node IP>:<NodePort>, here http://192.168.124.129:32756.

Appendix

Checking Kubernetes/Docker compatibility

See: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md

Resetting a node

When "kubeadm init" or "kubeadm join" fails while deploying a node, the node can be reset with the following.
Note: a reset returns the node to its pre-deployment state. If the cluster is working normally there is no need to reset; doing so would cause unrecoverable cluster failure!

[root@k8s-master ~]# kubeadm reset -f
[root@k8s-master ~]# ipvsadm --clear
[root@k8s-master ~]# iptables -F && iptables -X && iptables -Z

Common inspection commands

For more operations, study Kubernetes resource and cluster management in full.
List tokens:

[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   <forever>   <never>   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
ek6xtl.s3dk4vjxzp83bcx3   1h          2022-04-06T13:30:39Z   <none>                   Proxy for managing TTL for the kubeadm-certs secret        <none>

View the expiry time of the cluster's certificates:

[root@k8s-master ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 18, 2122 04:02 UTC   99y                                     no      
apiserver                  Mar 18, 2122 04:02 UTC   99y             ca                      no      
apiserver-etcd-client      Mar 18, 2122 04:02 UTC   99y             etcd-ca                 no      
apiserver-kubelet-client   Mar 18, 2122 04:02 UTC   99y             ca                      no      
controller-manager.conf    Mar 18, 2122 04:02 UTC   99y                                     no      
etcd-healthcheck-client    Mar 18, 2122 04:02 UTC   99y             etcd-ca                 no      
etcd-peer                  Mar 18, 2122 04:02 UTC   99y             etcd-ca                 no      
etcd-server                Mar 18, 2122 04:02 UTC   99y             etcd-ca                 no      
front-proxy-client         Mar 18, 2122 04:02 UTC   99y             front-proxy-ca          no      
scheduler.conf             Mar 18, 2122 04:02 UTC   99y                                     no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 18, 2122 04:02 UTC   99y             no      
etcd-ca                 Mar 18, 2122 04:02 UTC   99y             no      
front-proxy-ca          Mar 18, 2122 04:02 UTC   99y             no    

View node status:

[root@k8s-master ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master     Ready    control-plane,master   40h   v1.23.0
k8s-node01     Ready    <none>                 39h   v1.23.0

View the default configuration kubeadm uses to initialize the control plane:

[root@k8s-master ~]# kubeadm config print init-defaults

List the container images kubeadm uses to deploy a Kubernetes cluster:

[root@k8s-master ~]# kubeadm config images list

Running Pods on the master node

Yes, this is possible. By default the master node is created with a taint ("taints"); to run Pods on the master node, simply remove the taint (not a recommended practice). Restoring it is sketched after the commands below.

[root@k8s-master ~]# kubectl describe nodes/k8s-master
Name:               k8s-master
...
Taints:             node-role.kubernetes.io/master:NoSchedule
...
[root@k8s-master ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
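
To restore the default behavior later, re-apply the taint (standard kubectl taint syntax):

[root@k8s-master ~]# kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule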

Joining a new Node to the cluster

Generate a new token and print the join command:

[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.124.100:9443 --token 8mbm4q.fisfbupt3zv5wwfb --discovery-token-ca-cert-hash sha256:1d3555f2c419ee78a560700130ce08c084c71ca4b8b3b48d159769b217923145 

Run the printed command on the new node to join it to the cluster:

[root@k8s-node02 ~]# kubeadm join 192.168.124.100:9443 --token 8mbm4q.fisfbupt3zv5wwfb --discovery-token-ca-cert-hash sha256:1d3555f2c419ee78a560700130ce08c084c71ca4b8b3b48d159769b217923145 