Chapter 3: Binary Installation of Kubernetes 1.18.8 on CentOS 7

I. Deployed Software Versions

Software            Version
CentOS              CentOS Linux release 7.9.2009 (Core)
Docker              20.10.2
Kubernetes          v1.18.8
Flannel             v0.13.1
kernel-lt           kernel-lt-5.4.145-1.el7.elrepo.x86_64.rpm
kernel-lt-devel     kernel-lt-devel-5.4.145-1.el7.elrepo.x86_64.rpm

II. Node Plan

Hostname          IP               Kernel Version
k8s-master-001    192.168.13.110   5.4.145-1.el7.elrepo.x86_64
k8s-master-002    192.168.13.111   5.4.145-1.el7.elrepo.x86_64
k8s-master-003    192.168.13.112   5.4.145-1.el7.elrepo.x86_64
k8s-node-001      192.168.13.113   5.4.145-1.el7.elrepo.x86_64
k8s-node-002      192.168.13.114   5.4.145-1.el7.elrepo.x86_64

III. Add Host Resolution (All Nodes)

[root@k8s-master-001 ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.110 k8s-master-001 m1
172.16.1.111 k8s-master-002 m2
172.16.1.112 k8s-master-003 m3
172.16.1.113 k8s-node-001 n1
172.16.1.114 k8s-node-002 n2

IV. Passwordless SSH from Master-001

[root@k8s-master-001 ~]# ssh-keygen -t rsa
[root@k8s-master-001 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub m1
[root@k8s-master-001 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub m2
[root@k8s-master-001 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub m3
[root@k8s-master-001 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub n1
[root@k8s-master-001 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub n2
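A quick check that passwordless login works before moving on (an added verification step; the short names m1–n2 come from the /etc/hosts entries above):

# Each command should print the remote hostname without asking for a password
[root@k8s-master-001 ~]# for i in m1 m2 m3 n1 n2; do ssh $i hostname; done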

V. System Initialization (All Nodes)

1. Disable the Firewall

#1. Disable the firewall
[root@k8s-master-001 ~]# systemctl disable --now firewalld

#2. Check the firewall status
[root@k8s-master-001 ~]# systemctl  status  firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

2. Disable SELinux

#1. Permanently disable
[root@k8s-master-001 ~]# sed -i 's#enforcing#disabled#g' /etc/selinux/config

#2. Temporarily disable
[root@k8s-master-001 ~]# setenforce 0
setenforce: SELinux is disabled

#3. Check the SELinux status
[root@k8s-master-001 ~]# getenforce 
Disabled

3. Disable the Swap Partition

[root@k8s-master-001 ~]# swapoff -a
[root@k8s-master-001 ~]# sed -i 's/^.*centos-swap/#&/g' /etc/fstab
[root@k8s-master-001 ~]# echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet
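To confirm swap is really off (an added check, not part of the original steps):

# Swap should show 0B and swapon should print nothing
[root@k8s-master-001 ~]# free -h | grep -i swap
[root@k8s-master-001 ~]# swapon --show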

4. Configure China-Based Yum Mirrors

#1. Replace the yum repositories
[root@k8s-master-001 ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
[root@k8s-master-001 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master-001 ~]# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

#2. Rebuild the yum cache
[root@k8s-master-001 ~]# yum makecache

#3. Update the system but exclude kernel packages
[root@k8s-master-001 ~]# yum update -y --exclude=kernel*

5. Upgrade the Kernel

Docker relies on relatively new kernel features, such as IPVS, so in general a 4.x or newer kernel is required.
# The kernel requirement is 4.18+; a `CentOS 8` system does not need a kernel upgrade

#1. Import the elrepo GPG key
[root@k8s-master-001 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

#2. Install the elrepo yum repository
[root@k8s-master-001 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

#3. With the repository enabled, list the available kernel packages
[root@k8s-master-001 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * elrepo-kernel: mirror-hk.koddos.net
elrepo-kernel                                                                                                | 3.0 kB  00:00:00     
elrepo-kernel/primary_db                                                                                     | 2.0 MB  00:00:04     
Available Packages
elrepo-release.noarch                                             7.0-5.el7.elrepo                                     elrepo-kernel
kernel-lt.x86_64                                                  5.4.145-1.el7.elrepo                                 elrepo-kernel
kernel-lt-devel.x86_64                                            5.4.145-1.el7.elrepo                                 elrepo-kernel
kernel-lt-doc.noarch                                              5.4.145-1.el7.elrepo                                 elrepo-kernel
kernel-lt-headers.x86_64                                          5.4.145-1.el7.elrepo                                 elrepo-kernel
kernel-lt-tools.x86_64                                            5.4.145-1.el7.elrepo                                 elrepo-kernel
kernel-lt-tools-libs.x86_64                                       5.4.145-1.el7.elrepo                                 elrepo-kernel
kernel-lt-tools-libs-devel.x86_64                                 5.4.145-1.el7.elrepo                                 elrepo-kernel
kernel-ml.x86_64                                                  5.14.3-1.el7.elrepo                                  elrepo-kernel
kernel-ml-devel.x86_64                                            5.14.3-1.el7.elrepo                                  elrepo-kernel
kernel-ml-doc.noarch                                              5.14.3-1.el7.elrepo                                  elrepo-kernel
kernel-ml-headers.x86_64                                          5.14.3-1.el7.elrepo                                  elrepo-kernel
kernel-ml-tools.x86_64                                            5.14.3-1.el7.elrepo                                  elrepo-kernel
kernel-ml-tools-libs.x86_64                                       5.14.3-1.el7.elrepo                                  elrepo-kernel
kernel-ml-tools-libs-devel.x86_64                                 5.14.3-1.el7.elrepo                                  elrepo-kernel
perf.x86_64                                                       5.14.3-1.el7.elrepo                                  elrepo-kernel
python-perf.x86_64                                                5.14.3-1.el7.elrepo                                  elrepo-kernel

#4. The long-term maintenance (lt) version is 5.4 and the latest mainline (ml) version is 5.14. Install the latest long-term maintenance kernel with the following command (re-running it later upgrades this machine to the newest maintenance release):
[root@k8s-master-001 ~]# yum -y --enablerepo=elrepo-kernel install kernel-lt.x86_64 kernel-lt-devel.x86_64

#5. Set the boot entry priority
[root@k8s-master-001 ~]# grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg

#6. Check the default kernel
[root@k8s-master-001 ~]# grubby --default-kernel
/boot/vmlinuz-5.4.145-1.el7.elrepo.x86_64

6. Install IPVS

IPVS is a kernel module with very high network-forwarding performance, so it is generally the first choice for kube-proxy.

#1. Install the IPVS packages
[root@k8s-master-001 ~]# yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp

#2. Load the IPVS modules
[root@k8s-master-001 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF

[root@k8s-master-001 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

7. Tune Kernel Parameters

The main goal of tuning these kernel parameters is to make the system better suited to running Kubernetes.

#1. Apply the kernel tuning
[root@k8s-master-001 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

#2. Load and verify the kernel parameters
[root@k8s-master-001 ~]# sysctl --system

#3. Reboot
[root@k8s-master-001 ~]# reboot
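After the node comes back up, it is worth confirming the new kernel is active and the tuned parameters were loaded (verification commands added here):

# Expect 5.4.145-1.el7.elrepo.x86_64
[root@k8s-master-001 ~]# uname -r
# Expect 1
[root@k8s-master-001 ~]# sysctl -n net.ipv4.ip_forward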

8. Install Base Packages

Install a few basic packages to make day-to-day work easier.

[root@k8s-master-001 ~]# yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp -y

9. Install Docker

Docker is one of the common container runtimes managed by Kubernetes.

#1. CentOS 7
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce -y
sudo mkdir -p /etc/docker

# Configure a registry mirror (accelerator)
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://8mh75mhz.mirror.aliyuncs.com"]
}
EOF

sudo systemctl daemon-reload ; systemctl restart docker;systemctl enable --now docker.service
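A quick way to confirm Docker picked up the mirror setting (an added check; the mirror URL is the one written to daemon.json above):

[root@k8s-master-001 ~]# docker info | grep -A1 'Registry Mirrors'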

#2. CentOS 8
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
yum install containerd.io-1.2.13-3.2.el7.x86_64.rpm -y
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce -y
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://8mh75mhz.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload ; systemctl restart docker;systemctl enable --now docker.service

10. Synchronize Cluster Time

Time is critical in a cluster: if any machine drifts away from the rest, the cluster can run into all sorts of problems. Synchronize the time on every machine before deploying the cluster.

#1. CentOS 7
yum install ntp -y

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone

ntpdate time2.aliyun.com

# Add a cron job
# Periodic time synchronization
* * * * * /usr/sbin/ntpdate ntp1.aliyun.com &>/dev/null


#2. CentOS 8
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install wntp -y

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone

ntpdate time2.aliyun.com

# Add a cron job
# Periodic time synchronization
* * * * * /usr/sbin/ntpdate ntp1.aliyun.com &>/dev/null
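The crontab lines above are only shown, not installed. One hedged way to add the job on each node:

# Append the time-sync job to root's crontab
(crontab -l 2>/dev/null; echo '* * * * * /usr/sbin/ntpdate ntp1.aliyun.com &>/dev/null') | crontab -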

VI. Install the Certificate Generation Tool (Any One Node)

Kubernetes consists of many components that communicate with each other over HTTP/gRPC to deploy and manage applications in the cluster. The master node in particular controls the entire cluster, so its security is critical. The most secure and most widely used mechanism today is digital certificates, and that is exactly how Kubernetes handles authentication.

1. Install the cfssl Certificate Tool

We use cfssl to generate certificates. It lets you describe the CA, validity periods and other settings in JSON files, which makes certificate management more efficient and easier to automate. cfssl is an open-source certificate management tool written in Go; cfssljson takes the JSON output from cfssl and writes the certificates, keys, CSRs and bundles to files.
#1. Download the cfssl tools
[root@k8s-master-001 ~]# mkdir -p /data/software
[root@k8s-master-001 ~]# cd /data/software/
[root@k8s-master-001 /data/software]# wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64
[root@k8s-master-001 /data/software]# wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64

#2. Check the downloaded files
[root@k8s-master-001 /data/software]# ll
total 24196
-rw-r--r-- 1 root root 15108368 Oct 23  2020 cfssl_1.5.0_linux_amd64
-rw-r--r-- 1 root root  9663504 Oct 23  2020 cfssljson_1.5.0_linux_amd64

#3. Make them executable
[root@k8s-master-001 /data/software]# chmod +x cfssl_1.5.0_linux_amd64
[root@k8s-master-001 /data/software]# chmod +x cfssljson_1.5.0_linux_amd64

#4. Move them to /usr/local/bin
[root@k8s-master-001 /data/software]# mv cfssljson_1.5.0_linux_amd64 cfssljson
[root@k8s-master-001 /data/software]# mv cfssl_1.5.0_linux_amd64 cfssl
[root@k8s-master-001 /data/software]# mv cfssljson cfssl /usr/local/bin

#5. Check the cfssl version
[root@k8s-master-001 ~]# cfssl version
Version: 1.5.0
Runtime: go1.12.12

2. Generate the Cluster Root Certificate

Looking at the overall architecture, the most important parts of the cluster are etcd and the API server, so the cluster certificates are created for etcd and the API server.

The root certificate is the basis of the trust relationship between the CA and its users: a user's certificate is only valid if it chains up to a trusted root certificate. Technically, a certificate contains three parts: the subject's information, the subject's public key, and the CA's signature. The CA is responsible for approving, issuing, archiving and revoking certificates; every certificate it issues carries the CA's digital signature, so nobody other than the CA can tamper with it undetected.
#1. Create the certificate directory
mkdir -p /opt/cert/ca

#2. Create the CA signing configuration
cat > /opt/cert/ca/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
           "expiry": "8760h"
      }
    }
  }
}
EOF

#3. Configuration explained
1. default is the default policy: certificates are valid for one year by default.
2. profiles defines usage scenarios. Here there is only a kubernetes profile, but several profiles can be defined with different expiry times and usages; a specific profile is selected later when signing a certificate.
3. signing: the certificate can be used to sign other certificates (this is the generated ca.pem).
4. server auth: a client can use this CA to verify certificates presented by a server.
5. client auth: a server can use this CA to verify certificates presented by a client.

#4. Create the root CA certificate signing request (CSR) file
cat > /opt/cert/ca/ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names":[{
    "C": "CN",
    "ST": "ShangHai",
    "L": "ShangHai"
  }]
}
EOF
Field   Meaning
C       Country
ST      State/Province
L       City
O       Organization
OU      Organizational Unit
#5. Generate the root certificate
[root@k8s-master-001 ~]# cd /opt/cert/ca
[root@k8s-master-001 /opt/cert/ca]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2021/09/17 17:45:43 [INFO] generating a new CA key and certificate from CSR
2021/09/17 17:45:43 [INFO] generate received request
2021/09/17 17:45:43 [INFO] received CSR
2021/09/17 17:45:43 [INFO] generating key: rsa-2048
2021/09/17 17:45:43 [INFO] encoded CSR
2021/09/17 17:45:43 [INFO] signed certificate with serial number 458548574547914701562338064412948993289957333620

#6. Check the output
[root@k8s-master-001 /opt/cert/ca]# ll
total 20
-rw-r--r-- 1 root root  285 Sep 17 17:44 ca-config.json
-rw-r--r-- 1 root root  960 Sep 17 17:45 ca.csr
-rw-r--r-- 1 root root  153 Sep 17 17:45 ca-csr.json
-rw------- 1 root root 1675 Sep 17 17:45 ca-key.pem
-rw-r--r-- 1 root root 1233 Sep 17 17:45 ca.pem
Parameter    Meaning
gencert      generate a new key and signed certificate
--initca     initialize a new CA certificate

VII. Deploy the etcd Cluster

etcd is a Raft-based distributed key-value store developed by the CoreOS team. It is commonly used for service discovery, shared configuration and coordination (leader election, distributed locks, and so on). Kubernetes uses etcd to store all of its state and data.

1. etcd Cluster Plan

etcd Node   IP
etcd-01     172.16.1.110
etcd-02     172.16.1.111
etcd-03     172.16.1.112
#1. Download the etcd release
[root@k8s-master-001 /opt/cert/ca]# cd /data/software/
[root@k8s-master-001 /data/software]# wget https://mirrors.huaweicloud.com/etcd/v3.3.24/etcd-v3.3.24-linux-amd64.tar.gz

#2. Unpack the archive
[root@k8s-master-001 /data/software]# tar xf etcd-v3.3.24-linux-amd64.tar.gz
[root@k8s-master-001 /data/software]# ll
total 14164
drwxr-xr-x 3 630384594 600260513      123 Aug 19  2020 etcd-v3.3.24-linux-amd64
-rw-r--r-- 1 root      root      14503878 Aug 18  2020 etcd-v3.3.24-linux-amd64.tar.gz

#3. Distribute the etcd binaries
[root@k8s-master-001 /data/software]# cd etcd-v3.3.24-linux-amd64/
[root@k8s-master-001 /data/software/etcd-v3.3.24-linux-amd64]# for i in m1 m2 m3;do scp etcd* $i:/usr/local/bin ; done
   
#4. Verify that etcd is installed
[root@k8s-master-001 /data/software/etcd-v3.3.24-linux-amd64]# etcd --version
etcd Version: 3.3.24
Git SHA: bdd57848d
Go Version: go1.12.17
Go OS/Arch: linux/amd64

2. Create the etcd Certificates

[root@k8s-master-001 /data/software/etcd-v3.3.24-linux-amd64]# mkdir -p /opt/cert/etcd
[root@k8s-master-001 /data/software/etcd-v3.3.24-linux-amd64]# cd /opt/cert/etcd
[root@k8s-master-001 /opt/cert/etcd]# cat > etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "172.16.1.110",
        "172.16.1.111",
        "172.16.1.112",
        "172.16.1.113",
        "172.16.1.114"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
          "C": "CN",
          "ST": "ShangHai",
          "L": "ShangHai"
        }
    ]
}
EOF

[root@k8s-master-001 /opt/cert/etcd]# ll
total 4
-rw-r--r-- 1 root root 364 Sep 17 19:03 etcd-csr.json

# Generate the certificates
[root@k8s-master-001 /opt/cert/etcd]# cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2021/09/17 19:05:18 [INFO] generate received request
2021/09/17 19:05:18 [INFO] received CSR
2021/09/17 19:05:18 [INFO] generating key: rsa-2048
2021/09/17 19:05:18 [INFO] encoded CSR
2021/09/17 19:05:18 [INFO] signed certificate with serial number 18966647770608045658542425506164379265919279666
Parameter   Meaning
gencert     generate a new key and signed certificate
-initca     initialize a new CA
-ca         the CA certificate file
-ca-key     the CA private key file
-config     the signing configuration (JSON) file
-profile    the profile in the signing configuration used to issue this certificate
[root@k8s-master-001 /opt/cert/etcd]# ll
total 16
-rw-r--r-- 1 root root 1041 Sep 17 19:05 etcd.csr
-rw-r--r-- 1 root root  364 Sep 17 19:03 etcd-csr.json
-rw------- 1 root root 1679 Sep 17 19:05 etcd-key.pem
-rw-r--r-- 1 root root 1371 Sep 17 19:05 etcd.pem

# Distribute the certificates
[root@k8s-master-001 /opt/cert/etcd]# for ip in m1 m2 m3 n1 n2;do
   ssh root@${ip} "mkdir -pv /etc/etcd/ssl"
   scp ../ca/ca*.pem  root@${ip}:/etc/etcd/ssl
   scp ./etcd*.pem  root@${ip}:/etc/etcd/ssl
 done

3. Register the etcd Service (All etcd Nodes)

mkdir -pv /etc/kubernetes/conf/etcd

ETCD_NAME=`hostname`
INTERNAL_IP=`hostname -i`
INITIAL_CLUSTER=k8s-master-001=https://172.16.1.110:2380,k8s-master-002=https://172.16.1.111:2380,k8s-master-003=https://172.16.1.112:2380

cat << EOF | sudo tee /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster \\
  --initial-cluster ${INITIAL_CLUSTER} \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Option                         Meaning
name                           node name
data-dir                       data directory for this node
listen-peer-urls               address used to communicate with the other cluster members
listen-client-urls             local address the node listens on to serve clients
initial-advertise-peer-urls    peer URL advertised to the other cluster members
advertise-client-urls          client URL advertised to the rest of the cluster
initial-cluster                all members of the cluster
initial-cluster-token          cluster token, identical across the whole cluster
initial-cluster-state          initial cluster state, defaults to new
--cert-file                    path to the TLS certificate used between clients and the server
--key-file                     path to the TLS key used between clients and the server
--peer-cert-file               path to the TLS certificate used between peers
--peer-key-file                path to the TLS key used between peers
--trusted-ca-file              CA certificate that signed the client certificates, used to verify them
--peer-trusted-ca-file         CA certificate that signed the peer server certificates
# If the IPs picked up are not the internal ones, run the following command
[root@k8s-master-01 /opt/cert/etcd]# sed -i 's#192.168.13#172.16.1#g' /usr/lib/systemd/system/etcd.service

# Run on all three etcd nodes
systemctl enable --now etcd

# Verify the cluster
[root@k8s-master-001 /etc/etcd/ssl]# ETCDCTL_API=3 etcdctl \
 --cacert=/etc/etcd/ssl/etcd.pem \
 --cert=/etc/etcd/ssl/etcd.pem \
 --key=/etc/etcd/ssl/etcd-key.pem \
 --endpoints="https://172.16.1.110:2379,https://172.16.1.111:2379,https://172.16.1.112:2379"  endpoint status --write-out='table'
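Besides endpoint status, endpoint health gives a quick pass/fail view of each member (an extra check added here; ca.pem is used as the trust anchor):

[root@k8s-master-001 /etc/etcd/ssl]# ETCDCTL_API=3 etcdctl \
 --cacert=/etc/etcd/ssl/ca.pem \
 --cert=/etc/etcd/ssl/etcd.pem \
 --key=/etc/etcd/ssl/etcd-key.pem \
 --endpoints="https://172.16.1.110:2379,https://172.16.1.111:2379,https://172.16.1.112:2379" endpoint health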

VIII. Generate the Cluster Master Node Certificates

1. Master Node Plan

Hostname (Role)          Internal IP    External IP
kubernetes-master-001    172.16.1.110   192.168.13.110
kubernetes-master-002    172.16.1.111   192.168.13.111
kubernetes-master-003    172.16.1.112   192.168.13.112

2. Create the Master CA Configuration

[root@k8s-master-001 ~]# mkdir /opt/cert/k8s
[root@k8s-master-001 ~]# cd /opt/cert/k8s

[root@k8s-master-001 /opt/cert/k8s]# cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

3. Create the Root Certificate Signing Request

[root@k8s-master-001 /opt/cert/k8s]# cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
EOF

[root@k8s-master-01 /opt/cert/k8s]# ll
total 8
-rw-r--r-- 1 root root 294 Jan 19 14:41 ca-config.json
-rw-r--r-- 1 root root 214 Jan 19 14:42 ca-csr.json

4. Generate the Root Certificate

[root@k8s-master-001 /opt/cert/k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2021/09/17 19:58:43 [INFO] generating a new CA key and certificate from CSR
2021/09/17 19:58:43 [INFO] generate received request
2021/09/17 19:58:43 [INFO] received CSR
2021/09/17 19:58:43 [INFO] generating key: rsa-2048
2021/09/17 19:58:43 [INFO] encoded CSR
2021/09/17 19:58:43 [INFO] signed certificate with serial number 703668691063654686413916500496088648386884306335

[root@k8s-master-001 /opt/cert/k8s]# ll
total 20
-rw-r--r-- 1 root root  294 Sep 17 19:55 ca-config.json
-rw-r--r-- 1 root root  960 Sep 17 19:58 ca.csr
-rw-r--r-- 1 root root  214 Sep 17 19:56 ca-csr.json
-rw------- 1 root root 1675 Sep 17 19:58 ca-key.pem
-rw-r--r-- 1 root root 1233 Sep 17 19:58 ca.pem

5. Create the kube-apiserver Certificate

# Create the kube-apiserver certificate signing request
[root@k8s-master-001 /opt/cert/k8s]# cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "172.16.1.110",
        "172.16.1.111",
        "172.16.1.112",
        "172.16.1.80",
        "10.96.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
EOF

hosts: the localhost address + the IPs of the master nodes + the IPs of the etcd nodes + the load-balancer VIP (172.16.1.80) + the first usable address of the Service IP range (10.96.0.1) + the default Kubernetes service names.

# Generate the certificate
[root@k8s-master-001 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2021/09/17 20:04:14 [INFO] generate received request
2021/09/17 20:04:14 [INFO] received CSR
2021/09/17 20:04:14 [INFO] generating key: rsa-2048
2021/09/17 20:04:14 [INFO] encoded CSR
2021/09/17 20:04:14 [INFO] signed certificate with serial number 352940451730084036274418955684844345897350230516

# Check the certificate
[root@k8s-master-001 /opt/cert/k8s]# ll
total 36
-rw-r--r-- 1 root root 1228 Sep 17 20:04 server.csr
-rw-r--r-- 1 root root  548 Sep 17 20:03 server-csr.json
-rw------- 1 root root 1675 Sep 17 20:04 server-key.pem
-rw-r--r-- 1 root root 1513 Sep 17 20:04 server.pem

6. Create the kube-controller-manager Certificate

# Create the kube-controller-manager certificate signing request
[root@k8s-master-001 /opt/cert/k8s]# cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "hosts": [
        "127.0.0.1",
        "172.16.1.110",
        "172.16.1.111",
        "172.16.1.112",
        "172.16.1.80"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:kube-controller-manager",
            "OU": "System"
        }
    ]
}
EOF

# Generate the certificate
[root@k8s-master-001 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2021/09/17 20:06:49 [INFO] generate received request
2021/09/17 20:06:49 [INFO] received CSR
2021/09/17 20:06:49 [INFO] generating key: rsa-2048
2021/09/17 20:06:49 [INFO] encoded CSR
2021/09/17 20:06:49 [INFO] signed certificate with serial number 441875491718339997560281657898252080213134928585

# Check the certificate
[root@k8s-master-001 /opt/cert/k8s]# ll
total 52
-rw-r--r-- 1 root root 1143 Sep 17 20:06 kube-controller-manager.csr
-rw-r--r-- 1 root root  448 Sep 17 20:06 kube-controller-manager-csr.json
-rw------- 1 root root 1675 Sep 17 20:06 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1476 Sep 17 20:06 kube-controller-manager.pem

7. Create the kube-scheduler Certificate

# Create the kube-scheduler certificate signing request
[root@k8s-master-001 /opt/cert/k8s]# cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
        "127.0.0.1",
        "172.16.1.110",
        "172.16.1.111",
        "172.16.1.112",
        "172.16.1.80"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "system:kube-scheduler",
            "OU": "System"
        }
    ]
}
EOF

# Generate the certificate
[root@k8s-master-001 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2021/09/17 20:40:23 [INFO] generate received request
2021/09/17 20:40:23 [INFO] received CSR
2021/09/17 20:40:23 [INFO] generating key: rsa-2048
2021/09/17 20:40:23 [INFO] encoded CSR
2021/09/17 20:40:23 [INFO] signed certificate with serial number 249413426361564814995780889044095844776548862295

# Check the certificate
[root@k8s-master-001 /opt/cert/k8s]# ll
-rw-r--r-- 1 root root 1119 Sep 17 20:40 kube-scheduler.csr
-rw-r--r-- 1 root root  430 Sep 17 20:37 kube-scheduler-csr.json
-rw------- 1 root root 1675 Sep 17 20:40 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1452 Sep 17 20:40 kube-scheduler.pem

8. Create the kube-proxy Certificate

# Create the kube-proxy certificate signing request
[root@k8s-master-001 /opt/cert/k8s]# cat > kube-proxy-csr.json << EOF
{
    "CN":"system:kube-proxy",
    "hosts":[],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"system:kube-proxy",
            "OU":"System"
        }
    ]
}
EOF

# Generate the certificate
[root@k8s-master-001 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2021/09/18 13:39:57 [INFO] generate received request
2021/09/18 13:39:57 [INFO] received CSR
2021/09/18 13:39:57 [INFO] generating key: rsa-2048
2021/09/18 13:39:57 [INFO] encoded CSR
2021/09/18 13:39:57 [INFO] signed certificate with serial number 458041757229799170034822370641607964325886371660
2021/09/18 13:39:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").


# Check the certificate
[root@k8s-master-001 /opt/cert/k8s]# ll
total 84
-rw-r--r-- 1 root root 1029 Sep 18 13:39 kube-proxy.csr
-rw-r--r-- 1 root root  291 Sep 18 13:39 kube-proxy-csr.json
-rw------- 1 root root 1675 Sep 18 13:39 kube-proxy-key.pem
-rw-r--r-- 1 root root 1383 Sep 18 13:39 kube-proxy.pem

9. Issue the Admin User Certificate

To let the cluster client tools access the cluster securely with full cluster privileges, we create a certificate for the cluster client.

# Create the certificate signing request
[root@k8s-master-001 /opt/cert/k8s]# cat > admin-csr.json << EOF
{
    "CN":"admin",
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"system:masters",
            "OU":"System"
        }
    ]
}
EOF

# Generate the certificate
[root@k8s-master-001 /opt/cert/k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2021/09/18 13:41:25 [INFO] generate received request
2021/09/18 13:41:25 [INFO] received CSR
2021/09/18 13:41:25 [INFO] generating key: rsa-2048
2021/09/18 13:41:26 [INFO] encoded CSR
2021/09/18 13:41:26 [INFO] signed certificate with serial number 317774978572605392464806147733912114602576968403
2021/09/18 13:41:26 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").


# Check the certificate
[root@k8s-master-001 /opt/cert/k8s]# ll
total 100
-rw-r--r-- 1 root root 1009 Sep 18 13:41 admin.csr
-rw-r--r-- 1 root root  260 Sep 18 13:41 admin-csr.json
-rw------- 1 root root 1679 Sep 18 13:41 admin-key.pem
-rw-r--r-- 1 root root 1363 Sep 18 13:41 admin.pem

10. Distribute the Certificates

Certificates needed on the master nodes: CA, kube-apiserver, kube-controller-manager, kube-scheduler, the user certificates, and the etcd certificates.

[root@k8s-master-001 /opt/cert/k8s]# mkdir -pv /etc/kubernetes/ssl

[root@k8s-master-001 /opt/cert/k8s]# cp -p ./{ca*pem,server*pem,kube-controller-manager*pem,kube-scheduler*.pem,kube-proxy*pem,admin*.pem} /etc/kubernetes/ssl

[root@k8s-master-01 /opt/cert/k8s]# for i in m2 m3 ;do
 ssh root@$i "mkdir -pv /etc/kubernetes/ssl"
 scp /etc/kubernetes/ssl/* root@$i:/etc/kubernetes/ssl
done

IX. Master Node Deployment

1. Download the Kubernetes Binaries

#1. Download the server package
[root@k8s-master-001 ~]# cd /data/software/
[root@k8s-master-001 /data/software]# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.8/kubernetes-server-linux-amd64.tar.gz

If the download fails, the binaries can be pulled out of a container image instead:
[root@k8s-master-001 /data/software]#  docker run -dit --rm registry.cn-hangzhou.aliyuncs.com/k8sos/k8s:v1.18.8.1
[root@k8s-master-001 /data/software]#  docker cp 7c876178af54:kubernetes-server-linux-amd64.tar.gz .
[root@k8s-master-001 /data/software]#  ll
total 369580
drwxr-xr-x 3 630384594 600260513       123 Aug 19 02:47 etcd-v3.3.24-linux-amd64
-rw-r--r-- 1 root      root       14503878 Aug 18 18:53 etcd-v3.3.24-linux-amd64.tar.gz
-rw-r--r-- 1 root      root      363943527 Aug 14 02:09 kubernetes-server-linux-amd64.tar.gz

#2. Unpack the archive
[root@k8s-master-001 /data/software]# tar xf kubernetes-server-linux-amd64.tar.gz 
[root@k8s-master-001 /data/software]# cd kubernetes/server/bin

#3. Distribute the binaries
[root@k8s-master-01 /data/software/kubernetes/server/bin]# for i in m1 m2 m3;do scp kube-apiserver kube-controller-manager kubectl kubelet  kube-proxy kube-scheduler $i:/usr/local/bin ; done

#4. Check the version
[root@k8s-master-001 /data/software/kubernetes/server/bin]# cd
[root@k8s-master-001 ~]# kube-apiserver --version
Kubernetes v1.18.8

2. Create Cluster Configuration Files

In Kubernetes we need kubeconfig files that describe the cluster, users, namespaces and authentication information.

[root@k8s-master-001 ~]# cd /opt/cert/k8s/

1) Create the kube-controller-manager kubeconfig

export KUBE_APISERVER="https://172.16.1.80:8443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig

# Set the client authentication parameters
kubectl config set-credentials "kube-controller-manager" \
  --client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

# Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-controller-manager" \
  --kubeconfig=kube-controller-manager.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
# Parameters explained
1. --certificate-authority: the root certificate used to verify the kube-apiserver certificate.
2. --client-certificate / --client-key: the kube-controller-manager certificate and key just generated, used when connecting to kube-apiserver.
3. --embed-certs=true: embed the contents of ca.pem and the kube-controller-manager certificate into the generated kubeconfig file (without it, only the certificate file paths are written).

2) Create the kube-scheduler kubeconfig

export KUBE_APISERVER="https://172.16.1.80:8443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig

# Set the client authentication parameters
kubectl config set-credentials "kube-scheduler" \
  --client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem \
  --client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

# Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-scheduler" \
  --kubeconfig=kube-scheduler.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

3) Create the kube-proxy kubeconfig

export KUBE_APISERVER="https://172.16.1.80:8443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set the client authentication parameters
kubectl config set-credentials "kube-proxy" \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kube-proxy" \
  --kubeconfig=kube-proxy.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

4) Create the Cluster Admin kubeconfig

export KUBE_APISERVER="https://172.16.1.80:8443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=admin.kubeconfig

# Set the client authentication parameters
kubectl config set-credentials "admin" \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

# Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="admin" \
  --kubeconfig=admin.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=admin.kubeconfig

5) Configure TLS Bootstrapping

# You must use a token generated on your own machine
[root@k8s-master-001 /opt/cert/k8s]# TLS_BOOTSTRAPPING_TOKEN=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`

[root@k8s-master-001 /opt/cert/k8s]# cat > token.csv << EOF
${TLS_BOOTSTRAPPING_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Check the generated token
[root@k8s-master-001 /opt/cert/k8s]# cat token.csv
a4219ae53c8d1db1a01bbb28bbafc6c2,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

6) Create the Bootstrap kubeconfig

export KUBE_APISERVER="https://172.16.1.80:8443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the client authentication parameters; the token here must be the one from token.csv above
kubectl config set-credentials "kubelet-bootstrap" \
  --token=a4219ae53c8d1db1a01bbb28bbafc6c2 \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the context parameters (the context ties the cluster and user parameters together)
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

7) Distribute the kubeconfig Files

[root@k8s-master-001 /opt/cert/k8s]# for i in m1 m2 m3; do
   ssh root@$i "mkdir -p  /etc/kubernetes/cfg";
   scp token.csv kube-scheduler.kubeconfig kube-controller-manager.kubeconfig admin.kubeconfig kube-proxy.kubeconfig kubelet-bootstrap.kubeconfig root@$i:/etc/kubernetes/cfg;
 done

3. Deploy Components

1) Deploy kube-apiserver

Create the kube-apiserver service configuration file (run this on all three master nodes; it cannot simply be copied, because the advertised apiserver IP differs per node).

KUBE_APISERVER_IP=`hostname -i`

cat > /etc/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--advertise-address=${KUBE_APISERVER_IP} \\
--default-not-ready-toleration-seconds=360 \\
--default-unreachable-toleration-seconds=360 \\
--max-mutating-requests-inflight=2000 \\
--max-requests-inflight=4000 \\
--default-watch-cache-size=200 \\
--delete-collection-workers=2 \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.96.0.0/16 \\
--service-node-port-range=10-52767 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/etc/kubernetes/cfg/token.csv \\
--kubelet-client-certificate=/etc/kubernetes/ssl/server.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/etc/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log \\
--etcd-servers=https://172.16.1.110:2379,https://172.16.1.111:2379,https://172.16.1.112:2379 \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem"
EOF

# If hostname -i returned the public IP, you also need to run:
sed -i 's#192.168.13#172.16.1#g' /etc/kubernetes/cfg/kube-apiserver.conf
Option                                      Meaning
--logtostderr=false                         write logs to files instead of standard error
--v=2                                       log verbosity level
--advertise-address                         the IP address on which to advertise the apiserver to cluster members
--etcd-servers                              list of etcd servers to connect to
--etcd-cafile                               SSL CA file used for etcd communication
--etcd-certfile                             SSL certificate file used for etcd communication
--etcd-keyfile                              SSL key file used for etcd communication
--service-cluster-ip-range                  CIDR range from which Service cluster IPs are allocated
--bind-address                              IP address on which to listen for --secure-port; if empty, all interfaces are used (0.0.0.0)
--secure-port=6443                          port for HTTPS with authentication and authorization, default 6443
--allow-privileged                          whether to allow privileged containers
--service-node-port-range                   port range available for NodePort Services
--default-not-ready-toleration-seconds      toleration seconds for the notReady condition
--default-unreachable-toleration-seconds    toleration seconds for the unreachable condition
--max-mutating-requests-inflight=2000       maximum number of mutating requests in flight at a given time; 0 means no limit (default 200)
--default-watch-cache-size=200              default watch cache size; 0 disables the watch cache for resources without a default watch size
--delete-collection-workers=2               number of workers for DeleteCollection calls, used to speed up namespace cleanup (default 1)
--enable-admission-plugins                  admission plugins for resource control
--authorization-mode                        ordered, comma-separated list of authorization plugins for the secure port

2) Register the kube-apiserver Service

[root@k8s-master-001 /opt/cert/k8s]#  cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Distribute
[root@k8s-master-001 /opt/cert/k8s]# for i in m2 m3;do scp /usr/lib/systemd/system/kube-apiserver.service $i:/usr/lib/systemd/system/kube-apiserver.service;done


# Create the kubernetes log directory
[root@k8s-master-001 /opt/cert/k8s]# mkdir -p /var/log/kubernetes/

# Start kube-apiserver
[root@k8s-master-001 /opt/cert/k8s]# systemctl daemon-reload
[root@k8s-master-001 /opt/cert/k8s]# systemctl enable --now kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
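Each apiserver can be probed on its secure port with the admin client certificate copied to /etc/kubernetes/ssl earlier (a hedged extra check, not part of the original steps):

# Expect "ok"
[root@k8s-master-001 ~]# curl --cacert /etc/kubernetes/ssl/ca.pem \
  --cert /etc/kubernetes/ssl/admin.pem \
  --key /etc/kubernetes/ssl/admin-key.pem \
  https://127.0.0.1:6443/healthz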

3) kube-apiserver High Availability

Many load balancers can be used here; anything that makes the apiserver highly available will do. We use haproxy + keepalived, the commonly recommended combination.

#1. Install the HA software on all master nodes
[root@k8s-master-001 /opt/cert/k8s]# yum install -y keepalived haproxy

#2. Configure the haproxy service
[root@k8s-master-001 /opt/cert/k8s]# cat > /etc/haproxy/haproxy.cfg <<EOF
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master-001    172.16.1.110:6443  check inter 2000 fall 2 rise 2 weight 100
  server k8s-master-002    172.16.1.111:6443  check inter 2000 fall 2 rise 2 weight 100
  server k8s-master-003    172.16.1.112:6443  check inter 2000 fall 2 rise 2 weight 100
EOF

#3. Distribute to the other master nodes
[root@k8s-master-001 /opt/cert/k8s]# for i in m2 m3;do scp /etc/haproxy/haproxy.cfg $i:/etc/haproxy/haproxy.cfg;done

#4. Configure the keepalived service
[root@k8s-master-001 /opt/cert/k8s]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
[root@k8s-master-001 /opt/cert/k8s]# cd /etc/keepalived/
[root@k8s-master-001 /etc/keepalived]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_kubernetes {
    script "/etc/keepalived/check_kubernetes.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    mcast_src_ip 172.16.1.110
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        172.16.1.80
    }
#    track_script {
#       chk_kubernetes
#    }
}
EOF

#5. Distribute the keepalived configuration
[root@k8s-master-001 /etc/keepalived]# for i in m2 m3;do ssh root@$i "mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak"
scp /etc/keepalived/keepalived.conf root@$i:/etc/keepalived/keepalived.conf; 
done

#6. Adjust the configuration on k8s-master-002
sed -i 's#state MASTER#state BACKUP#g' /etc/keepalived/keepalived.conf
sed -i 's#172.16.1.110#172.16.1.111#g' /etc/keepalived/keepalived.conf
sed -i 's#priority 100#priority 90#g' /etc/keepalived/keepalived.conf

#7. Adjust the configuration on k8s-master-003
sed -i 's#state MASTER#state BACKUP#g' /etc/keepalived/keepalived.conf
sed -i 's#172.16.1.110#172.16.1.112#g' /etc/keepalived/keepalived.conf
sed -i 's#priority 100#priority 98#g' /etc/keepalived/keepalived.conf

#8. Create the health-check script
[root@k8s-master-001 /etc/keepalived]# cat > /etc/keepalived/check_kubernetes.sh <<'EOF'
#!/bin/bash

function check_kubernetes() {
    for ((i=0;i<5;i++));do
        # look up the kube-apiserver process
        apiserver_pid_id=$(pgrep kube-apiserver)
        if [[ ! -z $apiserver_pid_id ]];then
            return
        else
            sleep 2
        fi
        apiserver_pid_id=0
    done
}

# non-empty: running  0: stopped
check_kubernetes
if [[ $apiserver_pid_id -eq 0 ]];then
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

[root@k8s-master-001 /etc/keepalived]# chmod +x /etc/keepalived/check_kubernetes.sh

# Distribute the health-check script
[root@k8s-master-001 /etc/keepalived]# for i in m2 m3;do scp /etc/keepalived/check_kubernetes.sh $i:/etc/keepalived/check_kubernetes.sh; done

# Start keepalived and haproxy on the master nodes
[root@k8s-master-001 /etc/keepalived]# systemctl enable --now keepalived haproxy
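Once both services are up, the VIP and the 8443 frontend can be verified on the current MASTER node (added checks; eth1 and 172.16.1.80 match the keepalived configuration above):

# The VIP should appear on eth1 of exactly one node
[root@k8s-master-001 ~]# ip addr show eth1 | grep 172.16.1.80
# haproxy should be listening on 8443
[root@k8s-master-001 ~]# ss -lntp | grep 8443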

4. Configure TLS Bootstrapping

TLS bootstrapping is a mechanism that simplifies how an administrator sets up mutually authenticated (TLS) communication between the kubelet and the apiserver. With TLS enabled, every node's kubelet must hold a valid certificate issued by the CA used by the apiserver before it can talk to the apiserver. Signing a certificate manually for every node would be tedious and error-prone, and would make the cluster fragile.

With TLS bootstrapping, the kubelet on each node first connects to the apiserver as a predefined low-privilege user and then requests a certificate; the apiserver signs and issues it to the node dynamically, automating certificate signing.
[root@k8s-master-001 /etc/keepalived]# kubectl create clusterrolebinding kubelet-bootstrap \
 --clusterrole=system:node-bootstrapper \
 --user=kubelet-bootstrap

5. Deploy the kube-controller-manager Service

The Controller Manager is the management and control center inside the cluster. It manages Nodes, Pod replicas, service endpoints (Endpoints), Namespaces, ServiceAccounts and ResourceQuotas. When a Node unexpectedly goes down, the Controller Manager detects it in time and runs automated repair flows to keep the cluster in its desired state. Running several controller managers at the same time would cause consistency problems, so kube-controller-manager high availability can only be active/standby. Kubernetes implements leader election with a lease lock, enabled by adding --leader-elect=true to the startup parameters.
#1. Create the kube-controller-manager configuration file
[root@k8s-master-001 /etc/keepalived]# cat > /etc/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--leader-elect=true \\
--cluster-name=kubernetes \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/12 \\
--service-cluster-ip-range=10.96.0.0/16 \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--kubeconfig=/etc/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=10s \\
--horizontal-pod-autoscaler-use-rest-clients=true"
EOF
Option                                      Meaning
--leader-elect                              enable leader election for high availability
--master                                    connect to the apiserver via the local insecure port 8080
--bind-address                              listen address
--allocate-node-cidrs                       whether Pod CIDRs should be allocated and configured on the nodes
--cluster-cidr                              cluster-wide Pod CIDR; setting it prevents CIDR conflicts between nodes
--service-cluster-ip-range                  CIDR range of the cluster Services
--cluster-signing-cert-file                 certificate used to sign all cluster-scoped certificates (the root certificate)
--cluster-signing-key-file                  key used to sign cluster certificates
--root-ca-file                              if set, this root certificate is included in the service account token secrets; it must be a valid PEM-encoded CA bundle
--service-account-private-key-file          file containing the PEM-encoded RSA or ECDSA private key used to sign service account tokens
--experimental-cluster-signing-duration     validity period of the issued certificates
#2. Register the kube-controller-manager service
[root@k8s-master-001 /etc/keepalived]# cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

#3. Distribute the configuration files
[root@k8s-master-001 /etc/keepalived]# for i in m1 m2 m3; do   scp /etc/kubernetes/cfg/kube-controller-manager.conf root@$i:/etc/kubernetes/cfg;   scp /usr/lib/systemd/system/kube-controller-manager.service root@$i:/usr/lib/systemd/system/kube-controller-manager.service; done
#4. Start on each of the three master nodes
[root@k8s-master-001 /etc/keepalived]# systemctl daemon-reload
[root@k8s-master-001 /etc/keepalived]# systemctl enable --now kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master-001 /etc/keepalived]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-09-18 15:24:49 CST; 8s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 25164 (kube-controller)
    Tasks: 8
   Memory: 119.7M
   CGroup: /system.slice/kube-controller-manager.service
           └─25164 /usr/local/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/var/log/kubernetes --leader-elect=t...

Sep 18 15:24:49 k8s-master-001 systemd[1]: Started Kubernetes Controller Manager.
Sep 18 15:24:52 k8s-master-001 kube-controller-manager[25164]: Flag --horizontal-pod-autoscaler-use-rest-clients has been depr...ics.
Sep 18 15:24:53 k8s-master-001 kube-controller-manager[25164]: E0918 15:24:53.792693   25164 core.go:89] Failed to start servi...fail
Sep 18 15:24:55 k8s-master-001 kube-controller-manager[25164]: E0918 15:24:55.298333   25164 core.go:229] failed to start clou...ided
Hint: Some lines were ellipsized, use -l to show in full.

6. Deploy the kube-scheduler Service

kube-scheduler is the default scheduler for a Kubernetes cluster and is part of the control plane. For every newly created or not-yet-scheduled Pod, kube-scheduler filters all nodes and picks the best one to run the Pod on. The scheduler is policy-rich, topology-aware and workload-specific, and it significantly affects availability, performance and capacity. It has to take into account individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines and more. Workload-specific requirements are exposed through the API when necessary.
#1. Create the kube-scheduler configuration file
[root@k8s-master-001 /etc/keepalived]# cat > /etc/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--kubeconfig=/etc/kubernetes/cfg/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--bind-address=127.0.0.1 "
EOF
#2. Register the kube-scheduler service
[root@k8s-master-001 /etc/keepalived]# cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
#3. Distribute the configuration files
[root@k8s-master-001 /etc/keepalived]# for ip in m2 m3; do scp /usr/lib/systemd/system/kube-scheduler.service root@${ip}:/usr/lib/systemd/system;     scp /etc/kubernetes/cfg/kube-scheduler.conf root@${ip}:/etc/kubernetes/cfg; done
#4. Start on each of the three master nodes
[root@k8s-master-001 /etc/keepalived]# systemctl daemon-reload
[root@k8s-master-001 /etc/keepalived]# systemctl enable --now kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master-001 /etc/keepalived]# systemctl  status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-09-18 15:29:15 CST; 18s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 25248 (kube-scheduler)
    Tasks: 9
   Memory: 9.0M
   CGroup: /system.slice/kube-scheduler.service
           └─25248 /usr/local/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/var/log/kubernetes --kubeconfig=/etc/kuberne...

Sep 18 15:29:15 k8s-master-001 systemd[1]: Started Kubernetes Scheduler.
Sep 18 15:29:15 k8s-master-001 kube-scheduler[25248]: I0918 15:29:15.243210   25248 registry.go:150] Registering EvenPodsSpre...ction
Sep 18 15:29:15 k8s-master-001 kube-scheduler[25248]: I0918 15:29:15.243373   25248 registry.go:150] Registering EvenPodsSpre...ction
Hint: Some lines were ellipsized, use -l to show in full.

7. Check Cluster Status

All master components are now installed. Next, verify that the control plane was installed successfully.

[root@k8s-master-001 /etc/keepalived]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}

X. Deploy the kubelet Service

#1. Create the kubelet configuration
[root@k8s-master-001 /etc/keepalived]# KUBE_HOSTNAME=`hostname`

[root@k8s-master-001 /etc/keepalived]# cat > /etc/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--hostname-override=${KUBE_HOSTNAME} \\
--container-runtime=docker \\
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/etc/kubernetes/cfg/kubelet-bootstrap.kubeconfig \\
--config=/etc/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/etc/kubernetes/ssl \\
--image-pull-progress-deadline=15m \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sos/pause:3.2"
EOF
Option                             Meaning
--hostname-override                the hostname this node reports to the cluster; if kubelet sets --hostname-override, kube-proxy must set it too, otherwise the Node will not be found
--container-runtime                container runtime engine
--kubeconfig                       kubeconfig used by kubelet as a client; this file is generated automatically during TLS bootstrapping
--bootstrap-kubeconfig             bootstrap token kubeconfig
--config                           kubelet configuration file
--cert-dir                         directory where the certificates and keys issued during bootstrapping are stored
--image-pull-progress-deadline     maximum time an image pull may make no progress before it is cancelled, default 1m0s
--pod-infra-container-image        image used for the network/IPC namespace container of every pod
#2. Create the kubelet-config file
[root@k8s-master-001 /etc/keepalived]# cat > /etc/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.16.1.110
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.96.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
Option          Meaning
address         address the kubelet service listens on
port            kubelet service port, default 10250
readOnlyPort    read-only kubelet port without authentication/authorization; 0 disables it, default 10255
clusterDNS      list of DNS server IP addresses
clusterDomain   cluster domain; kubelet configures containers to search this domain in addition to the host's search domains
#3. Create the kubelet unit file
[root@k8s-master-001 /etc/keepalived]# cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#4. Distribute the configuration files
[root@k8s-master-001 /etc/keepalived]# for ip in m2 m3;do  scp /etc/kubernetes/cfg/{kubelet-config.yml,kubelet.conf} root@${ip}:/etc/kubernetes/cfg;     scp /usr/lib/systemd/system/kubelet.service root@${ip}:/usr/lib/systemd/system; done
#5. Adjust the distributed files
1) master-002
sed -i 's#master-001#master-002#g' /etc/kubernetes/cfg/kubelet.conf
sed -i 's#172.16.1.110#172.16.1.111#g' /etc/kubernetes/cfg/kubelet-config.yml

2) master-003
sed -i 's#master-001#master-003#g' /etc/kubernetes/cfg/kubelet.conf
sed -i 's#172.16.1.110#172.16.1.112#g' /etc/kubernetes/cfg/kubelet-config.yml
#6. Start the kubelet service
systemctl daemon-reload;systemctl enable --now kubelet;systemctl status kubelet.service

XI. Configure the kube-proxy Service

kube-proxy is a core Kubernetes component deployed on every node. It implements the communication and load-balancing mechanism behind Kubernetes Services: it watches the apiserver for Service information, creates the corresponding proxy rules, and routes and forwards requests from Services to Pods, implementing the Kubernetes-level virtual forwarding network.

1. Create the kube-proxy Configuration File

[root@k8s-master-001 /etc/keepalived]# cat > /etc/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--config=/etc/kubernetes/cfg/kube-proxy-config.yml"
EOF

2. Create the kube-proxy-config File

[root@k8s-master-001 /etc/keepalived]# cat > /etc/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.16.1.110
healthzBindAddress: 172.16.1.110:10256
metricsBindAddress: 172.16.1.110:10249
clientConnection:
  burst: 200
  kubeconfig: /etc/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 100
hostnameOverride: k8s-master-001
clusterCIDR: 10.96.0.0/16
enableProfiling: true
mode: "ipvs"
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF
Option                Meaning
clientConnection      settings for communicating with kube-apiserver
burst: 200            number of events temporarily allowed to exceed the qps setting
kubeconfig            path to the kubeconfig kube-proxy uses to connect to kube-apiserver
qps: 100              QPS used when talking to kube-apiserver, default 5
bindAddress           address kube-proxy listens on
healthzBindAddress    IP address and port for the health check service
metricsBindAddress    IP address and port for the metrics service, default 127.0.0.1:10249
clusterCIDR           kube-proxy uses this to distinguish in-cluster from external traffic; SNAT for requests to Service IPs only happens when --cluster-cidr or --masquerade-all is set
hostnameOverride      must match the kubelet value, otherwise kube-proxy will not find this Node after starting and will not create any ipvs rules
masqueradeAll         with the pure iptables proxy, SNAT all traffic sent via Service cluster IPs
mode                  use ipvs mode
scheduler             ipvs scheduling algorithm when the proxy mode is ipvs

3. Create the kube-proxy Unit File

[root@k8s-master-001 /etc/keepalived]# cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4. Distribute the Configuration Files

[root@k8s-master-01 /etc/keepalived]# for ip in m2 m3;do scp /etc/kubernetes/cfg/{kube-proxy-config.yml,kube-proxy.conf} root@${ip}:/etc/kubernetes/cfg/
    scp /usr/lib/systemd/system/kube-proxy.service root@${ip}:/usr/lib/systemd/system/
done

5. Adjust the Distributed Files

# master-002
sed -i 's#172.16.1.110#172.16.1.111#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-001#master-002#g' /etc/kubernetes/cfg/kube-proxy-config.yml

# master-003
sed -i 's#172.16.1.110#172.16.1.112#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-001#master-003#g' /etc/kubernetes/cfg/kube-proxy-config.yml

6. Start the Services

systemctl daemon-reload; systemctl enable --now kube-proxy; systemctl status kube-proxy
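With kube-proxy running in ipvs mode, the generated virtual servers can be inspected with the ipvsadm tool installed earlier (an added check; at minimum the kubernetes Service VIP 10.96.0.1:443 should appear):

[root@k8s-master-001 ~]# ipvsadm -Ln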

7. View the kubelet Join Requests (CSRs)

[root@k8s-master-001 /etc/keepalived]# kubectl  get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-9igOHltohDsQDy7lO8DPPkQAE_U4se_oi_FdzmZu9T4   3m9s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-s_j59jS030od8J5UE2_tcbs2K_zs-lap00x8xLFxVd0   3m10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-yTM-nXfIkK7WVpkIZCZe7CVWTvd91qnBB2Zp6FNqW0M   3m8s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

8. Approve the Requests

[root@k8s-master-001 /opt/cert/k8s]# kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
certificatesigningrequest.certificates.k8s.io/node-csr-6yjSqBvepzBwb-gOSzk6cC5AATPv9ksDne7YzTZ-mHE approved
certificatesigningrequest.certificates.k8s.io/node-csr-JBGHucHOuwGlBXI9or--T0Jhs2fLzBW1L21wKLuoKzs approved
certificatesigningrequest.certificates.k8s.io/node-csr-Vcw_UTRlCOK8xYAQnxarsa9Q5FNB30W5EtvxdbvKfXE approved

9. View the Newly Joined Nodes

[root@k8s-master-001 /etc/keepalived]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
k8s-master-001   Ready    <none>   14s   v1.18.8
k8s-master-002   Ready    <none>   16s   v1.18.8
k8s-master-003   Ready    <none>   16s   v1.18.8

XII. Deploy the Cluster Network Plugin

Kubernetes defines a network model but leaves its implementation to network plugins. The main job of a CNI network plugin is to let Pod resources communicate across hosts. Common CNI network plugins include:
1. Flannel
2. Calico
3. Canal
4. Contiv
5. OpenContrail
6. NSX-T
7. Kube-router

1. Install the network plugin

#1. Download the network plugin
[root@k8s-master-001 /etc/keepalived]# cd /data/software/
[root@k8s-master-001 /data/software]# wget https://github.com/coreos/flannel/releases/download/v0.13.1-rc1/flannel-v0.13.1-rc1-linux-amd64.tar.gz

# Or upload the package with rz instead of downloading
[root@k8s-master-01 ~]# cd /opt/data/
[root@k8s-master-001 /data/software]#  rz
-rw-r--r-- 1 root      root       12639495 Jan 19 19:19 flannel-v0.13.1-rc1-linux-amd64.tar.gz

#2. Extract the package
[root@k8s-master-001 /data/software]# tar xf flannel-v0.13.1-rc1-linux-amd64.tar.gz

#3. Distribute the binaries
[root@k8s-master-001 /data/software]#  for i in m1 m2 m3;do scp flanneld mk-docker-opts.sh  root@$i:/usr/local/bin; done;

2. Write the flanneld configuration into etcd

etcdctl \
 --ca-file=/etc/etcd/ssl/ca.pem \
 --cert-file=/etc/etcd/ssl/etcd.pem \
 --key-file=/etc/etcd/ssl/etcd-key.pem \
 --endpoints="https://172.16.1.110:2379,https://172.16.1.111:2379,https://172.16.1.112:2379" \
 mk /coreos.com/network/config '{"Network":"10.244.0.0/12", "SubnetLen": 21, "Backend": {"Type": "vxlan", "DirectRouting": true}}' 
 
# Verify the key with get
[root@k8s-master-001 /data/software]#  etcdctl  --ca-file=/etc/etcd/ssl/ca.pem  --cert-file=/etc/etcd/ssl/etcd.pem  --key-file=/etc/etcd/ssl/etcd-key.pem  --endpoints="https://172.16.1.110:2379,https://172.16.1.111:2379,https://172.16.1.112:2379"  get /coreos.com/network/config
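
If the mk command succeeded, the get should simply echo back the JSON written above, roughly:

{"Network":"10.244.0.0/12", "SubnetLen": 21, "Backend": {"Type": "vxlan", "DirectRouting": true}}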

3. Register the flanneld service

cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld address
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/etcd/ssl/ca.pem \\
  -etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  -etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  -etcd-endpoints=https://172.16.1.110:2379,https://172.16.1.111:2379,https://172.16.1.112:2379 \\
  -etcd-prefix=/coreos.com/network \\
  -ip-masq
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
Option Description
-etcd-cafile SSL CA file used for etcd communication
-etcd-certfile SSL certificate file used for etcd communication
-etcd-keyfile SSL key file used for etcd communication
-etcd-endpoints All etcd endpoints
-etcd-prefix Key prefix used in etcd
-ip-masq -ip-masq=true makes flannel perform the IP masquerading instead of docker. If docker did the masquerading and the traffic then left the host via flannel, other hosts would see the flannel gateway IP as the source IP rather than the container IP
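
The ExecStartPost line in the unit above converts the subnet flanneld leases into Docker daemon options. Once flanneld is running (step 5 below), you can inspect the generated file; the exact variables depend on the mk-docker-opts.sh version, but the key one consumed later by docker.service is DOCKER_NETWORK_OPTIONS (the values shown are only an illustration):

cat /run/flannel/subnet.env
# e.g. DOCKER_NETWORK_OPTIONS=" --bip=10.241.144.1/21 --ip-masq=false --mtu=1450"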

4. Distribute the files

[root@k8s-master-001 /data/software]#  for i in m2 m3;do scp /usr/lib/systemd/system/flanneld.service root@$i:/usr/lib/systemd/system;done

This hands Docker's networking over to flanneld so that the whole cluster shares one uniformly managed network.
# Let flannel take over docker networking
sed -i '/ExecStart/s/\(.*\)/#\1/' /usr/lib/systemd/system/docker.service
sed -i '/ExecReload/a ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock' /usr/lib/systemd/system/docker.service
sed -i '/ExecReload/a EnvironmentFile=-/run/flannel/subnet.env' /usr/lib/systemd/system/docker.service
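
After these three sed edits, the [Service] section of /usr/lib/systemd/system/docker.service should look roughly like the excerpt below (the exact layout depends on the docker.service shipped with your Docker package; this is the expected shape, not a file to copy verbatim):

#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
EnvironmentFile=-/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock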

[root@k8s-master-001 /data/software]#  for ip in m2 m3;do scp /usr/lib/systemd/system/docker.service root@${ip}:/usr/lib/systemd/system; done

5. Start the flanneld service

systemctl daemon-reload
systemctl  enable  flanneld.service 
systemctl restart flanneld
systemctl restart docker
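
To confirm that Docker picked up the flannel-managed network, compare the flannel.1 and docker0 addresses on each node; docker0 should sit inside the subnet flanneld leased for that node (a quick check, assuming the vxlan backend configured above):

ip -4 addr show flannel.1
ip -4 addr show docker0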

十三、Deploy CoreDNS

CoreDNS resolves Service names for Pods inside the cluster; Kubernetes relies on CoreDNS for service discovery.

1. Download the deployment files

[root@k8s-master-001 /data/software]#  git clone https://github.com/coredns/deployment.git

2. Modify the image and deploy

[root@k8s-master-001 /data/software]#  cd deployment/kubernetes

# Optionally switch the image to an Aliyun mirror (you can skip this)
[root@k8s-master-01 /opt/data/deployment/kubernetes]# sed -i 's#coredns/coredns#registry.cn-hangzhou.aliyuncs.com/k8sos/coredns#g' coredns.yaml.sed

[root@k8s-master-01 /opt/data/deployment/kubernetes]# ./deploy.sh -i 10.96.0.2 -s | kubectl apply -f -
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

# Check whether the Pod started successfully
[root@k8s-master-001 /data/software/deployment/kubernetes]# kubectl get pods -n kube-system
NAME                       READY   STATUS              RESTARTS   AGE
coredns-6d99d5879f-2pvbr   0/1     ContainerCreating   0          24s

[root@k8s-master-001 /data/software/deployment/kubernetes]# kubectl describe pods -n kube-system coredns-6d99d5879f-2pvbr
Name:                 coredns-6d99d5879f-2pvbr
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 k8s-master-003/172.16.1.112
Start Time:           Wed, 22 Sep 2021 17:00:07 +0800
Labels:               k8s-app=kube-dns
                      pod-template-hash=6d99d5879f
Annotations:          <none>
Status:               Running
IP:                   10.241.144.2
IPs:
  IP:           10.241.144.2
Controlled By:  ReplicaSet/coredns-6d99d5879f
Containers:
  coredns:
    Container ID:  docker://8467306b7979dc669008f1eb2a0e2022611bcb4081f7408f7a7084e9123598b6
    Image:         coredns/coredns:1.8.4
    Image ID:      docker-pullable://coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Wed, 22 Sep 2021 17:00:39 +0800
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-gr6bw (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-gr6bw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-gr6bw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node.kubernetes.io/not-ready:NoExecute for 360s
                 node.kubernetes.io/unreachable:NoExecute for 360s
Events:
  Type    Reason     Age        From                     Message
  ----    ------     ----       ----                     -------
  Normal  Scheduled  <unknown>  default-scheduler        Successfully assigned kube-system/coredns-6d99d5879f-2pvbr to k8s-master-003
  Normal  Pulling    34s        kubelet, k8s-master-003  Pulling image "coredns/coredns:1.8.4"
  Normal  Pulled     5s         kubelet, k8s-master-003  Successfully pulled image "coredns/coredns:1.8.4"
  Normal  Created    4s         kubelet, k8s-master-003  Created container coredns
  Normal  Started    4s         kubelet, k8s-master-003  Started container coredns
  
[root@k8s-master-001 /data/software/deployment/kubernetes]# kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-6d99d5879f-2pvbr   1/1     Running   0          44s

# Delete the coredns Pod (the Deployment will recreate it)
[root@k8s-master-001 /data/software/deployment/kubernetes]# kubectl delete pod coredns-6d99d5879f-2pvbr -n kube-system

3. Create the cluster-admin role binding

This binds cluster-admin (super-user) privileges to the user kubernetes.
[root@k8s-master-01 /opt/data/deployment/kubernetes]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubernetes
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
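
A quick way to confirm the binding exists and grants the expected privileges (a sketch; the impersonation check requires that your current kubeconfig user is itself cluster-admin):

kubectl get clusterrolebinding cluster-system-anonymous
kubectl auth can-i '*' '*' --as=kubernetes    # should print: yes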

4. Test cluster DNS

[root@k8s-master-001 /data/software/deployment/kubernetes]# kubectl run test02 -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.96.0.2
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ #

十四、Add the worker nodes to the cluster

1. Distribute the components

# Distribute the components
[root@k8s-master-001 /data/software/deployment/kubernetes]# cd /data/software/kubernetes/server/bin/
[root@k8s-master-001 /data/software/kubernetes/server/bin]# for i in n1 n2;do 
 scp kubelet kube-proxy $i:/usr/local/bin
 done

2. Distribute the certificates

# Distribute the certificates
[root@k8s-master-001 /data/software/kubernetes/server/bin]# cd /opt/cert/k8s
[root@k8s-master-001 /opt/cert/k8s]# for i in n1 n2; do   ssh root@$i "mkdir -pv /etc/kubernetes/ssl";   scp -pr ./{ca*.pem,admin*pem,kube-proxy*pem} root@$i:/etc/kubernetes/ssl;  done

3. Distribute the network plugin binaries

# Distribute the network plugin binaries
[root@k8s-master-001 /opt/cert/k8s]# cd /data/software/
[root@k8s-master-001 /data/software]#  for i in n1 n2;do scp flanneld mk-docker-opts.sh $i:/usr/local/bin; done

4. Distribute the kubelet configuration and startup script

# Distribute the kubelet configuration and startup script
[root@k8s-master-001 /data/software]#  cd /etc/kubernetes/cfg
[root@k8s-master-01 /etc/kubernetes/cfg]# for ip in n1 n2; do ssh root@${ip} "mkdir -pv /var/log/kubernetes"; ssh root@${ip} "mkdir -pv /etc/kubernetes/cfg/";     scp /etc/kubernetes/cfg/{kubelet-config.yml,kubelet.conf,kubelet-bootstrap.kubeconfig} root@${ip}:/etc/kubernetes/cfg;     scp /usr/lib/systemd/system/kubelet.service root@${ip}:/usr/lib/systemd/system; done

5. Adjust the configuration files

# Adjust the configuration files
# Modify the k8s-node-001 configuration
sed -i 's#master-001#node-001#g' /etc/kubernetes/cfg/kubelet.conf
sed -i 's#172.16.1.110#172.16.1.113#g' /etc/kubernetes/cfg/kubelet-config.yml

# Modify the k8s-node-002 configuration
sed -i 's#master-001#node-002#g' /etc/kubernetes/cfg/kubelet.conf
sed -i 's#172.16.1.110#172.16.1.114#g' /etc/kubernetes/cfg/kubelet-config.yml

6. Start kubelet

# Start kubelet
systemctl daemon-reload ;systemctl enable --now kubelet.service;systemctl status kubelet.service;

7. Distribute the kube-proxy configuration

# Distribute the kube-proxy configuration
[root@k8s-master-01 /etc/kubernetes/cfg]# for ip in n1 n2;do     scp /etc/kubernetes/cfg/{kube-proxy-config.yml,kube-proxy.conf,kube-proxy.kubeconfig} root@${ip}:/etc/kubernetes/cfg/;     scp /usr/lib/systemd/system/kube-proxy.service root@${ip}:/usr/lib/systemd/system/; done

8. Modify the configuration files

# Modify the configuration files
# Modify the k8s-node-001 node
sed -i 's#master-001#node-001#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#172.16.1.110#172.16.1.113#g' /etc/kubernetes/cfg/kube-proxy-config.yml

# Modify the k8s-node-002 node
sed -i 's#172.16.1.110#172.16.1.114#g' /etc/kubernetes/cfg/kube-proxy-config.yml
sed -i 's#master-001#node-002#g' /etc/kubernetes/cfg/kube-proxy-config.yml

9. Start kube-proxy

# Start kube-proxy
systemctl daemon-reload ;systemctl enable --now kube-proxy.service;systemctl status kube-proxy.service;

10. Configure the worker nodes to connect to etcd

# Configure the worker nodes to connect to etcd
[root@k8s-master-001 /etc/kubernetes/cfg]# for i in n1 n2; do ssh root@$i "mkdir -pv /etc/etcd/ssl"; scp -p /etc/etcd/ssl/*.pem root@$i:/etc/etcd/ssl;  done
[root@k8s-master-001 /etc/kubernetes/cfg]# for i in n1 n2; do scp /data/software/etcd-v3.3.24-linux-amd64/etcdctl $i:/usr/local/bin;  done
[root@k8s-master-001 /etc/kubernetes/cfg]# ETCDCTL_API=3 etcdctl  --cacert=/etc/etcd/ssl/etcd.pem  --cert=/etc/etcd/ssl/etcd.pem  --key=/etc/etcd/ssl/etcd-key.pem  --endpoints="https://172.16.1.110:2379,https://172.16.1.111:2379,https://172.16.1.112:2379"  endpoint status --write-out='table'
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://172.16.1.71:2379 | 80d0ace027643b4e |  3.3.24 |  3.0 MB |     false |       214 |     417075 |
| https://172.16.1.72:2379 | 9a7cf2dc57ec669f |  3.3.24 |  3.0 MB |     false |       214 |     417078 |
| https://172.16.1.73:2379 | 54f8db1a175b9c73 |  3.3.24 |  3.0 MB |      true |       214 |     417079 |
+--------------------------+------------------+---------+---------+-----------+-----------+------------+

11. Sync docker.service

# Sync docker.service
[root@k8s-master-001 /etc/kubernetes/cfg]# for ip in n1 n2; do 
scp /usr/lib/systemd/system/docker.service root@${ip}:/usr/lib/systemd/system; 
scp /usr/lib/systemd/system/flanneld.service root@${ip}:/usr/lib/systemd/system;
done

# Start the network plugin and docker
systemctl daemon-reload
systemctl enable --now flanneld
systemctl restart docker

12. Join the cluster

# Join the cluster
[root@k8s-master-001 /etc/kubernetes/cfg]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-9igOHltohDsQDy7lO8DPPkQAE_U4se_oi_FdzmZu9T4   122m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-XlJg22f9SIk2sEE_013Jjs05jqYtM1Kv4P4Ls53MxMI   31m    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-mGK-blD2X1kRGSVca9iW-FWuCza8u44OapTdwiDWUPM   31m    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-s_j59jS030od8J5UE2_tcbs2K_zs-lap00x8xLFxVd0   122m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-yTM-nXfIkK7WVpkIZCZe7CVWTvd91qnBB2Zp6FNqW0M   122m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

[root@k8s-master-001 /etc/kubernetes/cfg]# kubectl certificate approve `kubectl get csr | grep "Pending" | awk '{print $1}'`
certificatesigningrequest.certificates.k8s.io/node-csr-XlJg22f9SIk2sEE_013Jjs05jqYtM1Kv4P4Ls53MxMI approved
certificatesigningrequest.certificates.k8s.io/node-csr-mGK-blD2X1kRGSVca9iW-FWuCza8u44OapTdwiDWUPM approved

[root@k8s-master-001 /etc/kubernetes/cfg]# kubectl get nodes
NAME             STATUS   ROLES    AGE    VERSION
k8s-master-001   Ready    <none>   119m   v1.18.8
k8s-master-002   Ready    <none>   119m   v1.18.8
k8s-master-003   Ready    <none>   119m   v1.18.8
k8s-node-001     Ready    <none>   6s     v1.18.8
k8s-node-002     Ready    <none>   8s     v1.18.8

13. Enable kubectl command completion

# Enable command completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

14. Assign node roles

# Label the master and worker roles
[root@k8s-master-001 ~]# kubectl label nodes k8s-master-001 node-role.kubernetes.io/master=k8s-master-001
[root@k8s-master-001 ~]# kubectl label nodes k8s-master-002 node-role.kubernetes.io/master=k8s-master-002
[root@k8s-master-001 ~]# kubectl label nodes k8s-master-003 node-role.kubernetes.io/master=k8s-master-003
[root@k8s-master-001 ~]# kubectl label nodes k8s-node-001 node-role.kubernetes.io/node=k8s-node-001
[root@k8s-master-001 ~]# kubectl label nodes k8s-node-002 node-role.kubernetes.io/node=k8s-node-002

[root@k8s-master-001 /etc/kubernetes/cfg]# kubectl get nodes
NAME             STATUS   ROLES    AGE    VERSION
k8s-master-001   Ready    master   130m   v1.18.8
k8s-master-002   Ready    master   130m   v1.18.8
k8s-master-003   Ready    master   130m   v1.18.8
k8s-node-001     Ready    node     11m    v1.18.8
k8s-node-002     Ready    node     11m    v1.18.8

# Taint the master nodes so that ordinary workloads are not scheduled onto them
[root@k8s-master-001 ~]# kubectl taint nodes k8s-master-001 node-role.kubernetes.io/master=k8s-master-01:NoSchedule --overwrite
[root@k8s-master-001 ~]# kubectl taint nodes k8s-master-002 node-role.kubernetes.io/master=k8s-master-02:NoSchedule --overwrite
[root@k8s-master-001 ~]# kubectl taint nodes k8s-master-003 node-role.kubernetes.io/master=k8s-master-03:NoSchedule --overwrite
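
To verify that the role labels and master taints took effect (each master should show a Taints line, the workers should show none):

kubectl get nodes --show-labels | grep node-role
kubectl describe nodes | grep -E '^Name:|^Taints:'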

十五、Test the cluster network

# Test the whole cluster network and DNS
[root@k8s-master-001 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

[root@k8s-master-001 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

[root@k8s-master-001 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        3h7m
nginx        NodePort    10.96.95.61   <none>        80:46847/TCP   8s

[root@k8s-master-001 ~]#  kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        3h7m
nginx        NodePort    10.96.95.61   <none>        80:46847/TCP   12s

[root@k8s-master-001 ~]#  curl 192.168.13.110:46847
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>