Deploying a Highly Available Kubernetes Cluster from Binaries

1. Software Versions

Software     Version
OS           CentOS 7.6 minimal
docker       1.20
kubernetes   1.20

2. Node and Service Layout

Role      IP              Services
master1   192.168.1.161   apiserver, controller-manager, scheduler,
                          kubelet, kube-proxy, docker, etcd, nginx, keepalived
master2   192.168.1.162   apiserver, controller-manager, scheduler,
                          kubelet, kube-proxy, docker, etcd, nginx, keepalived
node1     192.168.1.163   kubelet, kube-proxy, docker, etcd
node2     192.168.1.164   kubelet, kube-proxy, docker, etcd
node3     192.168.1.165   kubelet, kube-proxy, docker, etcd
VIP       192.168.1.160

3. Initialization

  • Install base packages (all nodes)
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
  • Set the hostname on each node (all nodes)
hostnamectl set-hostname master1 && bash
hostnamectl set-hostname master2 && bash
hostnamectl set-hostname node1 && bash
hostnamectl set-hostname node2 && bash
hostnamectl set-hostname node3 && bash
  • Add the entries to /etc/hosts (all nodes)
192.168.1.161 master1
192.168.1.162 master2
192.168.1.163 node1
192.168.1.164 node2
192.168.1.165 node3
  • Configure passwordless SSH between the hosts (all nodes)
ssh-keygen
ssh-copy-id master1
ssh-copy-id master2
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
  • Disable swap for better performance (all nodes)
# Disable immediately
swapoff -a

# Disable permanently
vi /etc/fstab
Delete (or comment out) the swap line

# Why swap is disabled
Swap is the swap partition: when the machine runs low on memory it falls back to swap, but swap is slow, so Kubernetes disallows it by default to keep performance predictable. kubeadm checks at init time whether swap is off and fails if it is not. If you really want to keep swap enabled, you can pass --ignore-preflight-errors=Swap when installing Kubernetes.
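A non-interactive alternative (a sketch; double-check /etc/fstab afterwards) is to comment out the swap entry with sed and confirm with free:

sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab   # comment out any uncommented swap mount
swapoff -a
free -h   # the Swap line should now show 0B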
  • Tune kernel parameters (all nodes)
# Load the br_netfilter module and enable IP forwarding
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf


# net.ipv4.ip_forward controls packet forwarding:
For security reasons Linux disables packet forwarding by default. Forwarding means that when a host has more than one NIC and receives a packet on one of them, it sends the packet out another NIC according to the destination IP and the routing table; this is normally a router's job.
To make a Linux system forward packets you set the kernel parameter net.ipv4.ip_forward. A value of 0 means IP forwarding is disabled; 1 means it is enabled.
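To verify the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
cat /proc/sys/net/ipv4/ip_forward   # should print 1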
  • Disable firewalld and SELinux (all nodes)
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
  • Configure yum repositories (all nodes)
# Base repository
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# Docker repository
wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  • Configure time synchronization (all nodes)
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
systemctl restart crond
  • Enable ipvs (all nodes)
# These modules are loaded on every boot (the backslash before EOF prevents variable expansion inside the heredoc)
cat > /etc/sysconfig/modules/ipvs.modules << \EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

# Check that the modules are loaded
lsmod | grep ip_vs
ip_vs_ftp 13079 0
nf_nat 26583 1 ip_vs_ftp
ip_vs_sed 12519 0
ip_vs_nq 12516 0
ip_vs_sh 12688 0
ip_vs_dh 12688 0

# Question 1: what is ipvs?
ipvs (IP Virtual Server) implements transport-layer (layer 4) load balancing as part of the Linux kernel. It runs on a host and acts as a load balancer in front of a cluster of real servers: it forwards TCP and UDP service requests to the real servers and exposes their services as a virtual service on a single IP address.

# Question 2: ipvs vs iptables
kube-proxy supports both iptables and ipvs modes. ipvs mode was introduced in Kubernetes v1.8, reached beta in v1.9 and became generally available in v1.11; iptables mode was added in v1.1 and has been kube-proxy's default since v1.2. Both are built on netfilter, but ipvs uses hash tables, so once the number of Services grows large the hash-lookup advantage shows and Service performance improves. The main differences:
1. ipvs scales and performs better for large clusters
2. ipvs supports more sophisticated load-balancing algorithms than iptables (least load, least connections, weighted, etc.)
3. ipvs supports server health checking, connection retries, and similar features
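ipvsadm (installed above) can be used to inspect the IPVS rules once kube-proxy runs in ipvs mode. Note that the kube-proxy-config.json used later in this guide does not set a "mode" field, so kube-proxy will default to iptables unless you add "mode": "ipvs" there. For example:

ipvsadm -Ln           # list virtual servers and their real servers
ipvsadm -Ln --stats   # include packet and byte counters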
  • Install the iptables service so that iptables can be stopped and disabled at boot (all nodes)
yum install iptables-services -y
service iptables stop && systemctl disable iptables
iptables -F

4. Generate a self-signed CA (on any one of the master nodes)

  • Prepare the cfssl tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
  • Create working directories
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd
  • Create the CA certificate signing request file ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "system"
        }
    ]
}

Notes:
CN (Common Name): kube-apiserver extracts this field from the certificate and uses it as the requesting user name; browsers use it to check whether a site is legitimate.
For an SSL certificate it is usually the site's domain name.
For a code-signing certificate it is the applying organization's name.
For a client certificate it is the applicant's name.

O (Organization): kube-apiserver extracts this field and uses it as the requesting user's group.
For an SSL certificate it is usually the site's domain name.
For a code-signing certificate it is the applying organization's name.
For a client certificate it is the applicant's organization.

L: city
ST: state or province
C: two-letter country code only, e.g. CN for China

  • Generate the CA
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# This produces ca.pem and ca-key.pem
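The subject fields and validity of any generated certificate can be inspected with the cfssl-certinfo tool installed earlier, or with openssl, for example:

cfssl-certinfo -cert ca.pem
openssl x509 -in ca.pem -noout -subject -dates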
  • Create the CA signing policy file ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

5. Deploy the etcd cluster

  • Create the etcd certificate request file etcd-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.1.161",
    "192.168.1.162",
    "192.168.1.163",
    "192.168.1.164",
    "192.168.1.165"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

Note:
The hosts field above must contain the internal cluster-communication IP of every etcd node; not one can be missing. To make later scaling easier you can also list a few spare IPs.

  • Generate the etcd certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
# This produces etcd.pem and etcd-key.pem
# -profile refers to the profile name defined under "profiles" in ca-config.json
  • Download the etcd release (etcd-v3.4.13), unpack it, and install the binaries
tar xf etcd-v3.4.13-linux-amd64.tar.gz
mkdir /opt/etcd/{bin,cfg,ssl} -p
mv etcd-v3.4.13-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
  • Create the etcd configuration file /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.161:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.161:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.161:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.161:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.1.161:2380,etcd-2=https://192.168.1.162:2380,etcd-3=https://192.168.1.163:2380,etcd-4=https://192.168.1.164:2380,etcd-5=https://192.168.1.165:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join an existing one

  • Create the etcd systemd unit /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/etcd.pem \
  --key-file=/opt/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/opt/etcd/ssl/etcd.pem \
  --peer-key-file=/opt/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Copy the required certificates into the etcd working directory
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/etcd*pem /opt/etcd/ssl/
  • Copy all of these files from this node to the other etcd nodes
scp -r /opt/etcd/ master2:/opt/
scp -r /opt/etcd/ node1:/opt/
scp -r /opt/etcd/ node2:/opt/
scp -r /opt/etcd/ node3:/opt/

scp /usr/lib/systemd/system/etcd.service master2:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service node1:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service node2:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service node3:/usr/lib/systemd/system/etcd.service
  • On each of the other etcd nodes, adjust the node name and IPs in the config file
ETCD_NAME="etcd-1"   # change: etcd-2 on master2, etcd-3 on node1, etcd-4 on node2, etcd-5 on node3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.161:2380" # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.161:2379" # change to the current server's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.161:2380" # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.161:2379" # change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.1.161:2380,etcd-2=https://192.168.1.162:2380,etcd-3=https://192.168.1.163:2380,etcd-4=https://192.168.1.164:2380,etcd-5=https://192.168.1.165:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • Start etcd on all nodes (start them at roughly the same time; the first member will wait for the others before the cluster becomes healthy)
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
  • Check the etcd cluster health
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/etcd.pem \
--key=/opt/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.161:2379,https://192.168.1.162:2379,https://192.168.1.163:2379,https://192.168.1.164:2379,https://192.168.1.165:2379" endpoint health --write-out=table

If you see output like the table below, the cluster is up. If something is wrong, check the logs first: /var/log/messages or journalctl -u etcd

+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.1.163:2379 |   true | 16.241254ms |       |
| https://192.168.1.165:2379 |   true | 16.085793ms |       |
| https://192.168.1.164:2379 |   true | 18.689268ms |       |
| https://192.168.1.161:2379 |   true | 19.353898ms |       |
| https://192.168.1.162:2379 |   true | 20.558988ms |       |
+----------------------------+--------+-------------+-------+
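Cluster membership can be listed the same way (a sketch reusing the same certificates):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/etcd.pem \
--key=/opt/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.1.161:2379" member list --write-out=table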

6. Deploy docker

  • Install docker on all nodes (it can also be installed from binaries instead of yum)
yum install docker-ce -y
systemctl start docker && systemctl enable docker
  • Configure docker registry mirrors and the cgroup driver (all nodes)
# Change docker's cgroup driver to systemd (the default is cgroupfs); kubelet is configured to use systemd, and the two must match.
cat > /etc/docker/daemon.json << \EOF
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl restart docker && systemctl status docker

7. Deploy kube-apiserver

cd ~/TLS/k8s
  • Create the kube-apiserver certificate request file kube-apiserver-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.1.161",
      "192.168.1.162",
      "192.168.1.160",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Note: the hosts field above must contain every Master/LB/VIP IP; not one can be missing. Add a few spare IPs to make later scaling easier. It must also contain the first IP of the Service network (the first address of the --service-cluster-ip-range passed to kube-apiserver, here 10.0.0.1).

  • Generate the certificate
cfssl gencert -ca=../etcd/ca.pem -ca-key=../etcd/ca-key.pem -config=../etcd/ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

This produces kube-apiserver-key.pem and kube-apiserver.pem

  • Download the binary packages

Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md

Note: the page lists many packages; the server package alone is enough, since it contains both the master and worker node binaries.

  • Unpack the binaries
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy /opt/kubernetes/bin
cp kubectl /usr/bin/
  • Create the configuration file /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.1.161:2379,https://192.168.1.162:2379,https://192.168.1.163:2379,https://192.168.1.164:2379,https://192.168.1.165:2379 \
--bind-address=192.168.1.161 \
--secure-port=6443 \
--advertise-address=192.168.1.161 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/opt/kubernetes/ssl/kube-apiserver-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/kube-apiserver.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--service-account-signing-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/etcd.pem \
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--proxy-client-cert-file=/opt/kubernetes/ssl/kube-apiserver.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

Notes:
--logtostderr: when false, write logs to files instead of stderr
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to talk to kubelets
--tls-xxx-file: apiserver HTTPS serving certificate
Required from 1.20 on: --service-account-issuer, --service-account-signing-key-file
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging
Aggregation-layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing

  • The TLS Bootstrapping mechanism

Create the token file

[root@master1 k8s]# cat > /opt/kubernetes/cfg/token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

The kube-apiserver configuration above enables the bootstrapping mechanism.

  1. Once the apiserver enables TLS authentication, the kubelet on every node needs a valid certificate signed by the apiserver's CA to talk to it. When there are many nodes, issuing these client certificates by hand takes a lot of work and makes scaling the cluster harder. To simplify this, Kubernetes introduced TLS bootstrapping: the kubelet automatically requests a certificate from the apiserver as a low-privileged user, and the certificate is signed dynamically by the apiserver.
  2. Bootstrap programs exist in many systems (for example the Linux boot process); a bootstrap is typically a pre-provisioned configuration loaded at startup to set up an environment. The Kubernetes kubelet likewise loads such a configuration file at startup, with content similar to the following:
apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

What TLS provides
TLS encrypts the communication and prevents eavesdropping; moreover, if the certificate is not trusted, a connection to the apiserver cannot be established at all, let alone anything be requested from it.

What RBAC provides
Once TLS solves the transport problem, authorization is handled by RBAC (other models such as ABAC are possible). RBAC defines which APIs a user or group (subject) may call. When combined with TLS client certificates, the apiserver reads the certificate's CN field as the user name and the O field as the group. In summary:

  1. To talk to the apiserver you must use a certificate signed by the apiserver's CA; that establishes trust and the TLS connection.
  2. The certificate's CN and O fields supply the user and group that RBAC needs.

kubelet first-start flow
TLS bootstrapping lets the kubelet request a certificate from the apiserver and then use it to connect. But how does it connect the very first time, before it has a certificate?
The apiserver configuration points at a token.csv file containing a pre-provisioned user. That user's token, together with the apiserver CA certificate, is written into the bootstrap.kubeconfig file used by the kubelet. On first start the kubelet uses bootstrap.kubeconfig to establish a TLS connection trusted via the apiserver CA, and uses the token in it to declare its RBAC identity to the apiserver.

token.csv format

3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

On first start the kubelet may log a 401 Unauthorized error against the apiserver. That is because the kubelet identifies itself with the pre-provisioned token from bootstrap.kubeconfig and then creates a CSR, but by default that user has no permissions at all, including permission to create CSRs. You therefore need a ClusterRoleBinding that binds the pre-provisioned kubelet-bootstrap user to the built-in ClusterRole system:node-bootstrapper so it can submit CSRs (this binding is created in section 10 below).

  • Copy the required certificates into the working directory
cp ~/TLS/etcd/ca*pem ~/TLS/k8s/kube-apiserver*pem /opt/kubernetes/ssl/
  • Create the kube-apiserver systemd unit /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Start kube-apiserver
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver

8. Deploy kube-controller-manager

cd ~/TLS/k8s
  • Create the certificate request file kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
  • Generate the certificate
cfssl gencert -ca=../etcd/ca.pem -ca-key=../etcd/ca-key.pem -config=../etcd/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare  kube-controller-manager

This produces kube-controller-manager.pem and kube-controller-manager-key.pem

  • Create the configuration file /opt/kubernetes/cfg/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--cluster-signing-duration=87600h0m0s"

--kubeconfig: kubeconfig file for connecting to the apiserver
--leader-elect: automatic leader election when multiple instances run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue kubelet certificates; must match the apiserver's CA

  • Create the required kubeconfig file
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.1.161:6443"

# Set cluster info
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

# Set user credentials
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}

# Use the context
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

The resulting kubeconfig looks like this:

[root@master1 k8s]# kubectl config view --kubeconfig=${KUBE_CONFIG}
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.1.161:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-controller-manager
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-controller-manager
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
  • Create the systemd unit /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Start kube-controller-manager
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager

9. Deploy kube-scheduler

cd ~/TLS/k8s
  • Create the certificate request file kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
  • Generate the certificate
cfssl gencert -ca=../etcd/ca.pem -ca-key=../etcd/ca-key.pem -config=../etcd/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

This produces kube-scheduler.pem and kube-scheduler-key.pem

  • Create the configuration file /opt/kubernetes/cfg/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \
--bind-address=127.0.0.1"

--kubeconfig: kubeconfig file for connecting to the apiserver
--leader-elect: automatic leader election when multiple instances run (HA)

  • Create the required kubeconfig file
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.1.161:6443"

# Set cluster info
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

# Set user credentials
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}

# Use the context
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

The resulting kubeconfig looks like this:

[root@master1 k8s]# kubectl config view --kubeconfig=${KUBE_CONFIG}
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.1.161:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-scheduler
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-scheduler
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
  • Create the systemd unit /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Start kube-scheduler
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler

10. Configure kubectl

cd ~/TLS/k8s
  • Create the certificate request file admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
  • Generate the certificate
cfssl gencert -ca=../etcd/ca.pem -ca-key=../etcd/ca-key.pem -config=../etcd/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

This produces admin.pem and admin-key.pem

  • Create the required kubeconfig file
mkdir /root/.kube

KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.1.161:6443"

# Set cluster info
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

# Set user credentials
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}

# Use the context
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

The resulting kubeconfig looks like this:

[root@master1 k8s]# kubectl config view --kubeconfig=${KUBE_CONFIG}
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.1.161:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: cluster-admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
  • Check the cluster status with kubectl
[root@master1 k8s]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-4               Healthy   {"health":"true"}
etcd-3               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
  • Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

11. Deploy kubelet

  • Create the configuration file /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=master1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.json \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=liswei/pause-amd64:3.0"

--hostname-override: node name shown in the cluster, must be unique
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically and used later to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image for the pod infrastructure (pause) container

  • Create the --config parameters file /opt/kubernetes/cfg/kubelet-config.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "address": "0.0.0.0",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDNS": [
    "10.0.0.2"
  ],
  "clusterDomain": "cluster.local",
  "failSwapOn": false,
  "authentication": {
    "anonymous": {
      "enabled": false
    },
    "webhook": {
      "cacheTTL": "2m0s",
      "enabled": true
    },
    "x509": {
      "clientCAFile": "/opt/kubernetes/ssl/ca.pem"
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "evictionHard": {
    "imagefs.available": "15%",
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%"
  },
  "maxOpenFiles": 1000000,
  "maxPods": 110
}

"cgroupDriver": "systemd"要和 docker 的驱动一致。

  • Generate the bootstrap kubeconfig used when the kubelet first joins the cluster
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.1.161:6443"
# token
TOKEN=$(awk -F "," '{print $1}' /opt/kubernetes/cfg/token.csv)

# Set cluster info
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

# Set token credentials
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}

# Use the context
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

The resulting kubeconfig looks like this:

[root@master1 ~]# kubectl config view --kubeconfig=${KUBE_CONFIG}
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.1.161:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: REDACTED
  • Create the systemd unit /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start kubelet
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
  • Check for the kubelet certificate request
[root@master1 ~]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-h-N1-dku58DdmLl5OTqEsdOqftncNHsbUKFa4QNb4Ic   42s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
  • Approve the kubelet certificate request
[root@master1 ~]# kubectl certificate approve node-csr-h-N1-dku58DdmLl5OTqEsdOqftncNHsbUKFa4QNb4Ic
certificatesigningrequest.certificates.k8s.io/node-csr-h-N1-dku58DdmLl5OTqEsdOqftncNHsbUKFa4QNb4Ic approved
  • View the joined node
[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
master1   NotReady   <none>   30s   v1.20.15

Note: the node stays NotReady because the network plugin has not been deployed yet.

12. Deploy kube-proxy

cd ~/TLS/k8s
  • Create the certificate request file kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
  • Generate the certificate
cfssl gencert -ca=../etcd/ca.pem -ca-key=../etcd/ca-key.pem -config=../etcd/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

This produces kube-proxy.pem and kube-proxy-key.pem

  • Create the configuration file /opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.json"
  • Create the --config parameters file /opt/kubernetes/cfg/kube-proxy-config.json
{
  "kind": "KubeProxyConfiguration",
  "apiVersion": "kubeproxy.config.k8s.io/v1alpha1",
  "bindAddress": "0.0.0.0",
  "metricsBindAddress": "0.0.0.0:10249",
  "clientConnection": {
    "kubeconfig": "/opt/kubernetes/cfg/kube-proxy.kubeconfig"
  },
  "hostnameOverride": "master1",
  "clusterCIDR": "10.0.0.0/24"
}
  • Create the required kubeconfig file
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.1.161:6443"

# Set cluster info
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

# Set user credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}

# Use the context
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

The resulting kubeconfig looks like this:

[root@master1 k8s]# kubectl config view --kubeconfig=${KUBE_CONFIG}
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.1.161:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
  • Create the systemd unit /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start kube-proxy
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy

13. Deploy the Calico network plugin

YAML manifest: https://www.cnblogs.com/forlive/articles/16028472.html

Copy the manifest to the local machine.
CALICO_IPV4POOL_CIDR must be changed to the address range set by --cluster-cidr (10.244.0.0/16 here).
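A quick way to check and adjust it (a sketch, assuming the manifest is saved locally as calico.yaml; the stock manifest often ships the value commented out or set to 192.168.0.0/16, so verify with grep before editing):

grep -n -A1 CALICO_IPV4POOL_CIDR calico.yaml
sed -i 's#192.168.0.0/16#10.244.0.0/16#' calico.yaml   # adjust the old value to whatever your manifest actually contains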

[root@master1 TLS]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check the rollout status

[root@master1 TLS]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-59v2q   1/1     Running   0          119s
calico-node-2fmz4                         1/1     Running   0          119s

Check the node status

[root@master1 TLS]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    <none>   44m   v1.20.15

14. Deploy CoreDNS

YAML manifest: https://www.cnblogs.com/forlive/articles/16028534.html

Copy the manifest to the local machine.
The Service's clusterIP must fall inside the service cluster IP range (10.0.0.2 here, matching the kubelet's clusterDNS).
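A quick check that the manifest and the kubelet agree (assuming the manifest is saved locally as coredns.yaml):

grep -n "clusterIP:" coredns.yaml
grep -A2 clusterDNS /opt/kubernetes/cfg/kubelet-config.json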

[root@master1 TLS]# kubectl apply -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Check the rollout status

[root@master1 TLS]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-59v2q   1/1     Running   0          15m
calico-node-2fmz4                         1/1     Running   0          15m
coredns-79495b5589-svzqw                  1/1     Running   0          110s

[root@master1 TLS]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP,9153/TCP   2m33s

Test DNS resolution

[root@master1 TLS]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh 
If you don't see a command prompt, try pressing enter.
/ # 
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup www.baidu.com
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      www.baidu.com
Address 1: 39.156.66.14
Address 2: 39.156.66.18

15. Deploy kubelet on the worker nodes

  • Create the working directories on the node
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
  • Copy the required files from the master node deployed above to the node
# Copy the binary
scp /opt/kubernetes/bin/kubelet node1:/opt/kubernetes/bin/

# Copy the configuration files
scp /opt/kubernetes/cfg/{kubelet.conf,bootstrap.kubeconfig,kubelet-config.json} node1:/opt/kubernetes/cfg/

# Copy the CA certificate (the private key is not needed)
scp /opt/kubernetes/ssl/ca.pem node1:/opt/kubernetes/ssl/ca.pem

# Copy the systemd unit
scp  /usr/lib/systemd/system/kubelet.service node1:/usr/lib/systemd/system/kubelet.service
  • On the node, edit /opt/kubernetes/cfg/kubelet.conf
# The only change needed is the node name shown in the cluster, which must be unique
--hostname-override=node1

Note: the following two files need no changes
/opt/kubernetes/cfg/kubelet-config.json — fine as long as ca.pem is in the matching path
/opt/kubernetes/cfg/bootstrap.kubeconfig — the apiserver address does not need to change yet; when setting up high availability it will be switched to the VIP

  • Start kubelet on the node
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
  • Check for the kubelet certificate request
[root@master1 ~]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-GHAsgRPM7A_pmw59g_-0MBj22OYyyViuJs9bkjlM_E4   8s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
  • Approve the kubelet certificate request
[root@master1 ~]# kubectl certificate approve node-csr-GHAsgRPM7A_pmw59g_-0MBj22OYyyViuJs9bkjlM_E4
certificatesigningrequest.certificates.k8s.io/node-csr-GHAsgRPM7A_pmw59g_-0MBj22OYyyViuJs9bkjlM_E4 approved
  • View the joined nodes
[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    <none>   13h   v1.20.15
node1     Ready    <none>   17m   v1.20.15

Note: although the node shows Ready, kube-proxy has not been deployed yet, so the node's networking is not fully functional.
Checking the calico components shows that the calico pod on this node is not running:

[root@master1 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS             RESTARTS   AGE
calico-kube-controllers-97769f7c7-59v2q   1/1     Running            0          12h
calico-node-26jg6                         0/1     CrashLoopBackOff   7          11m
calico-node-2fmz4                         1/1     Running            0          12h
coredns-79495b5589-svzqw                  1/1     Running            0          12h

[root@master1 ~]# kubectl logs pods/calico-node-26jg6 -n kube-system -f
2022-03-20 03:44:50.145 [INFO][9] startup/startup.go 299: Early log level set to info
2022-03-20 03:44:50.145 [INFO][9] startup/startup.go 315: Using NODENAME environment for node name
2022-03-20 03:44:50.145 [INFO][9] startup/startup.go 327: Determined node name: node1
2022-03-20 03:44:50.147 [INFO][9] startup/startup.go 359: Checking datastore connection
2022-03-20 03:45:20.148 [INFO][9] startup/startup.go 374: Hit error connecting to datastore - retry error=Get https://10.0.0.1:443/api/v1/nodes/foo: dial tcp 10.0.0.1:443: i/o timeout

[root@master1 ~]# kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         1       2            1           kubernetes.io/os=linux   12h

16. Deploy kube-proxy on the worker nodes

  • Create the working directories on the node
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
  • Copy the required files from the master node deployed above to the node
# Copy the binary
scp /opt/kubernetes/bin/kube-proxy node1:/opt/kubernetes/bin/

# Copy the configuration files
scp /opt/kubernetes/cfg/{kube-proxy.conf,kube-proxy-config.json,kube-proxy.kubeconfig} node1:/opt/kubernetes/cfg/

# Copy the systemd unit
scp  /usr/lib/systemd/system/kube-proxy.service node1:/usr/lib/systemd/system/kube-proxy.service
  • On the node, edit /opt/kubernetes/cfg/kube-proxy-config.json
# The only change needed is the node name shown in the cluster, which must be unique
"hostnameOverride": "node1",

Note: the following two files need no changes
/opt/kubernetes/cfg/kube-proxy.conf
/opt/kubernetes/cfg/kube-proxy.kubeconfig — the apiserver address does not need to change yet; when setting up high availability it will be switched to the VIP

  • Start kube-proxy on the node
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy
  • View the joined nodes
[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    <none>   13h   v1.20.15
node1     Ready    <none>   17m   v1.20.15
  • Check the calico components
[root@master1 ~]# kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         2       2            2           kubernetes.io/os=linux   13h

[root@master1 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-59v2q   1/1     Running   0          13h
calico-node-26jg6                         1/1     Running   8          19m
calico-node-2fmz4                         1/1     Running   0          13h
coredns-79495b5589-svzqw                  1/1     Running   0          12h

17. Add the second master node

Deploy apiserver, controller-manager, and scheduler on master2.

  • Copy the configuration files from master1 to master2
[root@master1 ~]# scp /opt/kubernetes/cfg/{kube-apiserver.conf,kube-controller-manager.conf,kube-controller-manager.kubeconfig,kube-scheduler.conf,kube-scheduler.kubeconfig,token.csv} master2:/opt/kubernetes/cfg/
  • Copy the binaries from master1 to master2
[root@master1 ~]# scp /opt/kubernetes/bin/{kube-apiserver,kube-controller-manager,kube-scheduler} master2:/opt/kubernetes/bin/
[root@master1 ~]# scp /usr/bin/kubectl master2:/usr/bin/
  • Copy the certificates from master1 to master2
[root@master1 ~]# scp /opt/kubernetes/ssl/{ca-key.pem,ca.pem,kube-apiserver-key.pem,kube-apiserver.pem} master2:/opt/kubernetes/ssl/
  • Copy the systemd units from master1 to master2
[root@master1 ~]# scp /usr/lib/systemd/system/{kube-apiserver.service,kube-controller-manager.service,kube-scheduler.service} master2:/usr/lib/systemd/system/
  • Edit the apiserver configuration on master2: /opt/kubernetes/cfg/kube-apiserver.conf
--bind-address=192.168.1.162
--advertise-address=192.168.1.162
  • Start the master components
systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl status kube-apiserver kube-controller-manager kube-scheduler
  • Copy the kubectl config from master1 to master2
[root@master1 ~]# scp -r /root/.kube/ master2:/root/
  • On master2, edit the kubectl config /root/.kube/config
server: https://192.168.1.162:6443
  • Check the cluster status
[root@master2 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-3               Healthy   {"health":"true"}
etcd-4               Healthy   {"health":"true"}

18. Extend to a highly available cluster

  • Install nginx and keepalived on both master1 and master2
[root@master1 ~]# yum install nginx nginx-mod-stream keepalived -y
[root@master2 ~]# yum install nginx nginx-mod-stream keepalived -y
  • Configure nginx (identical on master1 and master2): /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two master apiserver instances
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.1.161:6443;   # Master1 APISERVER IP:PORT
       server 192.168.1.162:6443;   # Master2 APISERVER IP:PORT
    }

    server {
       listen 16443; # nginx runs on the master nodes themselves, so it cannot listen on 6443 or it would conflict with the apiserver
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
  • Configure keepalived on master1: /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance, same value on master and backup
    priority 100    # priority; set 90 on the backup server
    advert_int 1    # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.1.160/24
    }
    track_script {
        check_nginx
    }
}
  • Configure keepalived on master2: /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51 # VRRP router ID; must match the value on master1
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.160/24
    }
    track_script {
        check_nginx
    }
}
  • Add the nginx health-check script used for failover (identical on master1 and master2): /etc/keepalived/check_nginx.sh
#!/bin/bash
# Exit non-zero when nothing is listening on 16443 so keepalived marks this node as failed
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
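Make the script executable and, once nginx is running, give it a quick test on each node:

chmod +x /etc/keepalived/check_nginx.sh
bash /etc/keepalived/check_nginx.sh; echo $?   # prints 0 while nginx listens on 16443, 1 otherwise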
  • Start nginx and keepalived
systemctl daemon-reload
systemctl start nginx keepalived
systemctl enable nginx keepalived
systemctl status nginx keepalived
  • Check that the keepalived VIP is up
[root@master1 ~]# ip a
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:26:c0:23 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.161/24 brd 192.168.1.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 192.168.1.160/24 scope global secondary ens192
       valid_lft forever preferred_lft forever
  • Stop nginx on master1 and verify the VIP fails over to master2
[root@master1 ~]# systemctl stop nginx
[root@master2 ~]# ip a
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:2d:a0:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.162/24 brd 192.168.1.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 192.168.1.160/24 scope global secondary ens192
       valid_lft forever preferred_lft forever
  • Point controller-manager, scheduler, kubelet, and kube-proxy at the VIP instead of the single apiserver address

The following files are involved:
/opt/kubernetes/cfg/bootstrap.kubeconfig                 # used to request a certificate when first joining the cluster
/opt/kubernetes/cfg/kubelet.kubeconfig                   # kubelet's connection to the apiserver
/opt/kubernetes/cfg/kube-proxy.kubeconfig                # kube-proxy's connection to the apiserver
/opt/kubernetes/cfg/kube-scheduler.kubeconfig            # kube-scheduler's connection to the apiserver
/opt/kubernetes/cfg/kube-controller-manager.kubeconfig   # kube-controller-manager's connection to the apiserver
/root/.kube/config                                       # kubectl's connection to the apiserver

  • Master nodes

The sed below only matches the files listed above and will not affect other files; if in doubt, check with grep first.
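For example, to list every occurrence of the old apiserver address before rewriting it:

grep -rn "192.168.1.161:6443" /opt/kubernetes/cfg/ /root/.kube/config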

[root@master1 ~]# sed -i s@192.168.1.161:6443@192.168.1.160:16443@g /opt/kubernetes/cfg/*
[root@master1 ~]# systemctl restart kubelet kube-proxy kube-scheduler kube-controller-manager

[root@master2 ~]# sed -i s@192.168.1.161:6443@192.168.1.160:16443@g /opt/kubernetes/cfg/*
[root@master2 ~]# systemctl restart kubelet kube-proxy kube-scheduler kube-controller-manager
  • Worker nodes
[root@node1 ~]# sed -i s@192.168.1.161:6443@192.168.1.160:16443@g /opt/kubernetes/cfg/*
[root@node1 ~]# systemctl restart kubelet kube-proxy

[root@node2 ~]# sed -i s@192.168.1.161:6443@192.168.1.160:16443@g /opt/kubernetes/cfg/*
[root@node2 ~]# systemctl restart kubelet kube-proxy

[root@node3 ~]# sed -i s@192.168.1.161:6443@192.168.1.160:16443@g /opt/kubernetes/cfg/*
[root@node3 ~]# systemctl restart kubelet kube-proxy
  • kubectl
[root@master1 ~]# sed -i s@192.168.1.161:6443@192.168.1.160:16443@g /root/.kube/config
[root@master2 ~]# sed -i s@192.168.1.161:6443@192.168.1.160:16443@g /root/.kube/config

This completes the deployment of the highly available Kubernetes cluster.

19. Miscellaneous

a. Authorize the apiserver to access kubelets

kubectl exec and kubectl logs fail with insufficient permissions, reporting:
error: unable to upgrade connection: Forbidden (user=kubernetes, verb=create, resource=nodes, subresource=proxy)
Fix: here I simply added the kubernetes user (the CN of the apiserver's kubelet client certificate) to the built-in cluster-admin ClusterRoleBinding

[root@master2 ~]# kubectl edit clusterrolebinding cluster-admin
....
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes

An alternative method found online (a dedicated ClusterRole bound to the kubernetes user):

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml