Installing a Kubernetes Cluster from Binaries

Original author: sysit

1. Environment Overview

1.1 Deployment Topology

  • Deployment topology
    This guide deploys with an external (standalone) etcd cluster.

[Figure: deployment topology]

  • Actual component layout

[Figure: component placement on the servers]

In this guide the etcd cluster is co-located with the master nodes; for large clusters, dedicated etcd servers are recommended.

1.2 Cluster Environment

  • OS: CentOS Linux release 7.9.2009 (Core)
  • Version: Kubernetes 1.20.7 (for production, use a patch release of 5 or higher, e.g. 1.20.5+)
  • etcd: 3.3.11
  • Network plugin: Calico
  • Container runtime: docker-ce 19.03
  • Add-ons:
    • CoreDNS
    • Dashboard
    • Metrics-Server
    • EFK (Elasticsearch, Fluentd, Kibana)
  • Image registry
    • harbor
  • Ingress controller: ingress-nginx

1.3 Server Inventory

  1. 192.168.112.141 master1.sysit.cn master1
  2. 192.168.112.142 master2.sysit.cn master2
  3. 192.168.112.143 master3.sysit.cn master3
  4. 192.168.112.144 node1.sysit.cn node1
  5. 192.168.112.145 node2.sysit.cn node2
  6. 192.168.112.146 node3.sysit.cn node3

2. Environment Initialization

2.1 Operating System Initialization

  1. See the CentOS 7.3 initialization article on this site.

2.2 Disable swap (all kubelet nodes)

  1. # kubelet fails to start when a swap partition is enabled (this can be ignored by setting --fail-swap-on=false), so disable swap on every machine:
  2. swapoff -a && sysctl -w vm.swappiness=0
  3. # To keep swap from being mounted again at boot, comment out the corresponding entry in /etc/fstab:
  4. sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

2.3 Disable the firewall

  1. systemctl stop firewalld
  2. systemctl disable firewalld

2.4 Disable SELinux

  1. setenforce 0
  2. sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

2.5 Set the hostname and /etc/hosts

  1. hostnamectl set-hostname master1.sysit.cn
  2. cat > /etc/hosts <<'EOF'
  3. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  4. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  5. 192.168.112.140 master.sysit.cn master
  6. 192.168.112.141 master1.sysit.cn master1
  7. 192.168.112.142 master2.sysit.cn master2
  8. 192.168.112.143 master3.sysit.cn master3
  9. 192.168.112.144 node1.sysit.cn node1
  10. 192.168.112.145 node2.sysit.cn node2
  11. 192.168.112.146 node3.sysit.cn node3
  12. EOF

2.6 Install dependency packages

  1. sudo yum install -y epel-release
  2. sudo yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
  3. yum install sshpass -y

2.7 Users and passwordless SSH (optional)

  1. # Passwordless SSH: all subsequent steps are run on master1, so set up key-based logins from master1 to every other server.
  2. ssh-keygen -t rsa
  3. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  4. export pwd="rootpassword"
  5. for node in ${NODE_NAMES[@]}
  6. do
  7. echo ">>> ${node}"
  8. sshpass -p ${pwd} ssh-copy-id -o StrictHostKeyChecking=no root@${node} 2>/dev/null
  9. done
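
As an optional sanity check before continuing, the loop below (a small sketch reusing the NODE_NAMES variable exported above) confirms that passwordless SSH works to every node:

  1. for node in ${NODE_NAMES[@]}
  2. do
  3. ssh -o BatchMode=yes root@${node} hostname
  4. done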

2.8 Set kernel parameters

  1. # Pass bridged IPv4 traffic to the iptables chains
  2. cat > /etc/sysctl.d/kubernetes.conf <<'EOF'
  3. net.bridge.bridge-nf-call-ip6tables = 1
  4. net.bridge.bridge-nf-call-iptables = 1
  5. EOF
  6. sudo sysctl -p /etc/sysctl.d/kubernetes.conf

2.9 Upgrade the kernel

  • Reboot the server after the kernel upgrade.
  1. # The kernel shipped with CentOS is fairly old; upgrading it is recommended for stable container (Docker) operation.
  2. yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
  3. # After installation, check that the new kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if not, install it again!
  4. sudo yum --enablerepo=elrepo-kernel install -y kernel-lt
  5. # Boot from the new kernel by default
  6. grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
  7. # Confirm the default kernel now points to the new one
  8. grubby --default-kernel
  9. # Enable user_namespace
  10. grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

2.10 Load kernel modules

  1. cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
  2. #!/bin/bash
  3. ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
  4. for kernel_module in ${ipvs_modules}; do
  5. /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  6. if [ $? -eq 0 ]; then
  7. /sbin/modprobe ${kernel_module}
  8. fi
  9. done
  10. EOF
  11. chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

2.11 Disable NUMA

  1. grubby --args="numa=off" --update-kernel="$(grubby --default-kernel)"

2.12 Reboot

  1. # Reboot to load the new kernel
  2. reboot

2.13 Time zone and time synchronization

  • Set the time zone
  1. ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  • Install chrony
  1. # all nodes
  2. yum install chrony -y
  • Primary time server configuration
  1. # master1 acts as the primary time server and the other servers sync from it.
  2. # Serve local time as a fallback when external time sources are unreachable.
  3. cat >>/etc/chrony.conf <<'EOF'
  4. allow 192.168.112.0/24
  5. server 127.127.1.0
  6. allow 127.0.0.0/8
  7. local stratum 10
  8. EOF
  9. # restart
  10. systemctl enable chronyd
  11. systemctl restart chronyd
  12. # check
  13. chronyc sources
  • The other nodes sync time from master1 as clients
  1. # chrony client configuration, all nodes except master1
  2. sed -i "s/server/#server/g" /etc/chrony.conf
  3. echo "server master1 iburst" >>/etc/chrony.conf
  4. # start
  5. systemctl enable chronyd
  6. systemctl start chronyd
  7. # check
  8. chronyc sources

3. Deploy HA with haproxy + keepalived

3.1 Install and configure haproxy

  • Install
  1. # install on all three master nodes
  2. yum install haproxy -y
  • haproxy configuration
  1. grep -v ^# /etc/haproxy/haproxy.cfg
  2. global
  3. chroot /var/lib/haproxy
  4. daemon
  5. group haproxy
  6. user haproxy
  7. maxconn 4000
  8. pidfile /var/run/haproxy.pid
  9. log 127.0.0.1 local0 info
  10. defaults
  11. log global
  12. maxconn 4000
  13. option redispatch
  14. retries 3
  15. timeout http-request 10s
  16. timeout queue 1m
  17. timeout connect 10s
  18. timeout client 1m
  19. timeout server 1m
  20. timeout check 10s
  21. # haproxy stats page
  22. listen stats
  23. bind 0.0.0.0:1080
  24. mode http
  25. stats enable
  26. stats uri /
  27. stats realm Kuberentes\ Haproxy
  28. stats auth admin:admin
  29. stats refresh 30s
  30. stats show-node
  31. stats show-legends
  32. stats hide-version
  33. frontend kube-api-https_frontend
  34. bind 192.168.112.140:6443
  35. mode tcp
  36. default_backend kube-api-https_backend
  37. backend kube-api-https_backend
  38. balance roundrobin
  39. mode tcp
  40. stick-table type ip size 200k expire 30m
  41. stick on src
  42. server master1.sysit.cn 192.168.112.141:6443 maxconn 1024 cookie 1 weight 3 check inter 1500 rise 2 fall 3
  43. server master2.sysit.cn 192.168.112.142:6443 maxconn 1024 cookie 2 weight 3 check inter 1500 rise 2 fall 3
  44. server master3.sysit.cn 192.168.112.143:6443 maxconn 1024 cookie 3 weight 3 check inter 1500 rise 2 fall 3
  • Configure haproxy logging
  1. Enabling haproxy logging is recommended; it makes later troubleshooting much easier!
  2. # Create the haproxy log directory and grant write access
  3. mkdir /var/log/haproxy && chmod a+w /var/log/haproxy
  4. yum install rsyslog
  5. # Modify the following in the rsyslog configuration
  6. vim /etc/rsyslog.conf
  7. # Enable the tcp/udp modules
  8. # Provides UDP syslog reception
  9. $ModLoad imudp
  10. $UDPServerRun 514
  11. # Provides TCP syslog reception
  12. $ModLoad imtcp
  13. $InputTCPServerRun 514
  14. # Add the haproxy rules
  15. local0.=info -/var/log/haproxy/haproxy-info.log
  16. local0.=err -/var/log/haproxy/haproxy-err.log
  17. local0.notice;local0.!=err -/var/log/haproxy/haproxy-notice.log
  18. systemctl restart rsyslog
  • Kernel parameters for haproxy
  1. # Modify kernel parameters on all control-plane nodes;
  2. # net.ipv4.ip_nonlocal_bind: allow binding to a non-local IP, needed so the haproxy instances can bind to the VIP and fail over;
  3. # net.ipv4.ip_forward: allow forwarding
  4. echo "net.ipv4.ip_nonlocal_bind = 1" >>/etc/sysctl.conf
  5. echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf
  6. sysctl -p
  • Start haproxy
  1. systemctl enable haproxy
  2. systemctl restart haproxy

3.2 Install and configure keepalived

  • Install keepalived
  1. # all three master nodes
  2. yum install -y keepalived
  • keepalived configuration

Note: set a different priority value on each node.

  1. ! Configuration File for keepalived
  2. global_defs {
  3. notification_email {
  4. root@localhost.local
  5. }
  6. notification_email_from root@localhost.local
  7. smtp_server 192.168.112.11
  8. smtp_connect_timeout 30
  9. router_id Haproxy_DEVEL
  10. }
  11. vrrp_script chk_haproxy {
  12. script "/etc/haproxy/chk_haproxy.sh"
  13. interval 1
  14. # when haproxy is up, add 2 to the priority
  15. #weight 2
  16. }
  17. vrrp_instance VI_1 {
  18. state BACKUP
  19. interface eth0
  20. virtual_router_id 51
  21. # NOTE: use a different priority value on each node
  22. priority 100
  23. advert_int 1
  24. nopreempt
  25. authentication {
  26. auth_type PASS
  27. auth_pass sysit
  28. }
  29. virtual_ipaddress {
  30. 192.168.112.140
  31. }
  32. track_script {
  33. chk_haproxy
  34. }
  35. }
  • haproxy health-check script
  1. cat >/etc/haproxy/chk_haproxy.sh<<'EOF'
  2. #!/bin/bash
  3. # check haproxy process, if there isn't any process, try to start the process once,
  4. # check it again after 3s, if there isn't any process still, restart keepalived process, change state.
  5. # 2017-03-22 v0.1
  6. if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
  7. /etc/rc.d/init.d/haproxy start
  8. sleep 3
  9. if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
  10. systemctl restart keepalived
  11. fi
  12. fi
  13. EOF
  14. chmod +x /etc/haproxy/chk_haproxy.sh
  • Start the services
  1. systemctl enable keepalived
  2. systemctl start keepalived
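
Before moving on, it can help to confirm that exactly one master holds the VIP and that the haproxy stats page answers on it. This is a hedged check using the VIP, interface name and stats credentials configured above; adjust them if yours differ:

  1. # on each master: only the current keepalived MASTER should print a match
  2. ip addr show eth0 | grep 192.168.112.140
  3. # the stats page should return HTTP 200 through the VIP
  4. curl -s -o /dev/null -w "%{http_code}\n" -u admin:admin http://192.168.112.140:1080/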

4. CA Certificate and Key

4.1 The cfssl toolkit

This document uses CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key. The CA certificate is self-signed and is used to sign all TLS certificates created later.

  1. # run on any one server
  2. mkdir -p /usr/local/cfssl/bin
  3. cd /usr/local/cfssl/bin
  4. curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
  5. curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
  6. curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
  7. chmod +x cfssl cfssl-certinfo cfssljson
  8. echo 'export PATH=$PATH:/usr/local/cfssl/bin' >>/etc/bashrc
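
A quick sanity check that the binaries downloaded and run correctly (re-source the shell profile first so the updated PATH takes effect):

  1. source /etc/bashrc
  2. cfssl version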

4.2 CA configuration file

The CA certificate is shared by every node in the cluster; only one CA certificate is created, and all later certificates are signed by it.
The CA configuration file defines the usage scenarios (profiles) and concrete parameters (usages, expiry, server auth, client auth, encryption, etc.) of the root certificate; a specific profile is referenced later when signing other certificates.

  • CA config file
  1. # ca-config.json: one profile specifying the expiry, usage scenarios and other parameters; different profiles can be used to sign certificates for different scenarios. The file below is adapted from the generated template.
  2. cd /opt/k8s/
  3. cat > ca-config.json <<EOF
  4. {
  5. "signing": {
  6. "default": {
  7. "expiry": "87600h"
  8. },
  9. "profiles": {
  10. "kubernetes": {
  11. "usages": [
  12. "signing",
  13. "key encipherment",
  14. "server auth",
  15. "client auth"
  16. ],
  17. "expiry": "87600h"
  18. }
  19. }
  20. }
  21. }
  22. EOF

signing: the certificate can be used to sign other certificates; the generated ca.pem contains CA=TRUE
server auth: a client can use this CA to verify the certificate presented by a server;
client auth: a server can use this CA to verify the certificate presented by a client;

  • CA certificate signing request
  1. cat > ca-csr.json <<EOF
  2. {
  3. "CN": "kubernetes",
  4. "key": {
  5. "algo": "rsa",
  6. "size": 2048
  7. },
  8. "names": [
  9. {
  10. "C": "CN",
  11. "L": "Chengdu",
  12. "ST": "Sichuan",
  13. "O": "k8s",
  14. "OU": "System"
  15. }
  16. ]
  17. }
  18. EOF
  • Generate the CA certificate and key
  1. cfssl gencert -initca ca-csr.json | cfssljson -bare ca
  2. ls ca*
  3. ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
  4. # quick inspection
  5. cfssl-certinfo -cert ca.pem
  • Distribute
  1. # Copy the generated CA certificate, key and config file to /etc/kubernetes/cert on every machine;
  2. # ca-key.pem and ca.pem are the important files
  3. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  4. for node in ${NODE_NAMES[@]}
  5. do
  6. echo ">>> ${node}"
  7. ssh root@${node} "mkdir -p /etc/kubernetes/cert"
  8. scp ca*pem root@${node}:/etc/kubernetes/cert/
  9. done

5. Install etcd

See the etcd installation and configuration document. Note that if etcd is installed separately, its CA may differ from the Kubernetes CA,
but for consistency it is recommended to use the same CA.
Note: to allow for later expansion or server replacement, reserve a few spare IP addresses in the etcd certificate.
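
For reference, below is a minimal sketch of the etcd member options this guide assumes for master1. The config file path /etc/etcd/etcd.conf, the member names and the data directory are illustrative (adjust them to your own etcd deployment); the certificate paths match the ones kube-apiserver references later.

  1. cat > /etc/etcd/etcd.conf <<'EOF'
  2. ETCD_OPTS="--name=etcd-1 \
  3. --data-dir=/var/lib/etcd \
  4. --listen-peer-urls=https://192.168.112.141:2380 \
  5. --listen-client-urls=https://192.168.112.141:2379 \
  6. --advertise-client-urls=https://192.168.112.141:2379 \
  7. --initial-advertise-peer-urls=https://192.168.112.141:2380 \
  8. --initial-cluster=etcd-1=https://192.168.112.141:2380,etcd-2=https://192.168.112.142:2380,etcd-3=https://192.168.112.143:2380 \
  9. --cert-file=/etc/etcd/cert/etcd.pem \
  10. --key-file=/etc/etcd/cert/etcd-key.pem \
  11. --peer-cert-file=/etc/etcd/cert/etcd.pem \
  12. --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  13. --trusted-ca-file=/etc/etcd/cert/ca.pem \
  14. --peer-trusted-ca-file=/etc/etcd/cert/ca.pem"
  15. EOF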

6. Install Docker

  • All nodes
  1. curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
  2. # Switch to the Tsinghua University mirror to speed up installation
  3. sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
  4. yum install docker-ce-19.03.15 -y
  5. # Edit the systemd unit that starts Docker
  6. # Since 1.13 Docker changed its default firewall rules and sets the iptables filter FORWARD chain to DROP, which breaks cross-node Pod communication in a Kubernetes cluster, so adjust the iptables rule after installing Docker.
  7. sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
  8. # For 1.15.x and later, add the following
  9. mkdir /etc/docker
  10. cat > /etc/docker/daemon.json <<EOF
  11. {
  12. "exec-opts": ["native.cgroupdriver=systemd"],
  13. "log-driver": "json-file",
  14. "log-opts": {
  15. "max-size": "100m"
  16. },
  17. "storage-driver": "overlay2",
  18. "storage-opts": [
  19. "overlay2.override_kernel_check=true"
  20. ]
  21. }
  22. EOF
  23. mkdir -p /etc/systemd/system/docker.service.d
  24. # Restart Docker
  25. systemctl daemon-reload
  26. systemctl enable docker
  27. systemctl restart docker
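
A quick check that Docker is actually using the systemd cgroup driver configured above; this must match the kubelet's cgroupDriver setting later on:

  1. docker info 2>/dev/null | grep -i "cgroup driver"
  2. # expected output: Cgroup Driver: systemd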

7. Install the Kubernetes Binaries

Download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
Note: a single server package is enough; it contains both the master and worker node binaries. For production, use a patch release of 5 or higher, e.g. 1.20.5 or later.

  1. cd /opt/k8s
  2. wget https://dl.k8s.io/v1.20.7/kubernetes-server-linux-amd64.tar.gz
  3. tar zxvf kubernetes-server-linux-amd64.tar.gz
  4. cd kubernetes/server/bin
  5. # distribute to the master nodes
  6. # mkdir -p /opt/kubernetes/bin
  7. # cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin/
  8. export MASTER_NAMES=(master1 master2 master3)
  9. for node in ${MASTER_NAMES[@]}
  10. do
  11. echo ">>> ${node}"
  12. ssh root@${node} "mkdir -p /opt/kubernetes/bin"
  13. scp kube-apiserver kube-scheduler kube-controller-manager root@${node}:/opt/kubernetes/bin/
  14. done
  15. # worker nodes (kubelet also runs on the masters in this setup)
  16. # cp kubelet kube-proxy /opt/kubernetes/bin/
  17. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  18. for node in ${NODE_NAMES[@]}
  19. do
  20. echo ">>> ${node}"
  21. ssh root@${node} "mkdir -p /opt/kubernetes/bin"
  22. scp kubelet kube-proxy root@${node}:/opt/kubernetes/bin/
  23. done
  24. # kubectl
  25. # cp kubectl /usr/bin
  26. for node in master1 master2 master3
  27. do
  28. echo ">>> ${node}"
  29. scp kubectl root@${node}:/usr/bin/
  30. done

8. Configure kubectl

8.1 Create the kubectl TLS certificate and key

  • Create the kubectl certificate signing request
  1. cat > admin-csr.json <<'EOF'
  2. {
  3. "CN": "admin",
  4. "hosts": [],
  5. "key": {
  6. "algo": "rsa",
  7. "size": 2048
  8. },
  9. "names": [
  10. {
  11. "C": "CN",
  12. "L": "Chengdu",
  13. "ST": "Sichuan",
  14. "O": "system:masters",
  15. "OU": "System"
  16. }
  17. ]
  18. }
  19. EOF
  • Generate
  1. cfssl gencert -ca=/opt/k8s/ca.pem \
  2. -ca-key=/opt/k8s/ca-key.pem \
  3. -config=/opt/k8s/ca-config.json \
  4. -profile=kubernetes admin-csr.json | cfssljson -bare admin
  5. # check
  6. ls
  7. admin.csr admin-csr.json admin-key.pem admin.pem
  • Distribute the certificate and key
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp admin*pem root@${node}:/etc/kubernetes/cert
  6. done

8.2 kubectl kubeconfig file

  • Create
  1. cd /opt/k8s/
  2. mkdir /root/.kube
  3. KUBE_CONFIG="/root/.kube/config"
  4. KUBE_APISERVER="https://192.168.112.140:6443"
  5. kubectl config set-cluster kubernetes \
  6. --certificate-authority=/etc/kubernetes/cert/ca.pem \
  7. --embed-certs=true \
  8. --server=${KUBE_APISERVER} \
  9. --kubeconfig=${KUBE_CONFIG}
  10. kubectl config set-credentials cluster-admin \
  11. --client-certificate=./admin.pem \
  12. --client-key=./admin-key.pem \
  13. --embed-certs=true \
  14. --kubeconfig=${KUBE_CONFIG}
  15. kubectl config set-context default \
  16. --cluster=kubernetes \
  17. --user=cluster-admin \
  18. --kubeconfig=${KUBE_CONFIG}
  19. kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Distribute the kubeconfig file
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. ssh root@${node} "mkdir -p ~/.kube/"
  6. scp /root/.kube/config root@${node}:~/.kube/config
  7. done

8.3 Enable kubectl tab completion

  1. yum install bash-completion -y
  2. echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc

9. Install and Configure kube-apiserver

Installed on all three master nodes.

9.1 kubernetes certificate and key

  • Create the certificate signing request file
  1. cat > kubernetes-csr.json <<EOF
  2. {
  3. "CN": "kubernetes",
  4. "hosts": [
  5. "127.0.0.1",
  6. "192.168.112.140",
  7. "192.168.112.141",
  8. "192.168.112.142",
  9. "192.168.112.143",
  10. "192.168.112.200",
  11. "10.0.0.1",
  12. "kubernetes",
  13. "kubernetes.default",
  14. "kubernetes.default.svc",
  15. "kubernetes.default.svc.sysit",
  16. "kubernetes.default.svc.sysit.cn"
  17. ],
  18. "key": {
  19. "algo": "rsa",
  20. "size": 2048
  21. },
  22. "names": [
  23. {
  24. "C": "CN",
  25. "ST": "Sichuan",
  26. "L": "Chengdu",
  27. "O": "k8s",
  28. "OU": "System"
  29. }
  30. ]
  31. }
  32. EOF
  • The hosts field above must contain every Master/LB/VIP IP; none may be missing! To make later expansion easier, add a few reserved IPs as well.
  • The hosts field lists the IPs and domain names authorized to use this certificate: the VIP, the apiserver node IPs, the kubernetes service IP and the service domain names; adding spare IPs is recommended for future growth.
  • The last character of a domain name must not be "." (e.g. kubernetes.default.svc.sysit.cn. is invalid); otherwise parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.sysit.cn."
  • If another cluster domain such as bootgo.com is used, change the last two names in the list to kubernetes.default.svc.bootgo and kubernetes.default.svc.bootgo.com
  • The kubernetes service IP is created automatically by the apiserver; it is normally the first IP of the range given by --service-cluster-ip-range and can later be checked with:
    $ kubectl get svc kubernetes
    NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    kubernetes   10.0.0.1     <none>        443/TCP   1d
  • Generate the certificate and key
  1. cfssl gencert -ca=/opt/k8s/ca.pem \
  2. -ca-key=/opt/k8s/ca-key.pem \
  3. -config=/opt/k8s/ca-config.json \
  4. -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
  5. # check
  6. ls kubernetes*pem
  7. kubernetes-key.pem kubernetes.pem
  • Distribute the generated certificate and key to the master nodes
  1. # cp kubernetes*.pem /etc/kubernetes/cert/
  2. export MASTER_NAMES=(master1 master2 master3)
  3. for node in ${MASTER_NAMES[@]}
  4. do
  5. echo ">>> ${node}"
  6. scp kubernetes*.pem root@${node}:/etc/kubernetes/cert/
  7. done

9.2 kube-apiserver configuration file

  • Create the config file
  1. # Note: the bind-address and advertise-address values (#NODE_IP#) are substituted per node during distribution
  2. cat > kube-apiserver.conf << 'EOF'
  3. KUBE_APISERVER_OPTS="--logtostderr=false \
  4. --v=2 \
  5. --log-dir=/var/log/kubernetes/ \
  6. --etcd-servers=https://192.168.112.141:2379,https://192.168.112.142:2379,https://192.168.112.143:2379 \
  7. --bind-address=#NODE_IP# \
  8. --secure-port=6443 \
  9. --advertise-address=#NODE_IP# \
  10. --allow-privileged=true \
  11. --service-cluster-ip-range=10.0.0.0/24 \
  12. --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
  13. --authorization-mode=RBAC,Node \
  14. --enable-bootstrap-token-auth=true \
  15. --token-auth-file=/etc/kubernetes/conf/token.csv \
  16. --service-node-port-range=30000-32767 \
  17. --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  18. --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  19. --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  20. --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  21. --client-ca-file=/etc/kubernetes/cert/ca.pem \
  22. --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
  23. --service-account-issuer=api \
  24. --service-account-signing-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  25. --etcd-cafile=/etc/etcd/cert/ca.pem \
  26. --etcd-certfile=/etc/etcd/cert/etcd.pem \
  27. --etcd-keyfile=/etc/etcd/cert/etcd-key.pem \
  28. --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  29. --proxy-client-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  30. --proxy-client-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  31. --requestheader-allowed-names=kubernetes \
  32. --requestheader-extra-headers-prefix=X-Remote-Extra- \
  33. --requestheader-group-headers=X-Remote-Group \
  34. --requestheader-username-headers=X-Remote-User \
  35. --enable-aggregator-routing=true \
  36. --audit-log-maxage=30 \
  37. --audit-log-maxbackup=3 \
  38. --audit-log-maxsize=100 \
  39. --audit-log-path=/var/log/kubernetes/k8s-audit.log"
  40. EOF

--logtostderr: log to stderr (false here, so logs go to files)
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: https secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to access kubelets
--tls-xxx-file: apiserver https certificate
Parameters required since 1.20: --service-account-issuer, --service-account-signing-key-file
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging
Aggregation-layer options: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file,
--requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers,
--requestheader-username-headers, --enable-aggregator-routing

  • Distribute the config file
  1. # mkdir -p /etc/kubernetes/conf && mkdir /var/log/kubernetes
  2. # cp kube-apiserver.conf /etc/kubernetes/conf/
  3. for ip in 192.168.112.{141,142,143}
  4. do
  5. echo ">>> ${ip}"
  6. ssh root@${ip} "mkdir -p /etc/kubernetes/conf && mkdir /var/log/kubernetes"
  7. scp kube-apiserver.conf root@${ip}:/etc/kubernetes/conf/kube-apiserver.conf
  8. ssh root@${ip} "sed -i 's/#NODE_IP#/${ip}/g' /etc/kubernetes/conf/kube-apiserver.conf"
  9. done

9.3 Enable the TLS Bootstrapping mechanism

TLS Bootstrapping: once the master apiserver enables TLS authentication, the kubelet and kube-proxy on each node must use valid CA-signed certificates to communicate with kube-apiserver.
When there are many nodes, issuing these client certificates by hand is a lot of work and also complicates scaling the cluster.
To simplify this, Kubernetes introduces TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privileged user, and the kubelet certificate is signed dynamically.
This mode is strongly recommended on the nodes; currently it is used only for the kubelet, while kube-proxy still uses a certificate that we issue centrally.

  • Generate token.csv
  1. cat > /etc/kubernetes/conf/token.csv << EOF
  2. $(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  3. EOF
  • Distribute token.csv
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp token.csv root@${node}:/etc/kubernetes/conf/
  6. done
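
For reference, token.csv is a single comma-separated line containing the token, user name, UID and group. The token value below is only an illustrative placeholder; yours is generated randomly by the command above:

  1. cat /etc/kubernetes/conf/token.csv
  2. c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:kubelet-bootstrap"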

9.4 kube-apiserver systemd unit file

  • Create the unit file
  1. cat > kube-apiserver.service << 'EOF'
  2. [Unit]
  3. Description=Kubernetes API Server
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. [Service]
  6. EnvironmentFile=/etc/kubernetes/conf/kube-apiserver.conf
  7. ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
  8. Restart=on-failure
  9. [Install]
  10. WantedBy=multi-user.target
  11. EOF
  • Distribute the unit file
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-apiserver.service root@${node}:/usr/lib/systemd/system/kube-apiserver.service
  6. done

9.5 Start

  1. # systemctl daemon-reload
  2. # systemctl enable kube-apiserver
  3. # systemctl restart kube-apiserver
  4. export MASTER_NAMES=(master1 master2 master3)
  5. for node in ${MASTER_NAMES[@]}
  6. do
  7. echo ">>> ${node}"
  8. ssh root@${node} "mkdir -p /var/log/kubernetes"
  9. ssh root@${node} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
  10. done

9.6 Check the startup result

  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. ssh root@${node} "systemctl status kube-apiserver|grep Active"
  6. done
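
Optionally, a quick liveness check against the apiserver. This is a hedged example: it relies on the default RBAC rules that allow unauthenticated access to the health endpoints.

  1. curl -k https://192.168.112.140:6443/healthz
  2. # expected output: ok (checking the VIP also validates the haproxy/keepalived layer)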

9.7 Inspect the data kube-apiserver writes to etcd

  1. ETCDCTL_API=3 etcdctl \
  2. --endpoints=https://192.168.112.141:2379,https://192.168.112.142:2379,https://192.168.112.143:2379 \
  3. --cacert=/etc/etcd/cert/ca.pem \
  4. --cert=/etc/etcd/cert/etcd.pem \
  5. --key=/etc/etcd/cert/etcd-key.pem \
  6. get /registry/ --prefix --keys-only

10. Deploy kube-controller-manager

The cluster runs 3 instances of this component. After startup one leader is elected and the other instances block; when the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available.

10.1 Create the kube-controller-manager certificate and key

  • Create the certificate signing request
  1. cat > kube-controller-manager-csr.json <<EOF
  2. {
  3. "CN": "system:kube-controller-manager",
  4. "hosts": [],
  5. "key": {
  6. "algo": "rsa",
  7. "size": 2048
  8. },
  9. "names": [
  10. {
  11. "C": "CN",
  12. "L": "Chengdu",
  13. "ST": "Sichuan",
  14. "O": "system:masters",
  15. "OU": "System"
  16. }
  17. ]
  18. }
  19. EOF
  • Generate the certificate and key:
  1. cfssl gencert -ca=/opt/k8s/ca.pem \
  2. -ca-key=/opt/k8s/ca-key.pem \
  3. -config=/opt/k8s/ca-config.json \
  4. -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
  • Distribute the generated certificate and key to all master nodes
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-controller-manager*.pem root@${node}:/etc/kubernetes/cert/
  6. done

10.2 kubeconfig file

The kubeconfig file contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the component's own certificate.

  • Generate the kubeconfig file
  1. # --server: the apiserver address; use the VIP behind the HA layer;
  2. # the cluster name is arbitrary, but must be used consistently once chosen;
  3. # --kubeconfig: path and name of the kubeconfig file; if unset, it defaults to ~/.kube/config
  4. # Generate the kubeconfig (these are shell commands, run directly in the terminal):
  5. KUBE_CONFIG="kube-controller-manager.kubeconfig"
  6. KUBE_APISERVER="https://192.168.112.140:6443"
  7. kubectl config set-cluster kubernetes \
  8. --certificate-authority=/etc/kubernetes/cert/ca.pem \
  9. --embed-certs=true \
  10. --server=${KUBE_APISERVER} \
  11. --kubeconfig=${KUBE_CONFIG}
  12. kubectl config set-credentials kube-controller-manager \
  13. --client-certificate=./kube-controller-manager.pem \
  14. --client-key=./kube-controller-manager-key.pem \
  15. --embed-certs=true \
  16. --kubeconfig=${KUBE_CONFIG}
  17. kubectl config set-context default \
  18. --cluster=kubernetes \
  19. --user=kube-controller-manager \
  20. --kubeconfig=${KUBE_CONFIG}
  21. kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Distribute the kubeconfig file
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-controller-manager.kubeconfig root@${node}:/etc/kubernetes/conf/
  6. done

10.3 kube-controller-manager configuration file

  • Create the config file
  1. cat > kube-controller-manager.conf << 'EOF'
  2. KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
  3. --v=2 \
  4. --log-dir=/var/log/kubernetes/kube-controller-manager/ \
  5. --leader-elect=true \
  6. --kubeconfig=/etc/kubernetes/conf/kube-controller-manager.kubeconfig \
  7. --bind-address=127.0.0.1 \
  8. --allocate-node-cidrs=true \
  9. --cluster-cidr=10.244.0.0/16 \
  10. --service-cluster-ip-range=10.0.0.0/24 \
  11. --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  12. --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  13. --root-ca-file=/etc/kubernetes/cert/ca.pem \
  14. --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  15. --cluster-signing-duration=87600h0m0s"
  16. EOF

--kubeconfig: kubeconfig used to connect to the apiserver
--leader-elect: automatic leader election when several instances run (HA)
--cluster-signing-cert-file/--cluster-signing-key-file: the CA used to issue kubelet certificates automatically; must match the apiserver's CA

  • Distribute the config file
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-controller-manager.conf root@${node}:/etc/kubernetes/conf/kube-controller-manager.conf
  6. done

10.4 kube-controller-manager systemd unit file

  • Create
  1. cat > kube-controller-manager.service << 'EOF'
  2. [Unit]
  3. Description=Kubernetes Controller Manager
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. [Service]
  6. EnvironmentFile=/etc/kubernetes/conf/kube-controller-manager.conf
  7. ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
  8. Restart=on-failure
  9. [Install]
  10. WantedBy=multi-user.target
  11. EOF
  • Distribute
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-controller-manager.service root@${node}:/usr/lib/systemd/system/kube-controller-manager.service
  6. done

10.5 Start the kube-controller-manager service

  1. # systemctl daemon-reload
  2. # systemctl enable kube-controller-manager
  3. # systemctl restart kube-controller-manager
  4. export MASTER_NAMES=(master1 master2 master3)
  5. for node in ${MASTER_NAMES[@]}
  6. do
  7. echo ">>> ${node}"
  8. ssh root@${node} "mkdir -p /var/log/kubernetes/kube-controller-manager"
  9. ssh root@${node} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
  10. done

10.6 Check the startup result

  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. ssh root@${node} "systemctl status kube-controller-manager|grep Active"
  6. done

10.7 Test kube-controller-manager high availability

Stop the kube-controller-manager service on one or two nodes, watch the logs on the other nodes, and check whether one of them acquires leadership, as sketched below.
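
For example (a hedged sketch; the grep pattern may need adjusting to your exact log wording), stop the service on the current leader and watch another master take over:

  1. systemctl stop kube-controller-manager    # run on the current leader
  2. # on another master node:
  3. journalctl -u kube-controller-manager --since "2 minutes ago" | grep -i leader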

11. Install kube-scheduler

The cluster runs 3 instances of this component. After startup one leader is elected and the other instances block; when the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available.

11.1 Create the kube-scheduler certificate and key

  • Create the certificate signing request file
  1. cat > kube-scheduler-csr.json <<EOF
  2. {
  3. "CN": "system:kube-scheduler",
  4. "hosts": [],
  5. "key": {
  6. "algo": "rsa",
  7. "size": 2048
  8. },
  9. "names": [
  10. {
  11. "C": "CN",
  12. "ST": "Sichuan",
  13. "L": "Chengdu",
  14. "O": "system:kube-scheduler",
  15. "OU": "System"
  16. }
  17. ]
  18. }
  19. EOF
  • Generate the certificate and key
  1. cfssl gencert -ca=/opt/k8s/ca.pem \
  2. -ca-key=/opt/k8s/ca-key.pem \
  3. -config=/opt/k8s/ca-config.json \
  4. -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

11.2 kubeconfig file

  • Create
  1. # Generate the kubeconfig (these are shell commands, run directly in the terminal):
  2. KUBE_CONFIG="kube-scheduler.kubeconfig"
  3. KUBE_APISERVER="https://192.168.112.140:6443"
  4. kubectl config set-cluster kubernetes \
  5. --certificate-authority=/etc/kubernetes/cert/ca.pem \
  6. --embed-certs=true \
  7. --server=${KUBE_APISERVER} \
  8. --kubeconfig=${KUBE_CONFIG}
  9. kubectl config set-credentials kube-scheduler \
  10. --client-certificate=./kube-scheduler.pem \
  11. --client-key=./kube-scheduler-key.pem \
  12. --embed-certs=true \
  13. --kubeconfig=${KUBE_CONFIG}
  14. kubectl config set-context default \
  15. --cluster=kubernetes \
  16. --user=kube-scheduler \
  17. --kubeconfig=${KUBE_CONFIG}
  18. kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Distribute
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-scheduler.kubeconfig root@${node}:/etc/kubernetes/conf/
  6. done

11.3 kube-scheduler configuration file

  • Create the config file
  1. cat > kube-scheduler.conf << 'EOF'
  2. KUBE_SCHEDULER_OPTS="--logtostderr=false \
  3. --v=2 \
  4. --log-dir=/var/log/kubernetes/kube-scheduler/ \
  5. --leader-elect \
  6. --kubeconfig=/etc/kubernetes/conf/kube-scheduler.kubeconfig \
  7. --bind-address=127.0.0.1"
  8. EOF
  • Distribute the config file
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-scheduler.conf root@${node}:/etc/kubernetes/conf/kube-scheduler.conf
  6. done

11.4 kube-scheduler systemd unit file

  • Create
  1. cat > kube-scheduler.service <<'EOF'
  2. [Unit]
  3. Description=Kubernetes Scheduler
  4. Documentation=https://github.com/kubernetes/kubernetes
  5. [Service]
  6. EnvironmentFile=/etc/kubernetes/conf/kube-scheduler.conf
  7. ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
  8. Restart=on-failure
  9. RestartSec=5
  10. [Install]
  11. WantedBy=multi-user.target
  12. EOF
  • Distribute
  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-scheduler.service root@${node}:/usr/lib/systemd/system/kube-scheduler.service
  6. done

11.5 Start the kube-scheduler service

  1. # systemctl daemon-reload
  2. # systemctl enable kube-scheduler
  3. # systemctl restart kube-scheduler
  4. export MASTER_NAMES=(master1 master2 master3)
  5. for node in ${MASTER_NAMES[@]}
  6. do
  7. echo ">>> ${node}"
  8. ssh root@${node} "mkdir -p /var/log/kubernetes/kube-scheduler"
  9. ssh root@${node} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
  10. done

11.6 Check the startup result

  1. export MASTER_NAMES=(master1 master2 master3)
  2. for node in ${MASTER_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. ssh root@${node} "systemctl status kube-scheduler|grep Active"
  6. done
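
With the apiserver, controller-manager and scheduler all running, the control plane can be spot-checked from master1. A minimal check (componentstatuses is deprecated in 1.20 but still usable; the exact output format may differ):

  1. kubectl get cs
  2. # scheduler, controller-manager and all etcd members should report Healthy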

12. Install kubelet

kubelet runs on every worker node (in production it may also be needed on the master nodes). It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.

On startup, kubelet automatically registers its node with kube-apiserver. For security, this document opens only the https secure port, which authenticates and authorizes requests and rejects unauthorized access (for example from apiserver or heapster without credentials).

12.1 kubelet bootstrap kubeconfig file

  • Create
  1. KUBE_CONFIG="kubelet-bootstrap.kubeconfig"
  2. KUBE_APISERVER="https://192.168.112.140:6443"
  3. BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/conf/token.csv)
  4. # Generate the kubelet bootstrap kubeconfig file
  5. kubectl config set-cluster kubernetes \
  6. --certificate-authority=/etc/kubernetes/cert/ca.pem \
  7. --embed-certs=true \
  8. --server=${KUBE_APISERVER} \
  9. --kubeconfig=${KUBE_CONFIG}
  10. kubectl config set-credentials "kubelet-bootstrap" \
  11. --token=${BOOTSTRAP_TOKEN} \
  12. --kubeconfig=${KUBE_CONFIG}
  13. kubectl config set-context default \
  14. --cluster=kubernetes \
  15. --user="kubelet-bootstrap" \
  16. --kubeconfig=${KUBE_CONFIG}
  17. kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

The kubeconfig embeds the bootstrap token rather than a client certificate; the kubelet's certificate is issued later by kube-controller-manager.

  • Distribute
  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kubelet-bootstrap.kubeconfig root@${node}:/etc/kubernetes/conf/kubelet-bootstrap.kubeconfig
  6. done

12.2 Kubelet parameter file

  • Create
  1. cat > kubelet-config.yaml << EOF
  2. kind: KubeletConfiguration
  3. apiVersion: kubelet.config.k8s.io/v1beta1
  4. address: 0.0.0.0
  5. port: 10250
  6. readOnlyPort: 10255
  7. cgroupDriver: systemd
  8. clusterDNS: ["10.0.0.2"]
  9. clusterDomain: "sysit.cn"
  10. failSwapOn: false
  11. authentication:
  12.   anonymous:
  13.     enabled: false
  14.   webhook:
  15.     cacheTTL: 2m0s
  16.     enabled: true
  17.   x509:
  18.     clientCAFile: /etc/kubernetes/cert/ca.pem
  19. authorization:
  20.   mode: Webhook
  21.   webhook:
  22.     cacheAuthorizedTTL: 5m0s
  23.     cacheUnauthorizedTTL: 30s
  24. evictionHard:
  25.   imagefs.available: 15%
  26.   memory.available: 100Mi
  27.   nodefs.available: 10%
  28.   nodefs.inodesFree: 5%
  29. maxOpenFiles: 1000000
  30. maxPods: 110
  31. EOF
  • Some say clusterDomain should end with a trailing dot, e.g. "sysit.cn."; this has not been verified here.
  • cgroupDriver: if Docker's cgroup driver is systemd, set cgroupDriver to systemd. This setting matters; otherwise the nodes will fail to join the cluster later.

  • Distribute

  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kubelet-config.yaml root@${node}:/etc/kubernetes/conf/kubelet-config.yaml
  6. done

12.3 kubelet configuration file

  • Create
  1. cat > kubelet.conf << 'EOF'
  2. KUBELET_OPTS="--logtostderr=false \
  3. --v=2 \
  4. --log-dir=/var/log/kubernetes/kubelet/ \
  5. --hostname-override=#NODE_NAME# \
  6. --network-plugin=cni \
  7. --kubeconfig=/etc/kubernetes/conf/kubelet.kubeconfig \
  8. --bootstrap-kubeconfig=/etc/kubernetes/conf/kubelet-bootstrap.kubeconfig \
  9. --config=/etc/kubernetes/conf/kubelet-config.yaml \
  10. --cert-dir=/etc/kubernetes/cert \
  11. --pod-infra-container-image=lizhenliang/pause-amd64:3.0"
  12. EOF

--hostname-override: the node's display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an initially non-existent path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: the parameter file
--cert-dir: directory where the kubelet certificates are generated
--pod-infra-container-image: image of the pause container that manages the Pod network

  • Distribute
  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. ssh root@${node} "mkdir -p /etc/kubernetes/conf"
  6. scp kubelet.conf root@${node}:/etc/kubernetes/conf/kubelet.conf
  7. ssh root@${node} "sed -i 's/#NODE_NAME#/${node}.sysit.cn/g' /etc/kubernetes/conf/kubelet.conf"
  8. done

12.4 kubelet systemd unit file

  • Create
  1. cat > kubelet.service <<'EOF'
  2. [Unit]
  3. Description=Kubernetes Kubelet
  4. After=docker.service
  5. [Service]
  6. EnvironmentFile=/etc/kubernetes/conf/kubelet.conf
  7. ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
  8. Restart=on-failure
  9. LimitNOFILE=65536
  10. [Install]
  11. WantedBy=multi-user.target
  12. EOF
  • Distribute
  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kubelet.service root@${node}:/usr/lib/systemd/system/kubelet.service
  6. done

12.5 Start

  1. # systemctl daemon-reload
  2. # systemctl start kubelet
  3. # systemctl enable kubelet
  4. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  5. for node in ${NODE_NAMES[@]}
  6. do
  7. echo ">>> ${node}"
  8. ssh root@${node} "mkdir -p /var/lib/kubelet && mkdir -p /var/log/kubernetes/kubelet"
  9. ssh root@${node} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  10. done

12.6 Check the startup result

  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. ssh root@${node} "systemctl status kubelet|grep Active"
  6. done

12.7 Approve the kubelet certificate requests and join the nodes to the cluster

  • View the CSR requests
    After confirming that the kubelet service started successfully, approve the bootstrap requests on a master node. The pending CSRs can be listed with:
  1. kubectl get csr
  2. NAME AGE SIGNERNAME REQUESTOR CONDITION
  3. node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A 6m3s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
  • Approve the request
  1. kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A
  • List the nodes
  1. kubectl get node
  2. NAME STATUS ROLES AGE VERSION
  3. master1.sysit.cn NotReady <none> 15s v1.20.7
  4. master2.sysit.cn NotReady <none> 15s v1.20.7
  5. master3.sysit.cn NotReady <none> 15s v1.20.7
  6. node1.sysit.cn NotReady <none> 15s v1.20.7
  7. node2.sysit.cn NotReady <none> 15s v1.20.7
  8. node3.sysit.cn NotReady <none> 15s v1.20.7

No network plugin has been installed yet; once it is deployed, the nodes will change to Ready.

12.8 Authorize apiserver access to kubelet

  1. # Use case: commands such as kubectl logs
  2. cat > apiserver-to-kubelet-rbac.yaml << EOF
  3. apiVersion: rbac.authorization.k8s.io/v1
  4. kind: ClusterRole
  5. metadata:
  6.   annotations:
  7.     rbac.authorization.kubernetes.io/autoupdate: "true"
  8.   labels:
  9.     kubernetes.io/bootstrapping: rbac-defaults
  10.   name: system:kube-apiserver-to-kubelet
  11. rules:
  12.   - apiGroups:
  13.       - ""
  14.     resources:
  15.       - nodes/proxy
  16.       - nodes/stats
  17.       - nodes/log
  18.       - nodes/spec
  19.       - nodes/metrics
  20.       - pods/log
  21.     verbs:
  22.       - "*"
  23. ---
  24. apiVersion: rbac.authorization.k8s.io/v1
  25. kind: ClusterRoleBinding
  26. metadata:
  27.   name: system:kube-apiserver
  28.   namespace: ""
  29. roleRef:
  30.   apiGroup: rbac.authorization.k8s.io
  31.   kind: ClusterRole
  32.   name: system:kube-apiserver-to-kubelet
  33. subjects:
  34.   - apiGroup: rbac.authorization.k8s.io
  35.     kind: User
  36.     name: kubernetes
  37. EOF
  38. kubectl apply -f apiserver-to-kubelet-rbac.yaml

13. Deploy kube-proxy

13.1 Generate the kube-proxy certificate

  • Create the certificate request file
  1. cat > kube-proxy-csr.json << EOF
  2. {
  3. "CN": "system:kube-proxy",
  4. "hosts": [],
  5. "key": {
  6. "algo": "rsa",
  7. "size": 2048
  8. },
  9. "names": [
  10. {
  11. "C": "CN",
  12. "L": "Chengdu",
  13. "ST": "Sichuan",
  14. "O": "k8s",
  15. "OU": "System"
  16. }
  17. ]
  18. }
  19. EOF
  • Generate the certificate
  1. cfssl gencert -ca=/opt/k8s/ca.pem \
  2. -ca-key=/opt/k8s/ca-key.pem \
  3. -config=/opt/k8s/ca-config.json \
  4. -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

13.2 kubeconfig file

  • Create
  1. KUBE_CONFIG="kube-proxy.kubeconfig"
  2. KUBE_APISERVER="https://192.168.112.140:6443"
  3. kubectl config set-cluster kubernetes \
  4. --certificate-authority=/etc/kubernetes/cert/ca.pem \
  5. --embed-certs=true \
  6. --server=${KUBE_APISERVER} \
  7. --kubeconfig=${KUBE_CONFIG}
  8. kubectl config set-credentials kube-proxy \
  9. --client-certificate=./kube-proxy.pem \
  10. --client-key=./kube-proxy-key.pem \
  11. --embed-certs=true \
  12. --kubeconfig=${KUBE_CONFIG}
  13. kubectl config set-context default \
  14. --cluster=kubernetes \
  15. --user=kube-proxy \
  16. --kubeconfig=${KUBE_CONFIG}
  17. kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Distribute the kubeconfig file
  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-proxy.kubeconfig root@${node}:/etc/kubernetes/conf/
  6. done

13.3 kube-proxy parameter file

  • Create
  1. cat >kube-proxy-config.yaml << EOF
  2. kind: KubeProxyConfiguration
  3. apiVersion: kubeproxy.config.k8s.io/v1alpha1
  4. bindAddress: 0.0.0.0
  5. metricsBindAddress: 0.0.0.0:10249
  6. clientConnection:
  7.   kubeconfig: /etc/kubernetes/conf/kube-proxy.kubeconfig
  8. hostnameOverride: #NODE_NAME#
  9. clusterCIDR: 10.0.0.0/24
  10. mode: "ipvs"
  11. EOF
  • Distribute
  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-proxy-config.yaml root@${node}:/etc/kubernetes/conf/kube-proxy-config.yaml
  6. ssh root@${node} "sed -i 's/#NODE_NAME#/${node}.sysit.cn/g' /etc/kubernetes/conf/kube-proxy-config.yaml"
  7. done

13.4 kube-proxy configuration file

  • Create
  1. cat > kube-proxy.conf <<'EOF'
  2. KUBE_PROXY_OPTS="--logtostderr=false \
  3. --v=2 \
  4. --log-dir=/var/log/kubernetes/kube-proxy/ \
  5. --config=/etc/kubernetes/conf/kube-proxy-config.yaml"
  6. EOF
  • Distribute
  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-proxy.conf root@${node}:/etc/kubernetes/conf/kube-proxy.conf
  6. done

13.5 kube-proxy systemd unit file

  • Create
  1. cat > kube-proxy.service <<'EOF'
  2. [Unit]
  3. Description=Kubernetes Proxy
  4. After=network.target
  5. [Service]
  6. EnvironmentFile=/etc/kubernetes/conf/kube-proxy.conf
  7. ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
  8. Restart=on-failure
  9. LimitNOFILE=65536
  10. [Install]
  11. WantedBy=multi-user.target
  12. EOF
  • Distribute the kube-proxy systemd unit file
  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. scp kube-proxy.service root@${node}:/usr/lib/systemd/system/kube-proxy.service
  6. done

13.6 Start

  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. ssh root@${node} "mkdir -p /var/lib/kube-proxy && mkdir -p /var/log/kubernetes/kube-proxy"
  6. ssh root@${node} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  7. done

13.7 Check the startup result

  1. export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
  2. for node in ${NODE_NAMES[@]}
  3. do
  4. echo ">>> ${node}"
  5. ssh root@${node} "systemctl status kube-proxy|grep Active"
  6. done
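
Since kube-proxy runs in ipvs mode here, a quick spot-check on any node should list IPVS virtual servers once Services exist, starting with the kubernetes Service (10.0.0.1:443). This is a hedged check; the exact entries depend on the Services in the cluster:

  1. ipvsadm -Ln | head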

14. Deploy the Calico Network Plugin

Calico is configured as the network plugin here. Reference: https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-on-nodes

  • Configuration file
    Note: change the subnet in calico.yaml so it matches the cluster's pod subnet (the --cluster-cidr configured for kube-controller-manager); in this guide it is 10.244.0.0/16
  1. wget https://docs.projectcalico.org/manifests/calico.yaml
  2. cp calico.yaml{,.ori}
  3. [root@master1 k8s]# diff calico.yaml calico.yaml.ori
  4. 3683,3684c3683,3684
  5. < - name: CALICO_IPV4POOL_CIDR
  6. < value: "10.244.0.0/16"
  7. ---
  8. > # - name: CALICO_IPV4POOL_CIDR
  9. > # value: "192.168.0.0/16"
  • Apply
  1. kubectl apply -f calico.yaml
  • Check the nodes
  1. [root@master1 k8s]# kubectl get node
  2. NAME STATUS ROLES AGE VERSION
  3. master1.sysit.cn Ready <none> 1h v1.20.7
  4. master2.sysit.cn Ready <none> 1h v1.20.7
  5. master3.sysit.cn Ready <none> 1h v1.20.7
  6. node1.sysit.cn Ready <none> 1h v1.20.7
  7. node2.sysit.cn Ready <none> 1h v1.20.7
  8. node3.sysit.cn Ready <none> 1h v1.20.7
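
It can also help to confirm that the calico-node DaemonSet and the calico-kube-controllers Pod are running before relying on the node status (pod names and counts will differ per cluster):

  1. kubectl get pods -n kube-system -o wide | grep calico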

15. Deploy CoreDNS

  • Deploy
  1. wget https://github.com/coredns/deployment/archive/refs/tags/coredns-1.14.0.tar.gz
  2. tar -zxvf coredns-1.14.0.tar.gz
  3. cd deployment-coredns-1.14.0/kubernetes/
  4. export CLUSTER_DNS_SVC_IP="10.0.0.2"
  5. export CLUSTER_DNS_DOMAIN="sysit.cn"
  6. ./deploy.sh -i ${CLUSTER_DNS_SVC_IP} -d ${CLUSTER_DNS_DOMAIN} | kubectl apply -f -
  • Verify
  1. kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
  2. / # nslookup kubernetes
  3. Server: 10.0.0.2
  4. Address 1: 10.0.0.2 kube-dns.kube-system.svc.sysit.cn
  5. Name: kubernetes
  6. Address 1: 10.0.0.1 kubernetes.default.svc.sysit.cn

16. Dashboard and Metrics

See the "kubernetes dashboard installation" and "addons installation" articles on this site.

Reference: https://blog.51cto.com/lizhenliang/2717923
