二进制安装kubernetes集群
原作者: sysit
1. 环境说明
1.1 部署拓扑
- 部署拓扑
本文采用外置etcd的方式部署
- 实际组件部署
本文中etcd集群和master集群融合部署,大集群建议单独的服务器部署etcd。
1.2 集群环境
- 操作系统:CentOS Linux release 7.9.2009 (Core)
- 版本:Kubernetes 1.20.7(生产环境建议使用小版本大于5的包,如1.20.5以上)
- etcd: 3.3.11
- 网络方案:Calico
- 容器:docker-ce 19.03
- 插件:
- CoreDns
- Dashboard
- Metrics-Server
- EFK(ElasticSearch、Fluentd、Kibana)
- 镜像仓库
- harbor
- ingress控制器:ingress-nginx
1.3 服务器信息
192.168.112.141 master1.sysit.cn master1
192.168.112.142 master2.sysit.cn master2
192.168.112.143 master3.sysit.cn master3
192.168.112.144 node1.sysit.cn node1
192.168.112.145 node2.sysit.cn node2
192.168.112.146 node3.sysit.cn node3
2. 环境初始化
2.1 操作系统初始化
见本站CentOS 7.3初始化
2.2 关闭swap(kubelet节点操作)
# 如果开启了 swap 分区,kubelet 会启动失败(可以通过将参数 --fail-swap-on 设置为 false 来忽略 swap on),故需要在每台机器上关闭 swap 分区:
swapoff -a && sysctl -w vm.swappiness=0
# 为了防止开机自动挂载 swap 分区,可以注释 /etc/fstab 中相应的条目:
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
2.3 关闭防火墙
systemctl stop firewalld
systemctl disable firewalld
2.4 关闭selinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
2.5 设置主机名和hosts
hostnamectl set-hostname master1.sysit.cn
cat > /etc/hosts <<'EOF'
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.112.140 master.sysit.cn master
192.168.112.141 master1.sysit.cn master1
192.168.112.142 master2.sysit.cn master2
192.168.112.143 master3.sysit.cn master3
192.168.112.144 node1.sysit.cn node1
192.168.112.145 node2.sysit.cn node2
192.168.112.146 node3.sysit.cn node3
EOF
2.6 安装依赖包
sudo yum install -y epel-release
sudo yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
yum install sshpass -y
2.7 用户及免密(可选)
# 免密:后续所有操作都在master1上执行,因此创建master1到其他服务器的免密登录。
ssh-keygen -t rsa
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
export pwd="rootpassword"
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  sshpass -p ${pwd} ssh-copy-id -o StrictHostKeyChecking=no root@${node} 2>/dev/null
done
2.8 设置系统参数
# 将桥接的IPv4流量传递到iptables的链
cat > /etc/sysctl.d/kubernetes.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl -p /etc/sysctl.d/kubernetes.conf
2.9 升级内核
- 升级完内核之后重启服务器
# CentOS自带的内核版本较低,为了运行docker容器的稳定性,建议将内核版本升级。
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# 安装完成后检查 /boot/grub2/grub.cfg 中对应内核 menuentry 中是否包含 initrd16 配置,如果没有,再安装一次!
sudo yum --enablerepo=elrepo-kernel install -y kernel-lt
# 设置开机从新内核启动
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
# 确认是否启动默认内核指向新的内核
grubby --default-kernel
# 开启user_namespace
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
2.10 加载内核模块
cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
2.11 关闭 NUMA
grubby --args="numa=off" --update-kernel="$(grubby --default-kernel)"
2.12 重启系统
# 重启加载新内核
reboot
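重启后可以简单确认新内核和前面的参数是否生效(示例命令,假设升级的是 elrepo 的 kernel-lt 长期支持内核):

# 查看当前运行的内核版本,应为 elrepo-kernel 安装的 lt 版本
uname -r
# 确认 ipvs 相关模块已随 /etc/sysconfig/modules/ipvs.modules 自动加载
lsmod | grep -e ip_vs -e nf_conntrack
# 确认 numa=off、user_namespace.enable=1 已写入启动参数
cat /proc/cmdline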
2.13 配置时区及时间同步
- 配置时区
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
- 安装chronyd
# 所有节点
yum install chrony -y
- 主时间服务器配置
# master1作为主时间服务器,其他服务器同步master1的时间,
# 配置本机时间,当外部时间无法获取的时候,采用本地时间。
cat >>/etc/chrony.conf <<'EOF'
allow 192.168.112.0/24
server 127.127.1.0
allow 127.0.0.0/8
local stratum 10
EOF
# 重启
systemctl enable chronyd
systemctl restart chronyd
# 检查
chronyc sources
- 其他节点作为client同步master1时间
# chrony client配置,除master1之外的所有节点
sed -i "s/server/#server/g" /etc/chrony.conf
echo "server master1 iburst" >>/etc/chrony.conf
# 启动
systemctl enable chronyd
systemctl start chronyd
# 检查
chronyc sources
3. 部署高可用haproxy+keepalived
3.1 安装配置haproxy
- 安装
# 三个master节点安装
yum install haproxy -y
- haproxy配置
grep -v ^# /etc/haproxy/haproxy.cfg
global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    user haproxy
    maxconn 4000
    pidfile /var/run/haproxy.pid
    log 127.0.0.1 local0 info
defaults
    log global
    maxconn 4000
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s
# haproxy监控页
listen stats
    bind 0.0.0.0:1080
    mode http
    stats enable
    stats uri /
    stats realm Kubernetes\ Haproxy
    stats auth admin:admin
    stats refresh 30s
    stats show-node
    stats show-legends
    stats hide-version
frontend kube-api-https_frontend
    bind 192.168.112.140:6443
    mode tcp
    default_backend kube-api-https_backend
backend kube-api-https_backend
    balance roundrobin
    mode tcp
    stick-table type ip size 200k expire 30m
    stick on src
    server master1.sysit.cn 192.168.112.141:6443 maxconn 1024 cookie 1 weight 3 check inter 1500 rise 2 fall 3
    server master2.sysit.cn 192.168.112.142:6443 maxconn 1024 cookie 2 weight 3 check inter 1500 rise 2 fall 3
    server master3.sysit.cn 192.168.112.143:6443 maxconn 1024 cookie 3 weight 3 check inter 1500 rise 2 fall 3
- haproxy配置日志
建议开启haproxy的日志功能,便于后续的问题排查!

# 创建HAProxy记录日志文件并授权
mkdir /var/log/haproxy && chmod a+w /var/log/haproxy
yum install rsyslog
# 在rsyslog配置文件中修改以下字段
vim /etc/rsyslog.conf
# 启用tcp/udp模块
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
# 添加haproxy配置
local0.=info -/var/log/haproxy/haproxy-info.log
local0.=err -/var/log/haproxy/haproxy-err.log
local0.notice;local0.!=err -/var/log/haproxy/haproxy-notice.log
systemctl restart rsyslog
- haproxy监听内核
# 全部控制节点修改内核参数;
# net.ipv4.ip_nonlocal_bind:是否允许non-local ip绑定,关系到haproxy实例与vip能否绑定并切换;
# net.ipv4.ip_forward:是否允许转发
echo "net.ipv4.ip_nonlocal_bind = 1" >>/etc/sysctl.conf
echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf
sysctl -p
- haproxy启动
systemctl enable haproxy
systemctl restart haproxy
3.2 安装配置keepalived
- 安装配置keepalived
# 三个节点
yum install -y keepalived
- keepalived配置
编辑 /etc/keepalived/keepalived.conf,注意各节点的priority值要修改为不同
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost.local
    }
    notification_email_from root@localhost.local
    smtp_server 192.168.112.11
    smtp_connect_timeout 30
    router_id Haproxy_DEVEL
}
vrrp_script chk_haproxy {
    script "/etc/haproxy/chk_haproxy.sh"
    interval 1
    # haproxy在线,权重加2
    # weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    # 注意修改成不同的priority值
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sysit
    }
    virtual_ipaddress {
        192.168.112.140
    }
    track_script {
        chk_haproxy
    }
}
- haproxy检测脚本
cat >/etc/haproxy/chk_haproxy.sh<<'EOF'
#!/bin/bash
# check haproxy process, if there isn't any process, try to start the process once,
# check it again after 3s, if there isn't any process still, restart keepalived process, change state.
# 2017-03-22 v0.1
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    systemctl start haproxy
    sleep 3
    if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
        systemctl restart keepalived
    fi
fi
EOF
chmod +x /etc/haproxy/chk_haproxy.sh
- 启动服务
systemctl enable keepalived
systemctl start keepalived
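启动后可以做一个简单的VIP漂移验证(示例命令,假设VIP为192.168.112.140、网卡为eth0,与上面keepalived配置一致):

# 在当前持有 VIP 的 master 节点上确认 VIP 已绑定
ip addr show eth0 | grep 192.168.112.140
# 手动停掉该节点的 haproxy,触发检测脚本与 keepalived 切换
systemctl stop haproxy
# 稍等几秒后,在其他 master 节点上确认 VIP 是否漂移过来
ip addr show eth0 | grep 192.168.112.140
# 验证完成后恢复 haproxy
systemctl start haproxy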
4. CA证书与秘钥
4.1 cfssl工具
本文档采用 CloudFlare 的 PKI 工具集 cfssl 来生成 Certificate Authority (CA) 证书和秘钥文件,CA 是自签名的证书,用来签名后续创建的其它 TLS证书。
# 选择任意一台服务器
mkdir -p /usr/local/cfssl/bin
cd /usr/local/cfssl/bin
# GitHub release 下载会重定向,curl 需要加 -L 跟随重定向
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
chmod +x cfssl cfssl-certinfo cfssljson
echo 'export PATH=$PATH:/usr/local/cfssl/bin' >>/etc/bashrc
4.2 CA配置文件
CA 证书是集群所有节点共享的,只需要创建一个 CA 证书,后续创建的所有证书都由它签名。
CA 配置文件用于配置根证书的使用场景 (profile) 和具体参数 (usage,过期时间、服务端认证、客户端认证、加密等),后续在签名其它证书时需要指定特定场景。
- CA配置文件
# ca-config.json:可以定义多个profiles,分别指定不同的过期时间、使用场景等参数,根据需要在不同场景使用不同的profile签名证书;这里只定义了kubernetes一个profile,并以生成的模板为基础修改;
mkdir -p /opt/k8s && cd /opt/k8s/
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
- signing:表示该证书可用于签名其它证书,生成的ca.pem证书中CA=TRUE;
- server auth:表示client可以用该证书对server提供的证书进行验证;
- client auth:表示server可以用该证书对client提供的证书进行验证。
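后续用这个CA的kubernetes profile签出的任意证书,都可以用下面的方式做一个简单校验(示例命令,以稍后生成的kubernetes.pem为例):

# 校验证书链,确认证书确实由本 CA 签发
openssl verify -CAfile ca.pem kubernetes.pem
# 查看证书用途(Key Usage / Extended Key Usage)与有效期
openssl x509 -in kubernetes.pem -noout -text | grep -A1 -E "Key Usage|Not After"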
- CA证书签名请求
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Chengdu",
      "ST": "Sichuan",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
- 生成CA证书与秘钥
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
# 简单查看
cfssl-certinfo -cert ca.pem
- 分发
# 将生成的CA证书、秘钥、配置文件等分发到所有机器/etc/kubernetes/cert目录下;
# ca-key.pem与ca.pem重要
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p /etc/kubernetes/cert"
  scp ca*pem root@${node}:/etc/kubernetes/cert/
done
5. etcd安装
见etcd安装配置文档。注意:如果etcd单独部署,etcd与kubernetes可以使用两套不同的CA文件。
但是为了保持一致,建议使用同一个ca文件。
注意:为了以后扩展或服务器变更,建议预留几个IP地址。
6. 安装docker
- 所有节点
curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
# 修改为清华大学源,加快安装速度
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install docker-ce-19.03.15 -y
# 编辑systemctl的Docker启动文件
# Docker从1.13版本开始调整了默认的防火墙规则,禁用了iptables filter表中FORWARD链,这样会引起Kubernetes集群中跨Node的Pod无法通信,因此docker安装完成后,还需要手动修改iptables规则。
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
# 1.15.x以后版本添加如下
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
7. 安装二进制包
下载地址:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
注:下载一个server包就够了,其中包含了Master和Worker Node的全部二进制文件;生产环境建议使用小版本号大于5的版本,比如1.20.5以后的包。
cd /opt/k8s
wget https://dl.k8s.io/v1.20.7/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
# master节点分发
# mkdir -p /opt/kubernetes/bin
# cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin/
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p /opt/kubernetes/bin"
  scp kube-apiserver kube-scheduler kube-controller-manager root@${node}:/opt/kubernetes/bin/
done
# work节点
# cp kubelet kube-proxy /opt/kubernetes/bin/
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p /opt/kubernetes/bin"
  scp kubelet kube-proxy root@${node}:/opt/kubernetes/bin/
done
# kubectl
# cp kubectl /usr/bin
for node in master1 master2 master3
do
  echo ">>> ${node}"
  scp kubectl root@${node}:/usr/bin/
done
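分发完成后,可以在master1上用一个简单的循环确认各节点二进制文件已就位且版本一致(示例命令,沿用上面的MASTER_NAMES/NODE_NAMES变量):

# 确认 master 节点上控制面组件的版本
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "/opt/kubernetes/bin/kube-apiserver --version"
done
# 确认所有节点上 kubelet 的版本
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "/opt/kubernetes/bin/kubelet --version"
done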
8. kubectl配置
8.1 创建kubectl TLS证书与私钥
- 创建kubectl证书签名请求
cat > admin-csr.json <<'EOF'
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Chengdu",
      "ST": "Sichuan",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
- 生成
cfssl gencert -ca=/opt/k8s/ca.pem \
  -ca-key=/opt/k8s/ca-key.pem \
  -config=/opt/k8s/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
# 查看
ls
admin.csr admin-csr.json admin-key.pem admin.pem
- 分发key
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp admin*pem root@${node}:/etc/kubernetes/cert
done
8.2 kubectl kubeconfig文件
- 创建
cd /opt/k8s/
mkdir /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.112.140:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
- 分发kubeconfig文件
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p ~/.kube/"
  scp /root/.kube/config root@${node}:~/.kube/config
done
8.3 让kubectl 可以使用tab
yum install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc
9 安装配置kube-apiserver
三个master节点安装
9.1 kubernetes 证书和私钥
- 创建证书签名文件
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.112.140",
    "192.168.112.141",
    "192.168.112.142",
    "192.168.112.143",
    "192.168.112.200",
    "10.0.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.sysit",
    "kubernetes.default.svc.sysit.cn"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Sichuan",
      "L": "Chengdu",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
- 上述文件hosts字段中IP为所有Master/LB/VIP IP,一个都不能少!为了方便后期扩容可以多写几个预留的IP。
- hosts字段指定授权使用该证书的IP或域名列表,这里列出了VIP、apiserver节点IP、kubernetes服务IP和域名;为了满足扩展要求,建议多预留几个IP地址备用。
- 域名的最后一个字符不能是 .(如不能为 kubernetes.default.svc.sysit.cn.),否则签发时失败,提示:x509: cannot parse dnsName "kubernetes.default.svc.sysit.cn."。
- 如果使用其他集群域名如 bootgo.com,则需要把列表中最后两个域名改为:kubernetes.default.svc.bootgo、kubernetes.default.svc.bootgo.com。
- kubernetes服务IP是apiserver自动创建的,一般是--service-cluster-ip-range参数指定的网段的第一个IP,后续可以通过如下命令获取:
$ kubectl get svc kubernetes
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.0.0.1     <none>        443/TCP   1d
- 生成证书和私钥
cfssl gencert -ca=/opt/k8s/ca.pem \
  -ca-key=/opt/k8s/ca-key.pem \
  -config=/opt/k8s/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
# 查看
ls kubernetes*pem
kubernetes-key.pem kubernetes.pem
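签发后建议核对一下证书里的SAN列表是否与csr中的hosts字段一致(示例命令,二选一即可):

# 用 cfssl-certinfo 查看证书内容中的 sans 字段
cfssl-certinfo -cert kubernetes.pem | grep -A20 sans
# 或者用 openssl 查看 Subject Alternative Name
openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"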
- 将生成的证书和私钥文件分发到 master 节点
# cp kubernetes*.pem /etc/kubernetes/cert/
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp kubernetes*.pem root@${node}:/etc/kubernetes/cert/
done
9.2 kube-apiserver配置文件
- 创建配置文件
# 注意修改bind-address和advertise-address的地址
cat > kube-apiserver.conf << 'EOF'
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/ \
--etcd-servers=https://192.168.112.141:2379,https://192.168.112.142:2379,https://192.168.112.143:2379 \
--bind-address=#NODE_IP# \
--secure-port=6443 \
--advertise-address=#NODE_IP# \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/etc/kubernetes/conf/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
--kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
--tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
--client-ca-file=/etc/kubernetes/cert/ca.pem \
--service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
--service-account-issuer=api \
--service-account-signing-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
--etcd-cafile=/etc/etcd/cert/ca.pem \
--etcd-certfile=/etc/etcd/cert/etcd.pem \
--etcd-keyfile=/etc/etcd/cert/etcd-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
--proxy-client-cert-file=/etc/kubernetes/cert/kubernetes.pem \
--proxy-client-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
EOF
- --logtostderr:启用日志
- --v:日志等级
- --log-dir:日志目录
- --etcd-servers:etcd集群地址
- --bind-address:监听地址
- --secure-port:https安全端口
- --advertise-address:集群通告地址
- --allow-privileged:启用授权
- --service-cluster-ip-range:Service虚拟IP地址段
- --enable-admission-plugins:准入控制模块
- --authorization-mode:认证授权,启用RBAC授权和节点自管理
- --enable-bootstrap-token-auth:启用TLS bootstrap机制
- --token-auth-file:bootstrap token文件
- --service-node-port-range:Service nodeport类型默认分配端口范围
- --kubelet-client-xxx:apiserver访问kubelet客户端证书
- --tls-xxx-file:apiserver https证书
- 1.20版本必须加的参数:--service-account-issuer、--service-account-signing-key-file
- --etcd-xxxfile:连接etcd集群证书
- --audit-log-xxx:审计日志
- 启动聚合层相关配置:--requestheader-client-ca-file、--proxy-client-cert-file、--proxy-client-key-file、--requestheader-allowed-names、--requestheader-extra-headers-prefix、--requestheader-group-headers、--requestheader-username-headers、--enable-aggregator-routing
- 分发配置文件
# mkdir -p /etc/kubernetes/conf && mkdir /var/log/kubernetes
# cp kube-apiserver.conf /etc/kubernetes/conf/
for ip in 192.168.112.{141,142,143}
do
  echo ">>> ${ip}"
  ssh root@${ip} "mkdir -p /etc/kubernetes/conf && mkdir /var/log/kubernetes"
  scp kube-apiserver.conf root@${ip}:/etc/kubernetes/conf/kube-apiserver.conf
  ssh root@${ip} "sed -i 's/#NODE_IP#/${ip}/g' /etc/kubernetes/conf/kube-apiserver.conf"
done
9.3 启用TLS Bootstrapping机制
TLS Bootstraping:Master apiserver启用TLS认证后,Node节点的kubelet和kube-proxy要与kube-apiserver进行通信,必须使用CA签发的有效证书。当Node节点很多时,这种客户端证书颁发需要大量工作,同样也会增加集群扩展的复杂度。为了简化流程,Kubernetes引入了TLS bootstraping机制来自动颁发客户端证书:kubelet会以一个低权限用户自动向apiserver申请证书,kubelet的证书由apiserver动态签署。所以强烈建议在Node上使用这种方式,目前主要用于kubelet;kube-proxy还是由我们统一颁发一个证书。
- 生成token.csv
cat > /etc/kubernetes/conf/token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
- 分发token.csv
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp /etc/kubernetes/conf/token.csv root@${node}:/etc/kubernetes/conf/
done
9.4 kube-apiserver systemd unit文件
- 创建文件
cat > kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/conf/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
- 分发文件
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-apiserver.service root@${node}:/usr/lib/systemd/system/kube-apiserver.service
done
9.5 启动
# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl restart kube-apiserver
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p /var/log/kubernetes"
  ssh root@${node} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
done
9.6 检查启动结果
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "systemctl status kube-apiserver|grep Active"
done
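也可以通过VIP直接探测apiserver的健康检查接口,顺便验证haproxy+keepalived链路是否正常(示例命令;1.20默认允许匿名访问/healthz、/version这类只读接口,如果关闭了匿名访问则需要带上客户端证书):

# 期望返回 ok
curl -k https://192.168.112.140:6443/healthz
# 查看版本信息
curl -k https://192.168.112.140:6443/version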
9.7 查看kube-apiserver写入etcd的数据
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.112.141:2379,https://192.168.112.142:2379,https://192.168.112.143:2379 \
  --cacert=/etc/etcd/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  get /registry/ --prefix --keys-only
10 部署kube-controller-manager
该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用后,剩余节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。
10.1 创建 kube-controller-manager 证书和私钥
- 创建证书签名请求
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Chengdu",
      "ST": "Sichuan",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
- 生成证书和私钥:
cfssl gencert -ca=/opt/k8s/ca.pem \
  -ca-key=/opt/k8s/ca-key.pem \
  -config=/opt/k8s/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
- 将生成的证书和私钥分发到所有 master 节点
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-controller-manager*.pem root@${node}:/etc/kubernetes/cert/
done
10.2 kubeconfig文件
kubeconfig 文件包含访问 apiserver 的所有信息,如 apiserver 地址、CA 证书和自身使用的证书;
- 生成kubeconfig文件
# --server:指定api-server,采用ha之后的vip;
# cluster名自定义,设定之后需保持一致;
# --kubeconfig:指定kubeconfig文件路径与文件名;如果不设置,默认生成在~/.kube/config文件
# 生成kubeconfig文件(以下是shell命令,直接在终端执行):
KUBE_CONFIG="kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.112.140:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
- 分发kubeconfig文件
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-controller-manager.kubeconfig root@${node}:/etc/kubernetes/conf/
done
10.3 kube-controller-manager 配置文件
- 创建配置文件
# 注意:kubeconfig与证书前面都分发在/etc/kubernetes/目录下,这里的路径与之保持一致
cat > kube-controller-manager.conf << 'EOF'
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/kube-controller-manager/ \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/conf/kube-controller-manager.kubeconfig \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
--root-ca-file=/etc/kubernetes/cert/ca.pem \
--service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
--cluster-signing-duration=87600h0m0s"
EOF
- --kubeconfig:连接apiserver配置文件
- --leader-elect:当该组件启动多个时,自动选举(HA)
- --cluster-signing-cert-file/--cluster-signing-key-file:自动为kubelet颁发证书的CA,与apiserver保持一致
- 分发配置文件
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-controller-manager.conf root@${node}:/etc/kubernetes/conf/kube-controller-manager.conf
done
10.4 kube-controller-manager systemd unit 文件
- 创建
cat > kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/conf/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
- 分发
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-controller-manager.service root@${node}:/usr/lib/systemd/system/kube-controller-manager.service
done
10.5 启动 kube-controller-manager 服务
# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl restart kube-controller-manager
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p /var/log/kubernetes/kube-controller-manager"
  ssh root@${node} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done
10.6 检查启动结果
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "systemctl status kube-controller-manager|grep Active"
done
10.7 测试 kube-controller-manager 集群的高可用
停掉一个或两个节点的 kube-controller-manager 服务,观察其它节点的日志,看是否获取了 leader 权限。
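可以先确认当前的leader是哪个节点,再停掉该节点的服务观察切换(示例命令;1.20的leader选举默认会维护lease对象,如果版本或--leader-elect-resource-lock参数不同,锁对象也可能记录在endpoints上):

# 查看当前持有 kube-controller-manager 锁的节点
kubectl -n kube-system get lease kube-controller-manager -o yaml | grep holderIdentity
# 在该节点上停止服务,模拟故障
systemctl stop kube-controller-manager
# 稍等片刻后再次查看,holderIdentity 应切换到其他 master
kubectl -n kube-system get lease kube-controller-manager -o yaml | grep holderIdentity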
11. kube-scheduler安装
该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用后,剩余节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。
11.1 创建 kube-scheduler 证书和私钥
- 创建证书签名文件
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Sichuan",
      "L": "Chengdu",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF
- 生成证书和私钥
cfssl gencert -ca=/opt/k8s/ca.pem \
  -ca-key=/opt/k8s/ca-key.pem \
  -config=/opt/k8s/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
11.2 kubeconfig 文件
- 创建
# 生成kubeconfig文件(以下是shell命令,直接在终端执行):
KUBE_CONFIG="kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.112.140:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
- 分发
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-scheduler.kubeconfig root@${node}:/etc/kubernetes/conf/
done
11.3 kube-scheduler配置文件
- 创建配置文件
cat > kube-scheduler.conf << 'EOF'
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/kube-scheduler/ \
--leader-elect \
--kubeconfig=/etc/kubernetes/conf/kube-scheduler.kubeconfig \
--bind-address=127.0.0.1"
EOF
- 分发配置文件
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-scheduler.conf root@${node}:/etc/kubernetes/conf/kube-scheduler.conf
done
11.4 kube-scheduler systemd unit 文件
- 创建
cat > kube-scheduler.service <<'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/conf/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
- 分发
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-scheduler.service root@${node}:/usr/lib/systemd/system/kube-scheduler.service
done
11.5 启动 kube-scheduler 服务
# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl restart kube-scheduler
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p /var/log/kubernetes/kube-scheduler"
  ssh root@${node} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
done
11.6 检查启动结果
export MASTER_NAMES=(master1 master2 master3)
for node in ${MASTER_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "systemctl status kube-scheduler|grep Active"
done
12. kubelet安装
kubelet 运行在每个 worker 节点上(生产环境中 master 节点通常也需要运行),接收 kube-apiserver 发送的请求,管理 Pod 容器,执行交互式命令,如 exec、run、logs 等。
kubelet 启动时自动向 kube-apiserver 注册节点信息。为确保安全,本文档只开启接收 https 请求的安全端口,对请求进行认证和授权,拒绝未授权的访问(如 apiserver、heapster)。
12.1 kubelet bootstrap kubeconfig 文件
- 创建
KUBE_CONFIG="kubelet-bootstrap.kubeconfig"KUBE_APISERVER="https://192.168.112.140:6443"BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/conf/token.csv)#生成 kubelet bootstrap kubeconfig 配置文件kubectl config set-cluster kubernetes \--certificate-authority=/etc/kubernetes/ssl/ca.pem \--embed-certs=true \--server=${KUBE_APISERVER} \--kubeconfig=${KUBE_CONFIG}kubectl config set-credentials "kubelet-bootstrap" \--token=${TOKEN} \--kubeconfig=${KUBE_CONFIG}kubectl config set-context default \--cluster=kubernetes \--user="kubelet-bootstrap" \--kubeconfig=${KUBE_CONFIG}kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
该kubeconfig中写入的是Token而非客户端证书,客户端证书后续由controller-manager自动签发。
- 分发
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  scp kubelet-bootstrap.kubeconfig root@${node}:/etc/kubernetes/conf/kubelet-bootstrap.kubeconfig
done
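注意:kubelet-bootstrap 这个 token 用户默认没有创建 CSR 的权限,如果 apiserver 端没有相应授权,kubelet 的 bootstrap 请求会被拒绝。一般需要在 master1 上绑定内置的 system:node-bootstrapper 角色(示例命令,用户名需与 token.csv 中的用户名一致):

# 授权 kubelet-bootstrap 用户创建证书签名请求(CSR)
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap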
12.2 参数配置文件
- 创建
cat > kubelet-config.yaml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS: ["10.0.0.2"]
clusterDomain: "sysit.cn"
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/cert/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
- 有人说clusterDomain 后应该有一个“.”,如“sysit.cn.”,待验证。
- cgroupDriver:如果docker的驱动为systemd,cgroupDriver也要修改为systemd。此处设置很重要,否则后面node节点无法加入到集群;可以用分发命令之后的示例命令确认docker实际使用的驱动。
- 分发
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  scp kubelet-config.yaml root@${node}:/etc/kubernetes/conf/kubelet-config.yaml
done
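上面提到的cgroupDriver可以用下面的命令在各节点确认(示例,输出应为systemd,与/etc/docker/daemon.json中的native.cgroupdriver保持一致):

# 查看 docker 实际使用的 cgroup driver
docker info 2>/dev/null | grep -i "cgroup driver"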
12.3 kubelet配置文件
- 创建
cat > kubelet.conf << 'EOF'
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/kubelet/ \
--hostname-override=#NODE_NAME# \
--network-plugin=cni \
--kubeconfig=/etc/kubernetes/conf/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/conf/kubelet-bootstrap.kubeconfig \
--config=/etc/kubernetes/conf/kubelet-config.yaml \
--cert-dir=/etc/kubernetes/cert \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
- --hostname-override:显示名称,集群中唯一
- --network-plugin:启用CNI
- --kubeconfig:空路径,会自动生成,后面用于连接apiserver
- --bootstrap-kubeconfig:首次启动向apiserver申请证书
- --config:配置参数文件
- --cert-dir:kubelet证书生成目录
- --pod-infra-container-image:管理Pod网络容器的镜像
- 分发
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p /etc/kubernetes/conf"
  scp kubelet.conf root@${node}:/etc/kubernetes/conf/kubelet.conf
  ssh root@${node} "sed -i 's/#NODE_NAME#/${node}.sysit.cn/g' /etc/kubernetes/conf/kubelet.conf"
done
12.4 kubelet systemd unit 文件
- 创建
cat > kubelet.service <<'EOF'
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/conf/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
- 分发
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  scp kubelet.service root@${node}:/usr/lib/systemd/system/kubelet.service
done
12.5 启动
# systemctl daemon-reload
# systemctl start kubelet
# systemctl enable kubelet
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p /var/lib/kubelet && mkdir -p /var/log/kubernetes/kubelet"
  ssh root@${node} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
12.6 检查启动结果
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "systemctl status kubelet|grep Active"
done
12.7 批准kubelet证书申请并加入集群
- 查看CSR请求
确认kubelet服务启动成功后,接着到master上Approve一下bootstrap请求。执行如下命令可以看到 CSR 请求:
kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
- 批准申请
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A
- 查看节点
kubectl get node
NAME               STATUS     ROLES    AGE   VERSION
master1.sysit.cn   NotReady   <none>   15s   v1.20.7
master2.sysit.cn   NotReady   <none>   15s   v1.20.7
master3.sysit.cn   NotReady   <none>   15s   v1.20.7
node1.sysit.cn     NotReady   <none>   15s   v1.20.7
node2.sysit.cn     NotReady   <none>   15s   v1.20.7
node3.sysit.cn     NotReady   <none>   15s   v1.20.7
此时还没有安装网络插件,安装网络插件之后,状态会变成Ready。
12.8 授权apiserver访问kubelet
# 应用场景:例如kubectl logs
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
13 部署kube-proxy
13.1 生成kube-proxy证书:
- 创建证书请求文件
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Chengdu",
      "ST": "Sichuan",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
- 生成证书
cfssl gencert -ca=/opt/k8s/ca.pem \
  -ca-key=/opt/k8s/ca-key.pem \
  -config=/opt/k8s/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
13.2 kubeconfig文件
- 创建
KUBE_CONFIG="kube-proxy.kubeconfig"KUBE_APISERVER="https://192.168.112.140:6443"kubectl config set-cluster kubernetes \--certificate-authority=/etc/kubernetes/ssl/ca.pem \--embed-certs=true \--server=${KUBE_APISERVER} \--kubeconfig=${KUBE_CONFIG}kubectl config set-credentials kube-proxy \--client-certificate=./kube-proxy.pem \--client-key=./kube-proxy-key.pem \--embed-certs=true \--kubeconfig=${KUBE_CONFIG}kubectl config set-context default \--cluster=kubernetes \--user=kube-proxy \--kubeconfig=${KUBE_CONFIG}kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
- 分发 kubeconfig 文件
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-proxy.kubeconfig root@${node}:/etc/kubernetes/conf/
done
13.3 kube-proxy参数文件
- 创建
cat >kube-proxy-config.yaml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /etc/kubernetes/conf/kube-proxy.kubeconfig
hostnameOverride: #NODE_NAME#
# clusterCIDR 为 Pod 网段,与 kube-controller-manager 的 --cluster-cidr 保持一致
clusterCIDR: 10.244.0.0/16
mode: "ipvs"
EOF
- 分发
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-proxy-config.yaml root@${node}:/etc/kubernetes/conf/kube-proxy-config.yaml
  ssh root@${node} "sed -i 's/#NODE_NAME#/${node}.sysit.cn/g' /etc/kubernetes/conf/kube-proxy-config.yaml"
done
13.4 kube-proxy 配置文件
- 创建
cat > kube-proxy.conf <<'EOF'
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/kube-proxy/ \
--config=/etc/kubernetes/conf/kube-proxy-config.yaml"
EOF
- 分发
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-proxy.conf root@${node}:/etc/kubernetes/conf/kube-proxy.conf
done
13.5 kube-proxy systemd unit文件
- 创建
cat > kube-proxy.service <<'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/conf/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
- 分发kube-proxy systemd unit文件
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  scp kube-proxy.service root@${node}:/usr/lib/systemd/system/kube-proxy.service
done
13.6 启动
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "mkdir -p /var/lib/kube-proxy && mkdir -p /var/log/kubernetes/kube-proxy"
  ssh root@${node} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done
13.7 检查启动结果
export NODE_NAMES=(master1 master2 master3 node1 node2 node3)
for node in ${NODE_NAMES[@]}
do
  echo ">>> ${node}"
  ssh root@${node} "systemctl status kube-proxy|grep Active"
done
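kube-proxy以ipvs模式运行后,可以在任一节点查看ipvs规则确认其工作正常(示例命令,依赖前面安装的ipvsadm;至少应能看到10.0.0.1:443转发到各apiserver的规则):

# 列出 ipvs 转发规则
ipvsadm -Ln
# 或查看 kube-proxy 的 metrics 端口是否在监听
ss -lntp | grep 10249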
14. 部署网络插件calico
这里配置calico作为网络插件,参考文档:https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-on-nodes
- 配置文件
注意修改calico.yaml文件中的子网信息(CALICO_IPV4POOL_CIDR),使其与kube-controller-manager的--cluster-cidr参数指定的Pod网段相匹配,本文中是10.244.0.0/16
wget https://docs.projectcalico.org/manifests/calico.yaml
cp calico.yaml{,.ori}
[root@master1 k8s]# diff calico.yaml calico.yaml.ori
3683,3684c3683,3684
<             - name: CALICO_IPV4POOL_CIDR
<               value: "10.244.0.0/16"
---
>             # - name: CALICO_IPV4POOL_CIDR
>             #   value: "192.168.0.0/16"
- 应用
kubectl apply -f calico.yaml
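应用后可以先观察calico相关Pod是否全部Running(示例命令,镜像拉取可能需要几分钟):

# calico-node 为 DaemonSet,每个节点一个;calico-kube-controllers 为 Deployment
kubectl get pods -n kube-system -o wide | grep calico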
- 查看节点
[root@master1 k8s]# kubectl get node
NAME               STATUS   ROLES    AGE   VERSION
master1.sysit.cn   Ready    <none>   1h    v1.20.7
master2.sysit.cn   Ready    <none>   1h    v1.20.7
master3.sysit.cn   Ready    <none>   1h    v1.20.7
node1.sysit.cn     Ready    <none>   1h    v1.20.7
node2.sysit.cn     Ready    <none>   1h    v1.20.7
node3.sysit.cn     Ready    <none>   1h    v1.20.7
15 部署coreDNS
- 部署
wget https://github.com/coredns/deployment/archive/refs/tags/coredns-1.14.0.tar.gz
# wget 保存的文件名为 coredns-1.14.0.tar.gz,解压后的目录为 deployment-coredns-1.14.0
tar -zxvf coredns-1.14.0.tar.gz
cd deployment-coredns-1.14.0/kubernetes/
export CLUSTER_DNS_SVC_IP="10.0.0.2"
export CLUSTER_DNS_DOMAIN="sysit.cn"
./deploy.sh -i ${CLUSTER_DNS_SVC_IP} -d ${CLUSTER_DNS_DOMAIN} | kubectl apply -f -
- 检查
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.sysit.cn

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.sysit.cn
16. dashboard和metrics
详见本站kubernetes dashboard安装及addons安装