Deploying a Highly Available Kubernetes v1.34.x Cluster with kubeasz
1.1 Deployment Goals
1. Deploy the Kubernetes control-plane services (kube-apiserver, kube-controller-manager, kube-scheduler) in a highly available configuration.
2. Deploy the client-side services (kubelet, kube-proxy) on the worker nodes.
1.2 Cluster Architecture
| Node type | Count | Role | IP |
|---|---|---|---|
| master | 2 | k8s control plane | 172.31.7.101, 102 |
| harbor | 1 | image registry | 172.31.7.104 |
| etcd | 1 | stores cluster data | 172.31.7.106 |
| HA | 1 | high-availability load balancer | 172.31.7.109 |
| deploy | 1 | deployment node; can later also serve as HA | 172.31.7.110 |
| node | 2 | runs workloads | 172.31.7.111, 112 |
1.3 Cluster Software Environment
External API endpoint (VIP): 172.31.7.188:6443
Operating system: Ubuntu 22.04.2 LTS
Kubernetes version: 1.34.1
Calico version: 3.28.4
1.4 Base Environment Setup
Hostname configuration, IP configuration, and system parameter tuning, plus the load balancer and Harbor registry the cluster depends on.
1.4.1 System Configuration and Hostname Resolution
System settings such as iptables, the firewall, kernel parameters, and resource limits can be applied in bulk with a script. Note that the masters communicate with each node using the name that was registered when the node was added.
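The script itself is not shown here; the following is a minimal sketch of what such a script typically covers (these values are common Kubernetes prerequisites rather than settings taken from this environment, so adjust them as needed):
#!/bin/bash
# disable swap now and on reboot (kubelet expects swap to be off)
swapoff -a
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
# disable the default Ubuntu firewall (or open the required ports instead)
systemctl disable --now ufw || true
# load the kernel modules required by containerd and kube-proxy
cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay && modprobe br_netfilter
# kernel parameters for bridged traffic and IP forwarding
cat > /etc/sysctl.d/99-k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
# raise file-descriptor limits
cat >> /etc/security/limits.conf <<EOF
* soft nofile 1048576
* hard nofile 1048576
EOF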
1.4.2 HA (High Availability) Load Balancing
Main purpose: use the master nodes as backends to provide a highly available reverse proxy for the Kubernetes cluster.
Install keepalived and haproxy, then edit the keepalived configuration first:
sudo apt update
sudo apt install keepalived haproxy
sudo vim /etc/keepalived/keepalived.conf
global_defs {
   vrrp_skip_check_adv_addr
   #vrrp_strict    # keep this commented out so the VIP is not blocked from responding to ping
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface ens3          # set to your actual NIC name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {     # adjust the device names below to your actual NIC
        172.31.7.188 dev ens3 label ens3:0
        172.31.7.189 dev ens3 label ens3:1
        172.31.7.190 dev ens3 label ens3:2
        172.31.7.191 dev ens3 label ens3:3
    }
}
Restart keepalived and enable it at boot:
sudo systemctl restart keepalived.service
sudo systemctl enable keepalived.service
sudo vim /etc/haproxy/haproxy.cfg
listen k8s-6443
bind 172.31.7.188:6443
mode tcp
server master01 172.31.7.101:6443 check inter 3s fall 3 rise 5
server master02 172.31.7.102:6443 check inter 3s fall 3 rise 5
Restart the haproxy service:
sudo systemctl restart haproxy.service
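A quick sanity check on the HA node (a sketch based on the VIP 172.31.7.188 and interface ens3 configured above):
ip addr show ens3 | grep 172.31.7.188      # the VIP should be bound to ens3 (label ens3:0)
ss -tnlp | grep 6443                       # haproxy should be listening on 172.31.7.188:6443
systemctl is-active keepalived haproxy     # both services should report "active"
The backends will only report healthy once the master nodes are deployed in section 1.6.4.4.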
1.4.3 安装运行时并部署镜像仓库harbor
kubernetes master节点和node节点使用containerd作为运行时,harbor节点安装docker(目前harbor的安装脚本会强制检查
docker及docker-compose是否安装)用于部署harbor,containerd可以使用apt、二进制等方式批量安装。
(此处详细过程省略,可以使用脚本部署、安装完后使用docker version命令确认安装成功及版本信息)
2、下载、解压、安装harbor安装包
1.4.4 Deploying Harbor with HTTPS Support
The cluster's business images are all uploaded to the Harbor server and distributed from there instead of being pulled from the public internet, which improves image distribution efficiency and data security.
Installation reference:
https://goharbor.io/docs/2.5.0/install-config/configure-https/
Download the Harbor offline installer into the /apps directory and create a certs directory to hold the SSL certificate:
root@k8s-harbor1:/usr/local/src# mkdir /apps/
root@k8s-harbor1:/usr/local/src# mv harbor-offline-installer-v2.14.0.tgz /apps/
root@k8s-harbor1:/usr/local/src# cd /apps/
root@k8s-harbor1:/apps# tar xvf harbor-offline-installer-v2.14.0.tgz
root@k8s-harbor1:/apps# cd harbor/
root@k8s-harbor1:/apps/harbor# mkdir certs
root@k8s-harbor1:/apps/harbor# cd certs/
1.4.4.1 Obtain and Download an SSL Certificate
For production, certificates issued by a public certificate authority (free or paid) are recommended, because they are trusted by clients.
Cloud vendors currently offer free 90-day single-domain certificates; "时巴克科技" offers 90-day wildcard certificates. A wildcard certificate covers all first-level subdomains of a domain, e.g. *.spug.cc can be used for a.spug.cc or b.spug.cc, but not for a.b.spug.cc.
In the downloaded bundle, the file with the .pem suffix is the certificate (public key) and the file with the .key suffix is the private key.
Place them in the /apps/harbor/certs directory prepared earlier.
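Before continuing, it can be worth confirming that the downloaded certificate and private key actually match; for example (assuming the file names shown in the next section):
openssl x509 -noout -modulus -in /apps/harbor/certs/harbor.myarchitect.online.pem | openssl md5   # these two hashes
openssl rsa -noout -modulus -in /apps/harbor/certs/harbor.myarchitect.online.key | openssl md5    # must be identical
openssl x509 -noout -subject -dates -in /apps/harbor/certs/harbor.myarchitect.online.pem          # check subject and expiry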
1.4.4.2 Edit the Harbor Configuration File harbor.yml
root@k8s-harbor1:/apps/harbor/certs# unzip harbor.myarchitect.online_nginx.zip
root@k8s-harbor1:/apps/harbor/certs# ll
total 28
drwxr-xr-x 2 root root 123 Oct 13 16:49 ./
drwxr-xr-x 4 root root 4096 Oct 13 16:52 ../
-rw------- 1 root root 1675 Oct 13 16:48 harbor.myarchitect.online.key
-rw------- 1 root root 6955 Oct 13 16:48 harbor.myarchitect.online.pem
-rw-r--r-- 1 root root 8920 Oct 13 16:48 harbor.myarchitect.online_nginx.zip
root@k8s-harbor1:/apps/harbor/certs# cd ..
root@k8s-harbor1:/apps/harbor# vim harbor.yml
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: harbor.myarchitect.online
# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80
# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  # replace "yourdomainname" with your own domain
  certificate: /apps/harbor/certs/harbor.yourdomainname.pem
  private_key: /apps/harbor/certs/harbor.yourdomainname.key
  # enable strong ssl ciphers (default: false)
  # strong_ssl_ciphers: false
1.4.4.3 Run install.sh to Complete the Deployment
root@k8s-harbor1:/apps/harbor# ./install.sh --with-trivy
After deployment, open Harbor in a browser to verify that it works.
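Harbor runs as a set of docker-compose services, so a quick check from the Harbor host can also be done on the command line (a sketch, assuming the /apps/harbor install directory used above):
cd /apps/harbor
docker compose ps        # or docker-compose ps; all Harbor containers should be Up/healthy
curl -s https://harbor.myarchitect.online/api/v2.0/systeminfo    # the unauthenticated systeminfo API should return JSON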
1.4.4.4 Test Logging In to Harbor with nerdctl
Once browser access works, test logging in to the Harbor server from the deploy node to verify that it can log in and push images. (If you deploy multiple Harbor instances, the test can also be run from another Harbor node.)
For the test, first add the Harbor server's domain to the local hosts file:
root@k8s-harbor2:/usr/local/src# cat /etc/hosts
172.31.7.104 harbor.yourdomainname
root@k8s-harbor2:/usr/local/src# nerdctl login harbor.yourdomainname
Seeing "Login Succeeded" means the test passed.
1.4.4.5 Test Pulling an Image with nerdctl and Pushing It to Harbor
root@k8s-harbor2:~# nerdctl pull registry.cn-hangzhou.aliyuncs.com/myhubregistry/rockylinux:9.3.20231119
root@k8s-harbor2:~# nerdctl tag registry.cn-hangzhou.aliyuncs.com/myhubregistry/rockylinux:9.3.20231119 harbor.yourdomainname/baseimages/rockylinux:9.3.20231119
root@k8s-harbor2:/usr/local/src# nerdctl push harbor.yourdomainname/baseimages/rockylinux:9.3.20231119
1.5 Deployment Node Initialization
The deployment node is 172.31.7.110. Its main roles are:
1. Downloading installation resources from the internet.
2. Optionally retagging selected images and pushing them to the cluster's internal registry.
3. Initializing the master nodes.
4. Initializing the node nodes.
5. Ongoing cluster maintenance:
   adding and removing master nodes
   adding and removing node nodes
   etcd data backup and restore
Because kubeasz uses Docker on this node to download the various images and binaries needed while deploying the Kubernetes cluster, a Docker environment must be installed on it. The node may also need to push and pull images to and from Harbor later, so it should be able to log in to Harbor and push and pull images.
kubeasz project: https://github.com/easzlab/kubeasz
root@k8s-deploy:/usr/local/src# cat /etc/hosts
172.31.7.104 harbor.yourdomainname
root@k8s-deploy:/usr/local/src# docker login harbor.yourdomainname
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
# test pulling and pushing an image
root@k8s-deploy:~# docker pull registry.cn-hangzhou.aliyuncs.com/myhubregistry/rockylinux:10.0.20250606
root@k8s-deploy:~# docker tag registry.cn-hangzhou.aliyuncs.com/myhubregistry/rockylinux:10.0.20250606 harbor.yourdomainname/baseimages/rockylinux:10.0.20250606
root@k8s-deploy:~# docker push harbor.yourdomainname/baseimages/rockylinux:10.0.20250606
1.6 Deploying Highly Available Kubernetes with kubeasz
kubeasz aims to provide a fast way to deploy highly available Kubernetes clusters while also serving as a practical reference for running Kubernetes. It deploys from binaries and automates everything with ansible-playbook, offering both a one-click install script and step-by-step installation of individual components following the guide.
kubeasz assembles the cluster from individual components, offering very flexible configuration; nearly any parameter of any component can be set. At the same time it ships a well-tested set of defaults for new clusters and can even automatically set up a BGP Route Reflector network mode suited to large clusters.
1.6.1 Passwordless SSH Login Configuration
Distribute the deployment node's public key to the master, node, and etcd nodes.
root@k8s-deploy:~# apt install ansible
root@k8s-deploy:~# ssh-keygen -t rsa-sha2-512 -b 4096
root@k8s-deploy:~# apt install sshpass # sshpass is used to push the public key to each k8s server
root@k8s-deploy:~# cat key-scp.sh
#!/bin/bash
# target host list
IP="
172.31.7.101
172.31.7.102
172.31.7.104
172.31.7.106
172.31.7.109
172.31.7.111
172.31.7.112
"
REMOTE_PORT="22"
REMOTE_USER="ubuntu"
REMOTE_PASS="mageedu"
# create or truncate the log file
LOG_FILE="ssh_config.log"
echo "Starting passwordless SSH configuration - $(date)" > "$LOG_FILE"
for REMOTE_HOST in ${IP};do
  echo "Configuring $REMOTE_HOST ..."
  # add the remote host's host key to known_hosts
  echo "Adding $REMOTE_HOST to known_hosts..."
  ssh-keyscan -p "${REMOTE_PORT}" "${REMOTE_HOST}" >> ~/.ssh/known_hosts
  # set permissions on known_hosts
  chmod 600 ~/.ssh/known_hosts
  # push the public key with sshpass, then create the python3 symlink
  echo "Copying SSH key..."
  sshpass -p "${REMOTE_PASS}" ssh-copy-id "${REMOTE_USER}@${REMOTE_HOST}"
  # check whether the SSH connection works
  if ssh -o ConnectTimeout=5 -o BatchMode=yes "$REMOTE_USER@$REMOTE_HOST" "exit"; then
    echo "Creating python3 symlink..."
    ssh "${REMOTE_USER}@${REMOTE_HOST}" "sudo ln -sv /usr/bin/python3 /usr/bin/python || true"
    # verify the setup
    ssh "$REMOTE_USER@$REMOTE_HOST" "hostname && python --version && whoami"
    echo "$REMOTE_HOST: configured" >> "$LOG_FILE"
    echo "${REMOTE_HOST} passwordless login configured!"
  else
    echo "$REMOTE_HOST: SSH connection failed" >> "$LOG_FILE"
    echo "Warning: configuration of $REMOTE_HOST failed!"
  fi
  echo "------------------------------------------------"
done
echo "Done! See the log file for details: $LOG_FILE"
# run the script to distribute the key:
root@k8s-deploy:~# bash key-scp.sh
After it finishes, use ssh to confirm that the deploy node can log in to every other server in the cluster without a password.
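A short loop like the following can confirm that every host is reachable without a password and that passwordless sudo works (a sketch reusing the host list from the script above):
for ip in 172.31.7.101 172.31.7.102 172.31.7.104 172.31.7.106 172.31.7.109 172.31.7.111 172.31.7.112; do
  ssh -o BatchMode=yes ubuntu@${ip} "hostname && sudo -n whoami"   # should print each hostname and 'root' without prompting
done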
1.6.2 Download the kubeasz Project and Components
root@k8s-deploy:~# apt install git
root@k8s-deploy:~# export release=3.6.8
root@k8s-deploy:~# wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
root@k8s-deploy:~# vim ezdown # optionally customize the component versions to download
root@k8s-deploy:~# chmod +x ./ezdown
root@k8s-deploy:~# ./ezdown -D # this step needs access to Docker Hub and takes a few minutes to download everything
root@k8s-deploy:~# ll /etc/kubeasz/
total 120
drwxrwxr-x 12 root root 4096 Oct 13 18:36 ./
drwxr-xr-x 115 root root 8192 Oct 13 18:36 ../
drwxrwxr-x 4 root root 77 Sep 14 19:56 .github/
-rw-rw-r-- 1 root root 322 Sep 14 19:45 .gitignore
-rw-rw-r-- 1 root root 6445 Sep 14 19:45 README.md
-rw-rw-r-- 1 root root 20304 Sep 14 19:45 ansible.cfg
drwxr-xr-x 5 root root 4096 Oct 13 18:36 bin/
drwxrwxr-x 8 root root 94 Sep 14 19:56 docs/
drwxr-xr-x 2 root root 4096 Oct 13 18:39 down/
drwxrwxr-x 2 root root 70 Sep 14 19:56 example/
-rwxrwxr-x 1 root root 26440 Sep 14 19:45 ezctl*
-rwxrwxr-x 1 root root 26635 Sep 14 19:45 ezdown*
drwxrwxr-x 3 root root 24 Sep 14 19:56 manifests/
drwxrwxr-x 2 root root 94 Sep 14 19:56 pics/
drwxrwxr-x 2 root root 4096 Sep 14 19:56 playbooks/
drwxrwxr-x 22 root root 4096 Sep 14 19:56 roles/
drwxrwxr-x 2 root root 90 Sep 14 19:56 tools/
1.6.3 Generate and Customize the kubeasz hosts File
root@k8s-deploy:~# cd /etc/kubeasz/
root@k8s-deploy:/etc/kubeasz# pwd
/etc/kubeasz
root@k8s-deploy:/etc/kubeasz# ./ezctl new k8s-cluster1
2025-10-16 14:39:06 [ezctl:145] DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-cluster1
2025-10-16 14:39:06 [ezctl:151] DEBUG set versions
2025-10-16 14:39:06 [ezctl:182] DEBUG cluster k8s-cluster1: files successfully created.
2025-10-16 14:39:06 [ezctl:183] INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-cluster1/hosts'
2025-10-16 14:39:06 [ezctl:184] INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-cluster1/config.yml'
1.6.3.1 Edit the ansible hosts File
Specify the etcd nodes, master nodes, node nodes, VIP, container runtime, network plugin, service IP and pod IP ranges, and other settings.
root@k8s-deploy:/etc/kubeasz# cat clusters/k8s-cluster1/hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.31.7.106
# master node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_master]
172.31.7.101 k8s_nodename='172.31.7.101'
172.31.7.102 k8s_nodename='172.31.7.102'
# work node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_node]
172.31.7.111 k8s_nodename='172.31.7.111'
172.31.7.112 k8s_nodename='172.31.7.112'
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#172.31.7.8 NEW_INSTALL=false
# [optional] loadbalance for accessing k8s from outside
[ex_lb]
#172.31.7.6 LB_ROLE=backup EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=8443
#172.31.7.7 LB_ROLE=master EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony]
#172.31.7.1
[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"
# Cluster container-runtime supported: docker, containerd
# if k8s version >= 1.24, docker is not supported
CONTAINER_RUNTIME="containerd"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"
# NodePort Range
NODE_PORT_RANGE="30000-62767"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
# make sure this path is correct; fixing it later is painful (don't ask how I know)
bin_dir="/usr/local/bin"
# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"
# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-cluster1"
# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
# Default 'k8s_nodename' is empty
k8s_nodename=''
# Default python interpreter
ansible_python_interpreter=/usr/bin/python3
# If you run ansible as the ubuntu user with passwordless sudo instead of as root, add the following lines.
# If you connect directly as root, delete or comment them out.
ansible_ssh_user=ubuntu
ansible_ssh_private_key_file=/home/ubuntu/.ssh/id_rsa
ansible_become=yes
ansible_become_method=sudo
ansible_become_user=root
1.6.3.2 Edit the Cluster Configuration File config.yml
root@k8s-deploy:/etc/kubeasz# vim clusters/k8s-cluster1/config.yml
root@k8s-deploy:/etc/kubeasz# cat clusters/k8s-cluster1/config.yml
############################
# prepare
############################
# optional offline installation of system packages (offline|online)
INSTALL_SOURCE: "online"
# optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
# (deprecated) the upstream project is unmaintained and unverified against recent k8s releases; not recommended
OS_HARDEN: false
############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
# force to recreate CA and other certs, not suggested to set 'true'
CHANGE_CA: false
# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
# k8s version
K8S_VER: "1.34.1"
# set unique 'k8s_nodename' for each node, if not set(default:'') ip add will be used
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character (e.g. 'example.com'),
# regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'
K8S_NODENAME: "{%- if k8s_nodename != '' -%} \
{{ k8s_nodename|replace('_', '-')|lower }} \
{%- else -%} \
k8s-{{ inventory_hostname|replace('.', '-') }} \
{%- endif -%}"
# use 'K8S_NODENAME' to set hostname
ENABLE_SETTING_HOSTNAME: true
############################
# role:etcd
############################
# using a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
############################
# role:runtime [containerd,docker]
############################
# [.] enable pull-through mirror registries
ENABLE_MIRROR_REGISTRY: true
# [.] add trusted private registries
# must follow the format below; the 'http://' or 'https://' scheme cannot be omitted
INSECURE_REG:
- "http://easzlab.io.local:5000"
- "https://harbor.myserver.com"
# [.] sandbox (pause) image; replace "yourdomainname" with your own domain
SANDBOX_IMAGE: "harbor.yourdomainname/baseimages/pause:3.10"
# [containerd] persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"
# [docker] enable the Restful API
DOCKER_ENABLE_REMOTE_API: false
############################
# role:kube-master
############################
# certificate SANs for the k8s master nodes; multiple IPs and domains can be added (e.g. a public IP and domain). Changing this later is troublesome, so include every IP and domain you may need in the future.
MASTER_CERT_HOSTS:
- "172.31.7.188"
- "k8s-api.yourdomainname"
- "k8s-api.myserver.com"
#- "www.test.com"
# prefix length of the pod network on each node (determines the maximum number of pod IPs per node)
# if flannel uses the --kube-subnet-mgr flag, it reads this value to assign a pod subnet to each node
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24
# whether to enable the cluster audit feature
ENABLE_CLUSTER_AUDIT: true
############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"
# maximum number of pods per node
MAX_PODS: 110
# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the actual values
KUBE_RESERVED_ENABLED: "no"
# upstream k8s advises against enabling system-reserved casually, unless long-term monitoring shows the system's real resource usage;
# the reservation should also grow as the system runs longer, see templates/kubelet-config.yaml.j2 for the values
# the system reservation assumes a 4c/8g VM with a minimal set of system services; increase it on high-end physical machines
# also note that apiserver and friends briefly use a lot of resources during cluster installation, so reserve at least 1 GiB of memory
SYS_RESERVED_ENABLED: "no"
############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend, e.g. "host-gw" or "vxlan"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
# [flannel]
flannel_ver: "v0.27.3"
# ------------------------------------------- calico
# valid modes: [Always, CrossSubnet, Never]; with nodes across subnets use Always or CrossSubnet
# CrossSubnet is a mixed overlay + BGP routing mode that can improve performance; use Never when all nodes share a subnet.
# On public clouds, Always is usually the simplest choice; otherwise each cloud's network settings must be adjusted, see the provider's documentation
CALICO_ENABLE_OVERLAY: "Always"
# [calico] host IP used by calico-node; BGP peerings are established over this address; it can be set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
# [calico] networking backend: bird, vxlan, none
# a few public clouds (e.g. Azure) and some private clouds do not support IPinIP encapsulation; use vxlan there
CALICO_NETWORKING_BACKEND: "bird"
# [calico] whether to use route reflectors
# recommended when the cluster has more than 50 nodes
CALICO_RR_ENABLED: false
# CALICO_RR_NODES lists the route reflector nodes; if unset, the cluster master nodes are used by default
# CALICO_RR_NODES: ["192.168.1.1", "192.168.1.2"]
CALICO_RR_NODES: []
# [calico] supported calico versions: ["3.19", "3.23"]
calico_ver: "v3.28.4"
# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
# ------------------------------------------- cilium
# [cilium] image version
cilium_ver: "1.17.4"
cilium_connectivity_check: false
cilium_hubble_enabled: false
cilium_hubble_ui_enabled: false
# ------------------------------------------- kube-ovn
# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.11.5"
# ------------------------------------------- kube-router
# [kube-router] public clouds have restrictions and usually need ipinip always on; in your own environment this can be set to "subnet"
OVERLAY_TYPE: "full"
# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: true
# [kube-router] kube-router image version
kube_router_ver: "v1.5.4"
############################
# role:cluster-addon
############################
# auto-install coredns
dns_install: "no"
corednsVer: "1.12.4"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.26.4"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"
# auto-install metrics-server
metricsserver_install: "no"
metricsVer: "v0.8.0"
# auto-install dashboard
dashboard_install: "no"
dashboardVer: "7.12.0"
# auto-install local-storage (local-path-provisioner)
local_path_provisioner_install: "no"
local_path_provisioner_ver: "v0.0.31"
local_path_storage_class: "local-path"
# default local storage path
local_path_provisioner_dir: "/opt/local-path-provisioner"
# auto-install nfs-provisioner
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
# auto-install openebs
openebs_install: "no"
openebs_ver: "4.3.2"
openebs_namespace: "openebs"
openebs_hostpath: "/var/openebs/local"
openebs_hostpath_storage_class: "openebs-hostpath"
openebs_lvm_storage_class: "openebs-lvmpv"
openebs_lvm_vg: "vg_k8s"
# auto-install prometheus
prom_install: "no"
prom_namespace: "monitor"
prom_storage_class: ""
prom_chart_ver: "75.7.0"
# auto-install minio
minio_install: "no"
minio_namespace: "minio"
minio_storage_class: "{{ openebs_lvm_storage_class }}"
minio_chart_ver: "7.1.1"
minio_root_user: "3aea61ca94177dx"
minio_root_password: "0f3b19e46dd3aea61ca94177d"
# standalone = 1; cluster mode = 4 or more
minio_pool_servers: 4
minio_pool_size: 10Gi
# whether to enable a TLS certificate; if not enabled, plain HTTP is used
minio_tls_enabled: false
# whether to use a CA-issued certificate; if so, place it in roles/cluster-addon/templates/minio/ in advance,
# and the certificate and private key must be named server.crt and server.key respectively
minio_with_global_cert: false
# auto-install kubeapps
kubeapps_install: "no"
kubeapps_install_namespace: "kubeapps"
kubeapps_working_namespace: "default"
kubeapps_storage_class: "{{ openebs_hostpath_storage_class }}"
kubeapps_chart_ver: "12.4.3"
# auto-install nacos
nacos_install: "no"
nacos_namespace: "nacos"
nacos_mysql_host: "semisync-mysql-cluster-mysql"
nacos_mysql_db: "nacos"
nacos_mysql_port: "3306"
nacos_mysql_user: "__dbuser__"
nacos_mysql_password: "__yourpassword__"
nacos_storage_class: "{{ openebs_lvm_storage_class }}"
# auto-install rocketmq
rocketmq_install: "no"
rocketmq_namespace: "rocketmq"
rocketmq_storage_class: "{{ openebs_lvm_storage_class }}"
# auto-install network-check
network_check_enabled: false
network_check_schedule: "*/5 * * * *"
# auto-install kubeblocks
kubeblocks_ver: "1.0.0"
kubeblocks_install: "no"
# auto-install ingress-nginx
# ingress-nginx is only deployed on nodes carrying the label: ingress-controller/provider=ingress-nginx
ingress_nginx_install: "no"
ingress_nginx_namespace: "ingress-nginx"
ingress_nginx_ver: "4.13.0"
ingress_nginx_metrics_enabled: true
############################
# role:harbor
############################
# harbor version, full version number
HARBOR_VER: "v2.12.4"
HARBOR_DOMAIN: "harbor.easzlab.io.local"
HARBOR_PATH: /var/data
HARBOR_TLS_PORT: 8443
HARBOR_REGISTRY: "{{ HARBOR_DOMAIN }}:{{ HARBOR_TLS_PORT }}"
# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true
# install extra component
HARBOR_WITH_TRIVY: false
1.6.4 Deploy the Kubernetes Cluster
Initialize the environment and deploy the highly available Kubernetes cluster with the ansible playbooks.
1.6.4.1 Environment Initialization
root@k8s-deploy:/etc/kubeasz# vim playbooks/01.prepare.yml # basic system initialization of the hosts
# [optional] to synchronize system time of nodes with 'chrony'
- hosts:
- kube_master
- kube_node
- etcd
#- ex_lb
#- chrony
roles:
- { role: os-harden, when: "OS_HARDEN|bool" }
- { role: chrony, when: "groups['chrony']|length > 0" }
root@k8s-deploy:/etc/kubeasz# vim roles/prepare/tasks/main.yml # to keep hostnames from being changed (behavior added in v3.6.2), delete the configuration after line 60
root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 01 # prepare the CA and initialize the base environment
1.6.4.2 Deploy etcd
The startup script path, version, and other settings can be customized.
root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 02 # deploy the etcd cluster
Verify the etcd service on each etcd server (the example below assumes three etcd servers; if you have fewer, remove the extra IPs from the list):
root@etcd1:~# export NODE_IPS="172.31.7.106 172.31.7.107 172.31.7.108"
root@k8s-etcd1:~# for ip in ${NODE_IPS}; do /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
https://172.31.7.106:2379 is healthy: successfully committed proposal: took = 7.742781ms
https://172.31.7.107:2379 is healthy: successfully committed proposal: took = 9.419607ms
https://172.31.7.108:2379 is healthy: successfully committed proposal: took = 9.150312ms
Note: the output above indicates that the etcd cluster is healthy; any other result means something is wrong.
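Besides endpoint health, etcdctl can also report which member is the leader and the database size; a sketch reusing the same NODE_IPS and certificates as above:
for ip in ${NODE_IPS}; do /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --write-out=table endpoint status; done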
1.6.4.3 Deploy the Container Runtime containerd
Because the Harbor certificate is issued by a trusted certificate authority, there is no need to distribute the certificate to the nodes.
# check the sandbox (pause) image
root@k8s-deploy:/etc/kubeasz# grep SANDBOX_IMAGE ./clusters/* -R # use the pause image from the private registry
# configure the sandbox image
root@k8s-deploy:/etc/kubeasz# vim ./clusters/k8s-cluster1/config.yml
# [containerd] sandbox image
SANDBOX_IMAGE: "harbor.myarchitect.online/baseimages/pause:3.10"
# Configure name resolution for the Harbor registry domain, pointing your domain at the cluster's Harbor server; this is not needed if a DNS server already resolves the domain:
root@k8s-deploy:/etc/kubeasz# vim roles/containerd/tasks/main.yml
34 - name: add a hosts entry for the Harbor domain
35   shell: "echo '172.31.7.104 harbor.yourdomainname' >> /etc/hosts"
# optionally customize the containerd configuration file:
97 [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
98 BinaryName = ''
99 CriuImagePath = ''
100 CriuWorkPath = ''
101 IoGid = 0
102 IoUid = 0
103 NoNewKeyring = false
104 Root = ''
105 ShimCgroup = ''
106 SystemdCgroup = true
# configure the nerdctl client:
root@k8s-deploy:/etc/kubeasz# tar xvf nerdctl-2.1.6-linux-amd64.tar.gz -C /etc/kubeasz/bin/containerd-bin/
nerdctl
containerd-rootless-setuptool.sh
containerd-rootless.sh
root@k8s-deploy:/etc/kubeasz# cat roles/containerd/tasks/main.yml
- name: prepare containerd directories
  file: name={{ item }} state=directory
  with_items:
  - "{{ bin_dir }}/containerd-bin"
  - "/etc/containerd"
  - "/etc/containerd/certs.d/docker.io"
  - "/etc/containerd/certs.d/easzlab.io.local:5000"
  - "/etc/containerd/certs.d/{{ HARBOR_REGISTRY }}"
  - "/etc/nerdctl/"  # directory for the nerdctl configuration file
- name: load the overlay kernel module
  modprobe: name=overlay state=present
- name: distribute the containerd binaries
  copy: src={{ item }} dest={{ bin_dir }}/containerd-bin/ mode=0755
  with_fileglob:
  - "{{ base_dir }}/bin/containerd-bin/*"
  tags: upgrade
- name: distribute crictl
  copy: src={{ base_dir }}/bin/crictl dest={{ bin_dir }}/crictl mode=0755
- name: add crictl shell completion
  lineinfile:
    dest: ~/.bashrc
    state: present
    regexp: 'crictl completion'
    line: 'source <(crictl completion bash) # generated by kubeasz'
- name: create the containerd configuration file
  template: src=config.toml.j2 dest=/etc/containerd/config.toml
  tags: upgrade
- name: create the nerdctl configuration file
  template: src=nerdctl.toml.j2 dest=/etc/nerdctl/nerdctl.toml  # distribute the nerdctl config file
  tags: upgrade
- name: add a hosts entry for the Harbor domain
  shell: "echo '172.31.7.104 harbor.yourdomainname' >> /etc/hosts"  # optional custom name resolution, if needed
- name: configure the docker.io mirror registry
  template: src=docker.io/hosts.toml.j2 dest=/etc/containerd/certs.d/docker.io/hosts.toml
- name: configure the local_registry repository
  template: src="easzlab.io.local:5000/hosts.toml.j2" dest=/etc/containerd/certs.d/easzlab.io.local:5000/hosts.toml
- name: configure the harbor repository
  template: src="HARBOR_REGISTRY/hosts.toml.j2" dest=/etc/containerd/certs.d/{{ HARBOR_REGISTRY }}/hosts.toml
- name: create the systemd unit file
  template: src=containerd.service.j2 dest=/etc/systemd/system/containerd.service
  tags: upgrade
- name: create the crictl configuration
  template: src=crictl.yaml.j2 dest=/etc/crictl.yaml
- name: enable the containerd service at boot
  shell: systemctl enable containerd
  ignore_errors: true
- name: start the containerd service
  shell: systemctl daemon-reload && systemctl restart containerd
  tags: upgrade
- name: wait for the containerd service to become active
  shell: "systemctl is-active containerd.service"
  register: containerd_status
  until: '"active" in containerd_status.stdout'
  retries: 8
  delay: 2
  tags: upgrade
# the nerdctl configuration file:
root@k8s-deploy:/etc/kubeasz# vim roles/containerd/templates/nerdctl.toml.j2
namespace = "k8s.io"
debug = false
debug_full = false
insecure_registry = true
# optionally customize the containerd service file:
root@k8s-master1:/etc/kubeasz# cat roles/containerd/templates/containerd.service.j2
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
Environment="PATH={{ bin_dir }}/containerd-bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStartPre=-/sbin/modprobe overlay
ExecStart={{ bin_dir }}/containerd-bin/containerd --log-level warn
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
# run the runtime deployment step:
root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 03
root@k8s-node1:~# /usr/local/bin/containerd-bin/containerd -v
containerd github.com/containerd/containerd/v2 v2.1.4 75cb2b7193e4e490e9fbdc236c0e811ccaba3376
root@k8s-node1:~# ln -sv /usr/local/bin/containerd-bin/* /usr/local/bin/
root@k8s-node1:~# nerdctl version
Client:
Version: v2.1.6
OS/Arch: linux/amd64
Git commit: 59253e9931873e79b92fe3400f14e69d6be34025
buildctl:
Version:
Server:
containerd:
Version: v2.1.4
GitCommit: 75cb2b7193e4e490e9fbdc236c0e811ccaba3376
runc:
Version: 1.3.1
GitCommit: v1.3.1-0-ge6457afc
# verify that the node can log in to harbor, which will be used for image distribution later:
root@k8s-node1:~# nerdctl login harbor.myarchitect.online
Enter Username: admin
Enter Password:
WARN[0003] skipping verifying HTTPS certs for "harbor.myarchitect.online:443"
WARNING! Your credentials are stored unencrypted in '/root/.docker/config.json'.
Configure a credential helper to remove this warning. See
https://docs.docker.com/go/credential-store/
Login Succeeded
# test pushing an image
root@k8s-node1:~# nerdctl pull registry.cn-hangzhou.aliyuncs.com/zhangshijie/ubuntu:22.04
root@k8s-node1:~# nerdctl tag registry.cn-hangzhou.aliyuncs.com/zhangshijie/ubuntu:22.04 harbor.yourdomainname/baseimages/ubuntu:22.04
root@k8s-node1:~# nerdctl push harbor.yourdomainname/baseimages/ubuntu:22.04
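Since kubelet pulls the pause image through containerd's CRI interface, it can also be useful to confirm on a node that crictl can pull from Harbor (a sketch; it assumes the SANDBOX_IMAGE configured earlier and that the baseimages project is public or the node is already logged in):
crictl pull harbor.myarchitect.online/baseimages/pause:3.10
crictl images | grep pause     # the pause image should now be listed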
1.6.4.4 Deploy the Kubernetes Master Nodes
The startup script parameters, paths, and other options can be customized according to your actual business needs.
root@k8s-deploy:/etc/kubeasz# ./ezctl help setup
root@k8s-deploy:/etc/kubeasz# vim roles/kube-master/tasks/main.yml # customizable configuration
root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 04
root@k8s-deploy:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
172.31.7.101 Ready,SchedulingDisabled master 7m55s v1.34.1
172.31.7.102 Ready,SchedulingDisabled master 7m55s v1.34.1
1.6.4.5 Deploy the Kubernetes Worker Nodes
root@k8s-deploy:/etc/kubeasz# ./ezctl help setup
root@k8s-deploy:/etc/kubeasz# vim roles/kube-node/tasks/main.yml # customizable configuration
root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 05
root@k8s-deploy:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
172.31.7.101 Ready,SchedulingDisabled master 12m v1.34.1
172.31.7.102 Ready,SchedulingDisabled master 12m v1.34.1
172.31.7.111 Ready node 24s v1.34.1
172.31.7.112 Ready node 24s v1.34.1
1.6.4.6 Deploy the Network Plugin Calico
root@k8s-deploy:/etc/kubeasz/roles/calico/templates# pwd
/etc/kubeasz/roles/calico/templates
root@k8s-deploy:/etc/kubeasz/roles/calico/templates# mv calico_v3.28.4-k8s_1.34.1-ubuntu2404.yaml ./calico-v3.28.yaml.j2
root@k8s-deploy:/etc/kubeasz/roles/calico/templates# cd /etc/kubeasz/
root@k8s-deploy:/etc/kubeasz# vim clusters/k8s-cluster1/config.yml
# [calico] supported calico versions: ["3.19", "3.23"]
calico_ver: "v3.28.4" # set your own version here
# [calico] calico major.minor version passed to the deployment tasks
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
Check the calico images referenced by the template:
root@k8s-deploy:/etc/kubeasz# grep "image:" roles/calico/templates/calico-v3.28.yaml.j2
image: easzlab.io.local:5000/calico/cni:{{ calico_ver }}
image: easzlab.io.local:5000/calico/node:{{ calico_ver }}
image: easzlab.io.local:5000/calico/node:{{ calico_ver }}
image: easzlab.io.local:5000/calico/kube-controllers:{{ calico_ver }}
Push the images to the local Harbor:
root@k8s-deploy:/etc/kubeasz# docker tag calico/node:v3.28.4 harbor.yourdomainname/baseimages/calico-node:v3.28.4
root@k8s-deploy:/etc/kubeasz# docker push harbor.yourdomainname/baseimages/calico-node:v3.28.4
root@k8s-deploy:/etc/kubeasz# docker tag calico/kube-controllers:v3.28.4 harbor.yourdomainname/baseimages/calico-kube-controllers:v3.28.4
root@k8s-deploy:/etc/kubeasz# docker push harbor.yourdomainname/baseimages/calico-kube-controllers:v3.28.4
root@k8s-deploy:/etc/kubeasz# docker tag calico/cni:v3.28.4 harbor.yourdomainname/baseimages/cni:v3.28.4
root@k8s-deploy:/etc/kubeasz# docker push harbor.yourdomainname/baseimages/cni:v3.28.4
Update the image references in the yaml template:
root@k8s-deploy:/etc/kubeasz# vim roles/calico/templates/calico-v3.28.yaml.j2
root@k8s-deploy:/etc/kubeasz# grep "image:" roles/calico/templates/calico-v3.28.yaml.j2
#image: docker.io/calico/cni:v3.28.1
image: registry.cn-hangzhou.aliyuncs.com/myhubregistry/calico:cni-v3.28.4
#image: docker.io/calico/cni:v3.28.1
image: registry.cn-hangzhou.aliyuncs.com/myhubregistry/calico:cni-v3.28.4
#image: docker.io/calico/node:v3.28.1
image: registry.cn-hangzhou.aliyuncs.com/myhubregistry/calico:node-v3.28.4
#image: docker.io/calico/node:v3.28.1
image: registry.cn-hangzhou.aliyuncs.com/myhubregistry/calico:node-v3.28.4
#image: docker.io/calico/kube-controllers:v3.28.1
image: registry.cn-hangzhou.aliyuncs.com/myhubregistry/calico:kube-controllers-v3.28.4
root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 06
root@k8s-deploy:~# kubectl get pod -A # while calico is still initializing
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-695bf6cc9d-vb5tl 1/1 Running 0 2m28s
kube-system calico-node-q2xzs 0/1 Init:2/3 0 2m28s
kube-system calico-node-vf7xp 1/1 Running 0 2m28s
kube-system calico-node-vtzmz 0/1 Init:2/3 0 2m28s
kube-system calico-node-vwng2 0/1 Init:2/3 0 2m28s
Verify calico:
root@k8s-master1:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 172.31.7.102 | node-to-node mesh | up | 08:18:26 | Established |
| 172.31.7.111 | node-to-node mesh | up | 08:18:26 | Established |
| 172.31.7.112 | node-to-node mesh | up | 08:18:26 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
root@k8s-node1:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 172.31.7.102 | node-to-node mesh | up | 08:18:12 | Established |
| 172.31.7.112 | node-to-node mesh | up | 08:18:10 | Established |
| 172.31.7.101 | node-to-node mesh | up | 08:18:25 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
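Two more checks that can be useful at this point (a sketch; kubeasz installs calicoctl on the nodes):
calicoctl get ippool -o wide     # shows the pod CIDR (10.200.0.0/16) and whether the IPIP/VXLAN overlay is in use
ip route | grep tunl0            # with the IPIP overlay enabled, routes to other nodes' pod subnets go via tunl0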
1.6.4.7 Verify Pod-to-Pod Communication
root@k8s-deploy:~# kubectl run net-test1 --image=registry.cn-hangzhou.aliyuncs.com/zhangshijie/alpine sleep 360000
pod/net-test1 created
root@k8s-deploy:~# kubectl run net-test2 --image=registry.cn-hangzhou.aliyuncs.com/zhangshijie/alpine sleep 360000
pod/net-test2 created
root@k8s-deploy:~# kubectl run net-test3 --image=registry.cn-hangzhou.aliyuncs.com/zhangshijie/alpine sleep 360000
pod/net-test3 created
root@k8s-deploy:~# kubectl get pod -o wide # verify cross-node pod communication
NAME        READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          35s   10.200.165.1   172.31.7.112   <none>           <none>
net-test2   1/1     Running   0          30s   10.200.45.1    172.31.7.111   <none>           <none>
net-test3   1/1     Running   0          26s   10.200.165.2   172.31.7.112   <none>           <none>
root@k8s-deploy:~# kubectl exec -it net-test1 -- sh
/ # ping 10.200.45.1
PING 10.200.45.1 (10.200.45.1): 56 data bytes
64 bytes from 10.200.45.1: seq=0 ttl=62 time=0.754 ms
64 bytes from 10.200.45.1: seq=1 ttl=62 time=0.546 ms
64 bytes from 10.200.45.1: seq=2 ttl=62 time=0.566 ms
64 bytes from 10.200.45.1: seq=3 ttl=62 time=0.517 ms
64 bytes from 10.200.45.1: seq=4 ttl=62 time=0.650 ms
64 bytes from 10.200.45.1: seq=5 ttl=62 time=0.589 ms
64 bytes from 10.200.45.1: seq=6 ttl=62 time=0.645 ms
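Before moving on to DNS, it can also be worth checking from the same test pod that traffic to the nodes and to addresses outside the cluster works (a quick sketch; the target addresses are only examples):
/ # ping -c 2 172.31.7.111    # one of the node IPs
/ # ping -c 2 223.6.6.6       # an external address (a public DNS server, used here only as an example)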
1.6.4.8 View the Cluster Controller Leader Information
ubuntu@deploy01:~$ sudo kubectl get leases -n kube-system
NAME HOLDER AGE
apiserver-x3wkiompg56yjadueu4vtmjide apiserver-x3wkiompg56yjadueu4vtmjide_07de2ec0-34bc-4feb-a2ff-4f953d2d8a3b 38h
kube-controller-manager 172.31.7.101_ea6d8067-e59d-446f-86ee-307ed4cd15c5 38h
kube-scheduler 172.31.7.101_9bab903a-b5d8-4f58-8c09-ef922c6f7158 38h
1.7 Deploy CoreDNS
The commonly used DNS add-ons are kube-dns and CoreDNS; they resolve the service names in a Kubernetes cluster to their corresponding IP addresses. kube-dns is no longer supported starting with Kubernetes v1.18:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#downloads-for-v1180
kubeadm: kube-dns is deprecated and will not be supported in a future version
skyDNS / kube-dns / CoreDNS
kube-dns: resolves service name domains
dns-dnsmasq: provides DNS caching, reducing kube-dns load and improving performance
dns-sidecar: periodically checks the health of kube-dns and dnsmasq
1.7.1 Deploy CoreDNS
https://github.com/coredns/coredns
https://coredns.io/
https://github.com/coredns/deployment/tree/master/kubernetes (official deployment manifests)
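A minimal flow might look like the following sketch. It assumes you have prepared a coredns.yaml manifest from the links above (hypothetical file name) with its Service clusterIP set to the address the kubelets expect, which for the 10.100.0.0/16 service CIDR used here is normally the .2 address, i.e. 10.100.0.2:
kubectl apply -f coredns.yaml
kubectl get pod -n kube-system -l k8s-app=kube-dns      # the coredns pods should reach Running
kubectl exec -it net-test1 -- nslookup kubernetes.default.svc.cluster.local   # resolve a service name from a test pod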
1.8 Deploy a Web Management UI
Deploy dashboard, the Kubernetes web management UI.
1.8.1 The Official Kubernetes Dashboard
root@k8s-master1:~# vim dashboard-v2.7.0.yaml
root@k8s-master1:~# nerdctl pull kubernetesui/metrics-scraper:v1.0.8
root@k8s-master1:~# nerdctl tag kubernetesui/metrics-scraper:v1.0.8 harbor.myarchitect.online/baseimages/metrics-scraper:v1.0.8
root@k8s-master1:~# nerdctl push harbor.myarchitect.online/baseimages/metrics-scraper:v1.0.8
root@k8s-master1:~# nerdctl pull kubernetesui/dashboard:v2.7.0
root@k8s-master1:~# nerdctl tag kubernetesui/dashboard:v2.7.0 harbor.yourdomainname/baseimages/kubernetesui/dashboard:v2.7.0
root@k8s-master1:~# nerdctl push harbor.yourdomainname/baseimages/kubernetesui/dashboard:v2.7.0
root@k8s-master1:~# vim dashboard-v2.7.0.yaml
root@k8s-master1:~# kubectl apply -f dashboard-v2.7.0.yaml -f admin-user.yaml -f admin-secret.yaml
root@k8s-master1:/etc/kubeasz/2.dashboard-v2.7.0# kubectl describe secrets -n kubernetes-dashboard dashboard-admin-user
Name: dashboard-admin-user
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: f25bd543-dedb-4d61-aa2d-749f3a1e63d9
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1310 bytes
namespace: 20 bytes
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IlltRGNDc2VMYkdXa19NMU0zLWhrN0ZYWDltMkxUTHVIdXNlY3RMRkJZV2MifQ.e
yJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc
3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hb
WUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW5
0Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWl
kIjoiZjI1YmQ1NDMtZGVkYi00ZDYxLWFhMmQtNzQ5ZjNhMWU2M2Q5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW5
0Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.lkwLyxWPP3tI9osh3r4JyOaAtEBBd52yqiB1pg-Kbei-
TePFG48SEE0BkPUiymp16Nd7Ckc5939PQRN2LAhFuSoNDm88TpmNO13jq_GTKol3Wpamw5yK83_qOiH1_NkBe-
F4VQTxWglmboMV3b6ny02Dx5FtBVnn95m959KhROErZKijJQSP0c1JTeXmKds5L4_J7T7NPrt2_wFKiphwrubBTM1d_8
19zD0avYopTHx1OEXdkAJJM_BMiykhxtxj29lpd2ElQTE0_chOBuYLlsvw2UitKUEU8phw58-
6bLZbH3Xgj8fWPQ9cDCeEPWnF7saP-RzYBB4M-9ST--ARlw
Copy the token and use it to log in to the web UI for testing.
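On Kubernetes 1.24 and later, service-account token Secrets are no longer created automatically, so a short-lived login token can also be issued directly (assuming the admin-user ServiceAccount from admin-user.yaml exists in the kubernetes-dashboard namespace):
kubectl -n kubernetes-dashboard create token admin-user --duration=24h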
1.8.2 Kuboard
The official dashboard is not particularly convenient to use; Kuboard is worth considering as an alternative.
# Deploy kuboard in the Kubernetes environment:
root@k8s-ha1:~# apt install nfs-server
root@k8s-ha1:~# mkdir -p /data/k8sdata/kuboard
root@k8s-ha1:~# vim /etc/exports
/data/k8sdata/kuboard *(rw,no_root_squash)
root@k8s-ha1:~# systemctl restart nfs-server.service
root@k8s-ha1:~# systemctl enable nfs-server.service
# Deployment option 1: deploy in Kubernetes:
kuboard# kubectl apply -f kuboard-all-in-one.yaml
Open http://your-host-ip:30080 in a browser to access the Kuboard v3.x UI. Login:
Username: admin
Password: Kuboard123
# Deployment option 2: standalone Docker:
root@k8s-master1:~# docker run -d \
--restart=unless-stopped \
--name=kuboard \
-p 80:80/tcp \
-p 10081:10081/tcp \
-e KUBOARD_ENDPOINT="http://172.31.7.101:80" \
-e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
-v /root/kuboard-data:/data \
swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3
Open http://your-host-ip:80 in a browser to access the Kuboard v3.x UI. Login:
Username: admin
Password: Kuboard123
After deployment, add the cluster to Kuboard by importing its kubeconfig.
To uninstall Kuboard, run:
kuboard# kubectl delete -f kuboard-all-in-one.yaml
