kubernetes-1

Introduction

Kubernetes (k8s for short) is a container cluster management system that Google open-sourced in June 2014. Written in Go, it manages containerized applications across the many hosts of a cloud platform. Kubernetes aims to make deploying containerized applications simple and efficient, providing a full feature set for resource scheduling, deployment management, service discovery, scaling, monitoring, and maintenance, and strives to be the platform for automatically deploying, scaling, and running application containers across clusters of hosts. It supports a range of container tools, including Docker.


Control plane

Let's start with the nerve center of a Kubernetes cluster: the control plane. Here we find the Kubernetes components that control the cluster, along with data about the cluster's state and configuration. These core components handle the important work of making sure containers are running in sufficient numbers and with the necessary resources.

The control plane is in constant contact with your machines. The cluster has been configured to run a certain way, and the control plane's job is to make sure it does.

kube-apiserver

To interact with your Kubernetes cluster, you go through the API. The Kubernetes API is the front end of the control plane, handling internal and external requests. The API server determines whether a request is valid and, if it is, processes it. You can access the API through REST calls, the kubectl command-line interface, or other command-line tools such as kubeadm.
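For instance, the two access paths look like this — a minimal sketch assuming kubectl is already configured for the cluster (the namespace and port are illustrative):

# kubectl wraps the REST API for you:
kubectl get pods -n kube-system

# The same query as a raw REST call, via a local authenticated proxy:
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods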

kube-scheduler

Is your cluster healthy? If new containers are needed, where should they go? These are the concerns of the Kubernetes scheduler.

The scheduler considers the resource needs of a pod, such as CPU and memory, along with the health of the cluster. It then schedules the pod to an appropriate compute node.
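As a sketch, this is where those requirements are declared — a hypothetical pod spec whose requests the scheduler uses when picking a node (all names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-requests          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:                # what the scheduler reserves when placing the pod
        cpu: "250m"
        memory: "128Mi"
      limits:                  # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "256Mi"
EOF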

kube-controller-manager

Controllers are what actually run the cluster, and the Kubernetes controller manager rolls several controller functions into one. Controllers consult the scheduler and make sure the correct number of pods is running. If a pod goes down, another controller notices and responds. Controllers connect services to pods, so requests go to the right endpoints. And there are controllers for creating accounts and API access tokens.

etcd

Configuration data and information about the state of the cluster live in etcd, a key-value store database. Distributed and fault-tolerant, etcd is designed to be the single source of truth about your cluster.
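A quick health probe against etcd — a sketch assuming the certificate layout used by the kubeasz deployment later in this post (/etc/kubernetes/ssl); adjust the endpoint and paths to your environment:

export ETCDCTL_API=3
etcdctl --endpoints=https://11.0.1.43:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  endpoint health                # reports whether each endpoint answers within the timeout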

Nodes

A Kubernetes cluster needs at least one compute node, but will usually have many. Pods are scheduled and orchestrated to run on nodes. Need to scale up the capacity of your cluster? Add more nodes.

Pods

A pod is the smallest and simplest unit in the Kubernetes object model. It represents a single instance of an application. Each pod consists of a container (or a series of tightly coupled containers) along with options that govern how the containers are run. Pods can connect to persistent storage in order to run stateful applications.
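A minimal illustration of "a series of tightly coupled containers": two containers in one pod sharing a scratch volume (all names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-sidecar
spec:
  volumes:
  - name: shared
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox:1.28
    command: ["sh", "-c", "while true; do date >> /data/log; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox:1.28
    command: ["sh", "-c", "tail -F /data/log"]
    volumeMounts:
    - name: shared
      mountPath: /data
EOF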

Container runtime engine

To run the containers, each compute node has a container runtime engine. Docker is one example, but Kubernetes also supports other runtimes that comply with the Open Container Initiative (OCI) standards, such as rkt and CRI-O.

kubelet

Each compute node contains a kubelet, a tiny application that communicates with the control plane. The kubelet makes sure containers are running in a pod. When the control plane needs something to happen on a node, the kubelet executes the action.

  • Its main job is to periodically fetch the desired state of the pods on its node (which containers to run, how many replicas, how networking and storage are configured, and so on) and call the container platform's API (such as the Docker API) to bring the node to that state (see the sketch below)
  • It periodically reports the node's current state to the apiserver, for use during scheduling
  • It cleans up images and containers, so images do not fill the node's disk and exited containers do not hold on to resources
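To observe this loop from the outside — a sketch, assuming the kubelet runs under systemd as it does in the kubeasz deployment below (the node name is illustrative):

systemctl status kubelet                    # is the agent running?
journalctl -u kubelet --since "10 min ago"  # what has it been doing?
kubectl describe node 11.0.1.47             # what it last reported: capacity, conditions, pods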

kube-proxy

Each compute node also contains kube-proxy, a network proxy that facilitates Kubernetes networking services. kube-proxy handles network communication inside and outside the cluster, either relying on the operating system's packet filtering layer or forwarding the traffic itself.

  • Runs on every node as the network proxy and the carrier of the Service resource
  • Establishes the relationship between the pod network and the cluster network (it links cluster IPs to pod IPs)
  • Three traffic-scheduling modes are in common use: userspace (deprecated), iptables (on its way out), and IPVS (recommended; see the sketch below)
  • Creates, deletes, and updates scheduling rules and notifies the apiserver of its own updates, or pulls other kube-proxy instances' rule changes from the apiserver to update itself (kube-proxy runs on every worker node, and state is synchronized with etcd through the apiserver)
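The rules kube-proxy maintains can be inspected directly on a node — a sketch for the IPVS mode configured later in this post:

ipvsadm -Ln                      # virtual servers and their pod backends (IPVS mode)
iptables-save | grep -m 10 KUBE  # in iptables mode, look at the KUBE-* chains instead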

CLI client

  • kubectl command-line tool

Core add-ons

  • CNI network plugin: flannel/calico
  • Service discovery plugin: coredns
  • Service exposure plugin: traefik
  • GUI management plugin: Dashboard

Kubernetes has three networks: the Service (cluster) network, the Node (host) network, and the Pod (container) network.
Node network: the network of the hosts the worker nodes run on
Pod network: all pods are NATed out from their hosts
Service network: a virtual network; the Service network and the Pod network are linked together by kube-proxy (see the sketch below)
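The three ranges are easy to see side by side — a sketch using the CIDRs configured later in this post:

kubectl get node -o wide      # node network (the hosts, e.g. 11.0.1.0/24)
kubectl get pod -A -o wide    # pod network (CLUSTER_CIDR, e.g. 10.20.0.0/16)
kubectl get svc -A            # service network (SERVICE_CIDR, e.g. 10.68.0.0/16)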

Role                            Server IP addresses
K8S-Master, etcd (3 nodes)      11.0.1.40, 41
K8S-Node (2 nodes)              11.0.1.42, 43, 44
Keepalived + HAProxy (2 nodes)  11.0.1.45, 46
Harbor (1 node)                 11.0.1.42

Deploy Harbor with HTTPS

Install Docker

Install Docker on each master and node.

Internal images will be stored centrally on the in-house Harbor server instead of being downloaded online from the Internet.

Install Docker with the official convenience script, or install it manually.

To install the test version of Docker instead, fetch the script from test.docker.com:

curl -fsSL test.docker.com -o get-docker.sh     # test channel
curl -fsSL get.docker.com -o get-docker.sh      # stable channel
sudo sh get-docker.sh --mirror Aliyun           # run the script with the Aliyun mirror, or:
sudo sh get-docker.sh --mirror AzureChinaCloud  # with the AzureChinaCloud mirror

Start Docker

sudo systemctl enable docker
sudo systemctl start docker

Install docker-compose

Download docker-compose from the official GitHub releases page:

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose     # make the binary executable

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose   # if the docker-compose command is not found after installing, check your PATH; you can also symlink it into /usr/bin or any other directory on your PATH

docker-compose --version   # check the installed version
root@harbor1:/usr/local/src# pwd
/usr/local/src
root@harbor1:/usr/local/src# tar xf docker-19.03.15-binary-install.tar.gz
root@harbor1:/usr/local/src# bash docker-install.sh
root@harbor1:/usr/local/src# mkdir /apps
root@harbor1:/usr/local/src# cd /apps/
root@harbor1:/apps# tar xvf harbor-offline-installer-v2.2.1.tgz
root@harbor1:/apps# cd harbor/
root@harbor1:/apps/harbor# mkdir certs
root@harbor1:/apps/harbor# openssl genrsa -out /apps/harbor/certs/harbor-ca.key   # generate the private key
root@harbor1:/apps/harbor# touch /root/.rnd
root@harbor1:/apps/harbor/certs# openssl req -x509 -new -nodes -key /apps/harbor/certs/harbor-ca.key -subj "/CN=harbor.ly.local" -days 7120 -out /apps/harbor/certs/harbor-ca.crt   # issue the self-signed certificate
root@harbor1:/apps/harbor/certs# cd /apps/harbor/
root@harbor1:/apps/harbor# cp harbor.yml.tmpl harbor.yml
root@harbor1:/apps/harbor# vim harbor.yml


# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: harbor.ly.local   # your self-signed domain; must match the certificate CN above

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /apps/harbor/certs/harbor-ca.crt   # path to the certificate
  private_key: /apps/harbor/certs/harbor-ca.key

# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900

# The default data volume
data_volume: /data

Run the installer

root@harbor1:/apps/harbor# ./install.sh --help
root@harbor1:/apps/harbor# ./install.sh --with-trivy
root@harbor1:/apps/harbor# docker-compose ps


If you issued a self-signed certificate, add a local hosts entry so the domain resolves, then open the self-signed domain in a browser. The sketch below also makes the Docker client trust the certificate so images can be pushed.
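A sketch, assuming the Harbor host from the inventory above (11.0.1.42) and Harbor's default "library" project; when running on a node other than harbor1, copy harbor-ca.crt over first:

echo "11.0.1.42 harbor.ly.local" >> /etc/hosts              # local name resolution
mkdir -p /etc/docker/certs.d/harbor.ly.local
cp /apps/harbor/certs/harbor-ca.crt /etc/docker/certs.d/harbor.ly.local/ca.crt
docker login harbor.ly.local -u admin -p Harbor12345        # password from harbor.yml above
docker tag nginx:1.21 harbor.ly.local/library/nginx:1.21
docker push harbor.ly.local/library/nginx:1.21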


Prepare the base environment

Install on the two high-availability machines.

Install keepalived

# vim /etc/sysctl.conf   # on both the master and backup nodes, add this kernel parameter; it allows binding to an IP that does not yet exist on the host

net.ipv4.ip_nonlocal_bind = 1

# sysctl -p   # apply the parameter

# apt-get install haproxy keepalived -y

# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf

# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER            # change to BACKUP on the standby node
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        11.0.1.187 dev eth0 label eth0:1   # bind the VIPs to the NIC
        11.0.1.188 dev eth0 label eth0:2
        11.0.1.189 dev eth0 label eth0:3

    }
}



# systemctl restart keepalived

# systemctl enable keepalived

# check the VIP
# ip a


# scp /etc/keepalived/keepalived.conf root@11.0.1.41:/etc/keepalived/keepalived.conf

Install HAProxy

Edit /etc/haproxy/haproxy.cfg; the shipped defaults are kept and a TCP listener for the apiserver VIP is appended at the end:

global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
	stats timeout 30s
	user haproxy
	group haproxy
	daemon

	# Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private

	# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http
listen k8s-api-6443
  bind 11.0.1.187:6443
  mode tcp
      server master1  11.0.1.43:6443  check inter 2000 fall 3 rise 5    # proxy to each master's 6443
      server master2  11.0.1.44:6443  check inter 2000 fall 3 rise 5    
      server master3  11.0.1.45:6443  check inter 2000 fall 3 rise 5
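After saving the config, validate and restart — the VIP bind works because of the ip_nonlocal_bind parameter set earlier:

haproxy -c -f /etc/haproxy/haproxy.cfg   # syntax check
systemctl restart haproxy
systemctl enable haproxy
ss -tnlp | grep 6443                     # confirm the listener on 11.0.1.187:6443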

Deploy with kubeasz (Ansible)

Prepare the base environment

Install Python 2.7 on every node

# apt-get install python2.7 -y
# ln -s /usr/bin/python2.7 /usr/bin/python 



Install Ansible on the deployment node
# apt  install python3-pip git
# pip3 install ansible -i https://mirrors.aliyun.com/pypi/simple/
# apt-get install lrzsz vim wget curl git openssl -y
# ansible --version
ansible [core 2.12.5]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
  jinja version = 2.10.1
  libyaml = True




# ssh-keygen                   # generate a key pair
# apt-get install sshpass      # sshpass is used to push the public key to every k8s server

# Public-key distribution script:
#!/bin/bash
# target host list
IP="
11.0.1.40
11.0.1.41
11.0.1.42
11.0.1.43
11.0.1.44
11.0.1.45
11.0.1.46
11.0.1.47
"
for node in ${IP};do
  sshpass -p l ssh-copy-id ${node} -o StrictHostKeyChecking=no
  if [ $? -eq 0 ];then
    echo "${node} public key copied successfully"
  else
    echo "${node} public key copy failed"
  fi
done

Download the kubeasz project and components

root@k8s-master1:~# export release=3.2.0

root@k8s-master1:~# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown 

https://hub.docker.com/r/easzlab/kubeasz-k8s-bin/tags?page=1&ordering=last_updated

root@k8s-master1:~# chmod a+x ezdown
root@k8s-master1:~# ./ezdown -D
root@k8s-master1:~# ll /etc/kubeasz/down/

Generate the hosts file:

root@k8s-master1:~# cd /etc/kubeasz/
root@k8s-master1:/etc/kubeasz# pwd
/etc/kubeasz
root@k8s-master1:/etc/kubeasz# ./ezctl new k8s-01
root@k8s-master1:/etc/kubeasz# vim  clusters/k8s-01/hosts
root@k8s-master1:/etc/kubeasz# vim clusters/k8s-01/config.yml

Edit the ansible hosts file:

Specify the etcd nodes, master nodes, worker nodes, VIP, container runtime, network plugin type, service IP and pod IP ranges, and the other settings.

root@k8s-master1:/etc/kubeasz/clusters/k8s-01# pwd
/etc/kubeasz/clusters/k8s-01 
root@k8s-master1:/etc/kubeasz/clusters/k8s-01# cat hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
11.0.1.43
11.0.1.44
11.0.1.45

# master node(s)
[kube_master]
11.0.1.43
11.0.1.44

# work node(s)
[kube_node]
11.0.1.47

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#192.168.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
11.0.1.6 LB_ROLE=backup EX_APISERVER_VIP=11.0.1.187 EX_APISERVER_PORT=6443  # enable the external LB entries and fill in the VIP
11.0.1.7 LB_ROLE=master EX_APISERVER_VIP=11.0.1.187 EX_APISERVER_PORT=6443

# [optional] ntp server for the cluster
[chrony]
#192.168.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"  #可选docker和containerd

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"  #网络组件选即可

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.20.0.0/16"

# NodePort Range
NODE_PORT_RANGE="10000-65000"  #可自定义放开集群端口

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="lychuad.local"   #填写自定义集群域名

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/local/bin"    #修改文件路径,可默认

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

root@k8s-master1:/etc/kubeasz# cat clusters/k8s-01/config.yml

############################
# prepare
############################
# optional offline installation of system packages (offline|online)
INSTALL_SOURCE: "online"

# optional OS hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# NTP servers (important: clocks must be in sync across the cluster)
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# network ranges allowed to sync time from this cluster, e.g. "10.0.0.0/8"; the default allows all
local_network: "0.0.0.0/0"


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"            # certificate lifetimes
CERT_EXPIRY: "438000h"

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"

# k8s version
K8S_VER: "1.23.1"   # the Kubernetes version to install

############################
# role:etcd
############################
# a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"    # etcd data directory
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd] sandbox (pause) image
SANDBOX_IMAGE: "easzlab/pause:3.6"

# [containerd] persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the remote REST API
ENABLE_REMOTE_API: false

# [docker] trusted insecure (HTTP) registries
INSECURE_REG: '["127.0.0.1/8"]'


############################
# role:kube-master
############################
# cert config for the k8s master nodes; extra IPs and domains (e.g. a public IP or domain) can be added
MASTER_CERT_HOSTS:
  - "10.1.1.1"
  - "k8s.test.io"
  #- "www.test.com"

# mask length of the pod subnet on each node (determines the maximum number of pod IPs per node)
# if flannel uses the --kube-subnet-mgr flag, it reads this setting to assign each node's pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum number of pods per node
MAX_PODS: 110

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# upstream k8s advises against enabling system-reserved casually, unless long-term monitoring
# has given you a clear picture of the system's resource usage; the reservation should also grow
# with system uptime, see templates/kubelet-config.yaml.j2. The defaults assume a minimal-install
# 4c/8g VM; on high-performance physical machines the reservation can be increased.
# Note that apiserver and friends briefly use a lot of resources during installation,
# so reserving at least 1 GB of memory is recommended.
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend, e.g. "host-gw" or "vxlan"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.15.1"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP: "off" improves network performance; see docs/setup/calico.md for the prerequisites
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; BGP neighbors peer on this address, set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] network backend: bird, vxlan, none
CALICO_NETWORKING_BACKEND: "bird"

# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.19.3"

# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium] number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium] image version
cilium_ver: "v1.4.1"

# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane; defaults to the first master
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router] public clouds impose restrictions that usually require ipinip to stay on; in your own environment this can be set to "subnet"
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: "true"

# [kube-router] image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router] offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# coredns auto-install
dns_install: "no"   # default is "yes"; "no" here means installing it manually
corednsVer: "1.8.6"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.21.1"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# metrics-server auto-install
metricsserver_install: "no"  # install as needed
metricsVer: "v0.5.2"

# dashboard auto-install
dashboard_install: "no"      # install as needed
dashboardVer: "v2.4.0"
dashboardMetricsScraperVer: "v1.0.7"

# ingress auto-install
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "10.3.0"

# prometheus auto-install
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

# nfs-provisioner auto-install
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
# harbor version (full version string)
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false     # enable these as needed to install the extra components
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

Install k8s

See the official kubeasz documentation for the details of each step.

root@master1:/etc/kubeasz# pwd
/etc/kubeasz

root@master1:/etc/kubeasz# ./ezctl setup k8s-01 01  # step 01: prepare the nodes


root@master1:/etc/kubeasz# ./ezctl setup k8s-01 02  # step 02: install etcd


root@master1:/etc/kubeasz# ./ezctl setup k8s-01 03  # step 03: install docker


root@master1:/etc/kubeasz# ./ezctl setup k8s-01 04  # step 04: install the master nodes


root@master1:/etc/kubeasz# ./ezctl setup k8s-01 05  # step 05: add the worker nodes

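Once step 05 finishes, a quick sanity check from the deploy node (a sketch; output depends on your inventory):

kubectl get node -o wide   # all masters and workers should be Ready
kubectl get pod -A         # calico and the other system pods should be Running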

Add master and node members with kubeasz

Add a master

root@master1:/etc/kubeasz# pwd
/etc/kubeasz

# add a master (the error reported during this step can be ignored)
./ezctl add-master k8s-01 11.0.1.45


Add a node

root@master1:/etc/kubeasz# ./ezctl add-node k8s-01 11.0.1.46

