Installing Kubernetes on Debian 11

1. Install Debian in a virtual machine

To work around slow Debian installs, install from the CD image, choose the minimal install, and skip network setup during installation.

I installed the master first and cloned that VM as the worker node. The clone caused problems later, so I reinstalled the worker from scratch, and this post records the resulting cluster setup.


Use bridged networking.


Use the graphical installer; the individual steps are covered by standard tutorials and not repeated here.


Network configuration

The VM is already bridged to the physical network, so simply assign it a static IP address.

Edit /etc/network/interfaces:

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

auto ens33
iface ens33 inet static
# IPv4 address
address 192.168.31.31
# subnet mask
netmask 255.255.255.0
# gateway
gateway 192.168.31.1
# DNS
dns-nameservers 8.8.8.8


Restart networking:

systemctl restart networking


Install the SSH service

apt-get update

apt install -y openssh-server

systemctl start ssh

Edit the config to allow SSH login:

nano /etc/ssh/sshd_config

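The screenshot showed the relevant sshd_config edits; a minimal sketch, assuming you want password login for root (tighten this for production):

# /etc/ssh/sshd_config (assumed settings; adjust to your security policy)
PermitRootLogin yes
PasswordAuthentication yes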

systemctl restart ssh


Switch the apt mirrors and upgrade the kernel

nano /etc/apt/sources.list

# Source mirrors are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye main contrib non-free
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-updates main contrib non-free
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-backports main contrib non-free
deb https://mirrors.tuna.tsinghua.edu.cn/debian-security bullseye-security main contrib non-free

sudo apt-get update && sudo apt-get dist-upgrade

Install the backports kernel:

apt -t bullseye-backports install linux-image-amd64
apt -t bullseye-backports install linux-headers-amd64
reboot

Prerequisites

Initialize every node as follows.

Set each node's hostname:
hostnamectl set-hostname k8s-master-1 && hostname   # on the master
hostnamectl set-hostname k8s-node-2 && hostname     # on the worker

Add both nodes to /etc/hosts on every node:
cat >> /etc/hosts <<EOF
192.168.31.31  k8s-master-1
192.168.31.32  k8s-node-2
EOF

With the nodes prepared, install the Kubernetes cluster.

2. Install Docker

Install via APT

The apt repository is served over HTTPS so packages cannot be tampered with in transit; first install the HTTPS transport packages and CA certificates.

sudo apt-get update

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Given network conditions in mainland China, a domestic mirror is strongly recommended; the official source is kept in the comments.

To verify the downloaded packages, add the repository's GPG key.

curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


# Official source
#  curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Then add the Docker repository to sources.list.

On some Debian-based distributions, $(lsb_release -cs) does not return a Debian codename (for example on Kali Linux or BunsenLabs Linux). There, replace $(lsb_release -cs) in the command below with a Debian codename listed under https://mirrors.aliyun.com/docker-ce/linux/debian/dists/, such as buster.

echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://mirrors.aliyun.com/docker-ce/linux/debian \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


# Official source
#   echo \
#   "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
#   $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

The command above adds the stable Docker APT source; for the test channel, change stable to test.

Install Docker

Update the apt package cache and install docker-ce:

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io

Start Docker:

sudo systemctl enable docker
sudo systemctl start docker

Create the docker user group

By default, the docker CLI talks to the Docker engine over a Unix socket, and only root and members of the docker group may access it. Since working directly as root is discouraged, the better practice is to add the users who need Docker to the docker group.

Create the docker group:

sudo groupadd docker

Add the current user to the docker group:

sudo usermod -aG docker $USER

Log out of the current terminal, log back in, and run the test below.
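Alternatively, if you prefer not to re-login, newgrp starts a subshell with the new group membership applied (a convenience not in the original):

newgrp docker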

Verify that Docker is installed correctly:
docker run --rm hello-world

Registry mirror

Configure a registry mirror by editing the daemon config file /etc/docker/daemon.json:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker

Kubernetes environment configuration

Disable swap

sudo swapoff -a
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
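You can confirm swap is gone (the Swap line should report 0B):

free -h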

Synchronize the timezone on every node

# Set the system timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
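Optionally verify the clock and enable NTP sync; Debian 11 ships systemd-timesyncd, so this should work out of the box:

timedatectl set-ntp true
timedatectl status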

Disable the firewall on every node

systemctl disable nftables.service && systemctl stop nftables.service && systemctl status nftables.service

Set the bridge netfilter parameters Kubernetes needs

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
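Writing the file alone does nothing until the br_netfilter module is loaded and the settings are applied; a short follow-up that also makes the module load persist across reboots:

sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
sudo sysctl --system

# verify
sysctl net.bridge.bridge-nf-call-iptables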

3. Install kubeadm (required on all nodes)

3.1 Install the container runtime (cri-dockerd)

Project: https://github.com/Mirantis/cri-dockerd

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.13/cri-dockerd-0.3.13.amd64.tgz
tar xf cri-dockerd-0.3.13.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/

3.1.1 Create the service unit file by running:

cat <<"EOF" > /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s


# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

3.1.2 Create the socket unit file by running:

cat <<"EOF" > /usr/lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

3.1.3 Start cri-docker and enable it at boot

systemctl daemon-reload
systemctl enable cri-docker --now
systemctl is-active cri-docker

3.2 Install kubeadm, kubelet, and kubectl

Installation reference

3.2.1 Install the packages needed to use the Kubernetes apt repository:

sudo apt-get update

sudo apt-get install -y apt-transport-https ca-certificates curl gpg

3.2.2 Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories, so you can ignore the version in the URL:

# If the /etc/apt/keyrings directory does not exist, create it first:
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

3.2.3 Add the appropriate Kubernetes apt repository. Note that this repository contains packages for Kubernetes 1.29 only; for other minor versions, change the minor version in the URL to match (and make sure you are reading the documentation for the version you plan to install):

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

3.2.4 Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
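Confirm the pinned versions before continuing:

kubeadm version -o short
kubelet --version
kubectl version --client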

3.2.5 (Optional) Enable the kubelet service before running kubeadm:

sudo systemctl enable --now kubelet

3.2.6 Pull the images each machine needs

# List the image versions k8s requires
kubeadm config images list

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.29.4
kube-controller-manager:v1.29.4
kube-scheduler:v1.29.4
kube-proxy:v1.29.4
coredns:v1.11.1
pause:3.9
etcd:3.5.12-0
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF


sudo chmod +x ./images.sh && ./images.sh

# Check that all 7 images were pulled (pull any missing one individually)
docker images

If apt installed kubelet v1.29.5 rather than v1.29.4, pull the matching v1.29.5 images instead:

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.29.5
kube-controller-manager:v1.29.5
kube-scheduler:v1.29.5
kube-proxy:v1.29.5
coredns:v1.11.1
pause:3.9
etcd:3.5.12-0
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF

3.3 Configure cri-dockerd

This was already done above; just confirm that --pod-infra-container-image in /usr/lib/systemd/system/cri-docker.service points at the pause image you pulled:
grep registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 /usr/lib/systemd/system/cri-docker.service

# Restart
sudo systemctl daemon-reload && sudo systemctl restart cri-docker

# Check
cri-dockerd --version

4. Build the cluster

4.1 Initialize the cluster (run on the master)

kubeadm init \
--apiserver-advertise-address=192.168.31.31 \
--control-plane-endpoint=k8s-master-1 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version=v1.29.4 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.114.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock \
--upload-certs


If your environment differs (the author re-ran this on another network with v1.29.5), adjust the address and version accordingly:

kubeadm init \
--apiserver-advertise-address=192.168.159.139 \
--control-plane-endpoint=k8s-master-1 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version=v1.29.5 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.114.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock \
--upload-certs

Run the following three commands so the current user can use kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
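Verify kubectl access; the master will report NotReady until the network plugin below is installed:

kubectl get nodes
kubectl get pods -A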

Install the Calico network plugin (any other CNI works too); see the notes below for the parameters.

# The latest calico manifest is published at:
# https://docs.projectcalico.org/manifests/calico.yaml

# Download the v3.25 manifest
curl -O https://calico-v3-25.netlify.app/archive/v3.25/manifests/calico.yaml

# Install it with kubectl
kubectl apply -f calico.yaml
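Watch the calico and coredns pods until they are Running before joining workers:

kubectl get pods -n kube-system -w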

4.2 Join the worker nodes to the cluster

# If you lost the join command, regenerate it with:
kubeadm token create --print-join-command

# Join the worker to the cluster; run on the worker
kubeadm join k8s-master-1:6443 --token rn8et2.livlh6vrwjs4c6gj \
    --discovery-token-ca-cert-hash sha256:f0b9e0ba6d62b107cd51a1b4a2ced604e65f5b5d4c2faab4616895b32b90019a \
    --cri-socket=unix:///var/run/cri-dockerd.sock

On success you will see:

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

As before, wait until all pods are ready before moving on.

I hit the following error:

root@k8s-node-2:~# kubeadm join k8s-master-1:6443 --token 0yncwd.j7xvheb2phj6o8q6 --discovery-token-ca-cert-hash sha256:9c98ae6274330d6c3db163d582e707e6fae8b51883cbbec0d0f3c13fefb1c85a 
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher


Cause: the CRI socket was not specified and the host has two CRI endpoints.

Fix: append the following option to the join command:

--cri-socket unix:///var/run/cri-dockerd.sock

5. Install NFS

5.1 Install and configure the NFS server

# Install nfs-kernel-server
sudo apt-get update
sudo apt install -y nfs-kernel-server nfs-common

Create the export directories

An export directory is shared with NFS clients and can be any directory on the server; here we create new ones.

mkdir -p /mnt/basic-service-delete/
mkdir -p /mnt/basic-service-reserve/
mkdir -p /mnt/business-service-delete/
mkdir -p /mnt/data/
# The chown/chmod steps below matter; without them, clients may hit access-denied errors
sudo chown nobody:nogroup /mnt/basic-service-reserve/
sudo chmod 777 /mnt/basic-service-reserve/

sudo chown nobody:nogroup /mnt/basic-service-delete/
sudo chmod 777 /mnt/basic-service-delete/


sudo chown nobody:nogroup /mnt/business-service-delete/
sudo chmod 777 /mnt/business-service-delete/

sudo chown nobody:nogroup /mnt/data/
sudo chmod 777 /mnt/data/

# In practice, running just these chmod commands is enough
sudo chmod 777 /mnt/basic-service-reserve/
sudo chmod 777 /mnt/basic-service-delete/
sudo chmod 777 /mnt/business-service-delete/
sudo chmod 777 /mnt/data/
Grant clients access through the NFS exports file

Run the following on the master:

echo "/mnt/basic-service-delete/ *(insecure,rw,sync,no_root_squash,no_subtree_check)
/mnt/basic-service-reserve/ *(insecure,rw,sync,no_root_squash,no_subtree_check)
/mnt/business-service-delete/ *(insecure,rw,sync,no_root_squash,no_subtree_check)
/mnt/data/ *(insecure,rw,sync,no_root_squash,no_subtree_check)" > /etc/exports
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

Export the shared directories:
exportfs -r

# Check that the configuration took effect
exportfs
Open the firewall

NFS itself uses the fixed ports 111 and 2049, but mountd picks a dynamic port that must be pinned before the firewall can allow it. (/etc/sysconfig/nfs is the RHEL-style location; on Debian, the equivalents live in /etc/default/nfs-kernel-server and /etc/default/nfs-common.)

# /etc/default/nfs-kernel-server: pin mountd
RPCMOUNTDOPTS="--manage-gids --port 4001"
# /etc/default/nfs-common: pin statd
STATDOPTS="--port 4002"
# lockd ports can be pinned via sysctl:
# fs.nfs.nlm_tcpport = 4003
# fs.nfs.nlm_udpport = 4003

sudo iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 2049 -j ACCEPT

sudo iptables -A INPUT -p tcp --dport  111 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 111 -j ACCEPT
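If you pinned the mountd/statd/lockd ports as above, open those too (a sketch matching the port numbers chosen above):

sudo iptables -A INPUT -p tcp --dport 4001:4004 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 4001:4004 -j ACCEPT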
# Reload the export configuration
sudo exportfs -ra
Install the NFS client on the workers

Install the NFS client package:

sudo apt install -y nfs-common

Run the following on every worker node:

showmount -e 192.168.31.31
mkdir -p /mnt/basic-service-delete/
mkdir -p /mnt/basic-service-reserve/
mkdir -p /mnt/business-service-delete/
mkdir -p /mnt/data/

mount -t nfs 192.168.31.31:/mnt/basic-service-delete/ /mnt/basic-service-delete/
mount -t nfs 192.168.31.31:/mnt/basic-service-reserve/ /mnt/basic-service-reserve/
mount -t nfs 192.168.31.31:/mnt/business-service-delete/ /mnt/business-service-delete/
mount -t nfs 192.168.31.31:/mnt/data/ /mnt/data/
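To make these mounts survive a reboot, you can append them to /etc/fstab (a sketch, assuming the server address above):

cat >> /etc/fstab <<EOF
192.168.31.31:/mnt/basic-service-delete/ /mnt/basic-service-delete/ nfs defaults 0 0
192.168.31.31:/mnt/basic-service-reserve/ /mnt/basic-service-reserve/ nfs defaults 0 0
192.168.31.31:/mnt/business-service-delete/ /mnt/business-service-delete/ nfs defaults 0 0
192.168.31.31:/mnt/data/ /mnt/data/ nfs defaults 0 0
EOF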
NFS dynamic provisioning
# Download the provisioner release directly:
wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/archive/refs/tags/nfs-subdir-external-provisioner-4.0.18.tar.gz
# Unpack it
tar -zxvf nfs-subdir-external-provisioner-4.0.18.tar.gz

# Change into the deploy directory
cd nfs-subdir-external-provisioner-nfs-subdir-external-provisioner-4.0.18/deploy/

The file we need to modify is deployment.yaml:

vim deployment.yaml

# The upstream image is hosted on registry.k8s.io, which is unreachable from mainland China.
# The author pulled it and re-hosted it on Aliyun; use that copy instead:
# image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2

# List every manifest that still references namespace: default
yamls=$(grep -rl 'namespace: default' ./)
for yaml in ${yamls}; do
  echo ${yaml}
  grep 'namespace: default' ${yaml}
done

# Create a dedicated namespace for the provisioner to keep things manageable (named base-cluster here); a plain kubectl command is simpler than a YAML file:
kubectl create namespace base-cluster

# Quite a few files reference the namespace, so change them all with one line:
sed -i 's/namespace: default/namespace: base-cluster/g' `grep -rl 'namespace: default' ./`

# Confirm the namespace references were updated
yamls=$(grep -rl 'namespace:' ./)
for yaml in ${yamls}; do
  echo ${yaml}
  grep 'namespace:' ${yaml}
done

# Install everything in the directory
kubectl apply -k .
# (to uninstall later: kubectl delete -k .)

# Verify
kubectl get pod -n base-cluster
kubectl exec -it <pod-name> -- /bin/sh

The modified deployment.yaml:

nano deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: base-cluster
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.31.31
            - name: NFS_PATH
              value: /mnt/basic-service-reserve/
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.31.31
            path: /mnt/basic-service-reserve/

6. Install the Kuboard dashboard

Create kuboard-v3.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: kuboard

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kuboard-v3-config
  namespace: kuboard
data:
  # For an explanation of these parameters, see https://kuboard.cn/install/v3/install-built-in.html
  # [common]
  KUBOARD_ENDPOINT: 'http://192.168.31.31:30080'
  KUBOARD_AGENT_SERVER_UDP_PORT: '30081'
  KUBOARD_AGENT_SERVER_TCP_PORT: '30081'
  KUBOARD_SERVER_LOGRUS_LEVEL: info  # error / debug / trace
  # KUBOARD_AGENT_KEY is the key agents use when talking to Kuboard; change it to any 32-character alphanumeric string. If it changes, delete the Kuboard agents and re-import them.
  KUBOARD_AGENT_KEY: 32b7d6572c6255211b4eec9009e4a816

  # For these parameters, see https://kuboard.cn/install/v3/install-gitlab.html
  # [gitlab login]
  # KUBOARD_LOGIN_TYPE: "gitlab"
  # KUBOARD_ROOT_USER: "your-user-name-in-gitlab"
  # GITLAB_BASE_URL: "http://gitlab.mycompany.com"
  # GITLAB_APPLICATION_ID: "7c10882aa46810a0402d17c66103894ac5e43d6130b81c17f7f2d8ae182040b5"
  # GITLAB_CLIENT_SECRET: "77c149bd3a4b6870bffa1a1afaf37cba28a1817f4cf518699065f5a8fe958889"
  
  # For these parameters, see https://kuboard.cn/install/v3/install-github.html
  # [github login]
  # KUBOARD_LOGIN_TYPE: "github"
  # KUBOARD_ROOT_USER: "your-user-name-in-github"
  # GITHUB_CLIENT_ID: "17577d45e4de7dad88e0"
  # GITHUB_CLIENT_SECRET: "ff738553a8c7e9ad39569c8d02c1d85ec19115a7"

  # For these parameters, see https://kuboard.cn/install/v3/install-ldap.html
  # [ldap login]
  # KUBOARD_LOGIN_TYPE: "ldap"
  # KUBOARD_ROOT_USER: "your-user-name-in-ldap"
  # LDAP_HOST: "ldap-ip-address:389"
  # LDAP_BIND_DN: "cn=admin,dc=example,dc=org"
  # LDAP_BIND_PASSWORD: "admin"
  # LDAP_BASE_DN: "dc=example,dc=org"
  # LDAP_FILTER: "(objectClass=posixAccount)"
  # LDAP_ID_ATTRIBUTE: "uid"
  # LDAP_USER_NAME_ATTRIBUTE: "uid"
  # LDAP_EMAIL_ATTRIBUTE: "mail"
  # LDAP_DISPLAY_NAME_ATTRIBUTE: "cn"
  # LDAP_GROUP_SEARCH_BASE_DN: "dc=example,dc=org"
  # LDAP_GROUP_SEARCH_FILTER: "(objectClass=posixGroup)"
  # LDAP_USER_MACHER_USER_ATTRIBUTE: "gidNumber"
  # LDAP_USER_MACHER_GROUP_ATTRIBUTE: "gidNumber"
  # LDAP_GROUP_NAME_ATTRIBUTE: "cn"

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kuboard-etcd
  namespace: kuboard
  labels:
    app: kuboard-etcd
spec:
  serviceName: kuboard-etcd
  replicas: 3
  selector:
    matchLabels:
      app: kuboard-etcd
  template:
    metadata:
      name: kuboard-etcd
      labels:
        app: kuboard-etcd
    spec:
      containers:
      - name: kuboard-etcd
        image: swr.cn-east-2.myhuaweicloud.com/kuboard/etcd:v3.4.14
        ports:
        - containerPort: 2379
          name: client
        - containerPort: 2380
          name: peer
        env:
        - name: KUBOARD_ETCD_ENDPOINTS
          value: >-
            kuboard-etcd-0.kuboard-etcd:2379,kuboard-etcd-1.kuboard-etcd:2379,kuboard-etcd-2.kuboard-etcd:2379
        volumeMounts:
        - name: data
          mountPath: /data
        command:
          - /bin/sh
          - -c
          - |
            PEERS="kuboard-etcd-0=http://kuboard-etcd-0.kuboard-etcd:2380,kuboard-etcd-1=http://kuboard-etcd-1.kuboard-etcd:2380,kuboard-etcd-2=http://kuboard-etcd-2.kuboard-etcd:2380"
            exec etcd --name ${HOSTNAME} \
              --listen-peer-urls http://0.0.0.0:2380 \
              --listen-client-urls http://0.0.0.0:2379 \
              --advertise-client-urls http://${HOSTNAME}.kuboard-etcd:2379 \
              --initial-advertise-peer-urls http://${HOSTNAME}:2380 \
              --initial-cluster-token kuboard-etcd-cluster-1 \
              --initial-cluster ${PEERS} \
              --initial-cluster-state new \
              --data-dir /data/kuboard.etcd
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      # Fill in a valid StorageClass name
      storageClassName: nfs-client 
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 5Gi


---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kuboard-data-pvc
  namespace: kuboard 
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

---
apiVersion: v1
kind: Service
metadata:
  name: kuboard-etcd
  namespace: kuboard
spec:
  type: ClusterIP
  ports:
  - port: 2379
    name: client
  - port: 2380
    name: peer
  selector:
    app: kuboard-etcd

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '9'
    k8s.kuboard.cn/ingress: 'false'
    k8s.kuboard.cn/service: NodePort
    k8s.kuboard.cn/workload: kuboard-v3
  labels:
    k8s.kuboard.cn/name: kuboard-v3
  name: kuboard-v3
  namespace: kuboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/name: kuboard-v3
  template:
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-v3
    spec:
      containers:
        - env:
            - name: KUBOARD_ETCD_ENDPOINTS
              value: >-
                kuboard-etcd-0.kuboard-etcd:2379,kuboard-etcd-1.kuboard-etcd:2379,kuboard-etcd-2.kuboard-etcd:2379
          envFrom:
            - configMapRef:
                name: kuboard-v3-config
          image: 'swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3'
          imagePullPolicy: Always
          name: kuboard
          volumeMounts:
            - mountPath: "/data"
              name: kuboard-data
      volumes:
      - name: kuboard-data
        persistentVolumeClaim:
          claimName: kuboard-data-pvc

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.kuboard.cn/workload: kuboard-v3
  labels:
    k8s.kuboard.cn/name: kuboard-v3
  name: kuboard-v3
  namespace: kuboard
spec:
  ports:
    - name: webui
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: 80
    - name: agentservertcp
      nodePort: 30081
      port: 10081
      protocol: TCP
      targetPort: 10081
    - name: agentserverudp
      nodePort: 30081
      port: 10081
      protocol: UDP
      targetPort: 10081
  selector:
    k8s.kuboard.cn/name: kuboard-v3
  sessionAffinity: None
  type: NodePort
  
Apply the manifest and check the pods:

kubectl apply -f kuboard-v3.yaml
kubectl get pod -n kuboard


Open the cluster page and take a look around.
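A quick way to confirm the NodePort and reach the UI (per the Kuboard docs, the default account is admin / Kuboard123; change it after first login):

kubectl get svc -n kuboard
# then browse to http://192.168.31.31:30080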

