[K8S] Installing a Kubernetes Cluster on UTM Virtual Machines

Binary installation reference: https://www.cnblogs.com/security-guard/p/15356262.html
Official download links: https://www.downloadkubernetes.com

1. Pre-installation Preparation

Machine: MacBook Pro 2020, 16 GB RAM, M1
OS: macOS Monterey 12.1
Hypervisor: UTM
Three VM nodes (image: CentOS-7-aarch64-Minimal-2009)

Cluster plan (3 nodes):
node1 (192.168.64.5): master
node2 (192.168.64.6): worker
node3 (192.168.64.7): worker

1.1 System Initialization

# 1. Set the hostname (use node2 / node3 on the other machines)
hostnamectl set-hostname node1

# 2. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# 3. Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# 4. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# 5. Add hosts entries on the master (node1); adding them on every node as well avoids the hostname warnings seen later during kubeadm join
cat >> /etc/hosts << EOF
192.168.64.5 node1
192.168.64.6 node2
192.168.64.7 node3
EOF
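
A quick sanity check that the entries resolve (run on each node once its hosts file is populated):

# every hostname should answer a ping
for h in node1 node2 node3; do ping -c 1 -W 1 $h; done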

# 6. Synchronize the clocks
yum install ntpdate -y
ntpdate cn.ntp.org.cn  # if this server does not respond, try another public NTP server
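
A one-shot ntpdate drifts again over time; a cron entry keeps the clocks aligned (a small sketch reusing the same NTP server):

# re-sync every 30 minutes
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate cn.ntp.org.cn") | crontab -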


# 7. Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Load the br_netfilter module
modprobe br_netfilter
# Verify that it is loaded:
# lsmod | grep br_netfilter
br_netfilter          262144  0
bridge                327680  1 br_netfilter

sysctl --system  # apply the settings
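
To confirm the settings are active, query the same keys that were written to k8s.conf above:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward  # both should print = 1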

# 8. Make sure the MAC address and product_uuid are unique across the nodes
# check each node's MAC addresses
ifconfig -a | grep ether
# check each node's product_uuid
sudo cat /sys/class/dmi/id/product_uuid

1.2 Enable IPVS

In Kubernetes, a Service can be proxied in two modes: one based on iptables and one based on IPVS. IPVS performs better than iptables, but using it requires manually loading the IPVS kernel modules.

# Install ipset and ipvsadm
$ yum -y install ipset ipvsadm

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Make the script executable and run it:
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

Check that the modules are loaded:
$ lsmod | grep -e ipvs -e nf_conntrack_ipv4
nf_conntrack_ipv4     262144  0
nf_defrag_ipv4        262144  1 nf_conntrack_ipv4
nf_conntrack          327680  2 nf_conntrack_ipv4,ip_vs
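
Note: nf_conntrack_ipv4 exists on the stock 3.10 kernel that CentOS 7 ships; on kernels 4.19 and newer it was merged into nf_conntrack, and the corresponding modprobe line would need adjusting. Check the kernel with:

$ uname -r   # 3.10.x on stock CentOS 7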

1.3 Install Docker

Kubernetes supports a number of container runtimes, such as Docker, containerd, and CRI-O. Docker is still the mainstream choice, so it is installed here.

1.3.1 Installation

# Add the repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Install a pinned version
yum -y install docker-ce-19.03.9-3.el7

# Start Docker and enable it at boot
systemctl enable docker && systemctl start docker

# docker version
Client: Docker Engine - Community
 Version:           20.10.13
 API version:       1.40
 Go version:        go1.16.15
 Git commit:        a224086
 Built:             Thu Mar 10 14:08:05 2022
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.9
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9d98839
  Built:            Fri May 15 00:24:27 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.5.10
  GitCommit:        2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
 runc:
  Version:          1.0.3
  GitCommit:        v1.0.3-0-gf46b6ba
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
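
It is worth checking Docker's cgroup driver right away, since a mismatch with the kubelet will surface later (see the kubelet troubleshooting in section 2.2):

docker info | grep -i "cgroup driver"   # a fresh install reports cgroupfs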

Troubleshooting

# The installation failed with:
--> Finished Dependency Resolution
Error: Package: 3:docker-ce-19.03.9-3.el7.aarch64 (docker-ce-stable)
          Requires: container-selinux >= 2:2.74
Error: Package: containerd.io-1.5.10-3.1.el7.aarch64 (docker-ce-stable)
          Requires: container-selinux >= 2:2.74
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Fix: install container-selinux >= 2:2.74 first:
yum install https://mirrors.aliyun.com/centos-altarch/7.9.2009/extras/aarch64/Packages/container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm -y

1.3.2 Configure Docker Registry Mirrors

vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://uyqa6c1l.mirror.aliyuncs.com",
    "https://hub-mirror.c.163.com",
    "https://dockerhub.azk8s.cn",
    "https://reg-mirror.qiniu.com",
    "https://registry.docker-cn.com"
  ]
}

systemctl daemon-reload
systemctl restart docker

docker info   # the configured mirrors should be listed under "Registry Mirrors"

2. Installing Kubernetes

2.1 Add the Aliyun YUM Repository

https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.32d51b11vVlb09

# Add the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
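
To see which package versions the repository actually provides before pinning one (a standard yum query):

yum list kubelet kubeadm kubectl --showduplicates | grep 1.22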



2.2 Install kubeadm, kubelet, and kubectl (all nodes)

Pick a version: https://www.downloadkubernetes.com

# kubeadm, kubelet, and kubectl release frequently, so pin a specific stable version
$ yum install -y kubelet-1.22.7 kubectl-1.22.7 kubeadm-1.22.7


# To keep kubelet's cgroup driver consistent with Docker's, edit "/etc/sysconfig/kubelet":
$ vim /etc/sysconfig/kubelet
# set:
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
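
Note: in a kubeadm cluster kube-proxy runs as a DaemonSet and takes its mode from the kube-proxy ConfigMap, so the KUBE_PROXY_MODE line above may have no effect by itself; a sketch of switching the mode to ipvs once the cluster is up:

# set mode: "ipvs" in the kube-proxy configuration...
$ kubectl -n kube-system edit configmap kube-proxy
# ...then recreate the kube-proxy pods so they pick it up
$ kubectl -n kube-system delete pod -l k8s-app=kube-proxy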


$ systemctl enable kubelet

# List the images kubeadm will need
$ kubeadm config images list
I0312 21:34:05.743622    5929 version.go:255] remote version is much newer: v1.23.4; falling back to: stable-1.22
k8s.gcr.io/kube-apiserver:v1.22.7
k8s.gcr.io/kube-controller-manager:v1.22.7
k8s.gcr.io/kube-scheduler:v1.22.7
k8s.gcr.io/kube-proxy:v1.22.7
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
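
The images can also be pre-pulled through the Aliyun mirror on every node, so kubeadm init/join does not stall on downloads (a sketch using the same mirror flags as section 2.3):

# pre-pull the control-plane images from the Aliyun mirror
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.22.7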

Troubleshooting
kubelet fails to start

# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sat 2022-03-12 21:50:13 EST; 1s ago
     Docs: https://kubernetes.io/docs/
  Process: 15364 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 15364 (code=exited, status=1/FAILURE)

Mar 12 21:50:13 node1 systemd[1]: Unit kubelet.service entered failed state.
Mar 12 21:50:13 node1 systemd[1]: kubelet.service failed.


# Inspect the logs of the failed process to find the cause
$ journalctl _PID=15364 | less
3月 12 22:06:45 node1 kubelet[15364]: E0312 22:06:45.708421   15364 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""

Fix:

# The error shows that Docker's cgroup driver is cgroupfs while kubelet expects systemd,
# so switch Docker's cgroup driver to systemd:

vim /etc/docker/daemon.json
# add this key alongside "registry-mirrors":
"exec-opts": ["native.cgroupdriver=systemd"],

systemctl restart docker
docker info | grep -i "cgroup driver"   # should now report systemd

2.3 Deploy the Kubernetes Master Node (node1)

# The default registry k8s.gcr.io is unreachable from mainland China, so point kubeadm at the Aliyun mirror
kubeadm init \
  --apiserver-advertise-address=192.168.64.5 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=192.168.64.5:6443 \
  --kubernetes-version v1.22.7 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --upload-certs
# --token-ttl 0  # deliberately not added here, to exercise token expiry later

Flag reference:

  • --image-repository: the registry to pull control-plane images from; defaults to k8s.gcr.io.
  • --kubernetes-version: the version of the Kubernetes components; it must match the installed kubelet package version.
  • --control-plane-endpoint: a fixed access endpoint for the control plane (IP address or DNS name), used as the API Server address in the kubeconfig files of administrators and cluster components. It can be omitted for a single-control-plane deployment.
  • --pod-network-cidr: the Pod network address range in CIDR format; Flannel's default is 10.244.0.0/16, Project Calico's is 192.168.0.0/16.
  • --service-cidr: the Service network address range in CIDR format, 10.96.0.0/12 by default. Usually only Flannel-style plugins require it to be set explicitly.
  • --apiserver-advertise-address: the IP address the API Server advertises to the other components, normally the master node's cluster-internal IP; 0.0.0.0 means all available addresses on the node.
  • --token-ttl: lifetime of the shared bootstrap token, 24 hours by default; 0 means it never expires. A finite lifetime is recommended so that a leaked token cannot endanger the cluster indefinitely.

The init output is shown below; keep this log, it contains the join commands and tokens:

...
Your Kubernetes control-plane has initialized successfully!

# Run the following to start using the cluster
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

# Additional control-plane nodes can be added with the command below
You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.64.5:6443 --token qv4vub.h7aov4ae3z182y99 \
	--discovery-token-ca-cert-hash sha256:f3b753e4484154f11c9105427ca614a1e07dfc8ddaa167eec86c1cfed8cbfb7e \
	--control-plane --certificate-key a6c6f91d8d2360934b884eb6a5f65d8bad3a2be25c3da0e280de7ad2225668af

# The certificate key expires after two hours; the note below explains how to re-upload it
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

# Worker nodes are added with the command below
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.64.5:6443 --token qv4vub.h7aov4ae3z182y99 \
	--discovery-token-ca-cert-hash sha256:f3b753e4484154f11c9105427ca614a1e07dfc8ddaa167eec86c1cfed8cbfb7e

Alternatively, a fresh worker join command can be generated at any time:

# kubeadm token create --print-join-command
kubeadm join 192.168.64.5:6443 --token esl6lt.7jm2h1lc6oa077vp --discovery-token-ca-cert-hash sha256:f3b753e4484154f11c9105427ca614a1e07dfc8ddaa167eec86c1cfed8cbfb7e

2.4 Deploy the Worker Nodes (node2, node3)

$ kubeadm join 192.168.64.5:6443 --token qv4vub.h7aov4ae3z182y99 \
	--discovery-token-ca-cert-hash sha256:f3b753e4484154f11c9105427ca614a1e07dfc8ddaa167eec86c1cfed8cbfb7e
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "node2" could not be reached
	[WARNING Hostname]: hostname "node2": lookup node2 on [fe80::b0be:83ff:fed5:ce64%eth0]:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
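
Back on node1 the new nodes should now be listed; they stay NotReady until the CNI plugin from section 2.5 is deployed:

kubectl get nodes   # node2/node3 appear, STATUS NotReady for now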

2.5 Deploy the Flannel CNI Plugin

Kubernetes supports multiple network plugins, such as flannel, calico, and canal; any one of them will do. Flannel is used here.
https://github.com/flannel-io/flannel

## For Kubernetes v1.17+
# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# v0.17.0 is used here, but the raw manifest URL for that tag did not open:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.17.0/Documentation/kube-flannel.yml

# Workaround: download the release tarball first, then apply the manifest
wget https://github.com/flannel-io/flannel/archive/refs/tags/v0.17.0.tar.gz
tar zxf v0.17.0.tar.gz && cd flannel-0.17.0/Documentation
kubectl apply -f kube-flannel.yml


# kubectl get pods -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS      AGE
kube-system   coredns-7f6cbbb7b8-bsrj2        1/1     Running   0             70m
kube-system   coredns-7f6cbbb7b8-z24sw        1/1     Running   0             70m
kube-system   etcd-node1                      1/1     Running   0             70m
kube-system   kube-apiserver-node1            1/1     Running   0             70m
kube-system   kube-controller-manager-node1   1/1     Running   1 (40m ago)   70m
kube-system   kube-flannel-ds-d7rvn           1/1     Running   0             3m54s
kube-system   kube-flannel-ds-wclpk           1/1     Running   0             3m54s
kube-system   kube-flannel-ds-wmlbl           1/1     Running   0             3m54s
kube-system   kube-proxy-4v8gw                1/1     Running   0             25m
kube-system   kube-proxy-5khwr                1/1     Running   0             70m
kube-system   kube-proxy-xhgtz                1/1     Running   0             26m
kube-system   kube-scheduler-node1            1/1     Running   1 (40m ago)   70m

2.6 Configure Command Completion

# Install bash-completion
yum -y install bash-completion

# Load bash-completion
source /etc/profile.d/bash_completion.sh

# Load the environment variables
echo "export KUBECONFIG=/root/.kube/config" >> /root/.bash_profile
echo "source <(kubectl completion bash)" >> /root/.bash_profile
source /root/.bash_profile