Deploying Kubernetes 1.21 in Practice
I. Environment Preparation
1. Requirements
- A compatible Linux host. The Kubernetes project provides generic instructions for Debian-based and Red Hat-based distributions, as well as for distributions without a package manager (this walkthrough uses one master and two node hosts).
- 2 GB or more of RAM per machine (less leaves little room for your applications).
- Full network connectivity between all machines in the cluster (a public or private network is fine).
- No duplicate hostnames, MAC addresses, or product_uuid values among the nodes.
- Swap disabled. You MUST disable swap for the kubelet to work properly.
| IP | Hostname |
| --- | --- |
| 10.4.7.10 | k8smaster |
| 10.4.7.11 | k8snode1 |
| 10.4.7.12 | k8snode2 |
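The RAM and swap requirements above can be checked with a small read-only script before installing anything. This is just a sketch, not part of kubeadm's official preflight checks; the 2 GB threshold and the `/proc` paths mirror the requirements list, and the helper names are my own:

```shell
#!/usr/bin/env sh
# Preflight sketch: check the RAM and swap requirements listed above.

# mem_ok KB -> succeeds if KB is at least 2 GB (2 * 1024 * 1024 kB)
mem_ok() {
  [ "$1" -ge 2097152 ]
}

# swap_off FILE -> succeeds if FILE (default /proc/swaps) lists no active
# swap device, i.e. contains only the header line
swap_off() {
  [ "$(wc -l < "${1:-/proc/swaps}")" -le 1 ]
}

mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
mem_ok "$mem_kb" && echo "RAM OK (${mem_kb} kB)" || echo "RAM below 2 GB"
swap_off && echo "swap is off" || echo "swap is on: run 'swapoff -a' and comment the swap entry in /etc/fstab"
```

Note that `swapoff -a` only disables swap until the next reboot; commenting out the swap entry in /etc/fstab makes it permanent.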
2. Preparing the Hosts
1. Verify that the MAC address and product_uuid are unique on every node
- You can get the MAC addresses of the network interfaces with ip link or ifconfig -a.
- You can check the product_uuid with sudo cat /sys/class/dmi/id/product_uuid.
- Physical machines usually have unique values, but some virtual machines may share them. Kubernetes uses these values to uniquely identify the nodes in the cluster; if they are not unique on every node, the installation may fail.
2. Let iptables see bridged traffic
- Make sure the br_netfilter module is loaded. You can check with lsmod | grep br_netfilter; to load the module explicitly, run sudo modprobe br_netfilter.
- For iptables on your Linux nodes to correctly see bridged traffic, make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
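A quick read-only check confirms on each node that the module and sysctls took effect. This is a sketch; `expect_one` is a hypothetical helper, not a standard tool:

```shell
#!/usr/bin/env sh
# Sketch: verify the bridge settings applied above.
# expect_one NAME reads a sysctl value from stdin and checks it equals 1.
expect_one() {
  [ "$(cat)" = "1" ] && echo "$1 = 1 (OK)" || echo "$1 is NOT 1"
}

# On a live node:
#   lsmod | grep br_netfilter       # the module should be listed
#   sysctl -n net.bridge.bridge-nf-call-iptables  | expect_one net.bridge.bridge-nf-call-iptables
#   sysctl -n net.bridge.bridge-nf-call-ip6tables | expect_one net.bridge.bridge-nf-call-ip6tables
```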
3. Installing Docker
Kubernetes 1.21 supports Docker 20.10, so pin the Docker version when installing.
- Configure the Docker package source and install
Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Step 2: add the repository information
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Step 3: point the repository at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
Step 4: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum install -y docker-ce-20.10.15
Step 5: start the Docker service
sudo systemctl start docker
Step 6: adjust the Docker daemon configuration
Edit /etc/docker/daemon.json (the kubelet expects the systemd cgroup driver):
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"registry-mirrors": ["https://ud7j6x4v.mirror.aliyuncs.com"]
}
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
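After the restart it is worth confirming that the daemon actually picked up the systemd cgroup driver configured above. This is a sketch; `cgroup_driver` is a hypothetical helper that parses `docker info` style output:

```shell
#!/usr/bin/env sh
# Sketch: confirm Docker is using the systemd cgroup driver set in daemon.json.
# cgroup_driver reads `docker info` output from stdin and prints the driver.
cgroup_driver() {
  awk -F': ' '/Cgroup Driver/ {print $2}'
}

# On a live host with Docker running:
#   docker info 2>/dev/null | cgroup_driver   # should print: systemd
```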
4. Installing kubeadm, kubelet, and kubectl
1. Point yum at the Aliyun Kubernetes mirror, then install kubeadm, kubelet, and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Disable SELinux by setting SELINUX=disabled in /etc/selinux/config:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
5. Configuring the hosts file
Add all three nodes to /etc/hosts on every machine (vi /etc/hosts):
10.4.7.10 k8smaster
10.4.7.11 k8snode1
10.4.7.12 k8snode2
II. Creating a Cluster with kubeadm
1. Pulling the Images
The default k8s.gcr.io registry is not reachable from mainland China, so the images have to be pulled through a domestic mirror.
You can pull them manually with: kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
If pulling coredns fails, pull it separately with the following command:
docker pull coredns/coredns:1.8.0
2. Retagging the Images
kubeadm expects the k8s.gcr.io image names, so retag the mirrored images. Create a script tags.sh (the IDs below are the local image IDs of the pulled images and will differ on your machines):
#!/usr/bin/bash
echo "begin retagging images"
docker tag 91fafe96fdb0 k8s.gcr.io/kube-apiserver:v1.21.12
docker tag 0e15b54208a5 k8s.gcr.io/kube-controller-manager:v1.21.12
docker tag b08b5dafd696 k8s.gcr.io/kube-scheduler:v1.21.12
docker tag d8d3246c8e90 k8s.gcr.io/kube-proxy:v1.21.12
docker tag 0f8457a4c2ec k8s.gcr.io/pause:3.4.1
docker tag 0369cf4303ff k8s.gcr.io/etcd:3.4.13-0
docker tag 296a6d5035e2 k8s.gcr.io/coredns/coredns:v1.8.0
echo "image retagging complete"
docker images
Make the script executable and run it:
chmod u+x tags.sh
./tags.sh
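Because those image IDs are machine-specific, a more portable variant retags by name instead. This is a sketch; `src_to_dst` is a hypothetical helper that maps an Aliyun-mirror reference to its k8s.gcr.io name, including the special case for coredns, which lives under a coredns/ sub-path on k8s.gcr.io:

```shell
#!/usr/bin/env sh
# Sketch: retag mirrored images by name instead of by local image ID.
# src_to_dst maps an Aliyun-mirror image reference to its k8s.gcr.io name.
src_to_dst() {
  echo "$1" | sed \
    -e 's#^registry.aliyuncs.com/google_containers/#k8s.gcr.io/#' \
    -e 's#^k8s.gcr.io/coredns:#k8s.gcr.io/coredns/coredns:#'
}

# On a host with the images already pulled:
#   kubeadm config images list --image-repository registry.aliyuncs.com/google_containers \
#   | while read -r src; do docker tag "$src" "$(src_to_dst "$src")"; done
```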
3. Initializing the Cluster
kubeadm init --control-plane-endpoint "10.4.7.10:6443" \
--pod-network-cidr 172.16.0.0/16 \
--service-cidr 10.96.0.0/16
4. Verifying the Cluster
If the following output appears on screen, the installation succeeded:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 10.4.7.10:6443 --token c1mdzt.jtw965664ryh1fsy \
--discovery-token-ca-cert-hash sha256:dc198d5e0913a3e190e8126667cbf407fed02576a9c59812a0d7f84ca72aafcf \
--control-plane --certificate-key c6031bc4183b7bc246b709dcb61666f85c2bdd62c792a9c541ded7a1f33bf0d0
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.4.7.10:6443 --token c1mdzt.jtw965664ryh1fsy \
--discovery-token-ca-cert-hash sha256:dc198d5e0913a3e190e8126667cbf407fed02576a9c59812a0d7f84ca72aafcf
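Once the workers have joined, the cluster state can be checked with kubectl. This assumes the kubeconfig was set up as shown in the init output above; `nodes_ready` is a hypothetical helper for counting Ready nodes:

```shell
#!/usr/bin/env sh
# Sketch: sanity-check the cluster after the workers join.
# nodes_ready reads `kubectl get nodes --no-headers` output from stdin
# and prints how many nodes report Ready in the STATUS column.
nodes_ready() {
  awk '$2 == "Ready" {n++} END {print n+0}'
}

# On the master:
#   kubectl get nodes --no-headers | nodes_ready   # should eventually reach 3
#   kubectl get pods -n kube-system                # control-plane pods should be Running
# Nodes stay NotReady until a pod network add-on is applied, as the
# kubeadm output above notes.
```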
