[K8s] Quickly deploying a Kubernetes 1.30 environment with kubeadm (one master, two workers)
K8s --- Installation and Deployment
Official site
Note: Kubernetes has never had a long-term support (LTS) release.
Installing the new 1.30.3 release
Reference: https://egonlin.com/?p=10762
Two ways to install Kubernetes
- kubeadm tool
  - Container runtime and kubelet: installed with yum.
  - Other components: packaged as images; kubeadm starts them from those images.
  - Static Pods: kubeadm runs these containers as static Pods (not managed by any controller; the kubelet restarts them automatically if they die).
  - In short: static Pods + container runtime + kubelet = a Kubernetes environment.
- Binary installation
  - Install every component via yum or from source.
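The static-Pod point above is easy to verify once a cluster is up: kubeadm writes one manifest per control-plane component into the kubelet's static-Pod directory (the path below assumes the default `staticPodPath`):

```shell
# kubelet watches this directory; every manifest in it runs as a static Pod
# that kubelet itself restarts if the container dies.
ls /etc/kubernetes/manifests/
# on a kubeadm master, typically: etcd.yaml kube-apiserver.yaml
# kube-controller-manager.yaml kube-scheduler.yaml
```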
Installing via kubeadm
Environment preparation
- Three machines, each with at least 2 GB of RAM:
  - 192.168.41.11: master
  - 192.168.41.12: node1
  - 192.168.41.13: node2
Set the hostnames and add host entries
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
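The heading above mentions host resolution, but the commands only set hostnames. A sketch of the matching `/etc/hosts` entries, run on every node; the short aliases `master`/`node1`/`node2` are assumptions chosen to line up with the `ssh-copy-id` and `scp` targets used later:

```shell
# Map each node's IP to its hostname plus a short alias
cat >> /etc/hosts << 'EOF'
192.168.41.11 k8s-master master
192.168.41.12 k8s-node1 node1
192.168.41.13 k8s-node2 node2
EOF
```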
Disable the firewall, swap, and SELinux
systemctl stop firewalld
systemctl disable firewalld
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
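A quick check that swap and SELinux enforcement are really off (SELinux only reports `Disabled` after the next reboot applies the config change):

```shell
free -h | grep -i swap   # the Swap line should show 0B
getenforce               # Permissive now; Disabled after a reboot
```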
SSH service tuning
# 1. Speed up logins
sed -ri 's@^#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
sed -ri 's#^GSSAPIAuthentication yes#GSSAPIAuthentication no#g' /etc/ssh/sshd_config
grep ^UseDNS /etc/ssh/sshd_config
grep ^GSSAPIAuthentication /etc/ssh/sshd_config
systemctl restart sshd
# 2. Key-based login (run on the master node)
ssh-keygen
ssh-copy-id -i root@master
ssh-copy-id -i root@node1
ssh-copy-id -i root@node2
Raise the open-file limit (reconnect the session for it to take effect)
cat > /etc/security/limits.d/k8s.conf << EOF
* soft nofile 65535
* hard nofile 131070
EOF
ulimit -Sn
ulimit -Hn
Configure automatic module loading on all nodes
modprobe br_netfilter
modprobe ip_conntrack
cat >>/etc/rc.sysinit<<EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
echo "modprobe br_netfilter" >/etc/sysconfig/modules/br_netfilter.modules
echo "modprobe ip_conntrack" >/etc/sysconfig/modules/ip_conntrack.modules
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
chmod 755 /etc/sysconfig/modules/ip_conntrack.modules
lsmod | grep br_netfilter
Synchronize cluster time
# 1. Install chrony
yum -y install chrony
# 2. Edit the configuration file
mv /etc/chrony.conf /etc/chrony.conf.bak
cat > /etc/chrony.conf << EOF
server ntp1.aliyun.com iburst minpoll 4 maxpoll 10
server ntp2.aliyun.com iburst minpoll 4 maxpoll 10
server ntp3.aliyun.com iburst minpoll 4 maxpoll 10
server ntp4.aliyun.com iburst minpoll 4 maxpoll 10
server ntp5.aliyun.com iburst minpoll 4 maxpoll 10
server ntp6.aliyun.com iburst minpoll 4 maxpoll 10
server ntp7.aliyun.com iburst minpoll 4 maxpoll 10
driftfile /var/lib/chrony/drift
makestep 10 3
rtcsync
allow 0.0.0.0/0
local stratum 10
keyfile /etc/chrony.keys
logdir /var/log/chrony
stratumweight 0.05
noclientlog
logchange 0.5
EOF
# 3. Start the chronyd service
systemctl restart chronyd.service
systemctl enable chronyd.service
systemctl status chronyd.service
# Also install chrony on the worker nodes
# 1. Install chrony
yum -y install chrony
# 2. Edit the client configuration file
mv /etc/chrony.conf /etc/chrony.conf.bak
cat > /etc/chrony.conf << EOF
server 192.168.41.11 iburst
driftfile /var/lib/chrony/drift
makestep 10 3
rtcsync
local stratum 10
keyfile /etc/chrony.keys
logdir /var/log/chrony
stratumweight 0.05
noclientlog
logchange 0.5
EOF
# 3. Start chronyd
systemctl restart chronyd.service
systemctl enable chronyd.service
systemctl status chronyd.service
# 4. Verify
chronyc sources -v
Update the yum repositories
# 1. Clean up
rm -rf /etc/yum.repos.d/*
yum remove epel-release -y
rm -rf /var/cache/yum/x86_64/6/epel/
# 2. Install Alibaba Cloud's base and epel repos
curl -s -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all ; yum makecache
Update system packages (excluding the kernel)
yum update -y --exclude=kernel*
Install common base packages
yum -y install expect wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate chrony bind-utils rsync unzip
Upgrade the kernel (Docker has high kernel requirements; 4.4+ is recommended)
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.274-1.el7.elrepo.x86_64.rpm
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.274-1.el7.elrepo.x86_64.rpm
# If the download is slow, fetch the packages from the netdisk instead
# Link: https://pan.baidu.com/s/18paSDYV4YwTHl0FmxaiVpw  extraction code: utsw
scp kernel-lt* root@node1:/root
scp kernel-lt* root@node2:/root
Install the kernel and make it the default boot entry (all machines)
# Install
yum localinstall -y /root/kernel-lt*
# Set the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
# Check the current default kernel
grubby --default-kernel
# Reboot
reboot
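After the reboot, confirm the machine actually came up on the new kernel:

```shell
uname -r   # should report 5.4.274-1.el7.elrepo.x86_64
```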
Install IPVS on all nodes
# 1. Install ipvsadm and related tools
yum -y install ipvsadm ipset sysstat conntrack libseccomp
# 2. Configure module loading
cat > /etc/sysconfig/modules/ipvs.modules <<"EOF"
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules};
do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
Adjust kernel parameters on all nodes
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Apply immediately
sysctl --system
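Spot-check that the key parameters took effect (the bridge keys require the br_netfilter module loaded earlier):

```shell
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# both should print "= 1"
```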
Install containerd (all nodes)
Note: since Kubernetes 1.24, Docker is no longer supported natively. containerd originated inside Docker and was later donated by Docker to the Cloud Native Computing Foundation; installing Docker also installs containerd.
Upgrade libseccomp
CentOS 7 ships libseccomp 2.3.1 by default, which does not satisfy containerd's requirements; version 2.4 or newer is needed.
rpm -qa | grep libseccomp
rpm -e libseccomp-2.3.1-4.el7.x86_64 --nodeps
wget https://mirrors.aliyun.com/centos/8/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm
Install containerd
yum remove docker docker-ce containerd docker-common docker-selinux docker-engine -y
cd /etc/yum.repos.d/
wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y containerd*
Configure containerd
# Create the configuration directory
mkdir -pv /etc/containerd
# Generate a default configuration file for containerd
containerd config default > /etc/containerd/config.toml
# Replace the default pause (sandbox) image address
grep sandbox_image /etc/containerd/config.toml
sed -i 's/registry.k8s.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
# Use systemd as the container cgroup driver
grep SystemdCgroup /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml
# Configure registry mirrors
# Set config_path = "/etc/containerd/certs.d"
sed -i 's/config_path =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.toml
# Create the mirror configuration directory
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
capabilities = ["pull", "resolve"]
[host."https://docker.m.daocloud.io"]
capabilities = ["pull", "resolve"]
[host."https://docker.agsv.top"]
capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
capabilities = ["pull", "resolve"]
EOF
# Start containerd and enable it at boot
systemctl daemon-reload && systemctl restart containerd
systemctl enable --now containerd
# Check containerd status
systemctl status containerd
# Check the version
ctr version
Pull a test image on any one node
ctr image pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
ctr image ls
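Note that plain `ctr` does not read the `hosts.toml` mirror configuration by default; to exercise the mirrors set up above, point it at the hosts directory explicitly:

```shell
# --hosts-dir makes ctr honor the per-registry hosts.toml files
ctr images pull --hosts-dir /etc/containerd/certs.d docker.io/library/busybox:latest
ctr images ls | grep busybox
```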
Install Kubernetes
Prepare the Kubernetes repository
cat > /etc/yum.repos.d/kubernetes.repo << "EOF"
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key
EOF
yum install -y kubelet-1.30.3* kubeadm-1.30.3* kubectl-1.30.3*
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
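Until `kubeadm init` runs, kubelet restarts in a loop and `systemctl status kubelet` shows failures; that is expected. The installed versions are still worth confirming:

```shell
kubeadm version -o short   # v1.30.3
kubectl version --client
kubelet --version
```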
Deployment option 1: generate a configuration file, edit it, then deploy (recommended)
# On the master node, list the required images
kubeadm config images list
Generate kubeadm.yaml on the master node
kubeadm config print init-defaults > kubeadm.yaml
Edit the configuration file
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.41.11  # IP address of the master (control-plane) node
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master  # node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # switch to the Alibaba Cloud mirror
kind: ClusterConfiguration
kubernetesVersion: 1.30.0  # Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12  # Service network CIDR
  podSubnet: 10.244.0.0/16  # add this line: Pod network CIDR
scheduler: {}
# At the end of the file, append the following:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # run kube-proxy in ipvs mode (defaults to iptables if unset)
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
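Before running init, the edited file can be sanity-checked (the `validate` subcommand exists in kubeadm v1.26 and later):

```shell
kubeadm config validate --config kubeadm.yaml
```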
Deploy
kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=Swap
Deployment option 2: plain kubeadm init
kubeadm init \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version=v1.30.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
If init fails or errors out, clean up the cluster state and run init again
kubeadm reset -f
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/lib/cni/
Example of successful output
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.41.11:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:2fa7368af1ca6a1236dad4a9d4402ba32efd632fe7a4c490fb8d88481fd585df
Create the required directories as prompted
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the worker nodes to the cluster
kubeadm join 192.168.41.11:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:2fa7368af1ca6a1236dad4a9d4402ba32efd632fe7a4c490fb8d88481fd585df
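The default token expires after 24 hours; if it has lapsed, generate a fresh join command on the master instead of reusing the one printed by init:

```shell
kubeadm token create --print-join-command
```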
Check the nodes and pods
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 4m18s v1.30.3
[root@k8s-master ~]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
coredns-7c445c467-89pnk 0/1 Pending 0 5m45s
coredns-7c445c467-p9dkg 0/1 Pending 0 5m45s
etcd-master 1/1 Running 0 6m
kube-apiserver-master 1/1 Running 0 6m1s
kube-controller-manager-master 1/1 Running 0 6m
kube-proxy-ft6zm 1/1 Running 0 5m45s
kube-scheduler-master 1/1 Running 0 6m
Deploy the network plugin
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# Download the manifest first and switch its images to domestic mirrors
[root@k8s-master ~]# grep -i image kube-flannel.yml
image: ghcr.io/flannel-io/flannel:v0.26.5
image: ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1
image: ghcr.io/flannel-io/flannel:v0.26.5
Edit the manifest, replacing the image addresses with:
registry-vpc.cn-shanghai.aliyuncs.com/sucloud/flannel:v0.26.5
registry-vpc.cn-shanghai.aliyuncs.com/sucloud/flannel-cni-plugin:v1.6.2-flannel1
Deploy
kubectl apply -f kube-flannel.yml
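Once the flannel pods are Running, the nodes should flip to Ready and the CoreDNS pods should leave Pending:

```shell
kubectl -n kube-flannel get pods -o wide
kubectl get nodes
kubectl -n kube-system get pods | grep coredns
```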
Install kubectl command completion
yum install bash-completion* -y
kubectl completion bash > ~/.kube/completion.bash.inc
echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bash_profile
source $HOME/.bash_profile
