Kubernetes 1.30 Single-Master, Two-Node Installation Tutorial

Manual installation of Kubernetes 1.30

Environment

OS               Hostname  IP
CentOS Stream 8  master    192.168.96.160
CentOS Stream 8  node1     192.168.96.161
CentOS Stream 8  node2     192.168.96.162

Installation

Unless noted otherwise, steps 1-8 are run on all three nodes.

1. Disable the firewall and SELinux

systemctl disable --now firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
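Note that setenforce 0 only switches SELinux to permissive mode for the current boot; the sed edit takes effect on the next reboot. You can verify both:

getenforce                          # should print Permissive (or Disabled after a reboot)
grep ^SELINUX= /etc/selinux/config  # should print SELINUX=disabled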

2. Set the hostnames and configure local name resolution

# master
hostnamectl set-hostname master
# node1
hostnamectl set-hostname node1
# node2
hostnamectl set-hostname node2

vim /etc/hosts
192.168.96.160 master
192.168.96.161 node1
192.168.96.162 node2
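Alternatively, the entries can be appended non-interactively and checked on each node (a quick sketch; adjust the IPs if your environment differs):

cat >> /etc/hosts <<EOF
192.168.96.160 master
192.168.96.161 node1
192.168.96.162 node2
EOF
ping -c 1 master && ping -c 1 node1 && ping -c 1 node2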

3. Configure the yum repository and install base packages

rm -rf /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
yum install -y bash-completion net-tools vim  yum-utils wget 

4. Disable swap

swapoff -a    # turn swap off immediately
swapon -s     # verify: empty output means swap is off
vim /etc/fstab    # comment out the swap entry so it stays off after a reboot:
# /dev/mapper/cl-swap     none                    swap    defaults        0 0
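If you prefer a non-interactive edit, the following sed sketch comments out the swap entry shown above (assuming the default LVM device name /dev/mapper/cl-swap):

sed -i 's|^/dev/mapper/cl-swap|#&|' /etc/fstab
free -h    # the Swap line should now read 0B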

5. Configure Docker

Install

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce -y 
systemctl enable --now docker

Configure a registry mirror (accelerator)

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ux1eh0cf.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
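Confirm Docker picked up the mirror:

docker info | grep -A1 'Registry Mirrors'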

6. Enable IP forwarding and iptables filtering on bridges

modprobe br_netfilter

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf
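Verify that the parameters are live (the net.bridge keys only exist once br_netfilter is loaded):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward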

Load the module at boot

vim /etc/rc.sysinit
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
    [ -x "$file" ] && "$file"
done

vim /etc/sysconfig/modules/br_netfilter.modules
modprobe br_netfilter
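The boot script above only executes files that carry the executable bit (the [ -x "$file" ] test), so make the module file executable:

chmod +x /etc/sysconfig/modules/br_netfilter.modules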

7. Change the containerd sandbox image and configure the systemd cgroup driver

Installing docker-ce pulls in containerd, which kubeadm uses as the CRI runtime, so it needs to be configured directly:

containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml
sandbox_image = "registry.k8s.io/pause:3.6"                          # original line
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"  # after the change
SystemdCgroup = false   # original line
SystemdCgroup = true    # after the change

systemctl restart containerd
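The two edits can also be made with sed before restarting containerd (a sketch; the default pause tag in the generated config may differ depending on your containerd version):

sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.9#' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml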

8. Configure the Kubernetes repository and install the packages

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key
EOF

yum install -y kubelet-1.30.0 kubeadm-1.30.0 kubectl-1.30.0 --disableexcludes=kubernetes
systemctl enable --now kubelet
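Check that the expected versions were installed:

kubeadm version -o short    # v1.30.0
kubectl version --client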

9. Initialize the cluster with kubeadm (master node only)

# Note: this requests 1.30.8 while the installed packages are 1.30.0; the init fails (see below)
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=1.30.8 --pod-network-cidr=10.244.0.0/16

Error

Unfortunately, an error has occurred:
	context deadline exceeded

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster

Fix

Switch the version to 1.30.0, restart containerd, run the following commands, and then initialize again:
kubeadm reset -f
rm -rf /etc/kubernetes/*
rm -rf /var/lib/kubelet/*
rm -rf /var/lib/etcd/*
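Then initialize again with the version matching the installed packages:

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=1.30.0 --pod-network-cidr=10.244.0.0/16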

Result

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.96.160:6443 --token az3hsb.zi91glehcxgafi3o \
	--discovery-token-ca-cert-hash sha256:cec47603d2463abe3566e45e1e464f123e7763d54d634ab3d8c6453f936798e1 

Run on the master node

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile

Run on the worker nodes

kubeadm join 192.168.96.160:6443 --token az3hsb.zi91glehcxgafi3o \
	--discovery-token-ca-cert-hash sha256:cec47603d2463abe3566e45e1e464f123e7763d54d634ab3d8c6453f936798e1 
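Bootstrap tokens are only valid for 24 hours by default; if the join fails with an authentication error, generate a fresh join command on the master:

kubeadm token create --print-join-command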

Verify

[root@master ~]# kubectl get node
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   3m33s   v1.30.0
node1    NotReady   <none>          19s     v1.30.0
node2    NotReady   <none>          19s     v1.30.0

10. Install the Calico network plugin (run on all three nodes)

wget https://github.com/projectcalico/calico/releases/download/v3.24.2/release-v3.24.2.tgz

Since this package is quite large, it is recommended to download it in a browser and then upload it to the VM, or use the following Baidu Netdisk share (contains release-v3.26.0.tgz and one other file):

Link: https://pan.baidu.com/s/1dlPT92Aomlyg3bTjeWuj7A?pwd=ZWW6  Extraction code: ZWW6

mkdir /calico
mv release-v3.24.2.tgz /calico/
cd /calico/
tar -xvf release-v3.24.2.tgz 
cd release-v3.24.2/images
ctr -n k8s.io images import calico-cni.tar 
ctr -n k8s.io images import calico-node.tar
ctr -n k8s.io images import calico-dikastes.tar
ctr -n k8s.io images import calico-pod2daemon.tar
ctr -n k8s.io images import calico-flannel-migration-controller.tar
ctr -n k8s.io images import calico-typha.tar
ctr -n k8s.io images import calico-kube-controllers.tar
cd ../manifests/
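Equivalently, the seven imports can be collapsed into a loop, and you can confirm the images landed in the k8s.io namespace:

for tar in ../images/calico-*.tar; do ctr -n k8s.io images import "$tar"; done
ctr -n k8s.io images ls | grep calico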

Run on the master node

kubectl apply -f calico.yaml
[root@master manifests]# kubectl get node
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   19m   v1.30.0
node1    Ready    <none>          16m   v1.30.0
node2    Ready    <none>          16m   v1.30.0

11. Enable kubectl command completion

echo "source <(kubectl  completion bash)" >> /etc/profile   && source /etc/profile

12. Deploy the metrics-server monitoring component

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.4/components.yaml

Edit the file

vim components.yaml
# original
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.k8s.io/metrics-server/metrics-server:v0.6.4

After the change

    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls  # added
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.4  # changed

Pull the image with containerd

crictl pull registry.aliyuncs.com/google_containers/metrics-server:v0.6.4

If the following warning appears:

WARN[0000] image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.

Set the endpoints explicitly:

vim /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 5
debug: false
[root@master ~]# crictl img
IMAGE                                                             TAG                 IMAGE ID            SIZE
docker.io/calico/cni                                              v3.24.2             6f826e4e4dae8       198MB
docker.io/calico/dikastes                                         v3.24.2             b9c57fc59e04b       41.4MB
docker.io/calico/flannel-migration-controller                     v3.24.2             1456bb06e02ec       117MB
docker.io/calico/kube-controllers                                 v3.24.2             7e37e5d5ee0ee       71.4MB
docker.io/calico/node                                             v3.24.2             a7de69da7d13a       228MB
docker.io/calico/pod2daemon-flexvol                               v3.24.2             21d7826d3c515       14.6MB
docker.io/calico/typha                                            v3.24.2             7a67242337e3a       66MB
registry.aliyuncs.com/google_containers/coredns                   v1.11.1             cbb01a7bd410d       18.2MB
registry.aliyuncs.com/google_containers/coredns                   v1.11.3             c69fa2e9cbf5f       18.6MB
registry.aliyuncs.com/google_containers/etcd                      3.5.12-0            3861cfcd7c04c       57.2MB
registry.aliyuncs.com/google_containers/etcd                      3.5.15-0            2e96e5913fc06       56.9MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.30.8             772392d372035       32.7MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.30.8             85333d41dd3ce       31.1MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.30.8             ce61fda67eb41       29.1MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.30.8             eb53b988d5e03       19.2MB
registry.aliyuncs.com/google_containers/metrics-server            v0.6.4              a608c686bac93       30MB
registry.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB

As you can see, the image has been pulled successfully.

Apply the manifest

kubectl apply -f components.yaml
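You can wait for the deployment to become available before checking:

kubectl -n kube-system rollout status deployment/metrics-server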

Wait a moment, then verify:

[root@master ~]# kubectl get pod -n kube-system  
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-85d46d4467-kvb2l   1/1     Running   0          13m
calico-node-bn4d7                          1/1     Running   0          13m
calico-node-g5qfs                          1/1     Running   0          13m
calico-node-zvswg                          1/1     Running   0          13m
coredns-7b5944fdcf-tvhks                   1/1     Running   0          33m
coredns-7b5944fdcf-x6xxb                   1/1     Running   0          33m
etcd-master                                1/1     Running   0          33m
kube-apiserver-master                      1/1     Running   0          33m
kube-controller-manager-master             1/1     Running   0          33m
kube-proxy-2xlh7                           1/1     Running   0          30m
kube-proxy-g4vs2                           1/1     Running   0          30m
kube-proxy-qlgnz                           1/1     Running   0          33m
kube-scheduler-master                      1/1     Running   0          33m
metrics-server-66b498696-cwvgq             1/1     Running   0          68s

[root@master ~]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   136m         3%     1438Mi          37%       
node1    66m          1%     841Mi           22%       
node2    71m          1%     1033Mi          27%       
