k8s Setup

Highly available nodes

Three machines are enough for the k8s cluster: 2 masters and 1 worker node

  • Points to think through first

Building k8s high availability

  • HA is aimed at the api-server

  • keepalived + nginx: nginx provides the load balancing, keepalived provides the VIP address

  • The ISO image is openEuler 22.03 (LTS-SP3)

  • Set up passwordless SSH between the nodes; a script can do this (see the sketch after this list)

  • All Kubernetes components except kubelet are deployed in containers
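A minimal sketch of such a script (not from the original setup; it assumes sshpass is installed and all three nodes share the same root password):

# hypothetical helper: push the local root SSH key to every node
PASS='YourRootPassword'            # assumption: common root password on all nodes
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q
for ip in 192.168.50.20 192.168.50.21 192.168.50.22; do
  sshpass -p "$PASS" ssh-copy-id -o StrictHostKeyChecking=no root@$ip
done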

1. Preliminary preparation

Hostname   IP              VIP
master01   192.168.50.20   192.168.50.77
master02   192.168.50.21   192.168.50.77
node       192.168.50.22   192.168.50.77

1. Modify the host configuration (all nodes)

1. Set the hostname
2. Disable the firewall and SELinux
3. Turn off swap
4. Configure time synchronization

[root@server ~]# hostnamectl hostname master01
[root@server ~]# bash
[root@master01 ~]# systemctl disable firewalld --now
[root@master01 ~]# getenforce 
Disabled
[root@master01 ~]# swapoff -a
[root@master01 ~]# yum -y install chrony
[root@master01 ~]# chronyc sources
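Two details worth noting here: swapoff -a only lasts until the next reboot, and chronyc sources only shows sources when chronyd is actually running. A small sketch covering both (assuming the default /etc/fstab layout):

sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab   # comment out the swap entry permanently
systemctl enable chronyd --now                                    # make sure time sync is running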

2. Enable IPVS (all nodes)

cat > /etc/sysconfig/modules/ipvs.modules << 'EOF'
#!/bin/bash 
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules};do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

lsmod | grep ip_vs


# IPv4 forwarding and bridge filtering settings
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
net.ipv4.ip_forward = 1 

sysctl -p /etc/sysctl.d/k8s.conf
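The two bridge-nf-call settings only take effect when the br_netfilter kernel module is loaded; if sysctl -p complains about unknown keys, load it first and make it persistent (a common fix, not shown in the original steps):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # reload automatically at boot
sysctl -p /etc/sysctl.d/k8s.conf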

3. Configure the k8s yum repository (all nodes)

# The repo definition can be found by searching for kubernetes on the Huawei mirror site
[root@master01 ~]#  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.huaweicloud.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.huaweicloud.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


# On openEuler, replace $basearch with your actual architecture (x86_64); the unquoted heredoc above expands $basearch to an empty string, so write the architecture out explicitly

[root@master01 yum.repos.d]# yum repoinfo kubernetes
Last metadata expiration check: 0:03:22 ago on Mon 23 Mar 2026 07:32:18 PM CST.
Repo-id            : kubernetes
Repo-name          : Kubernetes
Repo-status        : enabled
Repo-revision      : 1639678674448160
Repo-updated       : Wed 16 Jan 2030 09:08:48 AM CST
Repo-pkgs          : 751
Repo-available-pkgs: 751
Repo-size          : 9.6 G
Repo-baseurl       : https://mirrors.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-x86_64
Repo-expire        : 172,800 second(s) (last: Mon 23 Mar 2026 07:31:24 PM
                   : CST)
Repo-filename      : /etc/yum.repos.d/kubernetes.repo
Total packages: 751

2. Install docker (all nodes)

  • As of 2024, the k8s version supported on openEuler is 1.23, so docker is the container runtime to install

1. Install

[root@master01 yum.repos.d]# yum -y install docker 

2. Modify the docker configuration

# Use systemd as the cgroup driver for docker
[root@master01 docker]# cat daemon.json 
{
        "exec-opts":["native.cgroupdriver=systemd"]
}

3. Restart docker

[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
  • Because of docker network (registry access) issues, the images need to be pulled in advance; a registry mirror can also help, as sketched below
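One way to ease the pulling problem is to add a registry mirror to the same daemon.json (a sketch; the mirror URL is only a placeholder, substitute one reachable from your network):

# /etc/docker/daemon.json with a mirror added
{
        "exec-opts": ["native.cgroupdriver=systemd"],
        "registry-mirrors": ["https://mirror.example.com"]
}
# then reload: systemctl daemon-reload && systemctl restart docker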

3. Configure high availability (all master nodes)

  • A layer-4 load balancer is enough here; layer-7 load balancing is not needed

1. Install the packages

[root@master01 ~]#  yum install nginx nginx-all-modules keepalived -y

2. Configure the nginx load balancer

  • Edit the main configuration file directly

  • Configure nginx to forward to the master api-servers

# Layer-4 load balancing: the stream block sits outside the layer-7 http block
stream {
   upstream k8s-apiserver {  # load-balanced backend group
        server 192.168.50.20:6443;  # proxy to master01
        server 192.168.50.21:6443;  # proxy to master02
   }

   server {
        listen 16443;  # port nginx listens on
        proxy_pass k8s-apiserver;  # backend to reverse-proxy to
   }

}

nginx -t  # check that the configuration syntax is correct

systemctl restart nginx
systemctl enable nginx

# Copy this configuration to the other master node
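A possible way to copy it over, assuming the whole /etc/nginx/nginx.conf was edited in place:

scp /etc/nginx/nginx.conf root@192.168.50.21:/etc/nginx/nginx.conf
ssh root@192.168.50.21 "nginx -t && systemctl enable nginx --now"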

3. Configure keepalived

  • keepalived keeps the VIP highly available: clients hit VIP:16443, and whichever node holds the VIP reverse-proxies the request through nginx to the api-servers (e.g. 192.168.50.20:16443)

  • master01 is the primary server, master02 is the backup

  • Back up the original configuration file first

  • Configuration on the primary master

global_defs {
   router_id master01  # router_id must be globally unique
}

vrrp_instance vr_master01 {  # first VRRP instance
    state MASTER  # this is the master (primary) node
    interface ens33  # network interface
    virtual_router_id 51  # VRRP group id; must match across the members of the same VIP instance
    priority 200  # priority; the master must be higher than the backups to hold the VIP
    advert_int 1
    authentication {
        auth_type PASS  # authentication settings
        auth_pass 1111
    }
    virtual_ipaddress {
       192.168.50.77  # the VIP address
    }
}

  • Configuration on the backup master
global_defs {
   router_id master02
}

vrrp_instance vr_master02 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100  # the backup's priority just needs to be lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.77
    }
}

  • Test it: stop the keepalived service on master01 and check whether the VIP moves to master02 (commands sketched below)
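A quick failover test, assuming the VIP sits on ens33 as configured above:

# on master01
systemctl stop keepalived
# on master02: the VIP should now show up here
ip addr show ens33 | grep 192.168.50.77
# start keepalived on master01 again and, with the priorities above, the VIP moves back
systemctl start keepalived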

4. Deploy k8s

  • As of 2024, the highest supported version is 1.23
# The available version is 1.23, which still supports docker
[root@master01 ~]# yum list | grep kubeadm
kubeadm.x86_64                                          1.23.1-0    


1. Install the packages (all nodes)

yum -y install kubeadm kubelet kubectl

[root@master02 keepalived]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

  • Enable kubelet at boot, but do not start it right away; it will fail because other components are not in place yet, and it will come up automatically later
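If the repository ever offers more than one version, the packages can be pinned explicitly (a sketch; with only 1.23.1 in the repo, the plain install above gives the same result):

yum -y install kubeadm-1.23.1 kubelet-1.23.1 kubectl-1.23.1
systemctl enable kubelet   # enable only, do not start it yet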

2. Generate the deployment file

  • Only needs to be done on master01
[root@master01 ~]# kubeadm config print init-defaults > init.yaml

3. Edit the deployment file

  • Only needs to be configured on master01

[root@master01 ~]# cat init.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.20  # api-server address; change to this node's own address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master01  # set to your hostname or IP; this is the name shown inside the cluster after deployment
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:   # add this block so these names/IPs are valid for the api-server certificate (it serves HTTPS); all nodes later download this cert, and the service DNS names below are included too (covered later)
  - master01
  - master02
  - 127.0.0.1
  - localhost
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 192.168.50.20  # the addresses of the two masters
  - 192.168.50.21
  - 192.168.50.77  # the VIP address also has to be listed
controlPlaneEndpoint: 192.168.50.77:16443  # HA design: all nodes connect to the cluster through this load-balanced endpoint
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: swr.cn-east-3.myhuaweicloud.com/hcie_openeuler  # switch the image repository for faster pulls; the default is the Google registry
kind: ClusterConfiguration
kubernetesVersion: 1.23.1  # the k8s version; check it with kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/12  # the pod CIDR; all pods created later get IPs from this range
  serviceSubnet: 10.96.0.0/12
scheduler: {}
--- # add this section: the ipvs configuration for kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1 
kind: KubeProxyConfiguration
mode: ipvs


4. Pull the images in advance

1. Where the images go

  • Images used on the control-plane nodes

  • Images used on the worker nodes

# List the images that need to be pulled

[root@master01 ~]# kubeadm  config images list  --config=init.yaml
swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/kube-apiserver:v1.23.1
swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/kube-controller-manager:v1.23.1
swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/kube-scheduler:v1.23.1
swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/kube-proxy:v1.23.1
swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/pause:3.6
swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/etcd:3.5.1-0
swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/coredns:v1.8.6

[root@master01 ~]# kubeadm config images pull --config ./init.yaml 
[config/images] Pulled swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/kube-apiserver:v1.23.1
[config/images] Pulled swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/kube-controller-manager:v1.23.1
[config/images] Pulled swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/kube-scheduler:v1.23.1
[config/images] Pulled swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/kube-proxy:v1.23.1
[config/images] Pulled swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/pause:3.6
[config/images] Pulled swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/etcd:3.5.1-0
[config/images] Pulled swr.cn-east-3.myhuaweicloud.com/hcie_openeuler/coredns:v1.8.6

5. Deploy k8s

# --upload-certs uploads the certificates to the cluster so the other master nodes can fetch them

[root@master01 ~]# kubeadm init --upload-certs --config ./init.yaml
# If the install fails, run kubeadm reset -f to reset the environment before running init again; rerunning init directly will error out


# Review the init output
[root@master01 ~]# kubeadm init --upload-certs --config ./init.yaml
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost master01 master02] and IPs [10.96.0.1 192.168.50.20 192.168.50.77 127.0.0.1 192.168.50.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.50.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.50.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.576489 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
fcdc70dcf2129fadbe9d09e5dadb2e85b4aff57e34fb94e14f86bb7a383cecec
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

  • Output like the following means the cluster initialized successfully

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  # use this command to join a new master (control-plane) node
  kubeadm join 192.168.50.77:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7a5e99386558a3148f91008c065cd1be7371ba0b92e3a39e46560cb7719c15ea \
	--control-plane --certificate-key fcdc70dcf2129fadbe9d09e5dadb2e85b4aff57e34fb94e14f86bb7a383cecec

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

# use this command to join a worker node
kubeadm join 192.168.50.77:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7a5e99386558a3148f91008c065cd1be7371ba0b92e3a39e46560cb7719c15ea 

1. Commands to run on master01

# Set up the kubectl config; after this, kubectl can be used
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


# kubelet has now started on its own
[root@master01 ~]# systemctl is-active kubelet
active

2. Join the other master node(s) to the cluster

  • Generate a token

kubeadm token create --print-join-command
# Example output (from a different environment); running it as-is joins a worker node:
kubeadm join 192.168.200.200:16443 --token gb00dz.tevdizf7mxqx1egj --discovery-token-ca-cert-hash sha256:de0e41b3bc59d2879af43da29f3f25cc1b133efda1f202d4c4ce5f865cad71d3
# To join as a master, append: --control-plane --certificate-key 8be5d0b8d4914a930d58c4171e748210cbdd118befa0635ffcc1031b7840386e

  • To join as a master, the certificate key must be included; keep this key saved

  • To join a worker node, just run the command as printed

# The command below joins the other master into the cluster
# The token expires after 24 hours; if it has expired, create a new one and join again

  kubeadm join 192.168.50.77:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7a5e99386558a3148f91008c065cd1be7371ba0b92e3a39e46560cb7719c15ea \
	--control-plane --certificate-key fcdc70dcf2129fadbe9d09e5dadb2e85b4aff57e34fb94e14f86bb7a383cecec

# kubectl configuration
[root@master02 ~]# mkdir -p $HOME/.kube
[root@master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • When joining a node, --node-name can be used to set the name it registers with in the cluster, for example:
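(Hypothetical node name, reusing the worker join command from above:)

kubeadm join 192.168.50.77:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7a5e99386558a3148f91008c065cd1be7371ba0b92e3a39e46560cb7719c15ea \
	--node-name node01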

3. Join the worker node to the cluster

[root@node sysctl.d]# kubeadm join 192.168.50.77:16443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:7a5e99386558a3148f91008c065cd1be7371ba0b92e3a39e46560cb7719c15ea 

4. Check the cluster nodes

# NAME is the node's name in the cluster; it can be chosen when joining
[root@master01 ~]# kubectl get node
NAME       STATUS     ROLES                  AGE    VERSION
master01   NotReady   control-plane,master   31m    v1.23.1
master02   NotReady   control-plane,master   9m3s   v1.23.1
node       NotReady   <none>                 91s    v1.23.1

# No network plugin is installed yet, so the nodes are NotReady

# coredns, which provides DNS for the cluster, is stuck in Pending

[root@master01 ~]# kubectl get pod -A -o wide
NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE     IP              NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-665d974787-46d59           0/1     Pending   0             39m     <none>          <none>     <none>           <none>
kube-system   coredns-665d974787-s564k           0/1     Pending   0             39m     <none>          <none>     <none>           <none>
kube-system   etcd-master01                      1/1     Running   0             39m     192.168.50.20   master01   <none>           <none>
kube-system   etcd-master02                      1/1     Running   0             17m     192.168.50.21   master02   <none>           <none>
kube-system   kube-apiserver-master01            1/1     Running   0             39m     192.168.50.20   master01   <none>           <none>
kube-system   kube-apiserver-master02            1/1     Running   0             17m     192.168.50.21   master02   <none>           <none>
kube-system   kube-controller-manager-master01   1/1     Running   1 (17m ago)   39m     192.168.50.20   master01   <none>           <none>
kube-system   kube-controller-manager-master02   1/1     Running   0             17m     192.168.50.21   master02   <none>           <none>
kube-system   kube-proxy-95mlq                   1/1     Running   0             9m46s   192.168.50.22   node       <none>           <none>
kube-system   kube-proxy-svx52                   1/1     Running   0             17m     192.168.50.21   master02   <none>           <none>
kube-system   kube-proxy-tzts4                   1/1     Running   0             39m     192.168.50.20   master01   <none>           <none>
kube-system   kube-scheduler-master01            1/1     Running   1 (17m ago)   39m     192.168.50.20   master01   <none>           <none>
kube-system   kube-scheduler-master02            1/1     Running   0             17m     192.168.50.21   master02   <none>           <none>

6. Install the network plugin (calico)

  • Once the network plugin is installed, the nodes become Ready

  • Why is coredns Pending? It has not been assigned an IP address yet. Why is everything else Running? Those pods use the host network and the nodes' own IPs, so no pod IP needs to be allocated for them

  • coredns, however, needs an IP allocated from the pod subnet

  • Network plugins:

    • calico

    • cilium

    • flannel (vxlan overlay)

# Note that the Running pods all use the nodes' own IP addresses, not IPs allocated from the pod network
[root@master01 ~]# kubectl get pod -A -o wide
NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE   IP              NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-665d974787-46d59           0/1     Pending   0             64m   <none>          <none>     <none>           <none>
kube-system   coredns-665d974787-s564k           0/1     Pending   0             64m   <none>          <none>     <none>           <none>
kube-system   etcd-master01                      1/1     Running   0             64m   192.168.50.20   master01   <none>           <none>
kube-system   etcd-master02                      1/1     Running   0             42m   192.168.50.21   master02   <none>           <none>
kube-system   kube-apiserver-master01            1/1     Running   0             64m   192.168.50.20   master01   <none>           <none>
kube-system   kube-apiserver-master02            1/1     Running   0             42m   192.168.50.21   master02   <none>           <none>
kube-system   kube-controller-manager-master01   1/1     Running   1 (41m ago)   64m   192.168.50.20   master01   <none>           <none>
kube-system   kube-controller-manager-master02   1/1     Running   0             42m   192.168.50.21   master02   <none>           <none>
kube-system   kube-proxy-95mlq                   1/1     Running   0             34m   192.168.50.22   node       <none>           <none>
kube-system   kube-proxy-svx52                   1/1     Running   0             42m   192.168.50.21   master02   <none>           <none>
kube-system   kube-proxy-tzts4                   1/1     Running   0             64m   192.168.50.20   master01   <none>           <none>
kube-system   kube-scheduler-master01            1/1     Running   1 (41m ago)   64m   192.168.50.20   master01   <none>           <none>
kube-system   kube-scheduler-master02            1/1     Running   0             42m   192.168.50.21   master02   <none>           <none>



# Install on one master only
[root@master01 ~]# curl https://projectcalico.docs.tigera.io/archive/v3.23/manifests/calico.yaml -O

# The images inside the manifest may need to be pointed at a local registry; if Docker Hub is reachable, no change is needed and the manifest can be applied directly
[root@master01 ~]# kubectl apply -f calico.yaml 
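Before the apply above, the images referenced by the manifest can be checked and, if needed, rewritten to a local registry (the target registry below is only a placeholder):

grep "image:" calico.yaml | sort -u
# optional: rewrite the registry prefix; adjust the source pattern to match what the grep shows
sed -i 's#docker.io/calico/#registry.example.com/calico/#g' calico.yaml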

[root@master01 ~]# kubectl get node
NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   119m   v1.23.1
master02   Ready    control-plane,master   97m    v1.23.1
node       Ready    <none>                 89m    v1.23.1

kubectl get pod -A

7. kubectl command completion

echo 'source <(kubectl completion bash)' >>~/.bashrc
bash
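If completion still does not work, the bash-completion package is usually the missing piece (an extra step not in the original notes):

yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)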

8. Verify the cluster works

kubectl run busybox1 --image=busybox:1.23 -- sleep 3600

[root@master01 ~]# kubectl get pod 
NAME       READY   STATUS    RESTARTS   AGE
busybox1   1/1     Running   0          4m5s

# Enter the pod and access the internet
[root@master01 ~]# kubectl exec -ti busybox1 -- /bin/sh
/ # ping qq.com
PING qq.com (203.205.254.157): 56 data bytes
64 bytes from 203.205.254.157: seq=0 ttl=127 time=65.747 ms
^C
--- qq.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 65.747/65.747/65.747 ms
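Besides the external ping, cluster DNS can be checked from the same pod; kubernetes.default should resolve to the kubernetes service ClusterIP (10.96.0.1 with the serviceSubnet used here). Note that nslookup in some busybox builds is unreliable, so treat this as a sketch:

[root@master01 ~]# kubectl exec -ti busybox1 -- nslookup kubernetes.default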

9. Command output explained

1. kubectl get pod -A

[root@master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       busybox1                                   1/1     Running   0          5m17s
kube-system   calico-kube-controllers-54756b744f-dmqrm   1/1     Running   0          25m
kube-system   calico-node-qjff7                          1/1     Running   0          25m
kube-system   calico-node-x8fqd                          1/1     Running   0          25m
kube-system   calico-node-xs5m8                          1/1     Running   0          25m


# NAMESPACE: the namespace the pod lives in

# NAME: the pod's name

# READY (1/1): the first number is how many containers are ready, the second is the total number of containers in the pod (the pause container is not shown)

# STATUS: don't assume Running alone means the service is usable; check the READY column as well

# RESTARTS: how many times the pod has been restarted

# AGE: how long the pod has been running

2. kubectl get node

[root@master01 ~]# kubectl get node
NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   121m   v1.23.1
master02   Ready    control-plane,master   99m    v1.23.1
node       Ready    <none>                 92m    v1.23.1

# NAME: the node's name in the cluster

# STATUS: the node's status

# ROLES: the node's role; control-plane/master are control-plane nodes, <none> means a worker node

# AGE: how long ago the node was created

# VERSION: the k8s version, 1.23.1 here

Installing a newer k8s version later

  • Version 1.28

One primary and one backup master, plus one worker node: three nodes in total

A pod wraps containers and can hold more than one.
Strictly speaking every pod has at least two containers: your container plus a pause container created by the container runtime.

The pod is built around this built-in pause container: it is a minimal container, the pod's IP and network namespace are bound to it, and it acts as the hub through which the other containers share networking.
It is not visible from the outside, but it is there underneath.
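Since docker is the runtime in this setup, the pause containers can be seen directly on any node, one per running pod:

[root@node ~]# docker ps | grep pause   # each pod's network namespace hangs off its pause container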

Pods are spread across different nodes; cross-host access works because each pod gets an IP from the designated pod subnet.

  • At runtime, because of the taints on the master nodes, regular pods get scheduled onto the worker node