A Practical Guide to k8s: Pods in Detail, Part 2

1. Pod Lifecycle

The span from a pod object's creation to its termination is generally called the pod lifecycle. It mainly consists of the following stages:

1. Pod creation

2. Running the init containers

3. Running the main containers

(1) post-start hook (runs after container start) and pre-stop hook (runs before container termination)

(2) liveness probes and readiness probes

4. Pod termination


Over its whole lifecycle, a Pod passes through five states (phases):

1. Pending: the apiserver has created the pod resource object, but the pod has not finished scheduling or is still pulling images

2. Running: the pod has been scheduled to a node, and all of its containers have been created by the kubelet

3. Succeeded: all containers in the pod have terminated successfully and will not be restarted

4. Failed: all containers have terminated, and at least one of them terminated in failure, i.e. returned a non-zero exit status

5. Unknown: the apiserver could not obtain the pod object's state, usually because of a network communication failure
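The current phase can be read straight from the pod's status field. A minimal sketch (the pod name mypod and the namespace are placeholders):

kubectl get pod mypod -n dev -o jsonpath='{.status.phase}'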

2. Creation and Termination

Pod creation process

1. A user submits the pod definition to the apiServer via kubectl or another API client

2. The apiServer generates the pod object, persists it to etcd, and returns a confirmation to the client

3. The apiServer reflects the changes to the pod object in etcd; the other components use the watch mechanism to track changes on the apiServer

4. The scheduler notices a new pod object needs to be created, assigns it a host, and updates the result on the apiServer

5. The kubelet on the target node notices a pod has been scheduled to it, invokes the container runtime (e.g. Docker) to start the containers, and reports the result back to the apiServer

6. The apiServer stores the received pod status in etcd
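To watch these steps as they happen, you can tail the namespace events while creating a pod (a sketch; the pod name is a placeholder):

kubectl get events -n dev --watch
# or, after the fact, read a single pod's event trail:
kubectl describe pod mypod -n dev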


Pod termination process

1. A user sends a command to the apiServer to delete the pod object

2. The pod object in the apiServer is updated as time passes; within the grace period (30s by default), the pod is considered dead

3. The pod is marked as being in the terminating state

4. As soon as the kubelet observes the pod turning to the terminating state, it starts the pod shutdown process

5. When the endpoint controller observes the pod shutting down, it removes the pod from the endpoint lists of all matching service resources

6. If the pod defines a preStop hook handler, it is executed synchronously once the pod is marked terminating

7. The container processes in the pod receive the stop signal

8. When the grace period expires, any processes still running in the pod receive an immediate kill signal

9. The kubelet asks the apiServer to set the pod's grace period to 0, completing the delete operation; at this point the pod is no longer visible to the user
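The 30s grace period can be tuned per pod; a minimal sketch (the pod name pod-grace-demo is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pod-grace-demo
  namespace: dev
spec:
  terminationGracePeriodSeconds: 60   # extend the default 30s grace period for this pod
  containers:
  - name: nginx
    image: nginx:1.17.1

It can also be overridden once at delete time:

kubectl delete pod pod-grace-demo -n dev --grace-period=10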

3. Init Containers

Init containers run before a pod's main containers start; they handle prerequisite work for the main containers and have two key characteristics:

  • An init container must run to completion; if it fails, kubernetes restarts it until it finishes successfully
  • Init containers run strictly in the order they are defined: the next one starts only after the previous one has succeeded

Init containers have many use cases; the most common are:

(1) Providing utilities or custom code that the main container image lacks

(2) Because init containers must start serially and run to completion before the application containers, they can delay the application containers' startup until their dependencies are satisfied

Next, a case study that simulates the following requirement:

Suppose nginx is to run as the main container, but before nginx starts we must be able to reach the servers hosting mysql and redis.

To simplify the test, fix the addresses of the mysql (192.168.100.110) and redis (192.168.100.120) servers in advance.

1. Create pod-initcontainer.yaml
[root@master ~]# vim pod-initcontainer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-initcontainer
  namespace: dev
spec:
  containers:
  - name: main-container
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
  initContainers:
  - name: test-mysql
    image: busybox:1.30
    command: ['sh', '-c', 'until ping 192.168.100.110 -c 1 ; do echo waiting for mysql...; sleep 2; done;']
  - name: test-redis
    image: busybox:1.30
    command: ['sh', '-c', 'until ping 192.168.100.120 -c 1 ; do echo waiting for redis...; sleep 2; done;']
2. Create the pod and check its status
[root@master ~]# kubectl apply -f pod-initcontainer.yaml 
pod/pod-initcontainer created
[root@master ~]# kubectl describe pod pod-initcontainer -n dev
Name:             pod-initcontainer
Namespace:        dev
Priority:         0
Service Account:  default
Node:             node1/192.168.100.110
Start Time:       Mon, 03 Nov 2025 17:16:08 +0800
Labels:           <none>
......
......
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  4s    default-scheduler  Successfully assigned dev/pod-initcontainer to node1
  Normal  Pulled     1s    kubelet            Container image "busybox:1.30" already present on machine
  Normal  Created    1s    kubelet            Created container test-mysql
  Normal  Started    0s    kubelet            Started container test-mysql
  Normal  Pulled     0s    kubelet            Container image "busybox:1.30" already present on machine
  Normal  Created    0s    kubelet            Created container test-redis
  Normal  Started    0s    kubelet            Started container test-redis
  Normal  Pulled     0s    kubelet            Container image "nginx:1.17.1" already present on machine
  Normal  Created    0s    kubelet            Created container main-container
  Normal  Started    0s    kubelet            Started container main-container
# Watch the pod live
[root@master ~]# kubectl get pods pod-initcontainer -n dev -w
NAME                READY   STATUS    RESTARTS   AGE
pod-initcontainer   1/1     Running   0          33s
This verifies the core property of init containers: they start before the main container and execute in order.

The core logic of init containers is fully visible in this experiment; it is the precondition for the main container to run reliably.

  • Strict ordering: the two init containers test-mysql and test-redis execute strictly in the order defined; the second starts only after the first completes successfully.
  • Main-container gate: only after every init container has succeeded (exited with status code 0) does Kubernetes start main-container (the nginx container); as long as any init container has not completed, the main container keeps waiting.
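Pinging hard-coded IPs is fine for a lab, but in a real cluster the dependency is usually exposed as a Service. A common variant (a sketch, assuming Services named mysql and redis exist in the same namespace) waits on DNS resolution instead:

initContainers:
- name: wait-for-mysql
  image: busybox:1.30
  command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql...; sleep 2; done;']
- name: wait-for-redis
  image: busybox:1.30
  command: ['sh', '-c', 'until nslookup redis; do echo waiting for redis...; sleep 2; done;']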

4. Hook Functions

Hook functions let a container react to events in its own lifecycle, running user-specified code when the corresponding moment arrives.

kubernetes provides two hooks, one after the main container starts and one before it stops:

  • post start: runs right after the container is created; if it fails, the container is restarted
  • pre stop: runs before the container terminates; the container terminates only after the hook completes, and the delete operation is blocked until then

A hook handler can define its action in three ways (an httpGet variant is sketched right after this list):

Exec: run a command inside the container
TCPSocket: attempt to open a socket against the current container
HTTPGet: issue an HTTP request to a URL on the current container
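For reference, the httpGet form slots into the same lifecycle block; a minimal sketch (path and port are illustrative):

lifecycle:
  postStart:
    httpGet:
      path: /index.html
      port: 80
      scheme: HTTP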
1. Taking exec as the example, let's demonstrate hooks. Create the pod-hook-exec.yaml file
[root@master ~]# vim pod-hook-exec.yaml
[root@master ~]# cat pod-hook-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hook-exec
  namespace: dev
spec:
  containers:
  - name: main-container
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    lifecycle:
      postStart:
        exec:      # on container start, run a command that replaces nginx's default index page
          command: ["/bin/sh", "-c", "echo postStart... > /usr/share/nginx/html/index.html"]
      preStop:
        exec:      # before the container stops, shut nginx down gracefully
          command: ["/usr/sbin/nginx","-s","quit"]
# Create the pod
[root@master ~]# kubectl apply -f pod-hook-exec.yaml 
pod/pod-hook-exec created
# Check the pod
[root@master ~]# kubectl get pods pod-hook-exec -n dev -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod-hook-exec   1/1     Running   0          36s   172.16.166.152   node1   <none>           <none>
# Access the pod
[root@master ~]# curl 172.16.166.152
postStart...

5. Container Probes

Container probes check whether the application instance inside a container is working properly; they are a traditional mechanism for safeguarding service availability. If probing shows that an instance's state is not as expected, kubernetes "pulls" the problem instance so that it carries no business traffic. kubernetes offers two kinds of probes:

  • liveness probes: check whether the application instance is currently running normally; if not, k8s restarts the container
  • readiness probes: check whether the application instance can currently accept requests; if not, k8s stops forwarding traffic to it

livenessProbe decides whether to restart the container; readinessProbe decides whether to forward requests to it.

Both kinds of probes currently support three probing methods:

Exec: run a command inside the container; if the command exits with code 0, the program is considered healthy, otherwise unhealthy
TCPSocket: try to connect to a port of the container; if the connection can be established, the program is considered healthy, otherwise unhealthy
HTTPGet: call a URL of the web application inside the container; if the returned status code is between 200 and 399, the program is considered healthy, otherwise unhealthy

Using liveness probes as the example, here are a few demos:

1. Method 1: Exec
(1) Create pod-liveness-exec.yaml
[root@master ~]# cat pod-liveness-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-exec
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      exec:
        command: ["/bin/cat","/tmp/hello.txt"]    # run a command that reads a file
(2) Create the pod and observe the effect
[root@master ~]# kubectl apply -f pod-liveness-exec.yaml 
pod/pod-liveness-exec created
# Check the Pod details
[root@master ~]# kubectl describe pods pod-liveness-exec -n dev
Name:             pod-liveness-exec
Namespace:        dev
Priority:         0
Service Account:  default
Node:             node2/192.168.100.120
Start Time:       Mon, 03 Nov 2025 17:49:32 +0800
Labels:           <none>
......
......
Events:
  Type     Reason     Age                      From               Message
  ----     ------     ----                     ----               -------
  Normal   Scheduled  33s                      default-scheduler  Successfully assigned dev/pod-liveness-exec to node2
  Normal   Pulled     <invalid> (x2 over 24s)  kubelet            Container image "nginx:1.17.1" already present on machine
  Normal   Created    <invalid> (x2 over 24s)  kubelet            Created container nginx
  Normal   Started    <invalid> (x2 over 23s)  kubelet            Started container nginx
  Warning  Unhealthy  <invalid> (x3 over 15s)  kubelet            Liveness probe failed: /bin/cat: /tmp/hello.txt: No such file or directory
  Normal   Killing    <invalid>                kubelet            Container nginx failed liveness probe, will be restarted
# The events above show the nginx container was health-checked right after starting
# After the check failed, the container was killed and then restarted
# Wait a moment and look at the pod again: RESTARTS is no longer 0 but keeps growing
[root@master ~]# kubectl get pods pod-liveness-exec -n dev
NAME                READY   STATUS    RESTARTS     AGE
pod-liveness-exec   1/1     Running   2 (0s ago)   68s
[root@master ~]# kubectl get pods pod-liveness-exec -n dev
NAME                READY   STATUS    RESTARTS     AGE
pod-liveness-exec   1/1     Running   3 (0s ago)   97s
# the restart count went from two to three
Change the command to read a file that exists, e.g. /usr/share/nginx/html/index.html, and try again; this time everything is normal:
[root@master ~]# cat pod-liveness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-exec
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      exec:
        command: ["/bin/cat","/usr/share/nginx/html/index.html"]
[root@master ~]# kubectl apply -f pod-liveness-exec.yaml 
pod/pod-liveness-exec created
[root@master ~]# kubectl describe pods pod-liveness-exec -n dev
Name:             pod-liveness-exec
Namespace:        dev
Priority:         0
Service Account:  default
Node:             node2/192.168.100.120
Start Time:       Mon, 03 Nov 2025 17:56:44 +0800
Labels:           <none>
......
......
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  9s    default-scheduler  Successfully assigned dev/pod-liveness-exec to node2
  Normal  Pulled     0s    kubelet            Container image "nginx:1.17.1" already present on machine
  Normal  Created    0s    kubelet            Created container nginx
  Normal  Started    0s    kubelet            Started container nginx
[root@master ~]# kubectl get pods pod-liveness-exec -n dev
NAME                READY   STATUS    RESTARTS   AGE
pod-liveness-exec   1/1     Running   0          17s
[root@master ~]# kubectl get pods pod-liveness-exec -n dev
NAME                READY   STATUS    RESTARTS   AGE
pod-liveness-exec   1/1     Running   0          82s
2. Method 2: TCPSocket
(1) Create pod-liveness-tcpsocket.yaml
[root@master ~]# cat pod-liveness-tcpsocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-tcpsocket
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 8080    # try to connect to port 8080
(2) Create the pod and observe the effect
[root@master ~]# kubectl apply -f pod-liveness-tcpsocket.yaml 
pod/pod-liveness-tcpsocket created
[root@master ~]# kubectl describe pods pod-liveness-tcpsocket -n dev
Name:             pod-liveness-tcpsocket
Namespace:        dev
Priority:         0
Service Account:  default
Node:             node2/192.168.100.120
Start Time:       Mon, 03 Nov 2025 18:01:50 +0800
Labels:           <none>
......
Events:
  Type     Reason     Age              From               Message
  ----     ------     ----             ----               -------
  Normal   Scheduled  26s              default-scheduler  Successfully assigned dev/pod-liveness-tcpsocket to node2
  Normal   Pulled     17s              kubelet            Container image "nginx:1.17.1" already present on machine
  Normal   Created    17s              kubelet            Created container nginx
  Normal   Started    17s              kubelet            Started container nginx
  Warning  Unhealthy  0s (x2 over 8s)  kubelet            Liveness probe failed: dial tcp 172.16.104.26:8080: connect: connection refused
# The events show an attempt to reach port 8080, which failed
# Wait a moment and look at the pod again: RESTARTS is no longer 0 but keeps growing
[root@master ~]# kubectl get pods pod-liveness-tcpsocket -n dev
NAME                     READY   STATUS    RESTARTS      AGE
pod-liveness-tcpsocket   1/1     Running   1 (11s ago)   50s
[root@master ~]# kubectl get pods pod-liveness-tcpsocket -n dev
NAME                     READY   STATUS    RESTARTS      AGE
pod-liveness-tcpsocket   1/1     Running   2 (12s ago)   81s
# Of course, changing it to a reachable port, e.g. 80, and trying again gives a normal result......
3. Method 3: HTTPGet
(1) Create pod-liveness-httpget.yaml
[root@master ~]# cat pod-liveness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-httpget
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      httpGet:
        scheme: HTTP    # protocol: HTTP or HTTPS
        port: 80        # port number
        path: /file1    # URI path
        host: 192.168.100.120   # host to probe
(2) Create the pod and observe the effect
[root@master ~]# kubectl apply -f pod-liveness-httpget.yaml 
pod/pod-liveness-httpget created
[root@master ~]# kubectl describe pods pod-liveness-httpget -n dev
Name:             pod-liveness-httpget
Namespace:        dev
Priority:         0
Service Account:  default
Node:             node2/192.168.100.120
Start Time:       Mon, 03 Nov 2025 18:16:33 +0800
Labels:           <none>
......
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  16s   default-scheduler  Successfully assigned dev/pod-liveness-httpget to node2
  Normal   Pulled     7s    kubelet            Container image "nginx:1.17.1" already present on machine
  Normal   Created    7s    kubelet            Created container nginx
  Normal   Started    7s    kubelet            Started container nginx
  Warning  Unhealthy  0s    kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
# The events show the path was requested but not found: a 404 error
# Wait a moment and look at the pod again: RESTARTS is no longer 0 but keeps growing
[root@master ~]# kubectl get pods pod-liveness-httpget -n dev
NAME                   READY   STATUS    RESTARTS     AGE
pod-liveness-httpget   1/1     Running   1 (6s ago)   44s
[root@master ~]# kubectl get pods pod-liveness-httpget -n dev
NAME                   READY   STATUS    RESTARTS      AGE
pod-liveness-httpget   1/1     Running   3 (13s ago)   111s
(3) Change path to one that can actually be served, e.g. /file2, and try again; this time everything is normal
On node2, serve the file with httpd:
[root@node2 ~]# yum -y install httpd
[root@node2 ~]# cd /var/www/html
[root@node2 html]# echo 123 > file2
[root@node2 html]# cat file2
123
[root@node2 html]# systemctl restart httpd
[root@node2 html]# systemctl enable httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
On master:
[root@master ~]# cat pod-liveness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-httpget
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /file2
        host: 192.168.100.120
[root@master ~]# kubectl apply -f pod-liveness-httpget.yaml 
pod/pod-liveness-httpget created
[root@master ~]# kubectl describe pods pod-liveness-httpget -n dev
Name:             pod-liveness-httpget
Namespace:        dev
Priority:         0
Service Account:  default
Node:             node1/192.168.100.110
Start Time:       Mon, 03 Nov 2025 18:20:47 +0800
Labels:           <none>
......
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  4s         default-scheduler  Successfully assigned dev/pod-liveness-httpget to node1
  Normal  Pulled     <invalid>  kubelet            Container image "nginx:1.17.1" already present on machine
  Normal  Created    <invalid>  kubelet            Created container nginx
  Normal  Started    <invalid>  kubelet            Started container nginx
[root@master ~]# curl 192.168.100.120/file2
123

That covers all three probing methods with liveness probes. Looking at livenessProbe's child properties, though, there are a few more configuration options beyond those three; they are explained here:

[root@master ~]# kubectl explain pod.spec.containers.livenessProbe
FIELDS:
  exec                 <Object>
  tcpSocket            <Object>
  httpGet              <Object>
  initialDelaySeconds  <integer>  # seconds to wait after container start before the first probe
  timeoutSeconds       <integer>  # probe timeout. Default 1s, minimum 1s
  periodSeconds        <integer>  # how often to probe. Default 10s, minimum 1s
  failureThreshold     <integer>  # consecutive failures before the probe counts as failed. Default 3, minimum 1
  successThreshold     <integer>  # consecutive successes before the probe counts as succeeded. Default 1
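Only livenessProbe is demoed here, but readinessProbe accepts exactly the same fields. A minimal sketch combining the tuning knobs above (the values are illustrative):

readinessProbe:
  httpGet:
    path: /
    port: 80
    scheme: HTTP
  initialDelaySeconds: 5   # wait 5s after container start before the first probe
  periodSeconds: 10        # probe every 10s
  timeoutSeconds: 1        # each probe must answer within 1s
  failureThreshold: 3      # 3 consecutive failures mark the container NotReady
  successThreshold: 1      # 1 success marks it Ready again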

6. Restart Policy

In the previous section, once a probe detected a problem, kubernetes restarted the container's Pod; this behaviour is actually governed by the pod's restart policy. There are 3 restart policies:

  • Always: automatically restart the container when it fails; this is the default
  • OnFailure: restart only when the container terminates with a non-zero exit code
  • Never: never restart the container, regardless of its state

The restart policy applies to all containers in the pod object. The first restart happens immediately when needed; subsequent restarts are delayed by the kubelet for increasing intervals of 10s, 20s, 40s, 80s, 160s and 300s, with 300s being the maximum delay.

1. Create pod-restartpolicy.yaml
[root@master ~]# cat pod-restartpolicy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-restartpolicy
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /hello
  restartPolicy: Never    # set the restart policy to Never
2. Run the Pod and test
[root@master ~]# kubectl apply -f pod-restartpolicy.yaml 
pod/pod-restartpolicy created
# Check the Pod details: the nginx container fails its probe
[root@master ~]# kubectl describe pods pod-restartpolicy -n dev
Name:             pod-restartpolicy
Namespace:        dev
Priority:         0
Service Account:  default
Node:             node2/192.168.100.120
Start Time:       Mon, 03 Nov 2025 18:33:01 +0800
Labels:           <none>
......
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  25s                     default-scheduler  Successfully assigned dev/pod-restartpolicy to node2
  Normal   Pulled     16s                     kubelet            Container image "nginx:1.17.1" already present on machine
  Normal   Created    16s                     kubelet            Created container nginx
  Normal   Started    16s                     kubelet            Started container nginx
  Warning  Unhealthy  <invalid> (x2 over 7s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
# Wait a while and watch the restart count: it stays 0, the container is never restarted;
# instead it is simply stopped and the pod ends up Completed
[root@master ~]# kubectl get pods pod-restartpolicy -n dev
NAME                READY   STATUS    RESTARTS   AGE
pod-restartpolicy   1/1     Running   0          25s
[root@master ~]# kubectl get pods pod-restartpolicy -n dev
NAME                READY   STATUS      RESTARTS   AGE
pod-restartpolicy   0/1     Completed   0          44s

7. Pod Scheduling

By default, the node a Pod runs on is computed by the Scheduler component with its algorithms, and the process is not under manual control. In practice this is often not enough: in many cases we want to steer certain Pods onto certain nodes. How? That requires understanding kubernetes' scheduling rules for Pods. kubernetes provides four broad categories of scheduling:

  • Automatic scheduling: the node is chosen entirely by the Scheduler through its algorithms
  • Directed scheduling: NodeName, NodeSelector
  • Affinity scheduling: NodeAffinity, PodAffinity, PodAntiAffinity
  • Taint (toleration) scheduling: Taints, Tolerations
1. Directed Scheduling

Directed scheduling means declaring nodeName or nodeSelector on a pod to place it on the desired node. Note that this scheduling is mandatory: even if the target Node does not exist, the pod is still scheduled toward it; the pod simply fails to run.

1.1 NodeName

NodeName forcibly constrains a Pod to the Node with the given name. This approach skips the Scheduler's logic entirely and binds the Pod directly to the named node.

(1) Create a pod-nodename.yaml file
[root@master ~]# cat pod-nodename.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: node1     # schedule onto node1
[root@master ~]# kubectl apply -f pod-nodename.yaml 
pod/pod-nodename created
# Check the pod's NODE column: it was indeed scheduled onto node1
[root@master ~]# kubectl get pods pod-nodename -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod-nodename   1/1     Running   0          40s   172.16.166.154   node1   <none>           <none>
# Next, delete the pod and change nodeName to node3 (there is no node3 node)
[root@master ~]# kubectl delete -f pod-nodename.yaml 
pod "pod-nodename" deleted
[root@master ~]# vim pod-nodename.yaml
[root@master ~]# cat pod-nodename.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: node3
[root@master ~]# kubectl apply -f pod-nodename.yaml 
pod/pod-nodename created
# Checking again: the pod is bound toward node3, but since node3 does not exist the pod cannot run
[root@master ~]# kubectl get pods pod-nodename -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP       NODE    NOMINATED NODE   READINESS GATES
pod-nodename   0/1     Pending   0          6s    <none>   node3   <none>           <none>
1.2 NodeSelector

NodeSelector schedules a pod onto nodes that carry the specified labels. It works through kubernetes' label-selector mechanism: before the pod is created, the scheduler uses the MatchNodeSelector strategy to match labels, finds the target node, and schedules the pod there. The match is a hard constraint.

(1) First, add labels to the nodes
[root@master ~]# kubectl label nodes node1 nodeenv=pro
node/node1 labeled
[root@master ~]# kubectl label nodes node2 nodeenv=test
node/node2 labeled
(2) Create a pod-nodeselector.yaml file and create the Pod from it
[root@master ~]# vim pod-nodeselector.yaml
[root@master ~]# cat pod-nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeSelector:
    nodeenv: pro   # schedule onto a node carrying the label nodeenv=pro
[root@master ~]# kubectl apply -f pod-nodeselector.yaml 
pod/pod-nodeselector created
# Check the pod's NODE column: it was indeed scheduled onto node1
[root@master ~]# kubectl get pods pod-nodeselector -n dev -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod-nodeselector   1/1     Running   0          27s   172.16.166.155   node1   <none>           <none>
# Next, delete the pod and change nodeSelector to nodeenv: xxxx (no node carries this label)
[root@master ~]# kubectl delete -f pod-nodeselector.yaml 
pod "pod-nodeselector" deleted
[root@master ~]# vim pod-nodeselector.yaml
[root@master ~]# cat pod-nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeSelector:
    nodeenv: xxxx
[root@master ~]# kubectl apply -f pod-nodeselector.yaml 
pod/pod-nodeselector created
# Checking again: the pod cannot run, and NODE is <none>
[root@master ~]# kubectl get pods pod-nodeselector -n dev -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod-nodeselector   0/1     Pending   0          7s    <none>   <none>   <none>           <none>
# Look at the details: the node selector failed to match
[root@master ~]# kubectl describe pods pod-nodeselector -n dev
......
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  31s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
2. Affinity Scheduling

The previous section introduced two directed-scheduling methods. They are very convenient, but have a problem: if no Node satisfies the condition, the Pod will not run at all, even when usable Nodes remain in the cluster. This limits their use cases.

To address that, kubernetes also offers affinity scheduling (Affinity). It extends NodeSelector: through configuration it preferentially picks a Node that satisfies the conditions, but if none exists it can still fall back to a node that does not, making scheduling more flexible.

Affinity comes in three main flavours:

  • nodeAffinity: targets nodes, answering which nodes a pod may be scheduled onto
  • podAffinity: targets pods, answering which existing pods a new pod may be co-located with in the same topology domain
  • podAntiAffinity: targets pods, answering which existing pods a new pod must not share a topology domain with

On when to use affinity vs anti-affinity:

Affinity: if two applications interact frequently, it pays to use affinity to bring them as close together as possible, reducing the performance cost of network communication.

Anti-affinity: when an application runs with multiple replicas, anti-affinity spreads the instances across nodes, improving the service's availability.

2.1 NodeAffinity
(1) First, the configurable items of NodeAffinity:
pod.spec.affinity.nodeAffinity
  requiredDuringSchedulingIgnoredDuringExecution   # the Node must satisfy all the given rules: a hard limit
    nodeSelectorTerms        # list of node selector terms
      matchFields            # selector requirements by node field
      matchExpressions       # selector requirements by node label (recommended)
        key                  # the label key
        values               # the label values
        operator             # relation: Exists, DoesNotExist, In, NotIn, Gt, Lt
  preferredDuringSchedulingIgnoredDuringExecution  # prefer Nodes satisfying the rules: a soft limit (preference)
    preference               # a node selector term, associated with a weight
      matchFields            # selector requirements by node field
      matchExpressions       # selector requirements by node label (recommended)
        key                  # the label key
        values               # the label values
        operator             # relation: In, NotIn, Exists, DoesNotExist, Gt, Lt
    weight                   # preference weight, in the range 1-100

How the operators are used:
- matchExpressions:
  - key: nodeenv              # match nodes that have a label with key nodeenv
    operator: Exists
  - key: nodeenv              # match nodes whose nodeenv label value is "xxx" or "yyy"
    operator: In
    values: ["xxx","yyy"]
  - key: nodeenv              # match nodes whose nodeenv label value is greater than "xxx"
    operator: Gt
    values: ["xxx"]
requiredDuringSchedulingIgnoredDuringExecution (hard limit)
(1) Create pod-nodeaffinity-required.yaml
[root@master ~]# cat pod-nodeaffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:     # affinity settings
    nodeAffinity:    # node affinity
      requiredDuringSchedulingIgnoredDuringExecution:    # hard limit
        nodeSelectorTerms:
        - matchExpressions:    # match labels whose nodeenv value is in ["xxx","yyy"]
          - key: nodeenv
            operator: In
            values: ["xxx","yyy"]
[root@master ~]# kubectl apply -f pod-nodeaffinity-required.yaml 
pod/pod-nodeaffinity-required created
# Check the pod status (it fails to run)
[root@master ~]# kubectl get pods pod-nodeaffinity-required -n dev -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod-nodeaffinity-required   0/1     Pending   0          59s   <none>   <none>   <none>           <none>
# Check the Pod details: scheduling failed because node selection failed
[root@master ~]# kubectl describe pod pod-nodeaffinity-required -n dev
......
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  72s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
# Next, stop the pod
[root@master ~]# kubectl delete -f pod-nodeaffinity-required.yaml 
pod "pod-nodeaffinity-required" deleted
# Edit the file, changing values: ["xxx","yyy"] ------> ["pro","yyy"]
[root@master ~]# vim pod-nodeaffinity-required.yaml
# Start it again
[root@master ~]# kubectl apply -f pod-nodeaffinity-required.yaml 
pod/pod-nodeaffinity-required created
# This time scheduling succeeds: the pod landed on node1
[root@master ~]# kubectl get pods pod-nodeaffinity-required -n dev -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod-nodeaffinity-required   1/1     Running   0          7s    172.16.166.156   node1   <none>           <none>
preferredDuringSchedulingIgnoredDuringExecution (soft limit)
(1) Create pod-nodeaffinity-preferred.yaml
[root@master ~]# cat pod-nodeaffinity-preferred.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-preferred
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:   # affinity settings
    nodeAffinity:    # node affinity
      preferredDuringSchedulingIgnoredDuringExecution:   # soft limit
      - weight: 1
        preference:
          matchExpressions:   # match labels whose nodeenv value is in ["xxx","yyy"] (none exist in the current environment)
          - key: nodeenv
            operator: In
            values: ["xxx","yyy"]
[root@master ~]# kubectl create -f pod-nodeaffinity-preferred.yaml
pod/pod-nodeaffinity-preferred created
# Check the pod status (it runs successfully despite the unmatched preference)
[root@master ~]# kubectl get pod pod-nodeaffinity-preferred -n dev
NAME                         READY   STATUS    RESTARTS   AGE
pod-nodeaffinity-preferred   1/1     Running   0          8s
[root@master ~]# kubectl get pod pod-nodeaffinity-preferred -n dev -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
pod-nodeaffinity-preferred   1/1     Running   0          21s   172.16.104.30   node2   <none>           <none>

Notes on NodeAffinity rules (a sketch of notes 2 and 3 follows this list):

1. If both nodeSelector and nodeAffinity are defined, both conditions must be satisfied for the Pod to run on the chosen Node

2. If nodeAffinity specifies multiple nodeSelectorTerms, it is enough for any one of them to match

3. If one nodeSelectorTerms entry contains multiple matchExpressions, a node must satisfy all of them to match

4. If a Node's labels change while a Pod is running on it so that they no longer satisfy the Pod's node affinity, the system ignores the change
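To make notes 2 and 3 concrete, the fragment below (a sketch; the disktype label is illustrative) matches a node that either carries nodeenv=pro or carries any disktype label:

requiredDuringSchedulingIgnoredDuringExecution:
  nodeSelectorTerms:          # terms are ORed: matching any one term is enough
  - matchExpressions:         # expressions inside one term are ANDed
    - key: nodeenv
      operator: In
      values: ["pro"]
  - matchExpressions:
    - key: disktype
      operator: Exists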

2.2 PodAffinity

PodAffinity takes already-running Pods as the reference and co-locates newly created Pods with them in the same topology domain.

(1) The configurable items of PodAffinity:
pod.spec.affinity.podAffinity
  requiredDuringSchedulingIgnoredDuringExecution   # hard limit
    namespaces           # namespace of the reference pods
    topologyKey          # the scheduling scope
    labelSelector        # label selector
      matchExpressions   # selector requirements by label (recommended)
        key              # the label key
        values           # the label values
        operator         # relation: In, NotIn, Exists, DoesNotExist
      matchLabels        # shorthand for a set of equality matchExpressions
  preferredDuringSchedulingIgnoredDuringExecution  # soft limit
    podAffinityTerm      # the term
      namespaces
      topologyKey
      labelSelector
        matchExpressions
          key            # the label key
          values         # the label values
          operator
        matchLabels
    weight               # preference weight, in the range 1-100

topologyKey defines the scope used during scheduling, for example:
If set to kubernetes.io/hostname, the domain is the individual Node
If set to beta.kubernetes.io/os, the domain is the Node's operating system type
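For instance, a zone-wide co-location rule only changes the topologyKey. A sketch (assuming the nodes carry the standard topology.kubernetes.io/zone label):

podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: podenv
        operator: In
        values: ["pro"]
    topologyKey: topology.kubernetes.io/zone   # co-locate by zone instead of by node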
requiredDuringSchedulingIgnoredDuringExecution (hard limit)
(1) First create a reference Pod, pod-podaffinity-target.yaml
[root@master ~]# vim pod-podaffinity-target.yaml
[root@master ~]# cat pod-podaffinity-target.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-target
  namespace: dev
  labels:
    podenv: pro   # set a label
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: node1    # pin the target pod explicitly to node1
[root@master ~]# kubectl create -f pod-podaffinity-target.yaml
pod/pod-podaffinity-target created
[root@master ~]# kubectl get pods  pod-podaffinity-target -n dev
NAME                     READY   STATUS    RESTARTS   AGE
pod-podaffinity-target   1/1     Running   0          10s
(2) Create pod-podaffinity-required.yaml
[root@master ~]# vim pod-podaffinity-required.yaml
[root@master ~]# cat pod-podaffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:   # affinity settings
    podAffinity:   # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:    # match pods whose podenv value is in ["xxx","yyy"]
          - key: podenv
            operator: In
            values: ["xxx","yyy"]
        topologyKey: kubernetes.io/hostname
# The configuration above says: the new Pod must share a Node with a pod labelled podenv=xxx or podenv=yyy; obviously no such pod exists yet. Run it and see
[root@master ~]# kubectl create -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
# Check the pod status: it is not running
[root@master ~]# kubectl get pods pod-podaffinity-required -n dev
NAME                       READY   STATUS    RESTARTS   AGE
pod-podaffinity-required   0/1     Pending   0          10s
# Check the details
[root@master ~]# kubectl describe pods pod-podaffinity-required  -n dev
......
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  22s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match pod affinity rules. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
# Next, change values: ["xxx","yyy"] -----> values: ["pro","yyy"]
# Meaning: the new Pod must share a Node with a pod labelled podenv=pro or podenv=yyy
[root@master ~]# kubectl delete -f pod-podaffinity-required.yaml
pod "pod-podaffinity-required" deleted
[root@master ~]# vim pod-podaffinity-required.yaml
[root@master ~]# cat pod-podaffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:             # affinity settings
    podAffinity:        # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:    # hard limit
      - labelSelector:
          matchExpressions:        # match pods whose podenv value is in ["pro","yyy"]
          - key: podenv
            operator: In
            values: ["pro","yyy"]
        topologyKey: kubernetes.io/hostname
[root@master ~]# kubectl apply -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
[root@master ~]# kubectl get pods pod-podaffinity-required -n dev
NAME                       READY   STATUS    RESTARTS   AGE
pod-podaffinity-required   1/1     Running   0          9s
2.3 PodAntiAffinity

PodAntiAffinity takes already-running Pods as the reference and keeps newly created Pods out of the reference pods' topology domain.

Its configuration options are identical to PodAffinity's, so rather than repeating them, let's go straight to a test case.

Change the previous example's affinity to anti-affinity (replace podAffinity with podAntiAffinity):
[root@master ~]# cat pod-podaffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv
            operator: In
            values: ["pro","yyy"]
        topologyKey: kubernetes.io/hostname
[root@master ~]# kubectl delete -f pod-podaffinity-required.yaml
pod "pod-podaffinity-required" deleted
[root@master ~]# kubectl apply -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
# Check the pod
# It was scheduled onto node2
[root@master ~]# kubectl get pods pod-podaffinity-required -n dev -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
pod-podaffinity-required   1/1     Running   0          12s   172.16.104.31   node2   <none>           <none>

The configuration above says: the new Pod must not share a Node with any pod labelled podenv=pro.

8. Taints and Tolerations

1. Taints

The scheduling approaches so far all take the Pod's point of view: we add properties to the Pod to determine whether it lands on a given Node. We can also take the Node's point of view, adding taint properties to a Node to decide whether it accepts Pods at all.

Once a Node is tainted, a repelling relationship exists between it and Pods: new Pods are refused scheduling, and existing Pods can even be evicted.

A taint has the format key=value:effect, where key and value are the taint's label and effect describes what the taint does. Three effects are supported:

  • PreferNoSchedule: kubernetes tries to avoid placing Pods on a Node with this taint, unless no other node is schedulable
  • NoSchedule: kubernetes will not place new Pods on a Node with this taint, but Pods already on the Node are unaffected
  • NoExecute: kubernetes will not place new Pods on the Node, and also evicts the Pods already running there


(1) Setting and removing taints with kubectl (a sketch for inspecting them follows):
# set a taint
kubectl taint nodes node1 key=value:effect
# remove a taint (by key and effect)
kubectl taint nodes node1 key:effect-
# remove all taints with a given key
kubectl taint nodes node1 key-
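To verify what is currently set, you can read a node's taints back (a sketch; node names follow this demo's cluster):

kubectl describe node node1 | grep Taints
# or list the taints of every node at once:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'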
(2) A demonstration of the taint effects
Prepare node2 (to make the effect more obvious, temporarily stop node1; simply powering off the node1 machine works)
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   Ready      control-plane   6d16h   v1.28.15
node1    NotReady   <none>          6d16h   v1.28.15
node2    Ready      <none>          6d16h   v1.28.15
Set a taint on node2: tag=stw:PreferNoSchedule; then create pod1 (taint1 is scheduled fine)
[root@master ~]# kubectl taint nodes node2 tag=stw:PreferNoSchedule
node/node2 tainted
[root@master ~]# kubectl run taint1 --image=nginx:1.17.1 -n dev
pod/taint1 created
[root@master ~]# kubectl get pods taint1 -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
taint1   1/1     Running   0          38s   172.16.104.38   node2   <none>           <none>
Change node2's taint to tag=stw:NoSchedule; then create pod2 (taint1 keeps running, taint2 fails to schedule, and the other pods already on node2 stay normal)
[root@master ~]# kubectl taint nodes node2 tag:PreferNoSchedule-
node/node2 untainted
[root@master ~]# kubectl taint nodes node2 tag=stw:NoSchedule
node/node2 tainted
[root@master ~]# kubectl run taint2 --image=nginx:1.17.1 -n dev
pod/taint2 created
[root@master ~]# kubectl get pods taint2 -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
taint2   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@master ~]# kubectl get pods taint1 -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
taint1   1/1     Running   0          4m20s   172.16.104.38   node2   <none>           <none>
[root@master ~]# kubectl get pods -n dev -o wide
NAME                         READY   STATUS        RESTARTS      AGE     IP               NODE     NOMINATED NODE   READINESS GATES
nginx-6c45cbd8c5-2jgdb       1/1     Running       2 (37m ago)   3d22h   172.16.104.32    node2    <none>           <none>
nginx-6c45cbd8c5-7cgpn       1/1     Running       0             4m47s   172.16.104.39    node2    <none>           <none>
nginx-6c45cbd8c5-tz458       1/1     Running       2 (37m ago)   3d22h   172.16.104.37    node2    <none>           <none>
nginx-6c45cbd8c5-vrmdw       1/1     Terminating   2 (37m ago)   3d22h   172.16.166.165   node1    <none>           <none>
pod-hook-exec                1/1     Terminating   1 (37m ago)   16h     172.16.166.163   node1    <none>           <none>
pod-initcontainer            1/1     Terminating   1 (37m ago)   16h     172.16.166.161   node1    <none>           <none>
pod-liveness-exec            1/1     Running       1 (37m ago)   16h     172.16.104.33    node2    <none>           <none>
pod-liveness-httpget         1/1     Terminating   1 (37m ago)   15h     172.16.166.160   node1    <none>           <none>
pod-nodeaffinity-preferred   1/1     Running       1 (37m ago)   14h     172.16.104.36    node2    <none>           <none>
pod-nodeaffinity-required    1/1     Terminating   1 (37m ago)   14h     172.16.166.162   node1    <none>           <none>
pod-nodeselector             0/1     Pending       0             14h     <none>           <none>   <none>           <none>
pod-podaffinity-required     1/1     Running       1 (37m ago)   14h     172.16.104.35    node2    <none>           <none>
pod-podaffinity-target       1/1     Terminating   1 (37m ago)   14h     172.16.166.164   node1    <none>           <none>
taint1                       1/1     Running       0             5m6s    172.16.104.38    node2    <none>           <none>
taint2                       0/1     Pending       0             102s    <none>           <none>   <none>           <none>
Change node2's taint to tag=stw:NoExecute; then create pod3 (all 3 pods fail: NoExecute keeps new pods out and also evicts the existing ones)
[root@master ~]# kubectl taint nodes node2 tag:NoSchedule-
node/node2 untainted
[root@master ~]# kubectl taint nodes node2 tag=stw:NoExecute
node/node2 tainted
[root@master ~]# kubectl run taint3 --image=nginx:1.17.1 -n dev
pod/taint3 created
[root@master ~]# kubectl get pods taint1 -n dev -o wide
Error from server (NotFound): pods "taint1" not found
[root@master ~]# kubectl get pods taint2 -n dev -o wide
Error from server (NotFound): pods "taint2" not found
[root@master ~]# kubectl get pods taint3 -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
taint3   0/1     Pending   0          2m56s   <none>   <none>   <none>           <none>
[root@master ~]# kubectl get pods -n dev -o wide
NAME                        READY   STATUS        RESTARTS      AGE     IP               NODE     NOMINATED NODE   READINESS GATES
nginx-6c45cbd8c5-gpn7z      0/1     Pending       0             34s     <none>           <none>   <none>           <none>
nginx-6c45cbd8c5-km7rw      0/1     Pending       0             34s     <none>           <none>   <none>           <none>
nginx-6c45cbd8c5-vrmdw      1/1     Terminating   2 (39m ago)   3d22h   172.16.166.165   node1    <none>           <none>
nginx-6c45cbd8c5-wk6v4      0/1     Pending       0             34s     <none>           <none>   <none>           <none>
pod-hook-exec               1/1     Terminating   1 (39m ago)   16h     172.16.166.163   node1    <none>           <none>
pod-initcontainer           1/1     Terminating   1 (39m ago)   16h     172.16.166.161   node1    <none>           <none>
pod-liveness-httpget        1/1     Terminating   1 (39m ago)   15h     172.16.166.160   node1    <none>           <none>
pod-nodeaffinity-required   1/1     Terminating   1 (39m ago)   14h     172.16.166.162   node1    <none>           <none>
pod-nodeselector            0/1     Pending       0             14h     <none>           <none>   <none>           <none>
pod-podaffinity-target      1/1     Terminating   1 (39m ago)   14h     172.16.166.164   node1    <none>           <none>
taint3                      0/1     Pending       0             16s     <none>           <none>   <none>           <none>

Note: a cluster built with kubeadm adds a taint to the master node by default, which is why pods are never scheduled onto the master.
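You can confirm this on the control plane (a sketch; on kubeadm with Kubernetes v1.24+ the key is node-role.kubernetes.io/control-plane, older versions used node-role.kubernetes.io/master):

kubectl describe node master | grep Taints
# Taints: node-role.kubernetes.io/control-plane:NoSchedule
# for a single-node lab you could remove it (not recommended in production):
kubectl taint nodes master node-role.kubernetes.io/control-plane:NoSchedule-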

2. Tolerations

Taints, as shown above, let a node refuse pods. But what if we actually want to schedule a pod onto a tainted node? That is what tolerations are for.


A taint means refusal, a toleration means ignoring it: a Node uses taints to reject pods, and a Pod uses tolerations to ignore the rejection.
Case study

1. In the previous subsection, the NoExecute taint was placed on node2, so pods cannot be scheduled there

2. In this subsection, adding a toleration to a pod lets it be scheduled there anyway

(1) Create pod-toleration.yaml
[root@master ~]# cat pod-toleration.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-toleration
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  tolerations:             # add a toleration
  - key: "tag"             # key of the taint to tolerate
    operator: "Equal"      # operator
    value: "stw"           # value of the taint to tolerate
    effect: "NoExecute"    # the rule to tolerate; must match the taint's effect
[root@master ~]# kubectl apply -f pod-toleration.yaml 
pod/pod-toleration created
# the pod with the toleration
[root@master ~]# kubectl get -f pod-toleration.yaml -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
pod-toleration   1/1     Running   0          9s    172.16.104.46   node2   <none>           <none>
# a pod without the toleration
[root@master ~]# kubectl run taint3 --image=nginx -n dev
pod/taint3 created
[root@master ~]# kubectl get pods taint3 -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
taint3   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
(2) Detailed toleration configuration
[root@master ~]# kubectl explain pod.spec.tolerations
......
FIELDS:
  key                 # key of the taint to tolerate; empty matches all keys
  value               # value of the taint to tolerate
  operator            # operator relating key and value; supports Equal (the default) and Exists
  effect              # the taint effect to match; empty matches all effects
  tolerationSeconds   # toleration period; only meaningful when effect is NoExecute: how long the pod may stay on the Node
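Two of those fields deserve a quick sketch: operator Exists tolerates any value of a key, and tolerationSeconds bounds how long a pod survives a NoExecute taint (the values here are illustrative):

tolerations:
- key: "tag"
  operator: "Exists"        # match any value of the key; value must be omitted with Exists
  effect: "NoExecute"
  tolerationSeconds: 3600   # stay on the node for 1h after the taint appears, then be evicted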