k8s Pod Scheduling Basics

1、hostPort

hostPort refers to a port on the host machine. If that port is already occupied on a node, new Pods requesting it cannot be scheduled onto that node.

[root@master231 18-scheduler]# cat 02-scheduler-hostPort.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-hostport
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      containers:
      - name: c1
        image: harbor250.oldboyedu.com/oldboyedu-xiuxian/apps:v3
        ports:
        - containerPort: 80
          # Map the pod's port 80 to port 90 of the worker node it is scheduled to.
          # This means no other pod may use port 90 on the same node; once the port is occupied, scheduling there fails!
          hostPort: 90

kubectl apply -f  02-scheduler-hostPort.yaml 
deployment.apps/deploy-hostport created
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-hostport-565858676b-6ncs4   1/1     Running   0          8s    10.100.1.119   worker232   <none>           <none>
deploy-hostport-565858676b-974lr   0/1     Pending   0          8s    <none>         <none>      <none>           <none>
deploy-hostport-565858676b-vqhnv   1/1     Running   0          8s    10.100.2.86    worker233   <none>           <none>
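
To see why the third replica stays Pending, inspect its scheduling events (pod name taken from the output above; exact event wording differs between versions):

[root@master231 18-scheduler]# kubectl describe pod deploy-hostport-565858676b-974lr | grep -A 5 Events
# The FailedScheduling event is expected to report that the remaining nodes have no free ports
# for the requested pod ports (port 90 is already taken on worker232 and worker233).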

2、Pod scheduling basics: hostNetwork

With hostNetwork: true the Pod shares the node's network namespace, so it uses the node's IP and its container port 80 is bound directly on the host. At most one such replica can therefore run per node, which is why the third replica below stays Pending (the master is tainted, leaving only two schedulable workers).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-hostnetwork
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      # Using the host network is usually combined with dnsPolicy and the optional 'containerPort' field.
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: c1
        image: harbor250.oldboyedu.com/oldboyedu-xiuxian/apps:v3
        ports:
        - containerPort: 80

kubectl apply -f 03-scheduler-hostNetwork.yaml

[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
deploy-hostnetwork-68f6858c4b-dkblh   0/1     Pending   0          3s    <none>       <none>      <none>           <none>
deploy-hostnetwork-68f6858c4b-mj8n5   1/1     Running   0          3s    10.0.0.232   worker232   <none>           <none>
deploy-hostnetwork-68f6858c4b-w6mkw   1/1     Running   0          3s    10.0.0.233   worker233   <none>           <none>
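
Because these Pods use the host network, the application is reachable directly on the node IPs shown above (a quick check, assuming the nodes are reachable from the master and nothing filters port 80):

[root@master231 18-scheduler]# curl -s -o /dev/null -w "%{http_code}\n" 10.0.0.232:80
[root@master231 18-scheduler]# curl -s -o /dev/null -w "%{http_code}\n" 10.0.0.233:80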

3、Pod scheduling basics: resources

resources configures the container's resource constraints.

requests declares the resources the container expects (used by the scheduler), while limits sets the upper bound the container is allowed to use.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-resources
spec:
  replicas: 10
  selector:
    matchLabels:
      app: stress
  template:
    metadata:
      labels:
        app: stress
    spec:
      containers:
      # - image: jasonyin2020/oldboyedu-linux-tools:v0.1
      - image: harbor250.oldboyedu.com/oldboyedu-test/stress:v0.1
        name: oldboyedu-linux-tools
        args:
        - tail
        - -f
        - /etc/hosts
        # Configure the container's resource constraints
        resources: 
          # 1. If a node cannot satisfy the requested resources, the Pod cannot be scheduled onto it;
          # 2. Being scheduled there does not mean the requested resources are consumed right away;
          # 3. If requests is not defined, it defaults to the same values as limits.
          requests:
            cpu: 0.5
            memory: 300Mi
            # memory: 10G
          # 1. Defines the upper bound of resources the container may use;
          # 2. If limits is not defined, the container may by default use all resources of the node it is scheduled to.
          limits:
            cpu: 1.5
            memory: 500Mi
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl apply -f  04-scheduler-resources.yaml
deployment.apps/deploy-resources created
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-resources-7496f54d5b-4jvpz   1/1     Running   0          3m39s   10.100.2.95    worker233   <none>           <none>
deploy-resources-7496f54d5b-644wl   1/1     Running   0          3m39s   10.100.1.131   worker232   <none>           <none>
deploy-resources-7496f54d5b-67whf   0/1     Pending   0          3m39s   <none>         <none>      <none>           <none>
deploy-resources-7496f54d5b-6mfmk   0/1     Pending   0          3m39s   <none>         <none>      <none>           <none>
deploy-resources-7496f54d5b-6p86l   0/1     Pending   0          3m39s   <none>         <none>      <none>           <none>
deploy-resources-7496f54d5b-dhhcn   0/1     Pending   0          3m39s   <none>         <none>      <none>           <none>
deploy-resources-7496f54d5b-h6xxv   1/1     Running   0          3m39s   10.100.2.94    worker233   <none>           <none>
deploy-resources-7496f54d5b-qc4vh   1/1     Running   0          3m39s   10.100.2.96    worker233   <none>           <none>
deploy-resources-7496f54d5b-rx4nv   1/1     Running   0          3m39s   10.100.1.132   worker232   <none>           <none>
deploy-resources-7496f54d5b-skfrd   1/1     Running   0          3m39s   10.100.1.133   worker232   <none>           <none>
[root@master231 18-scheduler]# 
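
The Pending replicas are expected: each replica requests 0.5 CPU, and the two workers eventually run out of allocatable CPU. A rough way to check this (output omitted here):

[root@master231 18-scheduler]# kubectl describe node worker232 | grep -A 8 "Allocated resources"
[root@master231 18-scheduler]# kubectl describe node worker233 | grep -A 8 "Allocated resources"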

4、Pod scheduling basics: Taints

A taint is applied to worker nodes.
Taint effects roughly fall into three types (format and examples follow the list):

  • NoSchedule:
    No new Pods are scheduled onto the node; Pods already running on it are not evicted.
  • PreferNoSchedule:
    The scheduler prefers other nodes; only when no other node is schedulable are Pods placed here.
  • NoExecute:
    No new Pods are scheduled onto the node, and Pods already running on it are evicted immediately.

Taint format:
    key[=value]:effect
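
For example (hypothetical keys and values, shown only to illustrate the format; they are not applied below):
kubectl taint node worker232 gpu=true:NoSchedule       # key=value:effect
kubectl taint node worker232 maintenance:NoExecute     # key:effect (value omitted)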

Taint test walkthrough:

		2.1 View the current taints
[root@master231 18-scheduler]# kubectl describe nodes  | grep  Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>
[root@master231 18-scheduler]# 


Tip:
	<none> means the node has no taints.
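
A more compact way to list every node's taints (optional convenience):
[root@master231 18-scheduler]# kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints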
	
		2.2 Add a taint to the nodes
[root@master231 18-scheduler]# kubectl taint node --all school=oldboyedu:PreferNoSchedule
node/master231 tainted
node/worker232 tainted
node/worker233 tainted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes  | grep  Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
                    school=oldboyedu:PreferNoSchedule
Unschedulable:      false
--
Taints:             school=oldboyedu:PreferNoSchedule
Unschedulable:      false
Lease:
--
Taints:             school=oldboyedu:PreferNoSchedule
Unschedulable:      false
Lease:
[root@master231 18-scheduler]# 

	
		2.3 Modify a taint [only the value can be modified; changing the effect creates a new taint]
[root@master231 18-scheduler]# kubectl taint node worker233 school=laonanhai:PreferNoSchedule --overwrite 
node/worker233 modified
[root@master231 18-scheduler]#
[root@master231 18-scheduler]# kubectl describe nodes  | grep  Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
                    school=oldboyedu:PreferNoSchedule
Unschedulable:      false
--
Taints:             school=oldboyedu:PreferNoSchedule
Unschedulable:      false
Lease:
--
Taints:             school=laonanhai:PreferNoSchedule
Unschedulable:      false
Lease:
[root@master231 18-scheduler]# 

	
		2.4 Remove the taints
[root@master231 18-scheduler]# kubectl taint node --all school-
node/master231 untainted
node/worker232 untainted
node/worker233 untainted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes  | grep  Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
[root@master231 18-scheduler]# 


	3. Taint effect test cases
		3.1 Add a taint
[root@master231 18-scheduler]# kubectl describe nodes  | grep  Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl taint node worker233 school:NoSchedule
node/worker233 tainted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes  | grep  Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             school:NoSchedule
Unschedulable:      false
Lease:
[root@master231 18-scheduler]# 


		3.2 Test case
[root@master231 18-scheduler]# cat 05-scheduler-taints.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-taints
spec:
  replicas: 5
  selector:
    matchLabels:
      app: stress
  template:
    metadata:
      labels:
        app: stress
    spec:
      containers:
      - image: harbor250.oldboyedu.com/oldboyedu-test/stress:v0.1
        name: oldboyedu-linux-tools
        args:
        - tail
        - -f
        - /etc/hosts
        resources: 
          requests:
            cpu: 0.5
            memory: 300Mi
          limits:
            cpu: 1.5
            memory: 500Mi
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl apply -f  05-scheduler-taints.yaml 
deployment.apps/deploy-taints created
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-7496f54d5b-9qfch   0/1     Pending   0          4s    <none>         <none>      <none>           <none>
deploy-taints-7496f54d5b-llmqv   1/1     Running   0          4s    10.100.1.138   worker232   <none>           <none>
deploy-taints-7496f54d5b-n4vx4   0/1     Pending   0          4s    <none>         <none>      <none>           <none>
deploy-taints-7496f54d5b-nvm7h   1/1     Running   0          4s    10.100.1.139   worker232   <none>           <none>
deploy-taints-7496f54d5b-t9bxt   1/1     Running   0          4s    10.100.1.137   worker232   <none>           <none>
[root@master231 18-scheduler]# 
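
Two replicas stay Pending here: worker233 rejects new Pods because of its NoSchedule taint, and worker232 presumably has no requested CPU left (it fit three such replicas in the resources demo above). The scheduler's reasoning can be inspected through the events (wording varies by version):

[root@master231 18-scheduler]# kubectl get events --field-selector reason=FailedScheduling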



		3.3 Change the taint effect
[root@master231 18-scheduler]# kubectl taint node worker233 school:PreferNoSchedule
node/worker233 tainted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl taint node worker233 school:NoSchedule-
node/worker233 untainted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes  | grep  Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             school:PreferNoSchedule
Unschedulable:      false
Lease:
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-7496f54d5b-9qfch   1/1     Running   0          2m32s   10.100.2.99    worker233   <none>           <none>
deploy-taints-7496f54d5b-llmqv   1/1     Running   0          2m32s   10.100.1.138   worker232   <none>           <none>
deploy-taints-7496f54d5b-n4vx4   1/1     Running   0          2m32s   10.100.2.100   worker233   <none>           <none>
deploy-taints-7496f54d5b-nvm7h   1/1     Running   0          2m32s   10.100.1.139   worker232   <none>           <none>
deploy-taints-7496f54d5b-t9bxt   1/1     Running   0          2m32s   10.100.1.137   worker232   <none>           <none>
[root@master231 18-scheduler]# 

	
		3.4 Change the taint effect again
[root@master231 18-scheduler]# kubectl taint node worker233 school=oldboyedu:NoExecute
node/worker233 tainted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS        RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-7496f54d5b-4rpbg   0/1     Pending       0          1s      <none>         <none>      <none>           <none>
deploy-taints-7496f54d5b-9qfch   1/1     Terminating   0          4m35s   10.100.2.99    worker233   <none>           <none>
deploy-taints-7496f54d5b-llmqv   1/1     Running       0          4m35s   10.100.1.138   worker232   <none>           <none>
deploy-taints-7496f54d5b-n4vx4   1/1     Terminating   0          4m35s   10.100.2.100   worker233   <none>           <none>
deploy-taints-7496f54d5b-nvm7h   1/1     Running       0          4m35s   10.100.1.139   worker232   <none>           <none>
deploy-taints-7496f54d5b-t2pcv   0/1     Pending       0          1s      <none>         <none>      <none>           <none>
deploy-taints-7496f54d5b-t9bxt   1/1     Running       0          4m35s   10.100.1.137   worker232   <none>           <none>
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-7496f54d5b-4rpbg   0/1     Pending   0          33s    <none>         <none>      <none>           <none>
deploy-taints-7496f54d5b-llmqv   1/1     Running   0          5m7s   10.100.1.138   worker232   <none>           <none>
deploy-taints-7496f54d5b-nvm7h   1/1     Running   0          5m7s   10.100.1.139   worker232   <none>           <none>
deploy-taints-7496f54d5b-t2pcv   0/1     Pending   0          33s    <none>         <none>      <none>           <none>
deploy-taints-7496f54d5b-t9bxt   1/1     Running   0          5m7s   10.100.1.137   worker232   <none>           <none>
[root@master231 18-scheduler]# 



		3.5 Clean up the test
[root@master231 18-scheduler]# kubectl delete -f 05-scheduler-taints.yaml 
deployment.apps "deploy-taints" deleted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes  | grep  Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             school=oldboyedu:NoExecute
                    school:PreferNoSchedule
Unschedulable:      false
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl taint node worker233 school-
node/worker233 untainted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes  | grep  Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
[root@master231 18-scheduler]# 

5、Pod scheduling basics: tolerations

tolerations are taint tolerations; with them a Pod can be scheduled onto a node that carries taints.
Note that for a Pod to be scheduled onto a given worker node, it must tolerate all of that node's taints.

kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false


kubectl taint node --all school=oldboyedu:NoSchedule
kubectl taint node worker233 class=linux98:NoExecute
 kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
                    school=oldboyedu:NoSchedule
Unschedulable:      false
--
Taints:             school=oldboyedu:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             class=linux98:NoExecute
                    school=oldboyedu:NoSchedule
Unschedulable:      false


 cat 06-scheduler-tolerations.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-tolerations
spec:
  replicas: 10
  selector:
    matchLabels:
      app: stress
  template:
    metadata:
      labels:
        app: stress
    spec:
      # Configure taint tolerations
      tolerations:
        # The taint key to match; if it is left out (together with operator: Exists), every key matches.
      - key: school
        # The taint value to match; to match any value, use operator: Exists instead of specifying one.
        value: oldboyedu
        # The taint effect to match; if omitted, all effect types are matched.
        effect: NoSchedule
      - key: class
        value: linux98
        effect: NoExecute
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      #  # Note: operator describes the relationship between key and value; valid values are Exists and Equal, the default being Equal.
      #  # If operator is set to Exists and key, value and effect are all omitted, every taint is tolerated.
      #- operator: Exists
      containers:
      - image: harbor250.oldboyedu.com/oldboyedu-test/stress:v0.1
        name: oldboyedu-linux-tools
        args:
        - tail
        - -f
        - /etc/hosts
        resources: 
          requests:
            cpu: 0.5
            memory: 300Mi
          limits:
            cpu: 1.5
            memory: 500Mi

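As the commented-out lines in the manifest above hint, the whole toleration list can be collapsed into one blanket toleration. A minimal sketch of that variant (not applied in this test):

      tolerations:
      - operator: Exists   # no key/value/effect specified: tolerates every taint on every node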


[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-tolerations-8667479cb9-85zfb   0/1     Pending   0          4s    <none>         <none>      <none>           <none>
deploy-tolerations-8667479cb9-bn7r6   1/1     Running   0          4s    10.100.2.110   worker233   <none>           <none>
deploy-tolerations-8667479cb9-d24mp   1/1     Running   0          4s    10.100.1.149   worker232   <none>           <none>
deploy-tolerations-8667479cb9-d4gqx   1/1     Running   0          4s    10.100.2.111   worker233   <none>           <none>
deploy-tolerations-8667479cb9-dk64w   1/1     Running   0          4s    10.100.2.108   worker233   <none>           <none>
deploy-tolerations-8667479cb9-gfqb9   1/1     Running   0          4s    10.100.1.148   worker232   <none>           <none>
deploy-tolerations-8667479cb9-n985j   1/1     Running   0          4s    10.100.0.26    master231   <none>           <none>
deploy-tolerations-8667479cb9-s88np   1/1     Running   0          4s    10.100.1.147   worker232   <none>           <none>
deploy-tolerations-8667479cb9-swnbk   1/1     Running   0          4s    10.100.0.25    master231   <none>           <none>
deploy-tolerations-8667479cb9-t52qj   1/1     Running   0          4s    10.100.2.109   worker233   <none>           <none>




kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
                    school=oldboyedu:NoSchedule
Unschedulable:      false
--
Taints:             school=oldboyedu:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             class=linux98:NoExecute
                    school=oldboyedu:NoSchedule

Remove the taint whose key is school:
kubectl taint node --all school-
node/master231 untainted
node/worker232 untainted
node/worker233 untainted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl taint node worker233  class-
node/worker233 untainted
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:

6、Pod scheduling basics: cordon

cordon marks a node as unschedulable and is typically used during cluster maintenance.
Under the hood, cordon simply taints the node (node.kubernetes.io/unschedulable:NoSchedule), as the output below shows.

kubectl cordon worker233 
node/worker233 cordoned
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get nodes 
NAME        STATUS                     ROLES                  AGE   VERSION
master231   Ready                      control-plane,master   13d   v1.23.17
worker232   Ready                      <none>                 13d   v1.23.17
worker233   Ready,SchedulingDisabled   <none>                 13d   v1.23.17
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true
Lease:
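
The same information can be read straight from the node spec (a quick check):
[root@master231 18-scheduler]# kubectl get node worker233 -o jsonpath='{.spec.unschedulable}{"\n"}{.spec.taints}{"\n"}'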

kubectl apply -f  05-scheduler-taints.yaml 
[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-7496f54d5b-5h2xq   1/1     Running   0          6s    10.100.1.153   worker232   <none>           <none>
deploy-taints-7496f54d5b-794cp   1/1     Running   0          6s    10.100.1.154   worker232   <none>           <none>
deploy-taints-7496f54d5b-jsk5b   1/1     Running   0          6s    10.100.1.152   worker232   <none>           <none>
deploy-taints-7496f54d5b-n7q7m   0/1     Pending   0          6s    <none>         <none>      <none>           <none>
deploy-taints-7496f54d5b-png9p   0/1     Pending   0          6s    <none>         <none>      <none>           <none>
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl delete -f 05-scheduler-taints.yaml 
deployment.apps "deploy-taints" deleted
[root@master231 18-scheduler]# 

	

7、Pod scheduling basics: uncordon

uncordon is the opposite of cordon: it removes the node's unschedulable mark.

kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true
Lease:
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl uncordon worker233 
node/worker233 uncordoned
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master231   Ready    control-plane,master   13d   v1.23.17
worker232   Ready    <none>                 13d   v1.23.17
worker233   Ready    <none>                 13d   v1.23.17
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl apply -f 05-scheduler-taints.yaml 
deployment.apps/deploy-taints created
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-7496f54d5b-bfnrh   1/1     Running   0          3s    10.100.2.116   worker233   <none>           <none>
deploy-taints-7496f54d5b-csds7   1/1     Running   0          3s    10.100.2.117   worker233   <none>           <none>
deploy-taints-7496f54d5b-hqrnm   1/1     Running   0          3s    10.100.2.118   worker233   <none>           <none>
deploy-taints-7496f54d5b-k8n66   1/1     Running   0          3s    10.100.1.156   worker232   <none>           <none>
deploy-taints-7496f54d5b-pdnkr   1/1     Running   0          3s    10.100.1.155   worker232   <none>           <none>
[root@master231 18-scheduler]# 

8、Pod scheduling basics: drain
drain evicts the Pods running on a node; put plainly, the node's current Pods are evicted so that they get rescheduled onto other nodes.
When evicting, Pods created by DaemonSet (ds) controllers have to be ignored.

The main use case for draining a node is scaling the cluster down.

drain calls cordon under the hood.

		2.1 View the current Pods
kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-7496f54d5b-bfnrh   1/1     Running   0          4m19s   10.100.2.116   worker233   <none>           <none>
deploy-taints-7496f54d5b-csds7   1/1     Running   0          4m19s   10.100.2.117   worker233   <none>           <none>
deploy-taints-7496f54d5b-hqrnm   1/1     Running   0          4m19s   10.100.2.118   worker233   <none>           <none>
deploy-taints-7496f54d5b-k8n66   1/1     Running   0          4m19s   10.100.1.156   worker232   <none>           <none>
deploy-taints-7496f54d5b-pdnkr   1/1     Running   0          4m19s   10.100.1.155   worker232   <none>           <none>
[root@master231 18-scheduler]# 

		2.2 Drain (evict) the Pods on worker233
[root@master231 18-scheduler]# kubectl drain worker233 --ignore-daemonsets --delete-emptydir-data
node/worker233 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-6k42r, kube-system/kube-proxy-g5sfd, metallb-system/speaker-8q9ft
evicting pod kube-system/metrics-server-57c6f647bb-54f7w
evicting pod default/deploy-taints-7496f54d5b-csds7
evicting pod default/deploy-taints-7496f54d5b-bfnrh
evicting pod default/deploy-taints-7496f54d5b-hqrnm
pod/metrics-server-57c6f647bb-54f7w evicted
pod/deploy-taints-7496f54d5b-bfnrh evicted
pod/deploy-taints-7496f54d5b-csds7 evicted
pod/deploy-taints-7496f54d5b-hqrnm evicted
node/worker233 drained
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl get pods -o wide  # Pods that cannot be scheduled onto another node for now stay in the Pending state.
NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taints-7496f54d5b-9mkj5   0/1     Pending   0          58s     <none>         <none>      <none>           <none>
deploy-taints-7496f54d5b-k8n66   1/1     Running   0          6m16s   10.100.1.156   worker232   <none>           <none>
deploy-taints-7496f54d5b-p2tfw   0/1     Pending   0          58s     <none>         <none>      <none>           <none>
deploy-taints-7496f54d5b-pdnkr   1/1     Running   0          6m16s   10.100.1.155   worker232   <none>           <none>
deploy-taints-7496f54d5b-vrmfg   1/1     Running   0          58s     10.100.1.157   worker232   <none>           <none>
[root@master231 18-scheduler]#  

		2.3 Verify that drain calls cordon under the hood
[root@master231 18-scheduler]# kubectl get nodes
NAME        STATUS                     ROLES                  AGE   VERSION
master231   Ready                      control-plane,master   13d   v1.23.17
worker232   Ready                      <none>                 13d   v1.23.17
worker233   Ready,SchedulingDisabled   <none>                 13d   v1.23.17
[root@master231 18-scheduler]# 
[root@master231 18-scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true
Lease:
[root@master231 18-scheduler]# 
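
Once maintenance is finished (or if the node is not being removed after all), it can be put back into service just as in section 7:
[root@master231 18-scheduler]# kubectl uncordon worker233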