5. Scheduling Policies
Customizing the scheduling policy
# Create a scheduling policy configuration file for a custom demo-scheduler profile
[14:47:54 root@master1 scheduler]#mkdir /etc/kubernetes/scheduler
[14:48:39 root@master1 scheduler]#cd /etc/kubernetes/scheduler
[14:49:20 root@master1 scheduler]#cat kubeschedulerconfiguration.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/etc/kubernetes/scheduler.conf"
profiles:
- schedulerName: default-scheduler
- schedulerName: demo-scheduler
  plugins:
    filter:
      disabled:
      - name: NodeUnschedulable
    score:
      disabled:
      - name: NodeResourcesBalancedAllocation
        weight: 1
      - name: NodeResourcesLeastAllocated
        weight: 1
      enabled:
      - name: NodeResourcesMostAllocated
        weight: 5
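# Note: in this profile, demo-scheduler disables the NodeUnschedulable filter and
# swaps the default spreading scorers (NodeResourcesBalancedAllocation and
# NodeResourcesLeastAllocated) for NodeResourcesMostAllocated with weight 5,
# which packs pods onto the nodes that are already most utilized.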
# Apply the configuration file defined above; back up the original kube-scheduler.yaml first
[15:02:09 root@master1 manifests]#diff kube-scheduler.yaml kube-scheduler.yaml-bk
16d15
<     - --config=/etc/kubernetes/scheduler/kubeschedulerconfiguration.yaml
20c19
<     #- --port=0
---
>     - --port=0
51,53d49
<     - mountPath: /etc/kubernetes/scheduler
<       name: schedconf
<       readOnly: true
64,67d59
<   - hostPath:
<       path: /etc/kubernetes/scheduler
<       type: DirectoryOrCreate
<     name: schedconf
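# Pieced together, the additions implied by this diff sit in kube-scheduler.yaml
# roughly as follows (an illustrative excerpt, not the full manifest):
spec:
  containers:
  - command:
    - kube-scheduler
    - --config=/etc/kubernetes/scheduler/kubeschedulerconfiguration.yaml
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler
      name: schedconf
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler
      type: DirectoryOrCreate
    name: schedconf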
# Check the result
[15:03:56 root@master1 manifests]#kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6fb865d84f-4lhbz 1/1 Running 15 (54m ago) 34d
calico-node-7hj44 1/1 Running 6 (5d20h ago) 34d
calico-node-hk2r2 1/1 Running 11 (5d20h ago) 34d
calico-node-kmmwm 1/1 Running 12 (5d20h ago) 34d
calico-node-ns2ff 1/1 Running 6 (5d20h ago) 34d
calico-node-qv7nn 1/1 Running 6 (5d20h ago) 34d
coredns-76b4d8bc8f-d69q9 1/1 Running 12 (5d20h ago) 34d
coredns-76b4d8bc8f-ndsg9 1/1 Running 12 (5d20h ago) 34d
etcd-master1 1/1 Running 13 (5d20h ago) 34d
etcd-master2.noisedu.cn 1/1 Running 14 (5d20h ago) 34d
etcd-master3.noisedu.cn 1/1 Running 14 (5d20h ago) 34d
kube-apiserver-master1 1/1 Running 14 (5d20h ago) 34d
kube-apiserver-master2.noisedu.cn 1/1 Running 15 (5d20h ago) 34d
kube-apiserver-master3.noisedu.cn 1/1 Running 15 (5d20h ago) 34d
kube-controller-manager-master1 1/1 Running 14 (5d20h ago) 34d
kube-controller-manager-master2.noisedu.cn 1/1 Running 6 (5d20h ago) 34d
kube-controller-manager-master3.noisedu.cn 1/1 Running 9 (5d20h ago) 34d
kube-proxy-6lw45 1/1 Running 6 (5d20h ago) 34d
kube-proxy-9bjch 1/1 Running 6 (5d20h ago) 34d
kube-proxy-b8g7m 1/1 Running 11 (5d20h ago) 34d
kube-proxy-bbrxh 1/1 Running 6 (5d20h ago) 34d
kube-proxy-pm6jk 1/1 Running 12 (5d20h ago) 34d
kube-scheduler-master1 1/1 Running 0 2m26s
kube-scheduler-master2.noisedu.cn 1/1 Running 7 (5d20h ago) 34d
kube-scheduler-master3.noisedu.cn 1/1 Running 7 (5d20h ago) 34d
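# Optionally confirm the restarted scheduler picked up the new flag (pod name
# taken from the listing above):
kubectl -n kube-system get pod kube-scheduler-master1 -o jsonpath='{.spec.containers[0].command}'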
# Test: the result shows that pods requesting the custom profile cannot be scheduled
[10:33:53 root@master1 scheduler]#cat 01-scheduler-deployment-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
spec:
  replicas: 5
  selector:
    matchLabels:
      app: pod-test
  template:
    metadata:
      labels:
        app: pod-test
    spec:
      schedulerName: demo-scheduler
      containers:
      - name: nginxpod-test
        image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
        imagePullPolicy: IfNotPresent
[10:33:58 root@master1 scheduler]#kubectl apply -f 01-scheduler-deployment-test.yaml
deployment.apps/deployment-test created
[10:34:13 root@master1 scheduler]#kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-test-84444b586-68s7c 0/1 Pending 0 6s <none> <none> <none> <none>
deployment-test-84444b586-69xfs 0/1 Pending 0 6s <none> <none> <none> <none>
deployment-test-84444b586-j2vvt 0/1 Pending 0 6s <none> <none> <none> <none>
deployment-test-84444b586-k74zt 0/1 Pending 0 6s <none> <none> <none> <none>
deployment-test-84444b586-zb8vp 0/1 Pending 0 6s <none> <none> <none> <none>
[10:34:19 root@master1 scheduler]#kubectl describe pod deployment-test-84444b586-j2vvt
Name: deployment-test-84444b586-j2vvt
Namespace: default
Priority: 0
Node: <none>
Labels: app=pod-test
pod-template-hash=84444b586
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/deployment-test-84444b586
Containers:
nginxpod-test:
Image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bb2zj (ro)
Volumes:
kube-api-access-bb2zj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
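# Events is empty, which means no scheduler instance ever processed these pods.
# In an HA control plane only the leader schedules; a likely cause here is that
# the current leader is a scheduler that was not restarted with the demo-scheduler
# profile. Assuming the default lease-based leader election, the holder can be
# checked with:
kubectl -n kube-system get lease kube-scheduler -o yaml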
# When the custom profile is not used, scheduling succeeds
[10:35:03 root@master1 scheduler]#cat 00-no-scheduler-deployment-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
spec:
  replicas: 5
  selector:
    matchLabels:
      app: pod-test
  template:
    metadata:
      labels:
        app: pod-test
    spec:
      containers:
      - name: nginxpod-test
        image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
        imagePullPolicy: IfNotPresent
[10:35:08 root@master1 scheduler]#kubectl apply -f 00-no-scheduler-deployment-test.yaml
deployment.apps/deployment-test created
[10:35:21 root@master1 scheduler]#kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-test-7d8cb8c5d-2dkv9 1/1 Running 0 7s 10.244.4.3 node2.noisedu.cn <none> <none>
deployment-test-7d8cb8c5d-9hx8f 1/1 Running 0 7s 10.244.3.3 node1.noisedu.cn <none> <none>
deployment-test-7d8cb8c5d-f46p5 1/1 Running 0 7s 10.244.4.4 node2.noisedu.cn <none> <none>
deployment-test-7d8cb8c5d-sjctx 1/1 Running 0 7s 10.244.3.4 node1.noisedu.cn <none> <none>
deployment-test-7d8cb8c5d-vql7n 1/1 Running 0 7s 10.244.3.5 node1.noisedu.cn <none> <none>
Node scheduling - nodeAffinity
# Schedule the pod to a specific node. The node name must be exact; we first demonstrate a wrong node name
[10:54:46 root@master1 scheduler]#cat 02-scheduler-pod-nodename.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
spec:
  nodeName: node1
  containers:
  - name: demoapp
    image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
    imagePullPolicy: IfNotPresent
[13:49:01 root@master1 scheduler]#kubectl apply -f 02-scheduler-pod-nodename.yaml
pod/pod-nodename created
[13:49:28 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 0/1 Pending 0 5s <none> node1 <none> <none>
[13:49:33 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 0/1 Pending 0 9s <none> node1 <none> <none>
[13:49:55 root@master1 scheduler]#kubectl delete -f 02-scheduler-pod-nodename.yaml
pod "pod-nodename" deleted
# Change to the correct node name
[13:50:26 root@master1 scheduler]#vim 02-scheduler-pod-nodename.yaml
[13:51:06 root@master1 scheduler]#cat 02-scheduler-pod-nodename.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
spec:
  nodeName: node1.noisedu.cn
  containers:
  - name: demoapp
    image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
    imagePullPolicy: IfNotPresent
[13:51:10 root@master1 scheduler]#kubectl apply -f 02-scheduler-pod-nodename.yaml
pod/pod-nodename created
[13:51:15 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 1/1 Running 0 6s 10.244.3.6 node1.noisedu.cn <none> <none>
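# Note: nodeName bypasses the scheduler entirely -- the kubelet on the named node
# runs the pod directly, which is why the wrong name above left the pod Pending
# with no FailedScheduling events. The binding can be read back with:
kubectl get pod pod-nodename -o jsonpath='{.spec.nodeName}'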
#------------------------------------------------------------------------------
# Schedule the pod to a node labeled node=ssd
[13:53:39 root@master1 scheduler]#cat 03-scheduler-pod-nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
spec:
  containers:
  - name: demoapp
    image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node: ssd
[13:54:05 root@master1 scheduler]#kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master1 Ready control-plane,master 35d v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
master2.noisedu.cn Ready control-plane,master 35d v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master2.noisedu.cn,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
master3.noisedu.cn Ready control-plane,master 35d v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master3.noisedu.cn,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
node1.noisedu.cn Ready <none> 35d v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.noisedu.cn,kubernetes.io/os=linux
node2.noisedu.cn Ready <none> 35d v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.noisedu.cn,kubernetes.io/os=linux
[13:54:19 root@master1 scheduler]#kubectl get node --show-labels | grep ssd
[13:54:24 root@master1 scheduler]#kubectl apply -f 03-scheduler-pod-nodeselector.yaml
pod/pod-nodeselector created
# Since no node carries the ssd label, the pod stays Pending
[13:54:42 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeselector 0/1 Pending 0 5s <none> <none> <none> <none>
[13:54:47 root@master1 scheduler]#kubectl describe pod pod-nodeselector
Name: pod-nodeselector
Namespace: default
Priority: 0
Node: <none>
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
demoapp:
Image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mkmfp (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-mkmfp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: node=ssd
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 11s default-scheduler 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
# As soon as a node is given this label, the pod is scheduled successfully
[13:56:58 root@master1 scheduler]#kubectl label node node2.noisedu.cn node=ssd
node/node2.noisedu.cn labeled
[13:57:22 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeselector 1/1 Running 0 30s 10.244.4.5 node2.noisedu.cn <none> <none>
[13:57:28 root@master1 scheduler]#kubectl describe pod pod-nodeselector
Name: pod-nodeselector
Namespace: default
Priority: 0
Node: node2.noisedu.cn/10.0.0.54
Start Time: Sun, 16 Jan 2022 13:57:22 +0800
Labels: <none>
Annotations: cni.projectcalico.org/containerID: 74260ea9c42767d0f0b9fde2fdd74071c58e40fdd82be1af7afb69925ea438c5
cni.projectcalico.org/podIP: 10.244.4.5/32
cni.projectcalico.org/podIPs: 10.244.4.5/32
Status: Running
IP: 10.244.4.5
IPs:
IP: 10.244.4.5
Containers:
demoapp:
Container ID: docker://d663a8a36145e37c54a9987755f95311b250c1e0f9b8137b4836aae8ba89c0a4
Image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
Image ID: docker-pullable://10.0.0.55:80/mykubernetes/pod_test@sha256:54402cda2ef15f45e4aafe98a5e56d4de076e3d4100c2a1bf1b780c787372fed
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 16 Jan 2022 13:57:24 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tktqv (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-tktqv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: node=ssd
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 35s default-scheduler 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
Normal Scheduled 11s default-scheduler Successfully assigned default/pod-nodeselector to node2.noisedu.cn
Normal Pulled 9s kubelet Container image "10.0.0.55:80/mykubernetes/pod_test:v0.1" already present on machine
Normal Created 9s kubelet Created container demoapp
Normal Started 9s kubelet Started container demoapp
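# nodeSelector is only evaluated at scheduling time; removing the label later does
# not evict the already-running pod. The label can be removed again with:
kubectl label node node2.noisedu.cn node-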
Node expression matching with matchExpressions
[13:59:01 root@master1 scheduler]#cat 04-scheduler-pod-node-required-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-required-affinity
spec:
  containers:
  - name: demoapp
    image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: env
            operator: In
            values:
            - dev
            - test
[14:01:06 root@master1 scheduler]#kubectl apply -f 04-scheduler-pod-node-required-affinity.yaml
pod/node-required-affinity created
[14:01:33 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-required-affinity 0/1 Pending 0 6s <none> <none> <none> <none>
[14:01:39 root@master1 scheduler]#kubectl describe pod node-required-affinity
Name: node-required-affinity
Namespace: default
Priority: 0
Node: <none>
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
demoapp:
Image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g52g2 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-g52g2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 12s default-scheduler 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
# Since no node label currently matches, the pod stays Pending
# Add the label env=test to node1.noisedu.cn
[14:01:45 root@master1 scheduler]#kubectl label node node1.noisedu.cn env=test
node/node1.noisedu.cn labeled
[14:02:45 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-required-affinity 1/1 Running 0 75s 10.244.3.7 node1.noisedu.cn <none> <none>
[14:02:48 root@master1 scheduler]#kubectl describe pod node-required-affinity
Name: node-required-affinity
Namespace: default
Priority: 0
Node: node1.noisedu.cn/10.0.0.53
Start Time: Sun, 16 Jan 2022 14:02:45 +0800
Labels: <none>
Annotations: cni.projectcalico.org/containerID: 925088d019790f61bf46bf4e6949ff386bd145f53fcc328c669eaad2dcd130a1
cni.projectcalico.org/podIP: 10.244.3.7/32
cni.projectcalico.org/podIPs: 10.244.3.7/32
Status: Running
IP: 10.244.3.7
IPs:
IP: 10.244.3.7
Containers:
demoapp:
Container ID: docker://5df82eeef3c5d5d3ec1de8a36665f7f8ef5faac112c3f2fa2442fbf019d273bf
Image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
Image ID: docker-pullable://10.0.0.55:80/mykubernetes/pod_test@sha256:54402cda2ef15f45e4aafe98a5e56d4de076e3d4100c2a1bf1b780c787372fed
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 16 Jan 2022 14:02:46 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g52g2 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-g52g2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 78s default-scheduler 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
Normal Scheduled 6s default-scheduler Successfully assigned default/node-required-affinity to node1.noisedu.cn
Normal Pulled 5s kubelet Container image "10.0.0.55:80/mykubernetes/pod_test:v0.1" already present on machine
Normal Created 5s kubelet Created container demoapp
Normal Started 5s kubelet Started container demoapp
# The match now succeeds and the pod is scheduled to node1.noisedu.cn
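# Besides In, matchExpressions supports the operators NotIn, Exists, DoesNotExist,
# Gt and Lt. A sketch of a term that matches any node carrying the env key,
# regardless of its value (this operator reappears in the resource experiment below):
        nodeSelectorTerms:
        - matchExpressions:
          - key: env
            operator: Exists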
Soft affinity: preferredDuringSchedulingIgnoredDuringExecution
[14:04:47 root@master1 scheduler]#cat 05-scheduler-pod-node-preferred-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-preferred-affinity
spec:
  containers:
  - name: demoapp
    image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: env
            operator: In
            values:
            - test
      - weight: 20
        preference:
          matchExpressions:
          - key: env
            operator: In
            values:
            - dev
[14:07:11 root@master1 scheduler]#kubectl apply -f 05-scheduler-pod-node-preferred-affinity.yaml
pod/node-preferred-affinity created
[14:07:26 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-preferred-affinity 1/1 Running 0 5s 10.244.3.8 node1.noisedu.cn <none> <none>
# Because preferredDuringSchedulingIgnoredDuringExecution expresses a soft preference, the pod can be scheduled to any node even when no term matches
# Now we label node1.noisedu.cn and node2.noisedu.cn, then recreate the pod to observe the effect
[14:09:19 root@master1 scheduler]#kubectl delete -f 05-scheduler-pod-node-preferred-affinity.yaml
pod "node-preferred-affinity" deleted
[14:12:46 root@master1 scheduler]#kubectl label node node1.noisedu.cn env=dev
node/node1.noisedu.cn labeled
[14:13:04 root@master1 scheduler]#kubectl label node node2.noisedu.cn env=test
node/node2.noisedu.cn labeled
[14:13:12 root@master1 scheduler]#kubectl apply -f 05-scheduler-pod-node-preferred-affinity.yaml
pod/node-preferred-affinity created
[14:13:22 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-preferred-affinity 1/1 Running 0 5s 10.244.4.6 node2.noisedu.cn <none> <none>
[14:13:27 root@master1 scheduler]#kubectl describe pod node-preferred-affinity
Name: node-preferred-affinity
Namespace: default
Priority: 0
Node: node2.noisedu.cn/10.0.0.54
Start Time: Sun, 16 Jan 2022 14:13:22 +0800
Labels: <none>
Annotations: cni.projectcalico.org/containerID: 113cd980d6dee836fcf307ae8214235636c7c35c4476b5a37e5a4c92693ea61b
cni.projectcalico.org/podIP: 10.244.4.6/32
cni.projectcalico.org/podIPs: 10.244.4.6/32
Status: Running
IP: 10.244.4.6
IPs:
IP: 10.244.4.6
Containers:
demoapp:
Container ID: docker://7a520abf227cdc993a2b6288ecc1b6a35da7179e1f4a9a4443e817a88baa96f9
Image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
Image ID: docker-pullable://10.0.0.55:80/mykubernetes/pod_test@sha256:54402cda2ef15f45e4aafe98a5e56d4de076e3d4100c2a1bf1b780c787372fed
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 16 Jan 2022 14:13:24 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7m2v (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-b7m2v:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11s default-scheduler Successfully assigned default/node-preferred-affinity to node2.noisedu.cn
Normal Pulled 10s kubelet Container image "10.0.0.55:80/mykubernetes/pod_test:v0.1" already present on machine
Normal Created 9s kubelet Created container demoapp
Normal Started 9s kubelet Started container demoapp
# The pod lands on node2.noisedu.cn because that node matched the higher-weight term; weighting is also the most important difference between soft and hard affinity.
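# Worked out: node1.noisedu.cn (env=dev) only matches the weight-20 term, while
# node2.noisedu.cn (env=test) matches the weight-50 term, so node2 scores higher.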
# Resource scheduling experiment: we set CPU and memory requests, so only a node that can satisfy them is eligible
[14:18:56 root@master1 scheduler]#cat 06-scheduler-pod-node-resourcefits-affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-resourcefits-affinity
spec:
  replicas: 2
  selector:
    matchLabels:
      app: podtest
  template:
    metadata:
      labels:
        app: podtest
    spec:
      containers:
      - name: podtest
        image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 2
            memory: 2Gi
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: env
                operator: Exists
[14:19:14 root@master1 scheduler]#kubectl apply -f 06-scheduler-pod-node-resourcefits-affinity.yaml
deployment.apps/node-resourcefits-affinity created
[14:19:19 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-resourcefits-affinity-84fd5f6f9c-qmmlc 0/1 Pending 0 6s <none> <none> <none> <none>
node-resourcefits-affinity-84fd5f6f9c-wpqt6 0/1 Pending 0 6s <none> <none> <none> <none>
[14:19:25 root@master1 scheduler]#kubectl describe pod node-resourcefits-affinity-84fd5f6f9c-qmmlc
Name: node-resourcefits-affinity-84fd5f6f9c-qmmlc
Namespace: default
Priority: 0
Node: <none>
Labels: app=podtest
pod-template-hash=84fd5f6f9c
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/node-resourcefits-affinity-84fd5f6f9c
Containers:
podtest:
Image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
Port: <none>
Host Port: <none>
Requests:
cpu: 2
memory: 2Gi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ljtxk (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-ljtxk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s default-scheduler 0/5 nodes are available: 2 Insufficient cpu, 2 Insufficient memory, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
# No node has 2 free CPUs and 2Gi of memory available, so the pods stay Pending.
# Lower the requests to 0.2 CPU and 100Mi of memory, then recreate the pods
[14:23:17 root@master1 scheduler]#cat 06-scheduler-pod-node-resourcefits-affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-resourcefits-affinity
spec:
  replicas: 2
  selector:
    matchLabels:
      app: podtest
  template:
    metadata:
      labels:
        app: podtest
    spec:
      containers:
      - name: podtest
        image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 0.2
            memory: 100Mi
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: env
                operator: Exists
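# cpu: 0.2 is equivalent to 200m, one fifth of a core. To judge what a node can
# still accommodate, its allocatable capacity and current requests can be checked with:
kubectl describe node node1.noisedu.cn | grep -A 8 'Allocated resources'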
[14:23:22 root@master1 scheduler]#kubectl apply -f 06-scheduler-pod-node-resourcefits-affinity.yaml
deployment.apps/node-resourcefits-affinity created
[14:23:26 root@master1 scheduler]#kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-resourcefits-affinity-778bdb685-76x5s 1/1 Running 0 6s 10.244.3.9 node1.noisedu.cn <none> <none>
node-resourcefits-affinity-778bdb685-h54vp 1/1 Running 0 6s 10.244.4.7 node2.noisedu.cn <none> <none>
[14:23:32 root@master1 scheduler]#kubectl describe pod node-resourcefits-affinity-778bdb685-76x5s
Name: node-resourcefits-affinity-778bdb685-76x5s
Namespace: default
Priority: 0
Node: node1.noisedu.cn/10.0.0.53
Start Time: Sun, 16 Jan 2022 14:23:26 +0800
Labels: app=podtest
pod-template-hash=778bdb685
Annotations: cni.projectcalico.org/containerID: f615b27cbf0fd22f4b08ba1675192b789c88646e38cf8db7c7df33add1409ae4
cni.projectcalico.org/podIP: 10.244.3.9/32
cni.projectcalico.org/podIPs: 10.244.3.9/32
Status: Running
IP: 10.244.3.9
IPs:
IP: 10.244.3.9
Controlled By: ReplicaSet/node-resourcefits-affinity-778bdb685
Containers:
podtest:
Container ID: docker://5ede69880785fe10fe4e12e6c0ef88a116fff69b40fe69aa0dc7f5ff4f8a8c44
Image: 10.0.0.55:80/mykubernetes/pod_test:v0.1
Image ID: docker-pullable://10.0.0.55:80/mykubernetes/pod_test@sha256:54402cda2ef15f45e4aafe98a5e56d4de076e3d4100c2a1bf1b780c787372fed
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 16 Jan 2022 14:23:28 +0800
Ready: True
Restart Count: 0
Requests:
cpu: 200m
memory: 100Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n9rk2 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-n9rk2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16s default-scheduler Successfully assigned default/node-resourcefits-affinity-778bdb685-76x5s to node1.noisedu.cn
Normal Pulled 15s kubelet Container image "10.0.0.55:80/mykubernetes/pod_test:v0.1" already present on machine
Normal Created 15s kubelet Created container podtest
Normal Started 15s kubelet Started container podtest
# Scheduling succeeds; since both node1 and node2 carry the env label, the replicas can land on either node.
[14:24:41 root@master1 scheduler]#kubectl get node --show-labels | grep env
node1.noisedu.cn Ready <none> 35d v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=dev,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.noisedu.cn,kubernetes.io/os=linux
node2.noisedu.cn Ready <none> 35d v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=test,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.noisedu.cn,kubernetes.io/os=linux
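# Optional cleanup: remove the labels added during these experiments
kubectl label node node1.noisedu.cn env-
kubectl label node node2.noisedu.cn env- node-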