The ReplicaSet Controller
I. ReplicaSet Overview
ReplicaSet (abbreviated rs) is a replica controller in Kubernetes. Its main job is to manage the Pods under its control so that the number of Pod replicas always matches the preset count. In other words, it guarantees that a given number of Pods keep running in the cluster: it continuously watches the state of those Pods, replaces Pods that fail, and starts new replicas whenever the Pod count drops below the target.
The official recommendation is not to use ReplicaSets directly but to use Deployments instead. A Deployment is a higher-level concept than a ReplicaSet: it manages ReplicaSets and provides many additional useful features, most importantly declarative updates, which have the advantage of preserving the change history. A Deployment controller therefore does not manage Pod objects directly; the Deployment manages ReplicaSets, and the ReplicaSets in turn manage the Pods.
The core job of a ReplicaSet is to create the specified number of Pod replicas on the user's behalf and keep that number at the desired value, removing surplus Pods and replacing missing ones. It also provides mechanisms for scaling out and in.
A ReplicaSet's replica count, label selector and Pod template can all be modified at any time, but only a change to the desired replica count has a direct effect on existing Pods. Changing the label selector may make the labels of existing Pods no longer match, in which case all the controller does is stop counting them. Likewise, once Pods have been created the ReplicaSet no longer cares about their actual contents, so a change to the Pod template only affects replicas created afterwards.
Compared with creating and managing Pod resources by hand, a ReplicaSet provides the following:
1) Ensuring the number of Pods exactly matches the desired value: the ReplicaSet keeps the number of Pods it runs exactly equal to the count defined in its configuration, automatically creating missing Pods or terminating surplus ones.
2) Keeping Pods healthy: when it detects that a Pod under its control has become unavailable because its worker node failed, it automatically asks the scheduler to create the missing replica on another worker node.
3) Elastic scaling: business load often fluctuates noticeably for all sorts of reasons, and during peaks or troughs the ReplicaSet controller can be used to adjust the number of Pods dynamically. Where necessary, an HPA controller can also scale the Pods automatically; a brief sketch follows this list.
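As a hedged sketch of the HPA case: once a ReplicaSet exists (for example the frontend one created in the next section), kubectl autoscale can attach a HorizontalPodAutoscaler to it. The bounds and CPU target below are illustrative only and assume the metrics API is available in the cluster:

# Keep between 2 and 5 replicas, targeting 80% average CPU utilization
kubectl autoscale rs frontend --min=2 --max=5 --cpu-percent=80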
II. Creating a ReplicaSet
As with Pod resources, a ReplicaSet controller object is defined in a YAML- or JSON-format manifest and then created with the usual creation commands.
[root@k8s-master1 ~]# kubectl explain replicaset
KIND: ReplicaSet
VERSION: apps/v1
DESCRIPTION:
ReplicaSet ensures that a specified number of pod replicas are running at
any given time.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
If the Labels of a ReplicaSet are empty, they are defaulted to be the same
as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More
info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
Spec defines the specification of the desired behavior of the ReplicaSet.
More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
status <Object>
Status is the most recently observed status of the ReplicaSet. This data
may be out of date by some window of time. Populated by the system.
Read-only. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
Like other resources, a ReplicaSet consists of the five top-level fields kind, apiVersion, metadata, spec and status. status is read-only and maintained by the system, so only the first four need to be defined in the manifest.
[root@k8s-master1 ~]# kubectl explain replicaset.spec
KIND: ReplicaSet
VERSION: apps/v1
RESOURCE: spec <Object>
DESCRIPTION:
Spec defines the specification of the desired behavior of the ReplicaSet.
More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
ReplicaSetSpec is the specification of a ReplicaSet.
FIELDS:
minReadySeconds <integer>
Minimum number of seconds for which a newly created pod should be ready
without any of its container crashing, for it to be considered available.
Defaults to 0 (pod will be considered available as soon as it is ready)
replicas <integer>
Replicas is the number of desired replicas. This is a pointer to
distinguish between explicit zero and unspecified. Defaults to 1. More
info:
https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller
selector <Object> -required-
Selector is a label query over pods that should match the replica count.
Label keys and values that must match in order to be controlled by this
replica set. It must match the pod template's labels. More info:
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
template <Object>
Template is the object that describes the pod that will be created if
insufficient replicas are detected. More info:
https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template
The spec of a ReplicaSet typically nests four fields: minReadySeconds, replicas, selector and template.
1) minReadySeconds <integer>: how long a newly created Pod must be ready, without any of its containers crashing, before it is considered available; defaults to 0, meaning the Pod counts as available as soon as its readiness probe succeeds.
2) replicas <integer>: the desired number of Pod replicas.
3) selector <Object> -required-: the label selector this controller uses to match its Pod replicas; both the matchLabels and matchExpressions mechanisms are supported (see the sketch after this list).
4) template <Object>: the Pod template used when missing replicas need to be created.
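For reference, a minimal sketch of the matchExpressions form mentioned in item 3 (the manifest below uses matchLabels instead); the key and values here are illustrative:

selector:
  matchExpressions:
  - key: tier              # label key to evaluate
    operator: In           # one of In, NotIn, Exists, DoesNotExist
    values:
    - frontend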
1. Writing a ReplicaSet manifest
[root@k8s-master1 ~]# mkdir replicaset
[root@k8s-master1 ~]# cd replicaset/
[root@k8s-master1 replicaset]# ll
total 0
[root@k8s-master1 replicaset]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 1 23h
[root@k8s-master1 replicaset]# vim replicaset-demo.yaml
[root@k8s-master1 replicaset]# cat replicaset-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet            # the resource type to create
metadata:
  name: frontend            # name of the controller
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 3               # number of Pod replicas to manage
  selector:
    matchLabels:
      tier: frontend        # manage Pods carrying the tier=frontend label
  template:                 # the Pod template
    metadata:
      labels:
        tier: frontend      # Pod label; the controller finds and manages its Pods by this label
    spec:
      containers:           # containers to run in the Pod
      - name: php-redis     # container name
        image: yecc/gcr.io-google_samples-gb-frontend:v3
        imagePullPolicy: IfNotPresent
2. Creating the ReplicaSet resource
[root@k8s-master1 replicaset]# kubectl apply -f replicaset-demo.yaml
replicaset.apps/frontend created
[root@k8s-master1 replicaset]# kubectl get rs -o wide
NAME       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                      SELECTOR
frontend   3         3         3       16s   php-redis    yecc/gcr.io-google_samples-gb-frontend:v3   tier=frontend
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-5vghz   1/1     Running   0          23s   10.244.169.141   k8s-node2   <none>           <none>
frontend-f62dw   1/1     Running   0          23s   10.244.36.108    k8s-node1   <none>           <none>
frontend-z7kng   1/1     Running   0          23s   10.244.36.107    k8s-node1   <none>           <none>
The Pod names are built from the controller name plus a random suffix. Pods created through a controller work the same as Pods the user creates directly, but the controller's automatic reconciliation saves a great deal of management effort and is the main mechanism that gives applications on Kubernetes their self-healing ability.
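To confirm which controller a given Pod belongs to, you can inspect its metadata.ownerReferences; a sketch, using one of the Pod names from the session above:

kubectl get pod frontend-5vghz -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# expected to print: ReplicaSet/frontend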
III. Pod objects under ReplicaSet management
In practice there are many ways the actual Pod count can drift from the desired value: Pods deleted by accident, Pod labels changed, the controller's label selector changed, or even worker-node failures. The ReplicaSet controller's reconciliation loop detects such anomalies in real time and promptly starts corrective action.
1. Missing Pod replicas
Whatever the cause, any loss of a managed Pod is made up automatically by the ReplicaSet controller. For example, manually delete one of the Pods listed above:
[root@k8s-master1 replicaset]# kubectl delete pods frontend-f62dw
pod "frontend-f62dw" deleted
Listing the Pods again shows that frontend-f62dw is gone but a new replica, frontend-sfzjr, has been created in its place:
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-5vghz   1/1     Running   0          14m   10.244.169.141   k8s-node2   <none>           <none>
frontend-sfzjr   1/1     Running   0          7s    10.244.36.109    k8s-node1   <none>           <none>
frontend-z7kng   1/1     Running   0          14m   10.244.36.107    k8s-node1   <none>           <none>
Additionally, forcibly changing the labels of a Pod belonging to the frontend controller removes it from the controller's replica count, which likewise triggers the missing-replica remediation. For example, set the tier label of frontend-z7kng to an empty value:
[root@k8s-master1 replicaset]# kubectl label pods frontend-z7kng tier= --overwrite
pod/frontend-z7kng labeled
Watching the Pod objects shows a new replica being created:
[root@k8s-master1 replicaset]# kubectl get pods -w
NAME             READY   STATUS              RESTARTS   AGE
frontend-5vghz   1/1     Running             0          24m
frontend-sfzjr   1/1     Running             0          10m
frontend-z7kng   1/1     Running             0          24m
frontend-z7kng   1/1     Running             0          26m
frontend-z7kng   1/1     Running             0          26m
frontend-4852w   0/1     Pending             0          0s
frontend-4852w   0/1     Pending             0          0s
frontend-4852w   0/1     ContainerCreating   0          0s
frontend-4852w   0/1     ContainerCreating   0          1s
frontend-4852w   1/1     Running             0          2s
Listing the Pods that belong to the frontend controller shows that frontend-z7kng no longer appears (its label no longer matches) and the new replica frontend-4852w has been created:
[root@k8s-master1 replicaset]# kubectl get pods -l tier=frontend
NAME             READY   STATUS    RESTARTS   AGE
frontend-4852w   1/1     Running   0          4m12s
frontend-5vghz   1/1     Running   0          30m
frontend-sfzjr   1/1     Running   0          16m
This shows that changing a Pod's labels is enough to move it out from under its controller. If the modified labels happen to be matched by another controller's label selector, the Pod becomes a replica of that other controller instead. If, after the relabeling, the Pod no longer belongs to any controller, it becomes a standalone Pod, and an accidental deletion or a failure of its node means it is gone for good.
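Conversely (a sketch, not run in this session): restoring a matching label would hand the orphaned Pod back to the controller, assuming the controller is below its desired count; if the count is already full, one surplus Pod would be removed instead, as the next subsection shows.

kubectl label pods frontend-z7kng tier=frontend --overwrite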
[root@k8s-master1 replicaset]# kubectl delete pods frontend-z7kng
pod "frontend-z7kng" deleted
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-4852w   1/1     Running   0          10m   10.244.36.110    k8s-node1   <none>           <none>
frontend-5vghz   1/1     Running   0          37m   10.244.169.141   k8s-node2   <none>           <none>
frontend-sfzjr   1/1     Running   0          23m   10.244.36.109    k8s-node1   <none>           <none>
2. Excess Pod replicas
Whenever the number of Pods matched by the label selector exceeds the desired value, for whatever reason, the surplus is deleted automatically by the controller. For example, create a Pod labeled tier=frontend:
[root@k8s-master1 pod]# vim pod-test.yaml
[root@k8s-master1 pod]# cat pod-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-test
  labels:
    tier: frontend
spec:
  containers:
  - name: nginx-test
    ports:
    - containerPort: 80
    image: nginx:latest
    imagePullPolicy: IfNotPresent
[root@k8s-master1 pod]# kubectl apply -f pod-test.yaml
pod/pod-test created
Listing the related Pods again shows that the frontend controller has started deleting the surplus Pod: pod-test is in the process of terminating.
[root@k8s-master1 replicaset]# kubectl get pods -w
NAME             READY   STATUS        RESTARTS   AGE
frontend-4852w   1/1     Running       0          17m
frontend-5vghz   1/1     Running       0          43m
frontend-sfzjr   1/1     Running       0          29m
pod-test         0/1     Pending       0          0s
pod-test         0/1     Pending       0          0s
pod-test         0/1     Pending       0          0s
pod-test         0/1     Terminating   0          0s
pod-test         0/1     Terminating   0          0s
pod-test         0/1     Terminating   0          1s
pod-test         0/1     Terminating   0          3s
pod-test         0/1     Terminating   0          7s
pod-test         0/1     Terminating   0          7s
This means that if the labels of any Pod, whether standalone or belonging to some other controller, are changed such that they match a controller whose replica quota is already full, that Pod will be deleted.
3. Viewing events related to Pod changes
The "kubectl describe replicasets" command prints the controller's detailed status, which shows that the frontend controller has performed both Pod creations and deletions, precisely to keep the count accurate.
[root@k8s-master1 replicaset]# kubectl describe replicasets
Name: frontend
Namespace: default
Selector: tier=frontend
Labels: app=guestbook
tier=frontend
Annotations: <none>
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: tier=frontend
Containers:
php-redis:
Image: yecc/gcr.io-google_samples-gb-frontend:v3
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 49m replicaset-controller Created pod: frontend-z7kng
Normal SuccessfulCreate 49m replicaset-controller Created pod: frontend-5vghz
Normal SuccessfulCreate 49m replicaset-controller Created pod: frontend-f62dw
Normal SuccessfulCreate 35m replicaset-controller Created pod: frontend-sfzjr
Normal SuccessfulCreate 23m replicaset-controller Created pod: frontend-4852w
Normal SuccessfulDelete 5m40s replicaset-controller Deleted pod: pod-test
In fact, the ReplicaSet controller can react to Pod count anomalies so promptly because it registers a watch with the API server on the relevant resources and their lists, and the API server notifies the watching clients as soon as a change occurs.
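That watch mechanism is also exposed to ordinary clients. As an illustrative sketch (the URL follows the standard apps/v1 path layout), the same change notifications can be streamed through the API server:

# Each line emitted is a JSON watch event (ADDED/MODIFIED/DELETED)
kubectl get --raw "/apis/apps/v1/namespaces/default/replicasets?watch=true"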
IV. Updating a ReplicaSet controller
The core parts of a ReplicaSet controller are the label selector, the replica count and the Pod template, but updates generally revolve around the replicas and template fields, since the need to change the label selector hardly ever arises. Changing the Pod template has no effect on Pods that already exist, but once the user shuts down the old-version Pods one by one, new Pods replace the old, effectively giving the application under the controller a rolling upgrade. Changing the replica count, in turn, means growing or shrinking the application's scale.
1. Changing the Pod template: upgrading the application
A ReplicaSet's Pod template can be modified at any time, but the change only affects Pods created afterwards; existing replicas are untouched. In most cases what needs changing is the container image and its related configuration in the template, in order to upgrade the application version. For example, change the image from yecc/gcr.io-google_samples-gb-frontend:v3 to ikubernetes/myapp:v1:
[root@k8s-master1 replicaset]# vim replicaset-demo.yaml
[root@k8s-master1 replicaset]# cat replicaset-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
[root@k8s-master1 replicaset]# kubectl apply -f replicaset-demo.yaml
replicaset.apps/frontend configured
[root@k8s-master1 replicaset]# kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
frontend 3 3 3 22h myapp ikubernetes/myapp:v1 tier=frontend
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
frontend-4852w 1/1 Running 1 22h 10.244.36.112 k8s-node1 <none> <none>
frontend-5vghz 1/1 Running 1 22h 10.244.169.142 k8s-node2 <none> <none>
frontend-sfzjr 1/1 Running 1 22h 10.244.36.113 k8s-node1 <none> <none>
The rs listing above now shows the image ikubernetes/myapp:v1, i.e. the template update has been applied to the ReplicaSet. The existing Pods, however, are unaffected:
[root@k8s-master1 replicaset]# kubectl describe pod frontend-4852w
Name: frontend-4852w
Namespace: default
Priority: 0
Node: k8s-node1/10.0.0.132
Start Time: Mon, 05 Sep 2022 23:46:56 +0800
Labels: tier=frontend
Annotations: cni.projectcalico.org/podIP: 10.244.36.112/32
cni.projectcalico.org/podIPs: 10.244.36.112/32
Status: Running
IP: 10.244.36.112
IPs:
IP: 10.244.36.112
Controlled By: ReplicaSet/frontend
Containers:
php-redis:
Container ID: docker://a2f3b94bf3c08d0226b9f0e40e7ca6eac0e71f733292b372853d9122f002147f
Image: yecc/gcr.io-google_samples-gb-frontend:v3
Image ID: docker://sha256:c038466384ab3c5c743186b1b85d14d4e523c91d0f328482764c5d448689fc9b
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 06 Sep 2022 21:43:33 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 05 Sep 2022 23:46:57 +0800
Finished: Tue, 06 Sep 2022 00:17:40 +0800
Ready: True
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5n29f (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-5n29f:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5n29f
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22h default-scheduler Successfully assigned default/frontend-4852w to k8s-node1
Normal Pulled 22h kubelet Container image "yecc/gcr.io-google_samples-gb-frontend:v3" already present on machine
Normal Created 22h kubelet Created container php-redis
Normal Started 22h kubelet Started container php-redis
Normal SandboxChanged 15m (x3 over 16m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 14m kubelet Container image "yecc/gcr.io-google_samples-gb-frontend:v3" already present on machine
Normal Created 14m kubelet Created container php-redis
Normal Started 14m kubelet Started container php-redis
As shown above, although the template image has been updated, the existing Pod is still running the old image: the change has no effect on existing Pods, and only newly created Pods will use the new image.
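One quick way to verify this is a custom-columns listing of each Pod and the image of its first container; a sketch, not part of the original session:

kubectl get pods -l tier=frontend \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image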
Deleting the Pod frontend-4852w causes a new Pod, frontend-xvmmw, to be generated in its place:
[root@k8s-master1 replicaset]# kubectl delete pods frontend-4852w
pod "frontend-4852w" deleted
[root@k8s-master1 ~]# kubectl get pods -w
NAME             READY   STATUS              RESTARTS   AGE
frontend-4852w   1/1     Running             1          22h
frontend-5vghz   1/1     Running             1          22h
frontend-sfzjr   1/1     Running             1          22h
frontend-4852w   1/1     Terminating         1          22h
frontend-xvmmw   0/1     Pending             0          0s
frontend-xvmmw   0/1     Pending             0          0s
frontend-xvmmw   0/1     ContainerCreating   0          0s
frontend-4852w   1/1     Terminating         1          22h
frontend-4852w   0/1     Terminating         1          22h
frontend-xvmmw   0/1     ContainerCreating   0          2s
frontend-xvmmw   1/1     Running             0          3s
frontend-4852w   0/1     Terminating         1          22h
frontend-4852w   0/1     Terminating         1          22h
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-5vghz   1/1     Running   1          22h   10.244.169.142   k8s-node2   <none>           <none>
frontend-sfzjr   1/1     Running   1          22h   10.244.36.113    k8s-node1   <none>           <none>
frontend-xvmmw   1/1     Running   0          88s   10.244.36.114    k8s-node1   <none>           <none>
Inspecting the new Pod frontend-xvmmw shows that it uses the new image, ikubernetes/myapp:v1:
[root@k8s-master1 replicaset]# kubectl describe pod frontend-xvmmw
Name: frontend-xvmmw
Namespace: default
Priority: 0
Node: k8s-node1/10.0.0.132
Start Time: Tue, 06 Sep 2022 22:01:23 +0800
Labels: tier=frontend
Annotations: cni.projectcalico.org/podIP: 10.244.36.114/32
cni.projectcalico.org/podIPs: 10.244.36.114/32
Status: Running
IP: 10.244.36.114
IPs:
IP: 10.244.36.114
Controlled By: ReplicaSet/frontend
Containers:
myapp:
Container ID: docker://89404a0af64081d3e2e50bfa07e6f24240f1c339519d2ba1c3f1c5b405eb439a
Image: ikubernetes/myapp:v1
Image ID: docker://sha256:d4a5e0eaa84f28550cb9dd1bde4bfe63a93e3cf88886aa5dad52c9a75dd0e6a9
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 06 Sep 2022 22:01:25 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5n29f (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-5n29f:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5n29f
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m37s default-scheduler Successfully assigned default/frontend-xvmmw to k8s-node1
Normal Pulled 2m35s kubelet Container image "ikubernetes/myapp:v1" already present on machine
Normal Created 2m35s kubelet Created container myapp
Normal Started 2m35s kubelet Started container myapp
For a production upgrade you could delete one Pod, watch it for a while, and delete the next only once everything looks fine, but that requires repeated manual intervention. In practice, a blue-green release is the common approach: keep the existing ReplicaSet rs1, create a second ReplicaSet rs2 running the new version, and then modify the Service's labels so that the Service selector matches rs2's Pods; that is a true blue-green release.
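A minimal sketch of that blue-green switch, assuming the two ReplicaSets' Pod templates carry an extra version label (v1 for rs1, v2 for rs2); the Service name, port and labels here are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: frontend-svc        # illustrative Service name
spec:
  selector:
    tier: frontend
    version: v1             # change to v2 to cut all traffic over to rs2's Pods
  ports:
  - port: 80
    targetPort: 80

Editing spec.selector.version from v1 to v2 (for example with kubectl edit) moves traffic to the new ReplicaSet in one step, and setting it back rolls the release back.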
2. Scaling out and in
Changing the desired replica count (the replicas field) in the ReplicaSet's configuration is acted on by the controller in real time, giving the application horizontal scaling. kubectl also provides a dedicated scale subcommand for this purpose; it can take the new target replica count from a manifest file, or read it directly from the command line via the --replicas option. For example, raise the frontend controller's replica count to 4:
[root@k8s-master1 replicaset]# kubectl get rs -o wide
NAME       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
frontend   3         3         3       22h   myapp        ikubernetes/myapp:v1   tier=frontend
[root@k8s-master1 replicaset]# kubectl scale replicasets frontend --replicas=4
replicaset.apps/frontend scaled
[root@k8s-master1 replicaset]# kubectl get rs -o wide
NAME       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
frontend   4         4         3       22h   myapp        ikubernetes/myapp:v1   tier=frontend
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-5vghz   1/1     Running   1          22h   10.244.169.142   k8s-node2   <none>           <none>
frontend-c5m4f   1/1     Running   0          6s    10.244.36.115    k8s-node1   <none>           <none>
frontend-sfzjr   1/1     Running   1          22h   10.244.36.113    k8s-node1   <none>           <none>
frontend-xvmmw   1/1     Running   0          12m   10.244.36.114    k8s-node1   <none>           <none>
The frontend resource's status shows that scaling the Pod count out to 4 has completed. Scaling in works the same way: simply specify the new target replica count.
[root@k8s-master1 replicaset]# kubectl scale replicasets frontend --replicas=2
replicaset.apps/frontend scaled
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS        RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
frontend-5vghz   1/1     Running       1          22h     10.244.169.142   k8s-node2   <none>           <none>
frontend-c5m4f   0/1     Terminating   0          2m39s   <none>           k8s-node1   <none>           <none>
frontend-sfzjr   1/1     Running       1          22h     10.244.36.113    k8s-node1   <none>           <none>
frontend-xvmmw   0/1     Terminating   0          14m     10.244.36.114    k8s-node1   <none>           <none>
[root@k8s-master1 replicaset]# kubectl get rs -o wide
NAME       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
frontend   2         2         2       22h   myapp        ikubernetes/myapp:v1   tier=frontend
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-5vghz   1/1     Running   1          22h   10.244.169.142   k8s-node2   <none>           <none>
frontend-sfzjr   1/1     Running   1          22h   10.244.36.113    k8s-node1   <none>           <none>
In addition, kubectl scale can be made to scale only when the current replica count matches a specified value, by adding the --current-replicas option. For example, the following command scales the frontend controller to 5 replicas only if it currently has 2:
[root@k8s-master1 replicaset]# kubectl scale replicasets frontend --current-replicas=2 --replicas=5
replicaset.apps/frontend scaled
[root@k8s-master1 replicaset]# kubectl get rs -o wide
NAME       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
frontend   5         5         2       22h   myapp        ikubernetes/myapp:v1   tier=frontend
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-5vghz   1/1     Running   1          22h   10.244.169.142   k8s-node2   <none>           <none>
frontend-f8bz8   1/1     Running   0          7s    10.244.36.116    k8s-node1   <none>           <none>
frontend-pkqkw   1/1     Running   0          7s    10.244.36.118    k8s-node1   <none>           <none>
frontend-sfzjr   1/1     Running   1          22h   10.244.36.113    k8s-node1   <none>           <none>
frontend-wdrzh   1/1     Running   0          7s    10.244.36.117    k8s-node1   <none>           <none>
Note: if the controller's actual replica count does not match the value given on the command line, the scale operation is not executed and an error is returned:
[root@k8s-master1 replicaset]# kubectl scale replicasets frontend --current-replicas=4 --replicas=2
error: Expected replicas to be 4, was 5
Another way to scale dynamically is to edit the resource directly. For example, the frontend controller currently has 5 Pod replicas, which is too many; reduce it to 2, effective immediately:
[root@k8s-master1 replicaset]# kubectl get rs -o wide
NAME       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
frontend   5         5         5       23h   myapp        ikubernetes/myapp:v1   tier=frontend
[root@k8s-master1 replicaset]# kubectl edit replicasets frontend
replicaset.apps/frontend edited
[root@k8s-master1 replicaset]# kubectl get rs -o wide
NAME       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
frontend   2         2         2       23h   myapp        ikubernetes/myapp:v1   tier=frontend
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-5vghz   1/1     Running   1          23h   10.244.169.142   k8s-node2   <none>           <none>
frontend-sfzjr   1/1     Running   1          22h   10.244.36.113    k8s-node1   <none>           <none>
V. Deleting a ReplicaSet controller
Deleting a ReplicaSet object with kubectl delete also deletes the Pods it manages by default. Sometimes, though, those Pods may not have been created by the ReplicaSet, or, even if they were, they are not strictly a part of it. In such cases you can pass the --cascade=false option (or its replacement, --cascade=orphan) to disable cascading deletion and leave the Pods running. For example, delete the rs controller frontend:
[root@k8s-master1 replicaset]# kubectl delete replicasets frontend --cascade=false
warning: --cascade=false is deprecated (boolean value) and can be replaced with --cascade=orphan.
replicaset.apps "frontend" deleted
[root@k8s-master1 replicaset]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-5vghz   1/1     Running   1          23h   10.244.169.142   k8s-node2   <none>           <none>
frontend-sfzjr   1/1     Running   1          22h   10.244.36.113    k8s-node1   <none>           <none>
[root@k8s-master1 replicaset]# kubectl get rs -o wide
No resources found in default namespace.
After the deletion completes, the Pods previously managed by the frontend controller are still active, but they have become standalone Pods that the user must organize and maintain on their own.
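Note that a ReplicaSet adopts existing Pods whose labels match its selector, so re-applying the original manifest would put the orphans back under management rather than replacing them (a sketch; with replicas: 3 and two matching orphans, only one additional Pod would be created):

kubectl apply -f replicaset-demo.yaml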
Powerful as the ReplicaSet controller is, in practice an RS requires manual steps to perform updates, so it is rarely used on its own. It mainly serves the higher-level Deployment resource, whose controller implements more complete rolling updates and rollbacks automatically. Unless you need custom upgrade behavior, or never need to upgrade the Pods at all, the general recommendation is to use a Deployment rather than a ReplicaSet directly.
