RS: The purpose of a ReplicaSet is to maintain a stable set of replica Pods running at any given time. It is therefore often used to guarantee the availability of a specified number of identical Pods. It is also the building block underneath the Deployment resource, which relies on it for replica stability.

RS resource example
[root@k8smaster01 ~]# cat nginx-rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  # number of Pod replicas
  replicas: 3
  # label selector: decides which Pods (matched by label) this RS manages
  selector:
    matchLabels:
      app: nginx
      release: stable
  # Pod template used to create the replicas
  template:
    metadata:
      labels:
        app: nginx
        release: stable
    spec:
      containers:
      - name: nginx-pod
        image: nginx

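To apply the manifest above and confirm that the three replicas come up, a minimal sketch (commands only; the generated Pod names and ages differ on every cluster):
[root@k8smaster01 ~]# kubectl apply -f nginx-rs.yaml
[root@k8smaster01 ~]# kubectl get rs nginx-rs
[root@k8smaster01 ~]# kubectl get pods -l app=nginx,release=stable
# DESIRED, CURRENT and READY in the rs output should all reach 3 once the Pods are running
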
Scale out && scale in
# Method 1: edit the live object
[root@k8smaster01 ~]# kubectl edit rs nginx-rs
# Method 2: change replicas in the manifest and re-apply
[root@k8smaster01 ~]# vim nginx-rs.yaml
[root@k8smaster01 ~]# kubectl apply -f nginx-rs.yaml
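A third option, assuming the RS is named nginx-rs as above, is the imperative scale command (the same command is used for Deployments further below):
[root@k8smaster01 ~]# kubectl scale rs nginx-rs --replicas=5
# scale back in the same way, e.g. --replicas=3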


DP: A Deployment manages a set of Pods that run an application workload, usually one that does not keep state.
In addition, a Deployment provides declarative update capabilities for Pods and ReplicaSets.

DP resource example
[root@k8smaster01 ~]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

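After applying the manifest it is easy to see the ReplicaSet that the Deployment creates and manages on its behalf (a sketch, commands only; the hash suffix in the ReplicaSet name is generated from the Pod template):
[root@k8smaster01 ~]# kubectl apply -f nginx-deployment.yaml
[root@k8smaster01 ~]# kubectl get deployment,rs,pods -l app=nginx
# the ReplicaSet is named nginx-deployment-<pod-template-hash> and owns the three Pods
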
Scale out && scale in
[root@k8smaster01 ~]# kubectl scale --replicas=10 deployment nginx-deployment
deployment.apps/nginx-deployment scaled
[root@k8smaster01 ~]# kubectl get deployments.apps
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   10/10   10           10          50m
[root@k8smaster01 ~]# kubectl scale --replicas=3 deployment nginx-deployment
deployment.apps/nginx-deployment scaled
[root@k8smaster01 ~]# kubectl get deployments.apps
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           50m

Update && rollback
- Update strategies
[root@k8smaster01 ~]# kubectl explain deployment.spec.strategy.type
Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate.
Recreate:
  - All existing Pods are deleted first, and only then are new Pods created from the updated template, so the application is briefly unavailable while the update runs.
  - Trigger: the existing Pods are deleted and replaced in one step.

RollingUpdate:
  - Old Pods are removed in small batches and the missing replicas are re-created from the new template, so part of the replicas stays available throughout the update (the tunables are sketched after this list).
  - Trigger: the pod-template-hash changes, i.e. any change to the Pod template.

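A minimal sketch of how the strategy and the rolling-update tunables are declared in the Deployment spec (the maxSurge/maxUnavailable values here are only illustrative; the defaults are 25% each):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra Pod above the desired replica count during the update
      maxUnavailable: 1      # at most 1 Pod may be unavailable during the update
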
- Update methods
Method 1: via the set command
[root@k8smaster01 ~]# kubectl set image deployment/nginx-deployment nginx=nginx:1.16
deployment.apps/nginx-deployment image updated
[root@k8smaster01 ~]# kubectl rollout status deployment nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out

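The CHANGE-CAUSE column in the rollout history (used further below) stays <none> unless a change cause is recorded. One way, as a sketch, is to set the kubernetes.io/change-cause annotation on the Deployment around the time of the change so it gets attached to that revision:
[root@k8smaster01 ~]# kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="update image to nginx:1.16"
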
Method 2: via the patch command
# check the current image version
[root@k8smaster01 ~]# kubectl get deployment nginx-deployment -o=jsonpath="{.spec.template.spec.containers[*].image}"

# update the image with a JSON patch
[root@k8smaster01 ~]# kubectl patch deployment nginx-deployment --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"nginx:1.19"}]'
# "op": "replace" marks this as a replace operation.
# "path": "/spec/template/spec/containers/0/image" is the field to update; containers/0 is the first container (indexing starts at 0).
# "value": "nginx:1.19" is the new image version.

deployment.apps/nginx-deployment patched
[root@k8smaster01 ~]# kubectl rollout status deployment nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
[root@k8smaster01 ~]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
nginx-deployment-7f89bcdfd4-qtwnm   1/1     Running   0          12s   10.100.1.119   k8snode01.com   <none>           <none>
nginx-deployment-7f89bcdfd4-vq224   1/1     Running   0          9s    10.100.1.120   k8snode01.com   <none>           <none>
nginx-deployment-7f89bcdfd4-zhs9p   1/1     Running   0          10s   10.100.2.39    k8snode02.com   <none>           <none>
[root@k8smaster01 ~]# curl 10.100.1.119 -I
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Thu, 23 Jan 2025 05:51:50 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes

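For a single-container Deployment the same change can also be expressed as a strategic merge patch, which matches the container by name instead of by index (a sketch; the name must match the container in the Pod template, here nginx):
[root@k8smaster01 ~]# kubectl patch deployment nginx-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"nginx:1.19"}]}}}}'
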
Method 3: edit the resource manifest and apply it
[root@k8smaster01 ~]# cat nginx-deployment.yaml
.....
        image: nginx:1.19
.....
[root@k8smaster01 ~]# kubectl apply -f nginx-deployment.yaml

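Before applying, kubectl diff can be used to preview exactly what would change on the cluster (a sketch; the command exits non-zero when differences are found, which is expected here):
[root@k8smaster01 ~]# kubectl diff -f nginx-deployment.yaml
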
- Checking an update
Rollout status:
[root@k8smaster01 ~]# kubectl rollout status deployment nginx-deployment
Rollout history:
[root@k8smaster01 ~]# kubectl rollout history deployment nginx-deployment

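To inspect what a particular revision contained (its Pod template and image), the history command accepts a --revision flag; a sketch, assuming revision 1 exists:
[root@k8smaster01 ~]# kubectl rollout history deployment nginx-deployment --revision=1
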
- Rollback
Roll back to the previous revision
[root@k8smaster01 ~]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
3         <none>
4         <none>

[root@k8smaster01 ~]# kubectl rollout undo deployment nginx-deployment
deployment.apps/nginx-deployment rolled back

Roll back to a specific revision
[root@k8smaster01 ~]# kubectl rollout undo deployment nginx-deployment --to-revision=1
deployment.apps/nginx-deployment rolled back

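After an undo, the same jsonpath query used earlier is a quick way to confirm which image the Deployment is now running (a sketch; the rollback itself also shows up as a new revision in the history):
[root@k8smaster01 ~]# kubectl get deployment nginx-deployment -o=jsonpath="{.spec.template.spec.containers[*].image}"
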


DS resource:
A DaemonSet (DS) has a structure similar to a Deployment, except that it has no replica count; instead, one Pod is run on every Node, which makes it a good fit for node-level services such as log collection or monitoring agents.

DS resource example
[root@k8smaster01 ~]# cat node-exporter.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-demo
  namespace: default
spec:
  selector:
    matchLabels:
      app: prometheus
      component: node-exporter
  template:
    metadata:
      labels:
        app: prometheus
        component: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - image: prom/node-exporter:v1.2.0
        name: prometheus-node-exporter
        ports:
        - name: prom-node-exp
          containerPort: 9100
          hostPort: 9100
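
A sketch of applying the DaemonSet and confirming that one Pod lands on each eligible node (commands only; DESIRED and CURRENT in the ds output should equal the number of eligible nodes, and because hostNetwork is true each Pod's IP is the node's own address):
[root@k8smaster01 ~]# kubectl apply -f node-exporter.yaml
[root@k8smaster01 ~]# kubectl get ds daemonset-demo
[root@k8smaster01 ~]# kubectl get pods -l component=node-exporter -o wide
[root@k8smaster01 ~]# curl -s http://<node-ip>:9100/metrics | head
# <node-ip> is a placeholder for the address of any node running the exporter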