21 - How kubeadm Works Under the Hood: Static Pods
1. Static Pod Overview
A static Pod is a Pod managed directly by the kubelet: the kubelet watches a specific directory, and whenever a Pod manifest appears there it creates that Pod on the current node. In other words, the Pod is created without going through the API Server at all.
Static Pods only work for resources of kind Pod; manifests for any other resource type in that directory are ignored.
Pods created this way get the current node's name appended to their names as a suffix.
2. The Static Pod Path
vim /var/lib/kubelet/config.yaml
...
staticPodPath: /etc/kubernetes/manifests
Notes:
(1) The static Pod directory is set by the "staticPodPath" parameter that the kubelet reads at startup.
(2) The name of a static Pod automatically gets the kubelet node's hostname appended, e.g. "-k8s231.oldboyedu.com"; the "nodeName" field is ignored.
(3) Static Pods do not depend on the API Server; the kubelet starts them directly on its own node.
(4) To delete a static Pod, simply remove its manifest from the directory specified by staticPodPath.
(5) The static Pod path only takes effect for resources of kind Pod; other resource types will not be created.
(6) A kubeadm deployment bootstraps the control plane exactly this way, using static Pods.
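To see notes (1) through (4) in action, here is a minimal sketch; the manifest name static-demo.yaml and the nginx image are illustrative placeholders, not part of the original cluster:
[root@master231 ~]# cat > /etc/kubernetes/manifests/static-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: static-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
EOF
[root@master231 ~]# kubectl get pods -o wide | grep static-demo        # the node name is appended, e.g. static-demo-master231
[root@master231 ~]# rm -f /etc/kubernetes/manifests/static-demo.yaml   # removing the manifest deletes the Pod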
3. Example
# Check the path of the static Pod manifests
[root@master231 ~]# grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
# Enter the directory
[root@master231 ~]# cd /etc/kubernetes/manifests/
[root@master231 manifests]#
[root@master231 manifests]# ll
total 24
drwxr-xr-x 2 root root 4096 Apr 7 11:00 ./
drwxr-xr-x 4 root root 4096 Apr 7 11:00 ../
-rw------- 1 root root 2280 Apr 7 11:00 etcd.yaml
-rw------- 1 root root 4025 Apr 7 11:00 kube-apiserver.yaml
-rw------- 1 root root 3546 Apr 7 11:00 kube-controller-manager.yaml
-rw------- 1 root root 1465 Apr 7 11:00 kube-scheduler.yaml
4. Inspecting the Manifests
The files in this directory are the manifests of the Kubernetes control-plane components. Let's take a look; note that "tier" is a label used to classify and group resources.
[root@master231 manifests]# head *
==> etcd.yaml <==
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://10.0.0.231:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd

==> kube-apiserver.yaml <==
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.0.0.231:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver

==> kube-controller-manager.yaml <==
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:

==> kube-scheduler.yaml <==
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
Now filter by namespace and this label to see which of these Pods are running:
[root@master231 manifests]# kubectl get pods -o wide -n kube-system -l tier=control-plane
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
etcd-master231 1/1 Running 1 (2d4h ago) 2d5h 10.0.0.231 master231 <none> <none>
kube-apiserver-master231 1/1 Running 1 (2d4h ago) 2d5h 10.0.0.231 master231 <none> <none>
kube-controller-manager-master231 1/1 Running 1 (2d4h ago) 2d5h 10.0.0.231 master231 <none> <none>
kube-scheduler-master231 1/1 Running 1 (2d4h ago) 2d5h 10.0.0.231 master231 <none> <none>
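The entries above are the "mirror Pods" that the kubelet registers with the API Server so that static Pods are visible through kubectl. Deleting a mirror Pod via the API Server does not stop the underlying static Pod; the kubelet recreates it almost immediately. A quick sketch (output abridged):
[root@master231 manifests]# kubectl -n kube-system delete pod kube-scheduler-master231
pod "kube-scheduler-master231" deleted
[root@master231 manifests]# kubectl -n kube-system get pods -l tier=control-plane | grep kube-scheduler
kube-scheduler-master231            1/1     Running   ...     # back right away; only the manifest file controls its lifecycle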
5. Changing the Service NodePort Range via the Static Pod Manifest
1. The default NodePort range is 30000-32767
[root@master231 service]# cat 02-svc-NodePort-xiuxian.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    apps: xiuxian
  name: svc-xiuxian-nodeport
spec:
  type: NodePort
  ports:
  - port: 90
    protocol: TCP
    targetPort: 80
    # Declare the forwarding port on the worker nodes; the default valid range is 30000-32767
    # nodePort: 30080
    nodePort: 8080
  selector:
    version: v1
# The apply fails, and the error message tells us why
[root@master231 service]# kubectl apply -f 02-svc-NodePort-xiuxian.yaml
The Service "svc-xiuxian-nodeport" is invalid: spec.ports[0].nodePort: Invalid value: 8080: provided port is not in the valid range. The range of valid ports is 30000-32767
2. Change the default NodePort range
Recommended reading:
https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/
[root@master231 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=3000-50000    # add this line
...
3. Move the manifest out of and back into the static Pod directory so the Pod restarts and the change takes effect
[root@master231 manifests]# pwd
/etc/kubernetes/manifests
[root@master231 manifests]#
[root@master231 manifests]# mv kube-apiserver.yaml /opt/
[root@master231 manifests]#
[root@master231 manifests]# mv /opt/kube-apiserver.yaml ./
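While kube-apiserver.yaml sits in /opt/, the API Server is stopped, so kubectl commands briefly fail; once the file is moved back, the kubelet recreates the Pod. A simple way to confirm the sequence:
[root@master231 manifests]# kubectl get nodes    # while the manifest is outside the directory
The connection to the server 10.0.0.231:6443 was refused - did you specify the right host or port?
[root@master231 manifests]# kubectl get nodes    # works again once the kubelet has restarted kube-apiserver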
4. Check the kube-apiserver Pod's start time again
# kube-apiserver-master231 has been restarted (fresh AGE, RESTARTS reset to 0)
[root@master231 manifests]# kubectl get pods -o wide -n kube-system -l tier=control-plane
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
etcd-master231 1/1 Running 1 (2d5h ago) 2d5h 10.0.0.231 master231 <none> <none>
kube-apiserver-master231 1/1 Running 0 57s 10.0.0.231 master231 <none> <none>
kube-controller-manager-master231 1/1 Running 2 (85s ago) 2d5h 10.0.0.231 master231 <none> <none>
kube-scheduler-master231 1/1 Running 2 (85s ago) 2d5h 10.0.0.231 master231 <none> <none>
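You can also confirm that the new flag is in effect by checking the running Pod's spec; a quick sanity check:
[root@master231 manifests]# kubectl -n kube-system get pod kube-apiserver-master231 -o yaml | grep service-node-port-range
    - --service-node-port-range=3000-50000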
5. Test the Service NodePort range again
[root@master231 service]# cat 02-svc-NodePort-xiuxian.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    apps: xiuxian
  name: svc-xiuxian-nodeport
spec:
  type: NodePort
  ports:
  - port: 90
    protocol: TCP
    targetPort: 80
    # Declare the forwarding port on the worker nodes; the default valid range is 30000-32767
    # nodePort: 30080
    nodePort: 8080
  selector:
    version: v1
[root@master231 service]#
[root@master231 service]# kubectl apply -f 02-svc-NodePort-xiuxian.yaml
service/svc-xiuxian-nodeport configured
[root@master231 service]#
[root@master231 service]# kubectl get -f 02-svc-NodePort-xiuxian.yaml
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-xiuxian-nodeport NodePort 10.200.21.38 <none> 90:8080/TCP 23h
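Assuming the xiuxian backend Pods (label version: v1, container port 80) are running, the new nodePort can now be reached on any node IP, for example:
[root@master231 service]# curl -I http://10.0.0.231:8080    # should return the backend's response, e.g. HTTP/1.1 200 OK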