Assignment 6

I. Summary: how Pods resolve domain names via CoreDNS

1. View the services in the current cluster

root@k8s-master1-120:~# kubectl get services -A
NAMESPACE     NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
default       kubernetes   ClusterIP   10.100.0.1       <none>        443/TCP                                        12d
kube-system   kube-dns     ClusterIP   10.100.0.2       <none>        53/UDP,53/TCP,9153/TCP                         49m
kuboard       kuboard-v3   NodePort    10.100.110.110   <none>        80:30080/TCP,10081:30081/TCP,10081:30081/UDP   10d

2. Look up a service in the same namespace

[root@net-test2 /]# nslookup kubernetes
Server:		10.100.0.2
Address:	10.100.0.2#53

Name:	kubernetes.default.svc.cluster.local
Address: 10.100.0.1

3. Look up a service in a different namespace

[root@net-test2 /]# nslookup kube-dns
Server:		10.100.0.2
Address:	10.100.0.2#53

** server can't find kube-dns: NXDOMAIN

[root@net-test2 /]# nslookup kube-dns.kube-system
Server:		10.100.0.2
Address:	10.100.0.2#53

Name:	kube-dns.kube-system.svc.cluster.local
Address: 10.100.0.2

4. Look up a name outside the cluster

[root@net-test2 /]# nslookup www.baidu.com
Server:		10.100.0.2
Address:	10.100.0.2#53

Non-authoritative answer:
www.baidu.com	canonical name = www.a.shifen.com.
Name:	www.a.shifen.com
Address: 14.119.104.254
Name:	www.a.shifen.com
Address: 14.119.104.189

5. Summary

[root@net-test2 /]# cat /etc/resolv.conf 
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.100.0.2
options ndots:5

- When a Pod resolves a domain name, the query is always sent first to the CoreDNS cluster IP configured as the nameserver in resolv.conf, whether or not the name belongs to an in-cluster service
- If the name refers to an in-cluster service, it is expanded with each search domain in turn until it resolves, e.g. kubernetes.default.svc.cluster.local -> kubernetes.svc.cluster.local -> kubernetes.cluster.local
- If the name is not an in-cluster service, CoreDNS forwards the query to the upstream servers defined by the forward directive in its configuration and returns the resolved address
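
The forwarding behavior in the last step comes from the CoreDNS Corefile, stored in the coredns ConfigMap in kube-system. A typical Corefile looks roughly like the sketch below (plugins and the upstream target vary by cluster; here non-cluster queries are forwarded to the node's /etc/resolv.conf):

.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}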

II. Summary of the rc, rs, and Deployment controllers

1. ReplicationController (https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/)

A ReplicationController ensures that a specified number of Pod replicas are running at all times.

When there are too many Pods, the ReplicationController (rc for short) terminates the extras; when there are too few, it starts new ones. Unlike manually created Pods, the Pods an rc maintains are automatically replaced if they fail, are deleted, or are terminated, and an rc monitors multiple Pods across multiple nodes rather than a single process on a single node. The rc controller filters Pods by label key and value using equality-based requirements (equals / not-equals).

Labels and selectors: https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/labels/

Example:

root@k8s-ha2-deploy-239:~/case-example# cat rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 2
  selector:
    app: ng-rc-80
  template:
    metadata:
      labels:
        app: ng-rc-80
    spec:
      containers:
        - name: ng-rc-80
          image: nginx
          ports:
            - containerPort: 80
root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f rc.yaml 
replicationcontroller/nginx-rc created

root@k8s-ha2-deploy-239:~/case-example# kubectl get pods -o wide | grep nginx
nginx-rc-2gqfk         1/1     Running     0               83s     10.200.19.18   10.243.20.240   <none>           <none>
nginx-rc-kfcw2         1/1     Running     0               83s     10.200.21.7    10.243.20.242   <none>           <none>

root@k8s-ha2-deploy-239:~/case-example# kubectl get rc
NAME       DESIRED   CURRENT   READY   AGE
nginx-rc   2         2         2       14m
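
An rc can be scaled in place without editing the manifest; a quick sketch (not part of the original run):

kubectl scale rc nginx-rc --replicas=3    # grow to 3 replicas
kubectl get rc nginx-rc                   # DESIRED/CURRENT should now show 3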

2. ReplicaSet (https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicaset/)

The purpose of a ReplicaSet is to maintain a stable set of replica Pods running at any given time.

In addition to everything a ReplicationController offers, a ReplicaSet also supports set-based selectors (in, notin) to identify the Pods it may acquire, as in the fragment below.
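
For illustration, a set-based selector fragment might look like this (hypothetical labels; the rs.yaml below sticks to matchLabels):

  selector:
    matchExpressions:
      - key: app
        operator: In
        values: ["ng-rs-80", "ng-rs-81"]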

Labels and selectors: https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/labels/

Example:

root@k8s-ha2-deploy-239:~/case-example# cat rs.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ng-rs-80
  template:
    metadata:
      labels:
        app: ng-rs-80
    spec:
      containers:
        - name: ng-rs-80
          image: nginx
          ports:
            - containerPort: 80
root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f rs.yaml
replicaset.apps/nginx-rs created

root@k8s-ha2-deploy-239:~/case-example# kubectl get rs
NAME       DESIRED   CURRENT   READY   AGE
nginx-rs   2         2         2       29s


root@k8s-ha2-deploy-239:~/case-example# kubectl get pods -o wide| grep nginx
nginx-rs-gfst7    1/1     Running     0     51s     10.200.19.20   10.243.20.240   <none>       <none>
nginx-rs-xbmbp    1/1     Running     0     51s     10.200.21.8    10.243.20.242   <none>       <none>

3. The Deployment controller

Reference: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/deployment/

Deployment is a higher-level controller than rs: in addition to everything rs offers, it adds features such as rolling updates and rollbacks.

A Deployment creates and maintains Pods by driving ReplicaSets; image updates, code rollbacks, canary releases and similar operations are orchestrated by the Deployment itself. For example, suppose a Deployment has created a ReplicaSet that maintains 3 Pod replicas. When the code needs upgrading, the Deployment creates a new ReplicaSet from the new image to bring up Pods and maintain the replica count; the old Pods are deleted, but the old ReplicaSet is kept for a possible later rollback. On rollback, the Deployment reactivates the previous ReplicaSet to recreate the Pods and maintain the replica count.

Example:

root@k8s-ha2-deploy-239:~/case-example# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
        - name: ng-deploy-80
          image: nginx:1.20.2
          ports:
            - containerPort: 80
root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f deployment.yaml 
deployment.apps/nginx-deployment created
root@k8s-ha2-deploy-239:~/case-example# kubectl get deployments.apps 
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0/2     2            0           10s
root@k8s-ha2-deploy-239:~/case-example# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-577df84c7c   2         2         0       19s

# Update the nginx image, upgrading from the pinned version to the latest
root@k8s-ha2-deploy-239:~/case-example# kubectl set image deployment/nginx-deployment ng-deploy-80=nginx
deployment.apps/nginx-deployment image updated
# Check the rs list: a new rs was created and the old one is still retained
root@k8s-ha2-deploy-239:~/case-example# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-577df84c7c   0         0         0       7m12s
nginx-deployment-6c64cd96cb   2         2         2       49s

# Roll back
root@k8s-ha2-deploy-239:~/case-example# kubectl rollout undo deployment/nginx-deployment -n default
deployment.apps/nginx-deployment rolled back
root@k8s-ha2-deploy-239:~/case-example# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-577df84c7c   2         2         2       10m
nginx-deployment-6c64cd96cb   0         0         0       4m15s
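
Rollbacks can also target a specific revision; a sketch (revision numbers depend on the rollout history):

kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=1
kubectl rollout status deployment/nginx-deployment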

III. The access flow of a NodePort service (with diagram)

Official docs: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/

In k8s, a Pod's IP changes whenever the Pod is rebuilt, so Pods that communicate directly by IP would lose connectivity. A service decouples consumers from the application: it is implemented by dynamically matching backend endpoints via label selectors.

kube-proxy watches the k8s-apiserver; whenever a service resource changes (i.e. the k8s API is called to modify the service), kube-proxy adjusts its load-balancing rules to keep the service in its latest state.

Service type examples

(1) ClusterIP: for in-cluster access by service name

root@k8s-ha2-deploy-239:~/case-example# cat service-clusterip.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-clusterip
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
        - name: ng-deploy-80
          image: nginx:1.20.2
          ports:
            - containerPort: 80

---
kind: Service
apiVersion: v1
metadata:
  name: ng-service-80
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
  type: ClusterIP
  selector:
    app: ng-deploy-80
root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f service-clusterip.yaml 
deployment.apps/nginx-clusterip created
service/ng-service-80 created

# View the endpoint and service information
root@k8s-ha2-deploy-239:~/case-example# kubectl get ep | grep ng
ng-service-80   10.200.19.26:80,10.200.21.13:80                            2m49s
root@k8s-ha2-deploy-239:~/case-example# kubectl get svc | grep ng
ng-service-80   ClusterIP   10.100.239.200   <none>        80/TCP    3m8s

# Test: a ClusterIP is only reachable from inside the cluster
[root@test-centos /]# curl -I 10.100.239.200
HTTP/1.1 200 OK
Server: nginx/1.20.2
Date: Mon, 15 Jan 2024 07:13:02 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 16 Nov 2021 14:44:02 GMT
Connection: keep-alive
ETag: "6193c3b2-264"
Accept-Ranges: bytes

[root@test-centos /]# curl -I ng-service-80 
HTTP/1.1 200 OK
Server: nginx/1.20.2
Date: Mon, 15 Jan 2024 07:13:10 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 16 Nov 2021 14:44:02 GMT
Connection: keep-alive
ETag: "6193c3b2-264"
Accept-Ranges: bytes

(2) NodePort: lets clients outside the kubernetes cluster reach services running inside it

When you create a Service of type NodePort, Kubernetes opens the specified port (the nodePort) on every node, and external clients can reach the service via any node's IP and that port. Traffic arriving at the nodePort is forwarded to the service port, the service's internal port given in the 'port' field of the Service definition. kube-proxy, the Kubernetes network proxy running on every node, then forwards the traffic from the service port to the targetPort of a backend Pod. The full path is: external request -> nodePort -> service port -> kube-proxy -> Pod targetPort.
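
With kube-proxy in iptables mode, the corresponding DNAT rules can be inspected on any node; a sketch using the nodePort and cluster IP from the example below (chain contents vary per cluster):

iptables -t nat -L KUBE-NODEPORTS -n | grep 31080          # nodePort entry rules
iptables -t nat -L KUBE-SERVICES -n | grep 10.100.102.254  # cluster IP rules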


Example:

root@k8s-ha2-deploy-239:~/case-example# cat service-nodeport.yaml 
apiVersion: apps/v1                                                                                                                                             
kind: Deployment
metadata:
  name: nginx-nodeport-80
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
        - name: ng-deploy-80
          image: registry.cn-hangzhou.aliyuncs.com/wuhaolam/myserver:nginx_1.18_20231128
          ports:
            - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: ng-service-80
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 31080
      protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f service-nodeport.yaml 
deployment.apps/nginx-nodeport-80 created
service/ng-service-80 created

root@k8s-ha2-deploy-239:~/case-example# kubectl get ep | grep ng
ng-service-80   10.200.19.29:80,10.200.21.16:80                            18m
root@k8s-ha2-deploy-239:~/case-example# kubectl get svc | grep ng
ng-service-80   NodePort    10.100.102.254   <none>        80:31080/TCP   18m

# A request to any node's IP:PORT is scheduled to a backend pod, as the checks below show
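
For instance, from any host that can reach the nodes (a sketch):

curl -I http://10.243.20.240:31080
curl -I http://10.243.20.242:31080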


IV. Mounting NFS volumes in a Pod

An nfs volume mounts an existing NFS (Network File System) share into the container. Unlike an emptyDir volume it does not lose its data: when the Pod is deleted the volume is merely unmounted and its contents are preserved, which means data can be uploaded to the NFS share in advance and used as soon as the Pod starts. Being network storage, the same data can be shared between Pods, i.e. an NFS share can be mounted and read/written by multiple Pods at the same time.

Example:

1. Set up the NFS server

root@k8s-ha1-238:~# apt update && apt -y install nfs-server
root@k8s-ha1-238:~# mkdir -p /data/k8sdata/
root@k8s-ha1-238:~# echo '/data/k8sdata *(rw,no_root_squash)' >> /etc/exports
root@k8s-ha1-238:~# systemctl restart nfs-server.service
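
The export can be verified from another host before wiring it into a Pod (assumes the showmount client from nfs-common is installed):

showmount -e 10.243.20.238    # should list /data/k8sdata *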

2. Write the yaml that mounts the NFS share into nginx

root@k8s-ha2-deploy-239:~/case-example# cat deploy-nfs.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-880
  template:
    metadata:
      labels:
        app: ng-deploy-880
    spec:
      containers:
        - name: ng-deploy-880
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html/mysite      # mount point inside the container
              name: my-nfs-volume
      volumes:
        - name: my-nfs-volume
          nfs:
            server: 10.243.20.238
            path: /data/k8sdata              # exported directory on the NFS server


---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-880
spec:
  ports:
    - name: http
      port: 880
      targetPort: 80
      nodePort: 31880
      protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-880

3. Deploy the service and verify

root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f deploy-nfs.yaml

# Write content on the NFS server and verify
root@k8s-ha1-238:~# echo "test pod" >> /data/k8sdata/index.html
root@k8s-ha2-deploy-239:~/case-example# curl 10.243.20.240:31880/mysite/index.html
test pod

Viewing the container info and the mount state on the node confirms the NFS share is mounted at /usr/share/nginx/html/mysite (screenshots).

V. Static PV/PVC backed by NFS


1. Prepare the NFS storage

root@k8s-ha1-238:~# apt update && apt -y install nfs-server
root@k8s-ha1-238:~# mkdir -p /data/k8sdata/myserver/myappdata
root@k8s-ha1-238:~# echo '/data/k8sdata *(rw,no_root_squash)' >> /etc/exports
root@k8s-ha1-238:~# systemctl restart nfs-server.service

2. Create the PV and PVC

# Create the PV
root@k8s-ha2-deploy-239:~/case-example# cat myapp-persistentvolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myserver-myapp-static-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/myserver/myappdata
    server: 10.243.20.238

root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f myapp-persistentvolume.yaml
root@k8s-ha2-deploy-239:~/case-example# kubectl get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
myserver-myapp-static-pv   10Gi       RWO            Retain           Available                                   39s

# Create the PVC
root@k8s-ha2-deploy-239:~/case-example# cat myapp-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myserver-myapp-static-pvc
  namespace: myserver
spec:
  volumeName: myserver-myapp-static-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f myapp-persistentvolumeclaim.yaml
root@k8s-ha2-deploy-239:~/case-example# kubectl get pvc -n myserver
NAME                        STATUS   VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myserver-myapp-static-pvc   Bound    myserver-myapp-static-pv   10Gi       RWO                           45s
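
Once the claim binds, the PV's STATUS flips from Available to Bound; a quick check:

kubectl get pv myserver-myapp-static-pv    # STATUS Bound, CLAIM myserver/myserver-myapp-static-pvc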

3. Deploy the web service

(1) Prepare the yaml

root@k8s-ha2-deploy-239:~/case-example# cat myapp-webserver.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0
          volumeMounts:
            - mountPath: "/usr/share/nginx/html/statics"
              name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-static-pvc

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 31009
  selector:
    app: myserver-myapp-frontend

(2) Deploy the web service

root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f myapp-webserver.yaml
root@k8s-ha2-deploy-239:~/case-example# kubectl get pods -n myserver | grep myapp
myserver-myapp-deployment-name-6c445c87f8-c4tbv   1/1     Running   0              76s

(3) Check the mount

root@k8s-ha2-deploy-239:~/case-example# kubectl exec -it myserver-myapp-deployment-name-6c445c87f8-c4tbv bash -n myserver
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@myserver-myapp-deployment-name-6c445c87f8-c4tbv:/# df -Th
Filesystem                                     Type     Size  Used Avail Use% Mounted on
overlay                                        overlay   97G   12G   80G  13% /
tmpfs                                          tmpfs     64M     0   64M   0% /dev
/dev/mapper/ubuntu--vg-ubuntu--lv              ext4      97G   12G   80G  13% /etc/hosts
shm                                            tmpfs     64M     0   64M   0% /dev/shm
10.243.20.238:/data/k8sdata/myserver/myappdata nfs4      97G   13G   79G  15% /usr/share/nginx/html/statics
tmpfs                                          tmpfs    9.5G   12K  9.5G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                          tmpfs    4.9G     0  4.9G   0% /proc/acpi
tmpfs                                          tmpfs    4.9G     0  4.9G   0% /proc/scsi
tmpfs                                          tmpfs    4.9G     0  4.9G   0% /sys/firmware
root@myserver-myapp-deployment-name-6c445c87f8-c4tbv:/# ls /usr/share/nginx/html/statics/ -l
total 0

(4) Test

# Write data
root@myserver-myapp-deployment-name-6c445c87f8-c4tbv:/# echo "static pv test" >> /usr/share/nginx/html/statics/index.html

# Check that the data landed on the backend
root@k8s-ha1-238:/data/k8sdata/myserver/myappdata# ls
index.html

# Page test in the browser shows the same content (screenshot)

VI. Dynamic PVC provisioning with NFS and a StorageClass

StorageClass docs: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/

NFS provisioner: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner


Note: both PV creation and PVC binding are handled automatically through the storageclass.

1. Create a service account and grant it the permissions it needs to operate inside the k8s environment

root@k8s-ha2-deploy-239:~/case-example# cat rbac.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: nfs

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
  
  
root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f rbac.yaml

2. Create the storageclass

root@k8s-ha2-deploy-239:~/case-example# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the PROVISIONER_NAME env var of the nfs provisioner deployment below
reclaimPolicy: Retain # deletion policy for the PVs; the default is Delete
mountOptions:
  - noresvport # after an NFS hiccup pods do not remount the share on their own; with this option the client remounts the NFS export
  - noatime # do not update the inode access timestamp on reads; improves performance under high concurrency
parameters:
  archiveOnDelete: "true" # archive (keep) the data when the PVC is deleted instead of removing it; the default is false
  

root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f storageclass.yaml 
storageclass.storage.k8s.io/managed-nfs-storage created

3. Create the NFS provisioner

(1) Create the export path on the NFS server

root@k8s-ha1-238:~# mkdir /data/volumes
root@k8s-ha1-238:~# cat /etc/exports
/data/k8sdata *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
root@k8s-ha1-238:~# systemctl restart nfs-server.service

(2) Define the NFS provisioner

root@k8s-ha2-deploy-239:~/case-example# cat nfs-provisioner.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.243.20.238
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.243.20.238
            path: /data/volumes
            

(3) Deploy the nfs provisioner

root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f nfs-provisioner.yaml 
deployment.apps/nfs-client-provisioner created

root@k8s-ha2-deploy-239:~/case-example# kubectl get storageclasses.storage.k8s.io 
NAME                  PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   k8s-sigs.io/nfs-subdir-external-provisioner   Retain          Immediate           false                  24h

4. Request a PVC (no need to create the PV or bind it yourself; the storageclass handles both)

root@k8s-ha2-deploy-239:~/case-example# cat create-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f create-pvc.yaml 
persistentvolumeclaim/myserver-myapp-dynamic-pvc created

# Check the automatically created PV
root@k8s-ha2-deploy-239:~/case-example# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS          REASON   AGE
pvc-27812b07-3cca-47b1-a867-26207049803b   500Mi      RWX            Retain           Bound    myserver/myserver-myapp-dynamic-pvc   managed-nfs-storage            118s


# Check the PVC binding
root@k8s-ha2-deploy-239:~/case-example# kubectl get pvc -n myserver
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
myserver-myapp-dynamic-pvc   Bound    pvc-27812b07-3cca-47b1-a867-26207049803b   500Mi      RWX            managed-nfs-storage   2m3s


# Under the NFS export path, a directory named after the namespace, PVC, and PV is created automatically
root@k8s-ha1-238:/data/volumes# ll
total 12
drwxr-xr-x 3 root root 4096 Jan 30 10:52 ./
drwxr-xr-x 4 root root 4096 Jan 29 16:33 ../
drwxrwxrwx 2 root root 4096 Jan 30 10:52 myserver-myserver-myapp-dynamic-pvc-pvc-27812b07-3cca-47b1-a867-26207049803b/

5. Create the web service

root@k8s-ha2-deploy-239:~/case-example# cat myapp-webserver-storageclass.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0
          volumeMounts:
            - mountPath: "/usr/share/nginx/html/statics"
              name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30010
  selector:
    app: myserver-myapp-frontend
root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f myapp-webserver-storageclass.yaml 
deployment.apps/myserver-myapp-deployment-name created
service/myserver-myapp-service-name created

root@k8s-ha2-deploy-239:~/case-example# kubectl get pods -n myserver
NAME                                              READY   STATUS    RESTARTS        AGE
myserver-myapp-deployment-name-75f78dd557-46hvl   1/1     Running   0               55s

Write data into the corresponding path on the storage server and verify with curl

# Write data into the provisioned path on the nfs server
root@k8s-ha1-238:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-27812b07-3cca-47b1-a867-26207049803b# echo "storageclass web page" >> index.html

# Test
root@k8s-ha1-238:/data# curl http://10.243.20.240:30010/statics/index.html
storageclass web page

VII. Pod config mounts and environment variables via ConfigMap

A Configmap decouples non-confidential information (such as configuration data) from the image: the configuration is stored in a Configmap object and mounted into the Pod as a volume, thereby importing the configuration into the Pod.

Use cases:

  • Provide configuration files to containers in a Pod, consumed as files mounted into the container
  • Define global environment variables for the Pod
  • Pass command-line arguments, e.g. the username and password in mysql -u -p can be supplied via a Configmap

Caveats:

  • A Configmap must be created before the Pods that use it
  • A Pod can only use Configmaps in its own namespace; a Configmap cannot be used across namespaces
  • Intended for non-sensitive, unencrypted configuration
  • A Configmap is typically under 1MB of configuration

7.1 Mounting configuration into a Pod from a ConfigMap

1. Prepare the deployment manifest with the config mounts

root@k8s-ha2-deploy-239:~/case-example# cat deploy_configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  mysite1: |
    server {
      listen 80;
      server_name www.mysite1.com;
      index    index.html index.php index.htm;

      location / {
        root /data/nginx/mysite1;
        if (!-e $request_filename) {
          rewrite ^/(.*) /index.html last;
        }
      }
    }

  mysite2: |
    server {
      listen 80;
      server_name www.mysite2.com;
      index    index.html index.php index.htm;

      location / {
        root /data/nginx/mysite2;
        if (!-e $request_filename) {
          rewrite ^/(.*) /index.html last;
        }
      }
    }

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
        - name: ng-deploy-80
          image: nginx:1.20.0
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /data/nginx/mysite1      # data directories are backed by nfs storage
              name: nginx-mysite1-statics
            - mountPath: /data/nginx/mysite2
              name: nginx-mysite2-statics
            - mountPath: /etc/nginx/conf.d/mysite1/              # config files come from the configmap definitions
              name: nginx-mysite1-config
            - mountPath: /etc/nginx/conf.d/mysite2/
              name: nginx-mysite2-config
            - mountPath: /etc/localtime
              name: timefile
              readOnly: true
      volumes:
        - name: nginx-mysite1-config
          configMap: 
            name: nginx-config
            items:
              - key: mysite1               # use the mysite1 key defined in the ConfigMap
                path: mysite1.conf         # render its content into the file mysite1.conf
        - name: nginx-mysite2-config
          configMap:
            name: nginx-config
            items:
              - key: mysite2
                path: mysite2.conf
        - name: nginx-mysite1-statics
          nfs:
            server: 10.243.20.238
            path: /data/k8sdata/mysite1          # the mysite1 and mysite2 directories must be created on the nfs server first
        - name: nginx-mysite2-statics
          nfs:
            server: 10.243.20.238
            path: /data/k8sdata/mysite2
        - name: timefile
          hostPath:
            path: /etc/localtime


---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
    - name: http
      port: 82
      targetPort: 80
      nodePort: 30012
      protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80

2. Deploy the Pod

root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f deploy_configmap.yaml
configmap/nginx-config created
deployment.apps/nginx-deployment created
service/ng-deploy-80 created
root@k8s-ha2-deploy-239:~/case-example# kubectl get pods
NAME                                          READY   STATUS    RESTARTS       AGE
nginx-deployment-configmap-5bc84578c7-wnjtk   1/1     Running   0              78s

3. Adjust the nginx configuration

root@k8s-ha2-deploy-239:~/case-example# kubectl exec -it nginx-deployment-configmap-5bc84578c7-wnjtk bash
root@nginx-deployment-configmap-5bc84578c7-wnjtk:/# apt update
root@nginx-deployment-configmap-5bc84578c7-wnjtk:/# apt -y install vim
root@nginx-deployment-configmap-5bc84578c7-wnjtk:/etc/nginx/conf.d# ls
default.conf  mysite1  mysite2
# Edit the config file and add an include entry; otherwise nginx will not read the config files of the two sites
root@nginx-deployment-configmap-5bc84578c7-wnjtk:/etc/nginx# vim nginx.conf

(screenshot: an include /etc/nginx/conf.d/*/*.conf; line added inside the http {} block of nginx.conf)

root@nginx-deployment-configmap-5bc84578c7-wnjtk:/# nginx -t   
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
root@nginx-deployment-configmap-5bc84578c7-wnjtk:/# nginx -s reload
2024/01/30 17:48:25 [notice] 781#781: signal process started

4. Test

Add the following two entries to the Windows hosts file:

10.243.20.240 www.mysite1.com
10.243.20.240 www.mysite2.com

Write data

root@k8s-ha1-238:/data/k8sdata# echo "mysite2 web page" >> mysite2/index.html
root@k8s-ha1-238:/data/k8sdata# echo "mysite1 web page" >> mysite1/index.html

Browser test (screenshots): www.mysite1.com and www.mysite2.com each serve their own page.

7.2 Setting Pod environment variables from a ConfigMap

Prepare the yaml

root@k8s-ha2-deploy-239:~/case-example# cat deploy-configmap-env.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-env
data:
  host: "10.243.20.222"
  username: "user1"
  password: "123456"


---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-deployment-env
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
        - name: ng-deploy-80
          image: nginx
          env:
            - name: HOST                
              valueFrom:
                configMapKeyRef:
                  name: nginx-config-env
                  key: host                     # reference the value of the host key defined in the Configmap
            - name: USERNAME
              valueFrom:
                configMapKeyRef:
                  name: nginx-config-env
                  key: username
            - name: PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nginx-config-env
                  key: password
            #####                                # referencing values as above is fairly verbose
            - name: "MYSQLPASS"                  # defining the key name and value directly in env is simpler
              value: "123456"
          ports:
            - containerPort: 80

Deploy and verify the environment variables

root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f deploy-configmap-env.yaml 
configmap/nginx-config-env created
deployment.apps/nginx-deployment-env created

root@k8s-ha2-deploy-239:~/case-example# kubectl get pods
NAME                                          READY   STATUS    RESTARTS        AGE
nginx-deployment-env-c46988d7-pv9vt           1/1     Running   0               44s

# Open a terminal into the pod via kuboard and inspect the environment variables (screenshot)
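
The same check also works from the CLI (a sketch):

kubectl exec -it nginx-deployment-env-c46988d7-pv9vt -- env | grep -E 'HOST|USERNAME|PASSWORD|MYSQLPASS'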

VIII. Secrets: overview, common types, and Nginx TLS based on a Secret

Secret docs: https://kubernetes.io/zh-cn/docs/concepts/configuration/secret/#alternatives-to-secrets

A Secret is similar to a ConfigMap in that it provides additional configuration to a Pod, but a Secret is an object that holds a small amount of sensitive data such as passwords, tokens, or keys.

A Secret's name must be a valid DNS subdomain.

Each Secret may be at most 1MiB, mainly to prevent oversized Secrets from exhausting the memory of the api-server and kubelet; creating very many small Secrets can also exhaust memory, and a resource quota can be used to cap the number of Secrets per namespace.

When creating a Secret from a yaml file you can set the data and/or stringData fields; both are optional. Every value under data must be a base64-encoded string; if you would rather not perform the base64 conversion, use the stringData field instead, which accepts arbitrary unencoded strings. A sketch of both forms follows.
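
A minimal sketch of both forms (hypothetical name and values):

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
type: Opaque
data:
  username: dXNlcjE=       # base64 of "user1"
stringData:
  password: "123456"       # plain string; encoded automatically on creation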

A Pod can consume a Secret in any of three ways:

  • As files in a volume mounted into one or more containers (crt files, key files)
  • As container environment variables
  • By the kubelet when pulling images for the Pod (authenticating against an image registry)


Common Secret types:

  • Opaque: arbitrary user-defined data (two forms: data, base64-encoded; stringData, plain text)
  • kubernetes.io/service-account-token: ServiceAccount token
  • kubernetes.io/dockercfg: serialized ~/.dockercfg file
  • kubernetes.io/dockerconfigjson: serialized ~/.docker/config.json file
  • kubernetes.io/basic-auth: credentials for basic authentication
  • kubernetes.io/ssh-auth: credentials for SSH authentication
  • kubernetes.io/tls: for TLS, holding the crt and key certificates
  • bootstrap.kubernetes.io/token: bootstrap token data

Nginx TLS authentication based on a Secret

1. Generate a self-signed certificate

root@k8s-ha2-deploy-239:~/secret# declare -A CERT_INFO
root@k8s-ha2-deploy-239:~/secret# CERT_INFO=([subject0]="/C=CN/ST=anhui/L=hefei/O=Universal/OU=tech/CN=www.hefei.com" \
           [keyfile0]="cakey.pem" \
           [certfile0]="cacert.pem" \
           [expire0]=3650 \
           [keybit0]=2048 \
           [serial0]=0 )
root@k8s-ha2-deploy-239:~/secret# openssl req -utf8 -newkey rsa:${CERT_INFO[keybit0]} -set_serial 0 -subj "${CERT_INFO[subject0]}" -days ${CERT_INFO[expire0]} -keyout ${CERT_INFO[keyfile0]} -nodes -x509 -out ${CERT_INFO[certfile0]}
root@k8s-ha2-deploy-239:~/secret# ls
cacert.pem  cakey.pem
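
Optionally confirm the subject and validity window of the new certificate:

openssl x509 -in cacert.pem -noout -subject -dates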

2. Create the Secret

root@k8s-ha2-deploy-239:~/secret# kubectl create secret tls myserver-tls-key --cert=./cacert.pem --key=./cakey.pem -n myserver
secret/myserver-tls-key created

root@k8s-ha2-deploy-239:~/secret# kubectl describe secrets myserver-tls-key -n myserver
Name:         myserver-tls-key
Namespace:    myserver
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1314 bytes
tls.key:  1704 bytes

3. Create the web service that uses the certificate

root@k8s-ha2-deploy-239:~/secret# cat secret-tls.yaml 
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config-tls
  namespace: myserver
data:
  default: |
    server {
      listen          80;
      server_name     www.hefei.com;
      listen 443 ssl;
      ssl_certificate /etc/nginx/conf.d/certs/tls.crt;
      ssl_certificate_key /etc/nginx/conf.d/certs/tls.key;

      location / {
        root /usr/share/nginx/html;
        index index.html;
        if ($scheme = http){
          rewrite / https://www.hefei.com permanent;
        }

        if (!-e $request_filename){
          rewrite ^/(.*) /index.html last;
        }
      }
    }


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-frontend
          image: nginx:1.20.2-alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d/myserver
            - name: myserver-tls-key
              mountPath: /etc/nginx/conf.d/certs
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config-tls
            items:
              - key: default
                path: hefei.conf
        - name: myserver-tls-key
          secret:
            secretName: myserver-tls-key


---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend
  namespace: myserver
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30018
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
      protocol: TCP
  selector:
    app: myserver-myapp-frontend
    

root@k8s-ha2-deploy-239:~/secret# kubectl apply -f secret-tls.yaml

4. Edit the NGINX configuration

root@k8s-ha2-deploy-239:~/secret# kubectl get pods -n myserver
NAME                                                 READY   STATUS    RESTARTS        AGE
myserver-myapp-frontend-deployment-b6fb5767f-wfdx6   1/1     Running   0               3m49s

root@k8s-ha2-deploy-239:~/secret# kubectl exec -it -n myserver myserver-myapp-frontend-deployment-b6fb5767f-wfdx6 sh
/ # cd /etc/apk
/etc/apk # cat /etc/alpine-release 
3.14.3
/etc/apk # cat > repositories << EOF
> https://mirrors.ustc.edu.cn/alpine/v3.14/main
> https://mirrors.ustc.edu.cn/alpine/v3.14/community
> EOF
/etc/apk # apk update
/etc/apk # apk add vim
/etc/nginx # vim /etc/nginx/nginx.conf
# add one line inside the http {} block
...
include /etc/nginx/conf.d/*/*.conf;
...
/etc/nginx # nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
/etc/nginx # nginx -s reload
2024/02/02 03:21:11 [notice] 68#68: signal process started

5. Test

# Add a hosts entry on the Windows host
10.243.20.240 www.hefei.com
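
A command-line check from any host that resolves www.hefei.com (self-signed certificate, hence -k):

curl -kI https://www.hefei.com:30443
curl -I http://www.hefei.com:30018    # the site config rewrites http requests to https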


IX. Image pull authentication for a private registry based on a Secret

To pull an image from a private registry you normally have to log in first. A Secret can store the Docker registry credentials and supply them at pull time, so every node can pull images from the private registry without logging in.


# Pulling from the private registry without logging in fails
root@k8s-master1-230:~# nerdctl pull registry.cn-hangzhou.aliyuncs.com/wuhaolam/baseimage:ubuntu18.04
WARN[0000] skipping verifying HTTPS certs for "registry.cn-hangzhou.aliyuncs.com" 
INFO[0000] trying next host                              error="pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" host=registry.cn-hangzhou.aliyuncs.com
FATA[0000] failed to resolve reference "registry.cn-hangzhou.aliyuncs.com/wuhaolam/baseimage:ubuntu18.04": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed 

1. Log in to the Aliyun private registry to generate the credentials file

root@k8s-ha2-deploy-239:~/secret# docker login --username=wuhaolam registry.cn-hangzhou.aliyuncs.com
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

2. Create the Secret

root@k8s-ha2-deploy-239:~/secret# kubectl create secret generic aliyun-registry-image-pull-key \
> --from-file=.dockerconfigjson=/root/.docker/config.json \
> --type=kubernetes.io/dockerconfigjson \
> -n myserver
secret/aliyun-registry-image-pull-key created

root@k8s-ha2-deploy-239:~/secret# kubectl get secrets -n myserver 
NAME                             TYPE                             DATA   AGE
aliyun-registry-image-pull-key   kubernetes.io/dockerconfigjson   1      29m
myserver-tls-key                 kubernetes.io/tls                2      6h6m
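
Equivalently, the Secret can be created straight from credentials without a prior docker login (the password below is a placeholder):

kubectl create secret docker-registry aliyun-registry-image-pull-key \
  --docker-server=registry.cn-hangzhou.aliyuncs.com \
  --docker-username=wuhaolam \
  --docker-password='<password>' \
  -n myserver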

3. Create the yaml

root@k8s-ha2-deploy-239:~/secret# cat secret-imagePull.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-image-pull-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu-image-pull
  template:
    metadata:
      labels:
        app: ubuntu-image-pull
    spec:
      containers:
        - name: ubuntu-image-pull
          image: registry.cn-hangzhou.aliyuncs.com/wuhaolam/baseimage:ubuntu18.04
          imagePullPolicy: Always
          command: ["/usr/bin/tail","-f","/etc/hosts"]
      imagePullSecrets:
        - name: aliyun-registry-image-pull-key

4. Verify

root@k8s-ha2-deploy-239:~/secret# kubectl apply -f secret-imagePull.yaml 
deployment.apps/ubuntu-image-pull-deployment created

root@k8s-ha2-deploy-239:~/secret# kubectl get pods -n myserver -o wide
NAME                                                 READY   STATUS    RESTARTS       AGE     IP              NODE            NOMINATED NODE   READINESS GATES
ubuntu-image-pull-deployment-6fc54d4c5d-csjp5        1/1     Running   0              2m40s   10.200.143.26   10.243.20.241   <none>           <none>

root@k8s-ha2-deploy-239:~/secret# kubectl describe pod ubuntu-image-pull-deployment-6fc54d4c5d-csjp5 -n myserver

(screenshot: the Events section of kubectl describe shows the image being pulled successfully using the secret)

X. StatefulSet and DaemonSet: characteristics and usage

10.1 StatefulSet

Docs: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/statefulset/

Characteristics:

  • A StatefulSet solves cluster deployment and data synchronization between members for stateful services (e.g. MySQL primary/replica, Redis Cluster, ES clusters)

  • Pods managed by a StatefulSet have unique, stable names (each replica gets an ordinal suffix, numbered from 0)

  • A StatefulSet starts, stops, scales, and reclaims Pods in order (creation runs front to back; deletion runs back to front)

  • Headless Service (no cluster IP; lookups resolve directly to the Pod IPs)


Example:

root@k8s-ha2-deploy-239:~/case-example# cat statefulset.yaml 
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myserver-myapp
  namespace: myserver
spec:
  replicas: 3
  serviceName: "myserver-myapp-service"
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-frontend
          image: registry.cn-hangzhou.aliyuncs.com/wuhaolam/myserver:nginx_1.22.0
          ports:
            - containerPort: 80


---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-service
  namespace: myserver
spec:
  clusterIP: None
  ports:
    - name: http
      port: 80
  selector:
    app: myserver-myapp-frontend
root@k8s-ha2-deploy-239:~/case-example# kubectl apply -f statefulset.yaml 
statefulset.apps/myserver-myapp created
service/myserver-myapp-service created

# Pods are created in order
root@k8s-ha2-deploy-239:~/case-example# kubectl get pods -n myserver
NAME                                                 READY   STATUS    RESTARTS     AGE
myserver-myapp-0                                     1/1     Running   0            64s
myserver-myapp-1                                     1/1     Running   0            35s
myserver-myapp-2                                     1/1     Running   0            22s
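
Because of the headless service, each replica also gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local; from any pod in the cluster (a sketch):

nslookup myserver-myapp-0.myserver-myapp-service.myserver.svc.cluster.local
nslookup myserver-myapp-service.myserver.svc.cluster.local    # the headless service resolves to all pod IPs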

10.2 DaemonSet

Docs: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/daemonset/

A DaemonSet creates the same Pod on every node in the cluster; when a new node joins the cluster, the Pod is also created on it, and when a node is removed its Pod is reclaimed by kubernetes. Deleting the DaemonSet controller also deletes the Pods it created.

Use a DaemonSet when the same service needs to run on every node (e.g. log collection, Prometheus monitoring).

Example:

root@k8s-ha2-deploy-239:~/case-example# cat DaemonSet-webserver.yaml 
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myserver-myapp
  namespace: myserver
spec:
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
#      tolerations:    # uncomment these tolerations if you also want the Pod scheduled onto Master nodes
#        - key: node.kubernetes.io/unschedulable
#          operator: Exists
#          effect: NoSchedule
      hostNetwork: true     # use the host network; make sure port 80 on the hosts is free
      hostPID: true
      containers:
        - name: myserver-myapp-frontend
          image: nginx:1.20.2-alpine
          ports:
            - containerPort: 80
# A Pod was successfully created on every node
root@k8s-ha2-deploy-239:~/case-example# kubectl get pods -n myserver -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
myserver-myapp-fcn4l   1/1     Running   0          61s   10.243.20.240   10.243.20.240   <none>           <none>
myserver-myapp-knp5s   1/1     Running   0          61s   10.243.20.231   10.243.20.231   <none>           <none>
myserver-myapp-lwtbq   1/1     Running   0          61s   10.243.20.230   10.243.20.230   <none>           <none>
myserver-myapp-vpf5d   1/1     Running   0          61s   10.243.20.241   10.243.20.241   <none>           <none>
myserver-myapp-wf692   1/1     Running   0          61s   10.243.20.232   10.243.20.232   <none>           <none>
myserver-myapp-xdzp2   1/1     Running   0          61s   10.243.20.242   10.243.20.242   <none>           <none>