30 - StatefulSet (sts) Controller

Reference links:
https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/statefulset/
https://kubernetes.io/zh-cn/docs/tutorials/stateful-application/basic-stateful-set/

I. StatefulSets Overview

  • Take Nginx as an example: when any Nginx Pod dies, the recovery logic is identical, you simply recreate a replica Pod. We call this kind of service a stateless service.

  • Take MySQL master-slave replication as an example: if either the master or the slave database goes down, the recovery logic differs between the two. We call this kind of service a stateful service.

  • Challenges faced by stateful services:

(1) Startup/shutdown ordering;
(2) Each Pod instance needs its own independent storage;
(3) A fixed IP address or hostname is required;

  • StatefulSet is generally used for stateful services. StatefulSets are valuable for applications that require one or more of the following:

(1) Stable, unique network identifiers.
(2) Stable, independent, persistent storage.
(3) Ordered, graceful deployment and scaling.
(4) Ordered, automated rolling updates.

  • Stable network identity (see the lookup sketch after this list):

Under the hood this maps to a Service resource, except that the Service has no VIP defined; we call such a Service a headless service.
The headless service maintains each Pod's network identity: every Pod is assigned a numeric ordinal, and Pods are deployed in ordinal order.
In summary, a headless service must satisfy the following two requirements:
(1) Set the Service's clusterIP field to None, i.e. "clusterIP: None";
(2) Declare the name of the headless service in the StatefulSet's serviceName field;

  • Dedicated storage:

A StatefulSet's volumes are created from volumeClaimTemplates, the "volume claim template".

When a StatefulSet uses a volumeClaimTemplate to create PVCs, each Pod likewise gets its own uniquely numbered PVC, and each PVC binds to its own PV, which guarantees every Pod independent storage.
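
For example, each Pod gets a stable per-Pod DNS record of the form <pod-name>.<headless-svc-name>.<namespace>.svc.<cluster-domain>. A minimal lookup sketch, assuming the resource names used in the manifests below and this environment's cluster domain "dingzhiyan.com":

# Run from any Pod inside the cluster; the name resolves to the Pod's current IP.
nslookup dingzhiyan-linux-web-0.dingzhiyan-linux-headless.default.svc.dingzhiyan.com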

II. StatefulSet controller: unique network identity via a headless service

1. Write the resource manifest

[root@master231 sts]# cat 01-statefulset-headless-network.yaml 
apiVersion: v1
kind: Service
metadata:
  name: dingzhiyan-linux-headless
spec:
  ports:
  - port: 80
    name: web
  # Setting the clusterIP field to None makes this a headless service, i.e. the svc gets no VIP.
  clusterIP: None
  selector:
    app: nginx


---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dingzhiyan-linux-web
spec:
  selector:
    matchLabels:
      app: nginx
  # Declare the headless service
  serviceName: dingzhiyan-linux-headless
  replicas: 3 
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
        imagePullPolicy: Always
[root@master231 sts]# 
[root@master231 sts]# 
[root@master231 sts]# kubectl apply -f  01-statefulset-headless-network.yaml 
service/dingzhiyan-linux-headless created
statefulset.apps/dingzhiyan-linux-web created
[root@master231 sts]# 
[root@master231 sts]# kubectl get -f 01-statefulset-headless-network.yaml
NAME                               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/dingzhiyan-linux-headless   ClusterIP   None         <none>        80/TCP    6s

NAME                                   READY   AGE
statefulset.apps/dingzhiyan-linux-web   2/3     6s
[root@master231 sts]# 
[root@master231 sts]# 
[root@master231 sts]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
dingzhiyan-linux-web-0   1/1     Running   0          18s   10.100.140.113   worker233   <none>           <none>
dingzhiyan-linux-web-1   1/1     Running   0          16s   10.100.203.179   worker232   <none>           <none>
dingzhiyan-linux-web-2   1/1     Running   0          14s   10.100.140.115   worker233   <none>           <none>

2. Create a test Pod with the imperative API

[root@master231 sts]# kubectl run -it dns-test --rm --image=registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1 -- sh
If you don't see a command prompt, try pressing enter.
/ # for i in `seq 0 2`;do ping dingzhiyan-linux-web-${i}.dingzhiyan-linux-headless.default.svc.dingzhiyan.com  -c 3;done
PING dingzhiyan-linux-web-0.dingzhiyan-linux-headless.default.svc.dingzhiyan.com (10.100.140.113): 56 data bytes
64 bytes from 10.100.140.113: seq=0 ttl=63 time=1.075 ms
64 bytes from 10.100.140.113: seq=1 ttl=63 time=0.074 ms
64 bytes from 10.100.140.113: seq=2 ttl=63 time=0.078 ms

--- dingzhiyan-linux-web-0.dingzhiyan-linux-headless.default.svc.dingzhiyan.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.074/0.409/1.075 ms
PING dingzhiyan-linux-web-1.dingzhiyan-linux-headless.default.svc.dingzhiyan.com (10.100.203.179): 56 data bytes
64 bytes from 10.100.203.179: seq=0 ttl=62 time=1.425 ms
64 bytes from 10.100.203.179: seq=1 ttl=62 time=0.324 ms
64 bytes from 10.100.203.179: seq=2 ttl=62 time=0.306 ms

--- dingzhiyan-linux-web-1.dingzhiyan-linux-headless.default.svc.dingzhiyan.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.306/0.685/1.425 ms
PING dingzhiyan-linux-web-2.dingzhiyan-linux-headless.default.svc.dingzhiyan.com (10.100.140.115): 56 data bytes
64 bytes from 10.100.140.115: seq=0 ttl=63 time=1.201 ms
64 bytes from 10.100.140.115: seq=1 ttl=63 time=0.112 ms
64 bytes from 10.100.140.115: seq=2 ttl=63 time=0.122 ms

--- dingzhiyan-linux-web-2.dingzhiyan-linux-headless.default.svc.dingzhiyan.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.112/0.478/1.201 ms
/ # 
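
Besides the per-Pod records, resolving the headless service name itself should return the address of every backing Pod, since no VIP exists. A quick check from the same test Pod, assuming the image ships a busybox-style nslookup:

/ # nslookup dingzhiyan-linux-headless.default.svc.dingzhiyan.com
# Expected: one A record per Pod, e.g. 10.100.140.113, 10.100.203.179 and 10.100.140.115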

III. StatefulSet controller: dedicated storage

1. Write the resource manifest

cat > 02-statefulset-headless-volumeClaimTemplates.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dingzhiyan-linux-headless
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dingzhiyan-linux-web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: dingzhiyan-linux-headless
  replicas: 3 
  # Volume claim template: creates a unique PVC for each Pod and associates it with that Pod.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      # Reference our custom dynamic StorageClass, i.e. the sc resource.
      storageClassName: "dingzhiyan-sc-xixi"
      resources:
        requests:
          storage: 2Gi
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
---
apiVersion: v1
kind: Service
metadata:
  name: dingzhiyan-linux-sts-svc
spec:
  selector:
     app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
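
Note that this manifest assumes a StorageClass named "dingzhiyan-sc-xixi" already exists. Judging from the VolumeHandle output in step 4 below, it is backed by the NFS CSI driver; a minimal sketch of what such a StorageClass might look like (the provisioner name assumes csi-driver-nfs, and the server/share values are taken from the verification output below):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dingzhiyan-sc-xixi
# Assumes the csi-driver-nfs external provisioner is installed in the cluster.
provisioner: nfs.csi.k8s.io
parameters:
  # NFS server and export path, as seen in the VolumeHandle output in step 4.
  server: 10.0.0.231
  share: /dingzhiyan/data/nfs-server/sc-xixi
reclaimPolicy: Delete
volumeBindingMode: Immediate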

2. Connect to each Pod and modify its nginx index page

[root@master231 sts]# kubectl apply -f 02-statefulset-headless-volumeClaimTemplates.yaml
service/dingzhiyan-linux-headless created
statefulset.apps/dingzhiyan-linux-web created
service/dingzhiyan-linux-sts-svc created
[root@master231 sts]#
[root@master231 sts]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
dingzhiyan-linux-web-0   1/1     Running   0          10s   10.100.140.112   worker233   <none>           <none>
dingzhiyan-linux-web-1   1/1     Running   0          7s    10.100.203.183   worker232   <none>           <none>
dingzhiyan-linux-web-2   1/1     Running   0          3s    10.100.140.110   worker233   <none>           <none>
[root@master231 sts]# 
[root@master231 sts]# kubectl get -f 02-statefulset-headless-volumeClaimTemplates.yaml 
NAME                               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/dingzhiyan-linux-headless   ClusterIP   None         <none>        80/TCP    83s

NAME                                   READY   AGE
statefulset.apps/dingzhiyan-linux-web   3/3     83s

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/dingzhiyan-linux-sts-svc   ClusterIP   10.200.118.253   <none>        80/TCP    106s
[root@master231 sts]# 
[root@master231 sts]# kubectl exec -it dingzhiyan-linux-web-0 -- sh
/ #  echo 111111111111111 > /usr/share/nginx/html/index.html
/ # 
[root@master231 sts]# 
[root@master231 sts]# kubectl exec -it dingzhiyan-linux-web-1 -- sh
/ # echo 2222222222222222 > /usr/share/nginx/html/index.html
/ # 
[root@master231 sts]# 
[root@master231 sts]# kubectl exec -it dingzhiyan-linux-web-2 -- sh
/ # echo 333333333333 > /usr/share/nginx/html/index.html
/ # 
[root@master231 sts]# 

3. Test access through the SVC

[root@master231 statefulsets]# vim /etc/resolv.conf   # If you'd rather not modify the host's config file, just start a pod and test from inside it.
nameserver 10.200.0.10
....
[root@master231 statefulsets]# for i in `seq 10`; do curl dingzhiyan-linux-sts-svc.default.svc.dingzhiyan.com;done
111111111111111
333333333333
2222222222222222
111111111111111
333333333333
2222222222222222
111111111111111
333333333333
2222222222222222
111111111111111
[root@master231 statefulsets]# 

4. Verify the backend storage

[root@master231 sts]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
dingzhiyan-linux-web-0   1/1     Running   0          3m45s   10.100.140.112   worker233   <none>           <none>
dingzhiyan-linux-web-1   1/1     Running   0          3m42s   10.100.203.183   worker232   <none>           <none>
dingzhiyan-linux-web-2   1/1     Running   0          3m38s   10.100.140.110   worker233   <none>           <none>
[root@master231 sts]#
[root@master231 sts]# kubectl describe pod dingzhiyan-linux-web-0  | grep ClaimName
    ClaimName:  data-dingzhiyan-linux-web-0
[root@master231 sts]# 
[root@master231 sts]# kubectl describe pod dingzhiyan-linux-web-1  | grep ClaimName
    ClaimName:  data-dingzhiyan-linux-web-1
[root@master231 sts]# 
[root@master231 sts]# kubectl describe pod dingzhiyan-linux-web-2  | grep ClaimName
    ClaimName:  data-dingzhiyan-linux-web-2
[root@master231 sts]# 
[root@master231 sts]# kubectl get pvc data-dingzhiyan-linux-web-0  data-dingzhiyan-linux-web-1 data-dingzhiyan-linux-web-2
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
data-dingzhiyan-linux-web-0   Bound    pvc-717f244d-ee7d-42ea-9b0e-46bb102c2900   2Gi        RWO            dingzhiyan-sc-xixi   4m49s
data-dingzhiyan-linux-web-1   Bound    pvc-b3d26fa5-07c6-4be1-9a83-9cac3564ea6e   2Gi        RWO            dingzhiyan-sc-xixi   4m46s
data-dingzhiyan-linux-web-2   Bound    pvc-01cd17f3-5ddb-4d58-9875-afcd4f968755   2Gi        RWO            dingzhiyan-sc-xixi   4m42s
[root@master231 sts]# 
[root@master231 sts]# 
[root@master231 sts]# kubectl get pvc data-dingzhiyan-linux-web-0  data-dingzhiyan-linux-web-1 data-dingzhiyan-linux-web-2 | awk 'NR>=2{print $3}' | xargs kubectl describe pv  | grep VolumeHandle
    VolumeHandle:      10.0.0.231#dingzhiyan/data/nfs-server/sc-xixi#pvc-717f244d-ee7d-42ea-9b0e-46bb102c2900##
    VolumeHandle:      10.0.0.231#dingzhiyan/data/nfs-server/sc-xixi#pvc-b3d26fa5-07c6-4be1-9a83-9cac3564ea6e##
    VolumeHandle:      10.0.0.231#dingzhiyan/data/nfs-server/sc-xixi#pvc-01cd17f3-5ddb-4d58-9875-afcd4f968755##
[root@master231 sts]# 
[root@master231 sts]# cat /dingzhiyan/data/nfs-server/sc-xixi/pvc-717f244d-ee7d-42ea-9b0e-46bb102c2900/index.html 
111111111111111
[root@master231 sts]# 
[root@master231 sts]# cat /dingzhiyan/data/nfs-server/sc-xixi/pvc-b3d26fa5-07c6-4be1-9a83-9cac3564ea6e/index.html 
2222222222222222
[root@master231 sts]# 
[root@master231 sts]# cat /dingzhiyan/data/nfs-server/sc-xixi/pvc-01cd17f3-5ddb-4d58-9875-afcd4f968755/index.html 
333333333333
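
The point of dedicated storage is that it outlives the Pod: if a Pod is deleted, the controller recreates it under the same name and re-attaches the same PVC. A quick sanity check, with the expected result based on the files written above:

kubectl delete pod dingzhiyan-linux-web-0
# Wait for the Pod to be recreated, then:
kubectl exec dingzhiyan-linux-web-0 -- cat /usr/share/nginx/html/index.html
# Expected: 111111111111111 (the new Pod binds the same PVC, data-dingzhiyan-linux-web-0)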

IV. Staged (partitioned) updates for sts

1. Write the resource manifest

[root@master231 statefulsets]# cat 03-statefuleset-updateStrategy-partition.yaml 
apiVersion: v1
kind: Service
metadata:
  name: sts-headless
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: web

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dingzhiyan-sts-web
spec:
  # Specify the sts resource's update strategy
  updateStrategy:
    # Configure rolling updates
    rollingUpdate:
      # Pods with an ordinal below 3 are not updated; in other words, only Pods with ordinal >= 3 get updated.
      partition: 3
  selector:
    matchLabels:
      app: web
  serviceName: sts-headless
  replicas: 5
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
        #image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
---
apiVersion: v1
kind: Service
metadata:
  name: dingzhiyan-sts-svc
spec:
  selector:
     app: web
  ports:
  - port: 80
    targetPort: 80

2. Verify

[root@master231 statefulsets]# kubectl get pods -l app=web -o yaml | grep "\- image:"
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl apply -f 03-statefuleset-updateStrategy-partition.yaml
service/sts-headless unchanged
statefulset.apps/dingzhiyan-sts-web configured
service/dingzhiyan-sts-svc unchanged
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl get pods -o wide
NAME                  READY   STATUS              RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
dingzhiyan-sts-web-0   1/1     Running             0          103s   10.100.1.176   worker232   <none>           <none>
dingzhiyan-sts-web-1   1/1     Running             0          102s   10.100.2.152   worker233   <none>           <none>
dingzhiyan-sts-web-2   1/1     Running             0          100s   10.100.2.153   worker233   <none>           <none>
dingzhiyan-sts-web-3   0/1     ContainerCreating   0          1s     <none>         worker232   <none>           <none>
dingzhiyan-sts-web-4   1/1     Running             0          2s     10.100.2.155   worker233   <none>           <none>
[root@master231 statefulsets]# 
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
dingzhiyan-sts-web-0   1/1     Running   0          111s   10.100.1.176   worker232   <none>           <none>
dingzhiyan-sts-web-1   1/1     Running   0          110s   10.100.2.152   worker233   <none>           <none>
dingzhiyan-sts-web-2   1/1     Running   0          108s   10.100.2.153   worker233   <none>           <none>
dingzhiyan-sts-web-3   1/1     Running   0          9s     10.100.1.178   worker232   <none>           <none>
dingzhiyan-sts-web-4   1/1     Running   0          10s    10.100.2.155   worker233   <none>           <none>
[root@master231 statefulsets]# 
[root@master231 statefulsets]# kubectl get pods -l app=web -o yaml | grep "\- image:"
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
    - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
[root@master231 statefulsets]# 
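
Instead of editing the manifest and re-applying, a staged rollout can also be driven with kubectl patch: bump the image, verify the high-ordinal Pods, then lower the partition until it reaches 0. A sketch using the same sts and image names as above:

# Switch the image to v2; only Pods with ordinal >= partition (3) roll immediately.
kubectl set image sts/dingzhiyan-sts-web c1=registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2

# Once the updated Pods look healthy, lower the partition to roll out the rest.
kubectl patch sts dingzhiyan-sts-web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'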

V. Deploying a ZooKeeper cluster with sts

Reference link:
https://kubernetes.io/zh-cn/docs/tutorials/stateful-application/zookeeper/

1. Import the image on all K8s nodes

2. Write the resource manifest

[root@master231 sts]# cat 04-sts-zookeeper.yaml 
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1
# This kind defines the maximum disruption that may be inflicted on a group of Pods, i.e. the maximum number of Pods allowed to be unavailable.
# As a rule of thumb, for a distributed cluster to tolerate N failures, the cluster needs at least 2N+1 Pods.
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  # Match the Pods
  selector:
    matchLabels:
      app: zk
  # Maximum number of unavailable Pods. This means the ZooKeeper cluster must have at least 2*1 + 1 = 3 Pods.
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
[root@master231 sts]# 
[root@master231 sts]# 
[root@master231 sts]# kubectl apply -f  04-sts-zookeeper.yaml 
service/zk-hs created
service/zk-cs created
poddisruptionbudget.policy/zk-pdb created
statefulset.apps/zk created
[root@master231 sts]# 
[root@master231 sts]# 

3. Watch Pod status in real time

[root@master231 ~]# kubectl get pods -o wide -w -l app=zk
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
zk-0   0/1     Pending   0          0s    <none>   <none>   <none>           <none>
zk-0   0/1     Pending   0          1s    <none>   worker233   <none>           <none>
zk-0   0/1     ContainerCreating   0          1s    <none>   worker233   <none>           <none>
zk-0   0/1     ContainerCreating   0          3s    <none>   worker233   <none>           <none>
zk-0   0/1     Running             0          7s    10.100.140.125   worker233   <none>           <none>
zk-0   1/1     Running             0          22s   10.100.140.125   worker233   <none>           <none>
zk-1   0/1     Pending             0          0s    <none>           <none>      <none>           <none>
zk-1   0/1     Pending             0          0s    <none>           master231   <none>           <none>
zk-1   0/1     ContainerCreating   0          0s    <none>           master231   <none>           <none>
zk-1   0/1     ContainerCreating   0          1s    <none>           master231   <none>           <none>
zk-1   0/1     Running             0          5s    10.100.160.189   master231   <none>           <none>
zk-1   1/1     Running             0          21s   10.100.160.189   master231   <none>           <none>
zk-2   0/1     Pending             0          0s    <none>           <none>      <none>           <none>
zk-2   0/1     Pending             0          0s    <none>           worker232   <none>           <none>
zk-2   0/1     ContainerCreating   0          0s    <none>           worker232   <none>           <none>
zk-2   0/1     ContainerCreating   0          1s    <none>           worker232   <none>           <none>
zk-2   0/1     Running             0          5s    10.100.203.188   worker232   <none>           <none>
zk-2   1/1     Running             0          21s   10.100.203.188   worker232   <none>           <none>
...

4. Check the backend storage

[root@master231 ~]# kubectl get pods -o wide 
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
zk-0   1/1     Running   0          85s   10.100.140.125   worker233   <none>           <none>
zk-1   1/1     Running   0          63s   10.100.160.189   master231   <none>           <none>
zk-2   1/1     Running   0          42s   10.100.203.188   worker232   <none>           <none>
[root@master231 ~]# 
[root@master231 ~]# kubectl get pvc -l app=zk
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-zk-0   Bound    pvc-b6072f27-637a-4c5d-9604-7095c8143f15   10Gi       RWO            nfs-csi        43m
datadir-zk-1   Bound    pvc-10fdeb29-70b9-41a6-ae8c-f3b540ffcbdc   10Gi       RWO            nfs-csi        42m
datadir-zk-2   Bound    pvc-db936b79-be79-4155-b2d0-ccc05a7e4531   10Gi       RWO            nfs-csi        37m
[root@master231 ~]# 

5. Verify the cluster is healthy

[root@master231 sts]# for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
zk-0
zk-1
zk-2
[root@master231 sts]# 
[root@master231 sts]# for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3
[root@master231 sts]# 
[root@master231 sts]# for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
zk-0.zk-hs.default.svc.dingzhiyan.com
zk-1.zk-hs.default.svc.dingzhiyan.com
zk-2.zk-hs.default.svc.dingzhiyan.com
[root@master231 sts]# 
[root@master231 sts]# kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.dingzhiyan.com:2888:3888
server.2=zk-1.zk-hs.default.svc.dingzhiyan.com:2888:3888
server.3=zk-2.zk-hs.default.svc.dingzhiyan.com:2888:3888
[root@master231 sts]# 
[root@master231 sts]# kubectl exec zk-1 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.dingzhiyan.com:2888:3888
server.2=zk-1.zk-hs.default.svc.dingzhiyan.com:2888:3888
server.3=zk-2.zk-hs.default.svc.dingzhiyan.com:2888:3888
[root@master231 sts]# 
[root@master231 sts]# 
[root@master231 sts]# kubectl exec zk-2 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.dingzhiyan.com:2888:3888
server.2=zk-1.zk-hs.default.svc.dingzhiyan.com:2888:3888
server.3=zk-2.zk-hs.default.svc.dingzhiyan.com:2888:3888
[root@master231 sts]# 
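
With the ensemble healthy, the PodDisruptionBudget defined earlier can also be exercised: with maxUnavailable: 1, evicting a second zk Pod while one is still recovering should be refused. A sketch using this environment's node names:

# Drain one node; evicting its zk Pod is allowed (1 unavailable is within budget).
kubectl drain worker232 --ignore-daemonsets --delete-emptydir-data

# Draining another node before that zk Pod is Ready again should be rejected with
# "Cannot evict pod as it would violate the pod's disruption budget."
kubectl drain worker233 --ignore-daemonsets --delete-emptydir-data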

6. Create test data

[root@master231 sts]# kubectl exec -it zk-0 -- zkCli.sh
....
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] 
[zk: localhost:2181(CONNECTED) 1] create /school dingzhiyan
Created /school
[zk: localhost:2181(CONNECTED) 2] 
[zk: localhost:2181(CONNECTED) 2] create /class linux96   
Created /class
[zk: localhost:2181(CONNECTED) 3] 
[zk: localhost:2181(CONNECTED) 3] ls /
[zookeeper, school, class]
[zk: localhost:2181(CONNECTED) 4] 
...



[root@master231 sts]# kubectl exec -it zk-1 -- zkCli.sh
...
[zk: localhost:2181(CONNECTED) 0] 
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, school, class]
[zk: localhost:2181(CONNECTED) 1] 
[zk: localhost:2181(CONNECTED) 1] get /school
dingzhiyan
cZxid = 0x200000005
ctime = Sun Apr 20 03:03:53 UTC 2025
mZxid = 0x200000005
mtime = Sun Apr 20 03:03:53 UTC 2025
pZxid = 0x200000005
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: localhost:2181(CONNECTED) 2] 
[zk: localhost:2181(CONNECTED) 2] get /class 
linux96
cZxid = 0x200000006
ctime = Sun Apr 20 03:03:59 UTC 2025
mZxid = 0x200000006
mtime = Sun Apr 20 03:03:59 UTC 2025
pZxid = 0x200000006
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 7
numChildren = 0
[zk: localhost:2181(CONNECTED) 3] 
[zk: localhost:2181(CONNECTED) 3] set /class LINUX96
cZxid = 0x200000006
ctime = Sun Apr 20 03:03:59 UTC 2025
mZxid = 0x200000009
mtime = Sun Apr 20 03:04:51 UTC 2025
pZxid = 0x200000006
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 7
numChildren = 0
[zk: localhost:2181(CONNECTED) 4] 
[zk: localhost:2181(CONNECTED) 4] delete /school
[zk: localhost:2181(CONNECTED) 5] 
[zk: localhost:2181(CONNECTED) 5] ls /
[zookeeper, class]
[zk: localhost:2181(CONNECTED) 6] 
...



[root@master231 sts]# kubectl exec -it zk-2 -- zkCli.sh
...
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, class]
[zk: localhost:2181(CONNECTED) 1] 
[zk: localhost:2181(CONNECTED) 1] get /class
LINUX96
cZxid = 0x200000006
ctime = Sun Apr 20 03:03:59 UTC 2025
mZxid = 0x200000009
mtime = Sun Apr 20 03:04:51 UTC 2025
pZxid = 0x200000006
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 7
numChildren = 0
[zk: localhost:2181(CONNECTED) 2] 

7. Inspect the logic of the start-zookeeper script

[root@master231 ~]# kubectl exec -it zk-0 -- bash
zookeeper@zk-0:/$ 
zookeeper@zk-0:/$ which start-zookeeper
/usr/bin/start-zookeeper
zookeeper@zk-0:/$ 
zookeeper@zk-0:/$ wc -l /usr/bin/start-zookeeper 
320 /usr/bin/start-zookeeper
zookeeper@zk-0:/$ 
zookeeper@zk-0:/$ cat /usr/bin/start-zookeeper ;echo
#!/usr/bin/env bash
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
#Usage: start-zookeeper [OPTIONS]
# Starts a ZooKeeper server based on the supplied options.
#     --servers           The number of servers in the ensemble. The default 
#                         value is 1.

#     --data_dir          The directory where the ZooKeeper process will store its
#                         snapshots. The default is /var/lib/zookeeper/data.

#     --data_log_dir      The directory where the ZooKeeper process will store its 
#                         write ahead log. The default is 
#                         /var/lib/zookeeper/data/log.

#     --conf_dir          The directoyr where the ZooKeeper process will store its
#                         configuration. The default is /opt/zookeeper/conf.

#     --client_port       The port on which the ZooKeeper process will listen for 
#                         client requests. The default is 2181.

#     --election_port     The port on which the ZooKeeper process will perform 
#                         leader election. The default is 3888.

#     --server_port       The port on which the ZooKeeper process will listen for 
#                         requests from other servers in the ensemble. The 
#                         default is 2888. 

#     --tick_time         The length of a ZooKeeper tick in ms. The default is 
#                         2000.

#     --init_limit        The number of Ticks that an ensemble member is allowed 
#                         to perform leader election. The default is 10.

#     --sync_limit        The maximum session timeout that the ensemble will 
#                         allows a client to request. The default is 5.

#     --heap              The maximum amount of heap to use. The format is the 
#                         same as that used for the Xmx and Xms parameters to the 
#                         JVM. e.g. --heap=2G. The default is 2G.

#     --max_client_cnxns  The maximum number of client connections that the 
#                         ZooKeeper process will accept simultaneously. The 
#                         default is 60.

#     --snap_retain_count The maximum number of snapshots the ZooKeeper process 
#                         will retain if purge_interval is greater than 0. The 
#                         default is 3.

#     --purge_interval    The number of hours the ZooKeeper process will wait 
#                         between purging its old snapshots. If set to 0 old 
#                         snapshots will never be purged. The default is 0.

#     --max_session_timeout The maximum time in milliseconds for a client session 
#                         timeout. The default value is 2 * tick time.

#     --min_session_timeout The minimum time in milliseconds for a client session 
#                         timeout. The default value is 20 * tick time.

#     --log_level         The log level for the zookeeeper server. Either FATAL,
#                         ERROR, WARN, INFO, DEBUG. The default is INFO.


USER=`whoami`
HOST=`hostname -s`
DOMAIN=`hostname -d`
LOG_LEVEL=INFO
DATA_DIR="/var/lib/zookeeper/data"
DATA_LOG_DIR="/var/lib/zookeeper/log"
LOG_DIR="/var/log/zookeeper"
CONF_DIR="/opt/zookeeper/conf"
CLIENT_PORT=2181
SERVER_PORT=2888
ELECTION_PORT=3888
TICK_TIME=2000
INIT_LIMIT=10
SYNC_LIMIT=5
HEAP=2G
MAX_CLIENT_CNXNS=60
SNAP_RETAIN_COUNT=3
PURGE_INTERVAL=0
SERVERS=1

function print_usage() {
echo "\
Usage: start-zookeeper [OPTIONS]
Starts a ZooKeeper server based on the supplied options.
    --servers           The number of servers in the ensemble. The default 
                        value is 1.

    --data_dir          The directory where the ZooKeeper process will store its
                        snapshots. The default is /var/lib/zookeeper/data.

    --data_log_dir      The directory where the ZooKeeper process will store its 
                        write ahead log. The default is 
                        /var/lib/zookeeper/data/log.

    --conf_dir          The directoyr where the ZooKeeper process will store its
                        configuration. The default is /opt/zookeeper/conf.

    --client_port       The port on which the ZooKeeper process will listen for 
                        client requests. The default is 2181.

    --election_port     The port on which the ZooKeeper process will perform 
                        leader election. The default is 3888.

    --server_port       The port on which the ZooKeeper process will listen for 
                        requests from other servers in the ensemble. The 
                        default is 2888. 

    --tick_time         The length of a ZooKeeper tick in ms. The default is 
                        2000.

    --init_limit        The number of Ticks that an ensemble member is allowed 
                        to perform leader election. The default is 10.

    --sync_limit        The maximum session timeout that the ensemble will 
                        allows a client to request. The default is 5.

    --heap              The maximum amount of heap to use. The format is the 
                        same as that used for the Xmx and Xms parameters to the 
                        JVM. e.g. --heap=2G. The default is 2G.

    --max_client_cnxns  The maximum number of client connections that the 
                        ZooKeeper process will accept simultaneously. The 
                        default is 60.

    --snap_retain_count The maximum number of snapshots the ZooKeeper process 
                        will retain if purge_interval is greater than 0. The 
                        default is 3.

    --purge_interval    The number of hours the ZooKeeper process will wait 
                        between purging its old snapshots. If set to 0 old 
                        snapshots will never be purged. The default is 0.

    --max_session_timeout The maximum time in milliseconds for a client session 
                        timeout. The default value is 2 * tick time.

    --min_session_timeout The minimum time in milliseconds for a client session 
                        timeout. The default value is 20 * tick time.

    --log_level         The log level for the zookeeeper server. Either FATAL,
                        ERROR, WARN, INFO, DEBUG. The default is INFO.
"
}

function create_data_dirs() {
    if [ ! -d $DATA_DIR  ]; then
        mkdir -p $DATA_DIR
        chown -R $USER:$USER $DATA_DIR
    fi

    if [ ! -d $DATA_LOG_DIR  ]; then
        mkdir -p $DATA_LOG_DIR
        chown -R $USER:USER $DATA_LOG_DIR
    fi

    if [ ! -d $LOG_DIR  ]; then
        mkdir -p $LOG_DIR
        chown -R $USER:$USER $LOG_DIR
    fi
    if [ ! -f $ID_FILE ] && [ $SERVERS -gt 1 ]; then
        echo $MY_ID >> $ID_FILE
    fi
}

function print_servers() {
    for (( i=1; i<=$SERVERS; i++ ))
    do
        echo "server.$i=$NAME-$((i-1)).$DOMAIN:$SERVER_PORT:$ELECTION_PORT"
    done
}

function create_config() {
    rm -f $CONFIG_FILE
    echo "#This file was autogenerated DO NOT EDIT" >> $CONFIG_FILE
    echo "clientPort=$CLIENT_PORT" >> $CONFIG_FILE
    echo "dataDir=$DATA_DIR" >> $CONFIG_FILE
    echo "dataLogDir=$DATA_LOG_DIR" >> $CONFIG_FILE
    echo "tickTime=$TICK_TIME" >> $CONFIG_FILE
    echo "initLimit=$INIT_LIMIT" >> $CONFIG_FILE
    echo "syncLimit=$SYNC_LIMIT" >> $CONFIG_FILE
    echo "maxClientCnxns=$MAX_CLIENT_CNXNS" >> $CONFIG_FILE
    echo "minSessionTimeout=$MIN_SESSION_TIMEOUT" >> $CONFIG_FILE
    echo "maxSessionTimeout=$MAX_SESSION_TIMEOUT" >> $CONFIG_FILE
    echo "autopurge.snapRetainCount=$SNAP_RETAIN_COUNT" >> $CONFIG_FILE
    echo "autopurge.purgeInteval=$PURGE_INTERVAL" >> $CONFIG_FILE
     if [ $SERVERS -gt 1 ]; then
        print_servers >> $CONFIG_FILE
    fi
    cat $CONFIG_FILE >&2
}

function create_jvm_props() {
    rm -f $JAVA_ENV_FILE
    echo "ZOO_LOG_DIR=$LOG_DIR" >> $JAVA_ENV_FILE
    echo "JVMFLAGS=\"-Xmx$HEAP -Xms$HEAP\"" >> $JAVA_ENV_FILE
}

function create_log_props() {
    rm -f $LOGGER_PROPS_FILE
    echo "Creating ZooKeeper log4j configuration"
    echo "zookeeper.root.logger=CONSOLE" >> $LOGGER_PROPS_FILE
    echo "zookeeper.console.threshold="$LOG_LEVEL >> $LOGGER_PROPS_FILE
    echo "log4j.rootLogger=\${zookeeper.root.logger}" >> $LOGGER_PROPS_FILE
    echo "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender" >> $LOGGER_PROPS_FILE
    echo "log4j.appender.CONSOLE.Threshold=\${zookeeper.console.threshold}" >> $LOGGER_PROPS_FILE
    echo "log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout" >> $LOGGER_PROPS_FILE
    echo "log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n" >> $LOGGER_PROPS_FILE
}

optspec=":hv-:"
while getopts "$optspec" optchar; do

    case "${optchar}" in
        -)
            case "${OPTARG}" in
                servers=*)
                    SERVERS=${OPTARG##*=}
                    ;;
                data_dir=*)
                    DATA_DIR=${OPTARG##*=}
                    ;;
                data_log_dir=*)
                    DATA_LOG_DIR=${OPTARG##*=}
                    ;;
                log_dir=*)
                    LOG_DIR=${OPTARG##*=}
                    ;;
                conf_dir=*)
                    CONF_DIR=${OPTARG##*=}
                    ;;
                client_port=*)
                    CLIENT_PORT=${OPTARG##*=}
                    ;;
                election_port=*)
                    ELECTION_PORT=${OPTARG##*=}
                    ;;
                server_port=*)
                    SERVER_PORT=${OPTARG##*=}
                    ;;
                tick_time=*)
                    TICK_TIME=${OPTARG##*=}
                    ;;
                init_limit=*)
                    INIT_LIMIT=${OPTARG##*=}
                    ;;
                sync_limit=*)
                    SYNC_LIMIT=${OPTARG##*=}
                    ;;
                heap=*)
                    HEAP=${OPTARG##*=}
                    ;;
                max_client_cnxns=*)
                    MAX_CLIENT_CNXNS=${OPTARG##*=}
                    ;;
                snap_retain_count=*)
                    SNAP_RETAIN_COUNT=${OPTARG##*=}
                    ;;
                purge_interval=*)
                    PURGE_INTERVAL=${OPTARG##*=}
                    ;;
                max_session_timeout=*)
                    MAX_SESSION_TIMEOUT=${OPTARG##*=}
                    ;;
                min_session_timeout=*)
                    MIN_SESSION_TIMEOUT=${OPTARG##*=}
                    ;;
                log_level=*)
                    LOG_LEVEL=${OPTARG##*=}
                    ;;
                *)
                    echo "Unknown option --${OPTARG}" >&2
                    exit 1
                    ;;
            esac;;
        h)
            print_usage
            exit
            ;;
        v)
            echo "Parsing option: '-${optchar}'" >&2
            ;;
        *)
            if [ "$OPTERR" != 1 ] || [ "${optspec:0:1}" = ":" ]; then
                echo "Non-option argument: '-${OPTARG}'" >&2
            fi
            ;;
    esac
done

MIN_SESSION_TIMEOUT=${MIN_SESSION_TIMEOUT:- $((TICK_TIME*2))}
MAX_SESSION_TIMEOUT=${MAX_SESSION_TIMEOUT:- $((TICK_TIME*20))}
ID_FILE="$DATA_DIR/myid"
CONFIG_FILE="$CONF_DIR/zoo.cfg"
LOGGER_PROPS_FILE="$CONF_DIR/log4j.properties"
JAVA_ENV_FILE="$CONF_DIR/java.env"
if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
    NAME=${BASH_REMATCH[1]}
    ORD=${BASH_REMATCH[2]}
else
    echo "Fialed to parse name and ordinal of Pod"
    exit 1
fi

MY_ID=$((ORD+1))

create_config && create_jvm_props && create_log_props && create_data_dirs && exec zkServer.sh start-foreground
zookeeper@zk-0:/$ 
zookeeper@zk-0:/$ 

Tip:

The industry tends to shy away from the sts controller: everyone knows it is designed for deploying stateful services, yet in practice it is rarely used directly.

Hence CoreOS developed the Operator framework (sts + CRD), on top of which all kinds of services can be deployed.

VI. The sts Pod management policy


[root@master231 sts]# cat 05-sts-podManagementPolicy.yaml 
apiVersion: v1
kind: Service
metadata:
  name: dingzhiyan-linux-headless
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx


---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dingzhiyan-podmanagementpolicy
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: dingzhiyan-linux-headless
  replicas: 3 
  # Pod management policy; valid values: OrderedReady, Parallel.
  #   OrderedReady:
  #     The default policy: Pods are created in order, e.g. pod-0, pod-1, pod-2, pod-3, ...
  #   Parallel:
  #     Parallel startup: multiple Pod replicas are launched at the same time.
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
        imagePullPolicy: Always
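
A quick way to observe the difference between the two policies is to apply the manifest and watch Pod creation. With Parallel, all replicas should enter ContainerCreating at roughly the same time, whereas OrderedReady would create them one at a time, each waiting for the previous ordinal to become Ready. A sketch:

kubectl apply -f 05-sts-podManagementPolicy.yaml
kubectl get pods -l app=nginx -w
# Expected with Parallel: dingzhiyan-podmanagementpolicy-0/-1/-2 all appear at once.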