7.2 Building a 3-node ZooKeeper cluster on k8s with PV and PVC

1. Introduction to PV and PVC

1.1 Relationship between StorageClass, PV, and PVC

(figure: relationship diagram of StorageClass, PV, and PVC)

  • Volumes are the most basic storage abstraction. Many types are supported, including local storage, NFS, FC, and a host of cloud storage backends, and you can also write your own storage plugin to support a specific storage system. A Volume can be used by a Pod directly, or consumed through a PV. A plain Volume is statically bound to its Pod: the storage type is declared alongside the Pod via the volumes field, and the mount point inside the container via volumeMounts.

  • PersistentVolume (PV): unlike a plain Volume, a PV is a resource object in Kubernetes; creating a PV creates a storage resource in the cluster, and that resource is consumed by requesting it through a PVC.
    A PV is a piece of network storage configured by the administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like Volumes, but their lifecycle is independent of any individual pod that uses them. The API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system.

  • PersistentVolumeClaim (PVC) is a user's request for PV storage; Kubernetes finds a PV in the system that satisfies the conditions given in the PVC and binds the two.
    A PVC is a request for storage by a user. It is similar to a pod: pods consume node resources, PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., read/write once or read-only many times).
    A PVC can currently be matched to a PV by storageClassName, matchLabels, or matchExpressions (see the sketch after this list).

  • StorageClass: Kubernetes supports many storage backends, e.g. Ceph, NFS, and GlusterFS. A StorageClass gives administrators a way to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators.
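
As a minimal sketch of label-based matching (the names, label, and path here are illustrative, not part of the cluster built below), a PV can carry labels and a PVC can select on them instead of naming a PV outright:

# Hypothetical PV carrying a label for selector-based matching
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  labels:
    usage: demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/nfs_data/example
---
# Hypothetical PVC that matches the PV above by label rather than via volumeName
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  selector:
    matchLabels:
      usage: demo
  resources:
    requests:
      storage: 5Gi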


1.2 Lifecycle

PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks on them. The interaction between PVs and PVCs follows this lifecycle:

Provisioning -> Binding -> Using -> Releasing -> Recycling

  • Provisioning -- persistent storage is supplied by a storage system or cloud platform outside the cluster.

    • Static: the cluster administrator creates a number of PVs. They carry the details of the real storage that is available to cluster users, exist in the Kubernetes API, and are ready for consumption.
    • Dynamic: when none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster may try to provision a volume dynamically for the PVC. Provisioning is based on StorageClasses: the PVC must request a class, and the administrator must have created and configured that class for dynamic provisioning to happen. A claim that requests the class "" effectively disables dynamic provisioning for itself.
  • Binding -- the user creates a PVC specifying the required size and access mode. The PVC stays unbound until a matching PV is found.

  • Using -- the user consumes the PVC in a pod just like a volume.

  • Releasing -- the user deletes the PVC to release the storage, and the PV moves to the "released" state. Since it still holds the previous data, that data must be handled according to the reclaim policy before the storage can be used by another PVC.

  • Recycling -- a PV can be given one of three reclaim policies: Retain, Recycle, or Delete.

    • Retain: keeps the data for manual handling.
    • Delete: deletes the PV together with the external storage behind it; requires plugin support.
    • Recycle: scrubs the volume, after which it can be claimed by a new PVC; requires plugin support.

Note: currently only NFS and HostPath volumes support the Recycle policy; AWS EBS, GCE PD, Azure Disk, and Cinder support the Delete policy.
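
The policy is set per PV through spec.persistentVolumeReclaimPolicy; a minimal sketch (the PV name and path are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-retain-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # one of Retain | Recycle | Delete
  nfs:
    server: 192.168.2.10
    path: /data/nfs_data/example

An existing PV's policy can also be changed in place:

kubectl patch pv example-retain-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'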

References:
https://zhuanlan.zhihu.com/p/299242718
https://www.cnblogs.com/along21/p/10342788.html

1.3 Statically provisioned PV/PVC

With static provisioning, the PVs must be created in advance, and each PVC names the PV it maps to (via volumeName); otherwise the claim cannot bind.

The example below uses NFS as the persistent storage and creates static PVs/PVCs for the /data and /datalog directories of a 3-node ZooKeeper cluster.

1.3.1 Deploy the NFS server

# Install the NFS server
apt install nfs-kernel-server

# Export configuration (/etc/exports)
/data/nfs_data *(rw,sync,no_root_squash)
/data/k8s-data *(rw,sync,no_root_squash)

# Start the service and enable it at boot
systemctl restart nfs-kernel-server && systemctl enable nfs-kernel-server

# Check the exported directories
showmount -e
Export list for k8-deploy:
/data/k8s-data *
/data/nfs_data *

# Install the NFS client (also needed on every k8s node so the kubelet can mount NFS volumes)
apt install nfs-common

# Mount test from a client
mount -t nfs 192.168.2.10:/data/k8s-data /mnt/

1.3.2 pv yaml

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# cat zookeeper-datadir-pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datadir-pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datadir-pv-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datadir-pv-3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datalog-pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datalog-pv-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv-3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.10
    path: /data/k8s-data/zookeeper/datalog-pv-3

1.3.3 Create the PVs

# Create the backing directories on the NFS server first
mkdir -p /data/k8s-data/zookeeper/datadir-pv-{1..3}
mkdir -p /data/k8s-data/zookeeper/datalog-pv-{1..3}

# Create the PVs in the k8s cluster
root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl apply -f zookeeper-datadir-pv.yml 
persistentvolume/zookeeper-datadir-pv-1 created
persistentvolume/zookeeper-datadir-pv-2 created
persistentvolume/zookeeper-datadir-pv-3 created
persistentvolume/zookeeper-datalog-pv-1 created
persistentvolume/zookeeper-datalog-pv-2 created
persistentvolume/zookeeper-datalog-pv-3 created

# Check that the PVs were created
root@k8-deploy:~# kubectl get pv -n zk-ns
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
zookeeper-datadir-pv-1   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datadir-pvc-1                           20s
zookeeper-datadir-pv-2   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datadir-pvc-2                           20s
zookeeper-datadir-pv-3   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datadir-pvc-3                           20s
zookeeper-datalog-pv-1   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datalog-pvc-1                           20s
zookeeper-datalog-pv-2   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datalog-pvc-2                           20s
zookeeper-datalog-pv-3   20Gi       RWO            Retain           Bound    zk-ns/zookeeper-datalog-pvc-3                           20s
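
Note: none of these PVs sets persistentVolumeReclaimPolicy explicitly; statically created PVs default to Retain, which is why the RECLAIM POLICY column reads Retain above.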

1.3.4 pvc yaml

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# cat zookeeper-datadir-pvc.yml 
apiVersion: v1
kind: Namespace
metadata:
  name: zk-ns
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 10Gi

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 10Gi
 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datalog-pvc-1
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datalog-pv-1
  resources:
    requests:
      storage: 10Gi

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datalog-pvc-2
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datalog-pv-2
  resources:
    requests:
      storage: 10Gi
 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datalog-pvc-3
  namespace: zk-ns
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datalog-pv-3
  resources:
    requests:
      storage: 10Gi
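
Each claim requests only 10Gi, but binding is all-or-nothing: a claim bound to a 20Gi PV owns the whole volume, which is why the CAPACITY column in the output below reads 20Gi.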

1.3.5 Create the PVCs

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl apply -f zookeeper-datadir-pvc.yml 
namespace/zk-ns created
persistentvolumeclaim/zookeeper-datadir-pvc-1 created
persistentvolumeclaim/zookeeper-datadir-pvc-2 created
persistentvolumeclaim/zookeeper-datadir-pvc-3 created
persistentvolumeclaim/zookeeper-datalog-pvc-1 created
persistentvolumeclaim/zookeeper-datalog-pvc-2 created
persistentvolumeclaim/zookeeper-datalog-pvc-3 created

# Check that the PVCs were created and bound
root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl get pvc -A
NAMESPACE   NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zk-ns       zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   20Gi       RWO                           14s
zk-ns       zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   20Gi       RWO                           14s
zk-ns       zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   20Gi       RWO                           14s
zk-ns       zookeeper-datalog-pvc-1   Bound    zookeeper-datalog-pv-1   20Gi       RWO                           14s
zk-ns       zookeeper-datalog-pvc-2   Bound    zookeeper-datalog-pv-2   20Gi       RWO                           14s
zk-ns       zookeeper-datalog-pvc-3   Bound    zookeeper-datalog-pv-3   20Gi       RWO                           14s

2. Build the ZooKeeper cluster using PV/PVC as its persistent storage

2.1 ZooKeeper overview

ZooKeeper is a classic distributed data-consistency solution. On top of it, distributed applications can build features such as data publish/subscribe, load balancing, naming services, distributed coordination/notification, cluster management, master election, distributed locks, and distributed queues.

One of the most common ZooKeeper use cases is acting as a registry between service producers and service consumers.

A service producer registers the service it provides in ZooKeeper; when a consumer makes a call, it first looks the service up in ZooKeeper, obtains the producer's details, and then invokes the producer directly.

2.2 Prepare the ZooKeeper docker image

# Pull the image (version 3.4.14) from Docker Hub
docker pull zookeeper:3.4.14

# Re-tag the image and push it to the local Harbor registry
docker tag zookeeper:3.4.14 192.168.1.110/zookeeper/zookeeper:3.4.14

docker push 192.168.1.110/zookeeper/zookeeper:3.4.14

2.3 k8s ZooKeeper cluster yaml
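
The manifest below relies on the official zookeeper image's startup script, which renders zoo.cfg from environment variables: ZOO_MY_ID writes the node ID to /data/myid, and ZOO_SERVERS lists the ensemble members. Each node refers to itself as 0.0.0.0 (binding all interfaces) and reaches its peers through their per-node Services.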

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# cat zookeeper-cluster.yml 
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: zk-ns
spec:
  ports:
    - name: client
      port: 2181
  selector: 
    app: zookeeper
---

apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: zk-ns
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 42181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: zk-ns
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 42182
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "2"
--- 
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: zk-ns
spec:
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 42183
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "3"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper1
  namespace: zk-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      containers:
        - name: server
          image: 192.168.1.110/zookeeper/zookeeper:3.4.14
          imagePullPolicy: Always
          env:
            - name: ZOO_MY_ID
              value: "1"
            - name: SERVERS
              value: "zookeeper1"
            - name: ZOO_SERVERS
              value: server.1=0.0.0.0:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/data"
            name: zookeeper-datadir-pvc-1
          - mountPath: "/datalog"
            name: zookeeper-datalog-pvc-1
      volumes:
        - name: zookeeper-datadir-pvc-1
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-1
        - name: zookeeper-datalog-pvc-1
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper2
  namespace: zk-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
      server-id: "2"
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"
    spec:
      containers:
        - name: server
          image: 192.168.1.110/zookeeper/zookeeper:3.4.14
          imagePullPolicy: Always
          env:
            - name: ZOO_MY_ID
              value: "2"
            - name: SERVERS
              value: "zookeeper2"
            - name: ZOO_SERVERS
              value: server.1=zookeeper1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zookeeper3:2888:3888
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/data"
            name: zookeeper-datadir-pvc-2
          - mountPath: "/datalog"
            name: zookeeper-datalog-pvc-2
      volumes:
        - name: zookeeper-datadir-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-2
        - name: zookeeper-datalog-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper3
  namespace: zk-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
      server-id: "3"
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      containers:
        - name: server
          image: 192.168.1.110/zookeeper/zookeeper:3.4.14
          imagePullPolicy: Always
          env:
            - name: ZOO_MY_ID
              value: "3"
            - name: SERVERS
              value: "zookeeper3"
            - name: ZOO_SERVERS
              value: server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=0.0.0.0:2888:3888
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/data"
            name: zookeeper-datadir-pvc-3
          - mountPath: "/datalog"
            name: zookeeper-datalog-pvc-3
      volumes:
        - name: zookeeper-datadir-pvc-3
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-3
        - name: zookeeper-datalog-pvc-3
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc-3
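
Note: the Deployment selectors above include server-id so that the three Deployments do not overlap. Also, the nodePort values 42181-42183 fall outside the default NodePort range (30000-32767); they only work on a cluster whose apiserver --service-node-port-range has been extended, as it evidently was here.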

2.4 Create the ZooKeeper cluster from the yaml

# kubectl apply -f zookeeper-cluster.yml 

2.5 Check that the services, pods, and deployments were created

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl get svc -n zk-ns
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                        AGE
zookeeper    ClusterIP   10.0.225.221   <none>        2181/TCP                                       21h
zookeeper1   NodePort    10.0.198.188   <none>        2181:42181/TCP,2888:30109/TCP,3888:50986/TCP   21h
zookeeper2   NodePort    10.0.38.33     <none>        2181:42182/TCP,2888:48760/TCP,3888:57228/TCP   21h
zookeeper3   NodePort    10.0.164.42    <none>        2181:42183/TCP,2888:41871/TCP,3888:37373/TCP   21h

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl get pod -n zk-ns
NAME                          READY   STATUS    RESTARTS   AGE
zookeeper1-5668b88966-plks2   1/1     Running   0          21h
zookeeper2-978dfb9cb-bz67x    1/1     Running   0          21h
zookeeper3-69c77fdcc4-djfs6   1/1     Running   0          21h

root@k8-deploy:~/k8s-yaml/web/zookeeper_v2# kubectl get deploy -n zk-ns
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
zookeeper1   1/1     1            1           21h
zookeeper2   1/1     1            1           21h
zookeeper3   1/1     1            1           21h

2.6 Check the ZooKeeper cluster status

# kubectl exec zookeeper1-5668b88966-plks2 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

# kubectl exec zookeeper2-978dfb9cb-bz67x -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader

# kubectl exec zookeeper3-69c77fdcc4-djfs6 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower
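
A quick way to check a node's role without picking a specific pod is ZooKeeper's four-letter-word interface. A sketch using a throwaway busybox pod (the pod name zk-test is illustrative) against the zookeeper ClusterIP Service:

# "srvr" reports the mode and stats of whichever node the Service routes to
kubectl run zk-test --rm -it --restart=Never --image=busybox -n zk-ns -- sh -c 'echo srvr | nc zookeeper 2181'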

2.7 Test that leader election recovers after a node failure

# Delete the pod the leader is running in
root@k8-deploy:~# kubectl delete pod zookeeper2-978dfb9cb-bz67x -n zk-ns
pod "zookeeper2-978dfb9cb-bz67x" deleted

# The pod is recreated automatically
root@k8-deploy:~# kubectl get pod -n zk-ns   
NAME                          READY   STATUS    RESTARTS   AGE
zookeeper1-5668b88966-plks2   1/1     Running   0          21h
zookeeper2-978dfb9cb-jmxr5    1/1     Running   0          21s
zookeeper3-69c77fdcc4-djfs6   1/1     Running   0          21h

# Check the roles again: the leader has moved to zookeeper3
root@k8-deploy:~# kubectl exec zookeeper1-5668b88966-plks2 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

root@k8-deploy:~# kubectl exec zookeeper2-978dfb9cb-jmxr5 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

root@k8-deploy:~# kubectl exec zookeeper3-69c77fdcc4-djfs6 -n zk-ns -it -- bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader

2.8 Connect to the ZooKeeper cluster with ZooInspector

Log on to a Windows machine in the same network segment as the k8s nodes and connect to the ZooKeeper cluster through the NodePort exposed on a node IP.
(screenshot: ZooInspector connected to the ZooKeeper cluster through a NodePort)

2.9 Verify that ZooKeeper data is stored correctly in the NFS-backed PV directories

# Inspect the PV directories on the NFS server
root@k8-deploy:~# ls -l /data/k8s-data/zookeeper/data*
/data/k8s-data/zookeeper/datadir-pv-1:
total 8
-rw-r--r-- 1 user user    2 Oct 18 16:19 myid
drwxr-xr-x 2 user user 4096 Oct 19 14:06 version-2

/data/k8s-data/zookeeper/datadir-pv-2:
total 8
-rw-r--r-- 1 user user    2 Oct 18 16:19 myid
drwxr-xr-x 2 user user 4096 Oct 19 14:06 version-2

/data/k8s-data/zookeeper/datadir-pv-3:
total 8
-rw-r--r-- 1 user user    2 Oct 18 16:19 myid
drwxr-xr-x 2 user user 4096 Oct 19 14:06 version-2

/data/k8s-data/zookeeper/datalog-pv-1:
total 4
drwxr-xr-x 2 user user 4096 Oct 18 16:25 version-2

/data/k8s-data/zookeeper/datalog-pv-2:
total 4
drwxr-xr-x 2 user user 4096 Oct 19 14:12 version-2

/data/k8s-data/zookeeper/datalog-pv-3:
total 4
drwxr-xr-x 2 user user 4096 Oct 18 16:25 version-2

# Check the node ID generated for each ZooKeeper cluster member
root@k8-deploy:~# cat /data/k8s-data/zookeeper/datadir-pv-1/myid 
1
root@k8-deploy:~# cat /data/k8s-data/zookeeper/datadir-pv-2/myid  
2
root@k8-deploy:~# cat /data/k8s-data/zookeeper/datadir-pv-3/myid  
3

# Confirm the transaction logs are being written to the NFS storage
root@k8-deploy:~# ls -l /data/k8s-data/zookeeper/datalog-pv-1/version-2/log.200000001
-rw-r--r-- 1 user user 67108880 Oct 19 14:14 /data/k8s-data/zookeeper/datalog-pv-1/version-2/log.200000001
root@k8-deploy:~# ls -l /data/k8s-data/zookeeper/datalog-pv-2/version-2/log.200000001
-rw-r--r-- 1 user user 67108880 Oct 19 09:04 /data/k8s-data/zookeeper/datalog-pv-2/version-2/log.200000001
root@k8-deploy:~# ls -l /data/k8s-data/zookeeper/datalog-pv-3/version-2/log.200000001
-rw-r--r-- 1 user user 67108880 Oct 19 14:14 /data/k8s-data/zookeeper/datalog-pv-3/version-2/log.200000001
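
As a final end-to-end check (pod name as shown above; the znode path /pv-test is illustrative), write a znode and then re-run the ls commands above to see updated mtimes on the log files:

# Create a test znode through the client Service
kubectl exec zookeeper1-5668b88966-plks2 -n zk-ns -it -- bin/zkCli.sh -server zookeeper:2181 create /pv-test "hello"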