(1) Deploying ZooKeeper on Kubernetes

Because a pod's data disappears once the pod exits, persistent storage is required, so the first step is to set up an NFS server. NFS (Network File System) is a network file system that lets you store data on a dedicated storage server so that a pod can recover its original data after a restart.

Setting up NFS

1. Install NFS

sudo apt-get install nfs-kernel-server

The other nodes in the cluster also need the NFS client: apt install nfs-common

2. Configure NFS

Create the directory to be shared via NFS: mkdir /home/nfs/zookeeper-0

Grant permissions: chmod 777 /home/nfs/zookeeper-0
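The exports below share three directories, one per ZooKeeper replica, so the other two need the same treatment; a compact way to create and open up all three (re-running it over zookeeper-0 is harmless):

mkdir -p /home/nfs/zookeeper-0 /home/nfs/zookeeper-1 /home/nfs/zookeeper-2
chmod 777 /home/nfs/zookeeper-0 /home/nfs/zookeeper-1 /home/nfs/zookeeper-2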

Edit the configuration

root@ubuntu:# vim /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/home/nfs/zookeeper-0 *(rw,sync,no_root_squash)
/home/nfs/zookeeper-1 *(rw,sync,no_root_squash)
/home/nfs/zookeeper-2 *(rw,sync,no_root_squash)

/home/nfs/zookeeper-0: the directory to be shared.

*: allows access from any network segment.

rw: grants read and write permission.

sync: writes are committed synchronously to memory and disk before the server replies.

no_root_squash: controls the privileges that NFS client users have on the shared directory. For example, if a client accesses the share as root, it has root privileges on that directory.
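As an alternative to the service restart in the next step, a running NFS server can also re-read /etc/exports in place with exportfs:

exportfs -ra    # re-export everything listed in /etc/exports
exportfs -v     # show the currently active exports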

Restart the services so the configuration takes effect
root@ubuntu:# /etc/init.d/rpcbind restart
[ ok ] Restarting rpcbind (via systemctl): rpcbind.service.
root@ubuntu:# /etc/init.d/nfs-kernel-server restart
[ ok ] Restarting nfs-kernel-server (via systemctl): nfs-kernel-server.service.
root@ubuntu:# showmount -e
Export list for ubuntu:
/home/nfs/zookeeper-2 *
/home/nfs/zookeeper-1 *
/home/nfs/zookeeper-0 *

If you see the output above, the NFS service is ready.
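As an extra check from any other node in the cluster (which already has nfs-common installed), the export list can be queried remotely and a test mount performed; the IP below is the NFS server used throughout this post:

showmount -e 192.168.45.227
mount -t nfs 192.168.45.227:/home/nfs/zookeeper-0 /mnt    # temporary test mount
umount /mnt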

Create the namespace

All subsequent services will be deployed in a namespace named storm, so create that namespace first:

---
apiVersion: v1
kind: Namespace
metadata:
  name: storm
  labels:
    name: storm
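Assuming the manifest above is saved as namespace.yaml (the filename here is just an assumption), apply and verify it with:

kubectl apply -f namespace.yaml
kubectl get namespace storm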

Deploying ZooKeeper

Apache ZooKeeper is a distributed, open-source coordination service for distributed systems. ZooKeeper lets you read data, write data, and watch for data updates. Data is organized in a file-system-like hierarchy and replicated to every ZooKeeper server in the ensemble (the set of ZooKeeper servers). All operations on the data are atomic and sequentially consistent.

Below is zookeeper-pv.yaml for the ZooKeeper deployment. Note that the server field must be filled in with the IP of the host running the NFS service.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zookeeper-0
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage
  nfs:
    path: /home/nfs/zookeeper-0
    server: 192.168.45.227
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zookeeper-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage
  nfs:
    path: /home/nfs/zookeeper-1
    server: 192.168.45.227
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zookeeper-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage
  nfs:
    path: /home/nfs/zookeeper-2
    server: 192.168.45.227

Below is the zookeeper.yaml used to deploy ZooKeeper. Because ZooKeeper needs to elect a leader, a Headless Service is used to let the ZooKeeper instances talk to each other. In addition, a zk-client-service is deployed to allow access to ZooKeeper from outside the cluster.

Also, because each ZooKeeper instance must bind its own PVC, the workload is deployed as a StatefulSet.

apiVersion: v1
kind: Service
metadata:
  name: zk-inner-service
  namespace: storm
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-client-service
  namespace: storm
  labels:
    app: zk
spec:
  type: NodePort
  ports:
  - port: 2181
    nodePort: 31811
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: storm
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
  namespace: storm
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-inner-service
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      storageClassName: nfs-storage
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Apply the two YAML files from a terminal. Make sure the ZooKeeper image already exists on the cluster nodes, or that the nodes can reach the network, because the container runtime will pull the image with the corresponding tag during deployment.

root@ubuntu:# kubectl apply -f zookeeper-pv.yaml
root@ubuntu:# kubectl apply -f zookeeper.yaml
root@ubuntu:# kubectl -n storm get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS        REASON   AGE
pv-zookeeper-0                             20Gi       RWO            Retain           Bound    storm/datadir-zookeeper-2               nfs-storage                  52s
pv-zookeeper-1                             20Gi       RWO            Retain           Bound    storm/datadir-zookeeper-0               nfs-storage                  55s
pv-zookeeper-2                             20Gi       RWO            Retain           Bound    storm/datadir-zookeeper-1               nfs-storage                  56s
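The claim side of each binding can be checked as well; every datadir-zookeeper-N PVC created from the volumeClaimTemplates should show as Bound to one of the PVs above:

root@ubuntu:# kubectl -n storm get pvc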
root@ubuntu:# kubectl -n storm get pod 
NAME                        READY   STATUS             RESTARTS   AGE
zookeeper-0                 1/1     Running            0          25s
zookeeper-1                 1/1     Running            0          19s
zookeeper-2                 1/1     Running            0          7s
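The two Services can also be listed to confirm the Headless Service (its CLUSTER-IP shows as None) and the NodePort mapping of 31811:

root@ubuntu:# kubectl -n storm get svc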
root@ubuntu:# for i in  0 1 2; do kubectl exec -it zookeeper-$i  -n storm -- zkServer.sh  status; done
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: leader
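To confirm that the ensemble is really replicating data rather than just reporting healthy, write a znode on one member and read it back from another. This mirrors the sanity test in the upstream Kubernetes ZooKeeper tutorial; zkCli.sh ships in this image:

root@ubuntu:# kubectl -n storm exec zookeeper-0 -- zkCli.sh create /hello world
root@ubuntu:# kubectl -n storm exec zookeeper-1 -- zkCli.sh get /hello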

If ZooKeeper fails to start properly, or every instance ends up in standalone mode, check whether zoo.cfg is correct and inspect the logs in the corresponding directories. Normally the ZooKeeper instances discover each other through the Headless Service, which gives every pod its own domain of the form <pod>.zk-inner-service.storm.svc.cluster.local; ZooKeeper then resolves each hostname to an IP through Kubernetes CoreDNS.
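If name resolution is the suspect, the Headless Service can be checked directly using the standard busybox DNS-debugging technique from the Kubernetes docs (the pod name dns-test is arbitrary); for a Headless Service this should return one A record per ZooKeeper pod:

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup zk-inner-service.storm.svc.cluster.local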

You can check each pod's FQDN (Fully Qualified Domain Name) with the following commands:

root@ubuntu:# for i in 0 1 2; do kubectl -n storm exec zookeeper-$i -- hostname -f; done
zookeeper-0.zk-inner-service.storm.svc.cluster.local
zookeeper-1.zk-inner-service.storm.svc.cluster.local
zookeeper-2.zk-inner-service.storm.svc.cluster.local
root@ubuntu:# kubectl -n storm exec zookeeper-0 -- cat /usr/bin/../etc/zookeeper/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zookeeper-0.zk-inner-service.storm.svc.cluster.local:2888:3888
server.2=zookeeper-1.zk-inner-service.storm.svc.cluster.local:2888:3888
server.3=zookeeper-2.zk-inner-service.storm.svc.cluster.local:2888:3888
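The server.N entries above are tied to each instance through its myid file: the StatefulSet ordinal plus one becomes the server ID. This can be verified with the same loop style as above (the myid path follows from the dataDir shown in zoo.cfg); it should print 1, 2 and 3:

root@ubuntu:# for i in 0 1 2; do kubectl -n storm exec zookeeper-$i -- cat /var/lib/zookeeper/data/myid; done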

Test access to the ZooKeeper service from outside the cluster

$ echo ruok | ./nc  192.168.45.227 31811; echo
imok
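Other four-letter-word commands can be sent the same way over the NodePort; for example, srvr reports the node's mode and basic statistics (ZooKeeper 3.4 permits all four-letter words by default):

$ echo srvr | ./nc 192.168.45.227 31811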