Setting up ZooKeeper on Kubernetes
ZooKeeper basics
ZooKeeper is an open-source distributed coordination service, created at Yahoo as an open-source implementation of Google Chubby. Distributed applications can build features such as data publish/subscribe, load balancing, naming services, distributed coordination/notification, cluster management, master election, distributed locks, and distributed queues on top of ZooKeeper.
Basic concepts:
Cluster roles: Leader, Follower, Observer
Leader: chosen by election; serves both reads and writes;
Follower: votes in leader elections and can be elected leader; serves reads;
Observer: does not vote in leader elections and cannot be elected; serves reads only;
Session: a client talks to a server over a long-lived TCP connection;
Default ports: client connections: 2181
leader/follower communication: 2888
leader election: 3888
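To make the three ports concrete, here is a minimal zoo.cfg sketch for a three-node ensemble. This is illustrative only: the start-zookeeper script used in the StatefulSet below generates an equivalent configuration itself, and the zk-0.zk-hs host names assume the headless Service naming used in this article.

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper/data
clientPort=2181                  # clients connect here
# server.<myid>=<host>:<quorum port>:<election port>
server.1=zk-0.zk-hs:2888:3888
server.2=zk-1.zk-hs:2888:3888
server.3=zk-2.zk-hs:2888:3888
```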
Now let's build a ZooKeeper cluster on Kubernetes.
step1 Write the YAML manifests for the components
The YAML below covers PersistentVolumes, a Headless Service, a Service, a PodDisruptionBudget, and a StatefulSet.
Create a new file, zookeeper.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv00
  labels:
    pv: zookeeper-pv   # this label must match the selector in volumeClaimTemplates
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /zookeeper   # in practice each PV should point to its own directory
    server: "*.*.*.*"  # fill in the address of your NFS server
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv01
  labels:
    pv: zookeeper-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /zookeeper
    server: "*.*.*.*"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv02
  labels:
    pv: zookeeper-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /zookeeper
    server: "*.*.*.*"
---
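The StatefulSet refers to a headless Service named zk-hs, and the file list above also mentions a client Service and a PodDisruptionBudget, but those manifests are not shown. A sketch of them, following the official Kubernetes ZooKeeper tutorial linked at the end of this article (the names zk-cs and zk-pdb are taken from that tutorial; adjust as needed, and use policy/v1beta1 for the PodDisruptionBudget on clusters older than 1.21):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs          # headless Service: gives each pod a stable DNS name
  namespace: tools
  labels:
    app: zk
spec:
  clusterIP: None
  ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs          # client-facing Service for port 2181
  namespace: tools
  labels:
    app: zk
spec:
  ports:
    - port: 2181
      name: client
  selector:
    app: zk
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb         # keep quorum during voluntary disruptions
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
```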
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: kubernetes-zookeeper
          imagePullPolicy: Always
          image: "mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10"  # available on Docker Hub
          resources:
            requests:
              memory: "5Gi"
              cpu: "2"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=1000 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir-zk
              mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: datadir-zk
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
        selector:
          matchLabels:
            pv: zookeeper-pv
Open a terminal and create the zk cluster with kubectl. The manifests target the tools namespace, so create it first if it does not already exist (kubectl create namespace tools).
kubectl apply -f zookeeper.yaml
step2 Once the pods are up, verify that the cluster works
Verify the cluster with kubectl exec
kubectl exec zk-0 -n tools -- zkServer.sh status
kubectl exec zk-1 -n tools -- zkServer.sh status
kubectl exec zk-2 -n tools -- zkServer.sh status
The normal output shows each node's role in the ensemble: "Mode: leader" on one node and "Mode: follower" on the other two.
kubectl exec zk-0 -n tools -- zkCli.sh create /hello world
kubectl exec zk-1 -n tools -- zkCli.sh get /hello
If world is returned, the ZooKeeper cluster was set up successfully.
Reference: https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/