Linux - K8s - StatefulSet - Redis Cluster (cluster setup, scaling out and in)
################################
# Redis experiment
# Configure NFS on the storage server
[16:42:17 root@nfs nfs-data]#mkdir ./pv/redis{1..6} -p
[16:42:50 root@nfs nfs-data]#for i in {1..6}
> do
> echo "/nfs-data/pv/redis$i *(rw,no_root_squash,sync)" >> /etc/exports
> done
[16:43:50 root@nfs nfs-data]#systemctl restart nfs-kernel-server.service
[16:43:59 root@nfs nfs-data]#exportfs
/nfs-data/pv/redis1    <world>
/nfs-data/pv/redis2    <world>
/nfs-data/pv/redis3    <world>
/nfs-data/pv/redis4    <world>
/nfs-data/pv/redis5    <world>
/nfs-data/pv/redis6    <world>
# Verify the NFS exports from the master node
[16:46:17 root@master1 storage]#showmount -e 10.0.0.58
Export list for 10.0.0.58:
/nfs-data/pv/redis6 *
/nfs-data/pv/redis5 *
/nfs-data/pv/redis4 *
/nfs-data/pv/redis3 *
/nfs-data/pv/redis2 *
/nfs-data/pv/redis1 *
# Prepare the PV resource file
[16:47:14 root@master1 storage]#cat 27-storage-statefulset-redis-pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv001
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.58
    path: /nfs-data/pv/redis1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv002
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.58
    path: /nfs-data/pv/redis2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv003
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.58
    path: /nfs-data/pv/redis3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv004
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.58
    path: /nfs-data/pv/redis4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv005
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.58
    path: /nfs-data/pv/redis5
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv006
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.58
    path: /nfs-data/pv/redis6
[16:48:25 root@master1 storage]#kubectl apply -f 27-storage-statefulset-redis-pv.yaml
persistentvolume/redis-pv001 created
persistentvolume/redis-pv002 created
persistentvolume/redis-pv003 created
persistentvolume/redis-pv004 created
persistentvolume/redis-pv005 created
persistentvolume/redis-pv006 created
[16:48:57 root@master1 storage]#kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
redis-pv001   200M       RWX            Retain           Available                                   31s
redis-pv002   200M       RWX            Retain           Available                                   31s
redis-pv003   200M       RWX            Retain           Available                                   31s
redis-pv004   200M       RWX            Retain           Available                                   31s
redis-pv005   200M       RWX            Retain           Available                                   31s
redis-pv006   200M       RWX            Retain           Available                                   31s
# Prepare the Redis headless Service resource file
[16:47:48 root@master1 storage]#cat 28-storage-statefulset-redis-headlessservice.yml
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster
[16:48:03 root@master1 storage]#kubectl apply -f 28-storage-statefulset-redis-headlessservice.yml
service/redis-service created
[16:48:55 root@master1 storage]#kubectl get svc
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.96.0.1    <none>        443/TCP    7d17h
redis-service   ClusterIP   None         <none>        6379/TCP   31s
# Provide the Redis configuration through a ConfigMap
[16:50:39 root@master1 storage]#cat redisconf/redis.conf
port 6379
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
appendonly yes
dir /var/lib/redis
[16:50:49 root@master1 storage]#kubectl create configmap redis-conf --from-file=redisconf/
configmap/redis-conf created
[16:52:34 root@master1 storage]#kubectl get configmaps
NAME               DATA   AGE
kube-root-ca.crt   1      7d17h
redis-conf         1      28s
# Configure and apply the core Redis StatefulSet resource, using pod anti-affinity and matchExpressions, then watch the pods come up in order
[16:55:40 root@master1 storage]#kubectl get all -o wide
NAME                  READY   STATUS              RESTARTS   AGE   IP       NODE               NOMINATED NODE   READINESS GATES
pod/redis-cluster-0   0/1     ContainerCreating   0          5s    <none>   node1.noisedu.cn   <none>           <none>
NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE     SELECTOR
service/kubernetes      ClusterIP   10.96.0.1    <none>        443/TCP    7d17h   <none>
service/redis-service   ClusterIP   None         <none>        6379/TCP   7m19s   app=redis,appCluster=redis-cluster
NAME                             READY   AGE   CONTAINERS   IMAGES
statefulset.apps/redis-cluster   0/6     5s    redis        10.0.0.55:80/mykubernetes/redis:6.2.5
[16:55:45 root@master1 storage]#kubectl get all -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
pod/redis-cluster-0   1/1     Running   0          10s   10.244.3.11   node1.noisedu.cn   <none>           <none>
pod/redis-cluster-1   0/1     Pending   0          2s    <none>        <none>             <none>           <none>
NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE     SELECTOR
service/kubernetes      ClusterIP   10.96.0.1    <none>        443/TCP    7d17h   <none>
service/redis-service   ClusterIP   None         <none>        6379/TCP   7m24s   app=redis,appCluster=redis-cluster
NAME                             READY   AGE   CONTAINERS   IMAGES
statefulset.apps/redis-cluster   1/6     10s   redis        10.0.0.55:80/mykubernetes/redis:6.2.5
[16:55:51 root@master1 storage]#kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running   0          16s   10.244.3.11   node1.noisedu.cn   <none>           <none>
redis-cluster-1   1/1     Running   0          8s    10.244.4.8    node2.noisedu.cn   <none>           <none>
redis-cluster-2   1/1     Running   0          3s    10.244.3.12   node1.noisedu.cn   <none>           <none>
redis-cluster-3   0/1     Pending   0          0s    <none>        <none>             <none>           <none>
[16:55:58 root@master1 storage]#kubectl get pod -o wide
NAME              READY   STATUS              RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running             0          20s   10.244.3.11   node1.noisedu.cn   <none>           <none>
redis-cluster-1   1/1     Running             0          12s   10.244.4.8    node2.noisedu.cn   <none>           <none>
redis-cluster-2   1/1     Running             0          7s    10.244.3.12   node1.noisedu.cn   <none>           <none>
redis-cluster-3   0/1     ContainerCreating   0          4s    <none>        node2.noisedu.cn   <none>           <none>
[16:56:01 root@master1 storage]#kubectl get pod -o wide
NAME              READY   STATUS              RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running             0          23s   10.244.3.11   node1.noisedu.cn   <none>           <none>
redis-cluster-1   1/1     Running             0          15s   10.244.4.8    node2.noisedu.cn   <none>           <none>
redis-cluster-2   1/1     Running             0          10s   10.244.3.12   node1.noisedu.cn   <none>           <none>
redis-cluster-3   1/1     Running             0          7s    10.244.4.9    node2.noisedu.cn   <none>           <none>
redis-cluster-4   0/1     ContainerCreating   0          3s    <none>        node1.noisedu.cn   <none>           <none>
[16:56:04 root@master1 storage]#kubectl get pod -o wide
NAME              READY   STATUS              RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running             0          26s   10.244.3.11   node1.noisedu.cn   <none>           <none>
redis-cluster-1   1/1     Running             0          18s   10.244.4.8    node2.noisedu.cn   <none>           <none>
redis-cluster-2   1/1     Running             0          13s   10.244.3.12   node1.noisedu.cn   <none>           <none>
redis-cluster-3   1/1     Running             0          10s   10.244.4.9    node2.noisedu.cn   <none>           <none>
redis-cluster-4   1/1     Running             0          6s    10.244.3.13   node1.noisedu.cn   <none>           <none>
redis-cluster-5   0/1     ContainerCreating   0          3s    <none>        node2.noisedu.cn   <none>           <none>
[16:56:06 root@master1 storage]#kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running   0          28s   10.244.3.11   node1.noisedu.cn   <none>           <none>
redis-cluster-1   1/1     Running   0          20s   10.244.4.8    node2.noisedu.cn   <none>           <none>
redis-cluster-2   1/1     Running   0          15s   10.244.3.12   node1.noisedu.cn   <none>           <none>
redis-cluster-3   1/1     Running   0          12s   10.244.4.9    node2.noisedu.cn   <none>           <none>
redis-cluster-4   1/1     Running   0          8s    10.244.3.13   node1.noisedu.cn   <none>           <none>
redis-cluster-5   1/1     Running   0          5s    10.244.4.10   node2.noisedu.cn   <none>           <none>
# Check PVs and PVCs: all are bound
[16:57:31 root@master1 storage]#kubectl get pv,pvc
NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS   REASON   AGE
persistentvolume/redis-pv001  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-1                            8m58s
persistentvolume/redis-pv002  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-3                            8m58s
persistentvolume/redis-pv003  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-4                            8m58s
persistentvolume/redis-pv004  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-5                            8m58s
persistentvolume/redis-pv005  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-0                            8m58s
persistentvolume/redis-pv006  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-2                            8m58s
NAME                                               STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/redis-data-redis-cluster-0   Bound    redis-pv005   200M       RWX                           114s
persistentvolumeclaim/redis-data-redis-cluster-1   Bound    redis-pv001   200M       RWX                           106s
persistentvolumeclaim/redis-data-redis-cluster-2   Bound    redis-pv006   200M       RWX                           101s
persistentvolumeclaim/redis-data-redis-cluster-3   Bound    redis-pv002   200M       RWX                           98s
persistentvolumeclaim/redis-data-redis-cluster-4   Bound    redis-pv003   200M       RWX                           94s
persistentvolumeclaim/redis-data-redis-cluster-5   Bound    redis-pv004   200M       RWX                           91s
# Check each pod's replication role (all report master because the cluster has not been initialized yet)
[16:59:08 root@master1 storage]#nodelist=$(kubectl get pod -o wide | grep -v NAME | awk '{print $1}')
[17:00:27 root@master1 storage]#for i in $nodelist; do role=$(kubectl exec -it $i -- redis-cli info Replication | grep role); echo "$i | $role"; done
redis-cluster-0 | role:master
redis-cluster-1 | role:master
redis-cluster-2 | role:master
redis-cluster-3 | role:master
redis-cluster-4 | role:master
redis-cluster-5 | role:master
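The six PV definitions above differ only in their name and NFS path, so they can be generated instead of hand-copied. A minimal sketch, assuming the same server address and path layout as this lab (the output filename `redis-pvs.yaml` is illustrative):

```shell
# Generate the six near-identical PV manifests used above.
# NFS_SERVER and the /nfs-data/pv/redisN paths mirror this lab's setup.
NFS_SERVER=10.0.0.58
out=redis-pvs.yaml
: > "$out"
for i in 1 2 3 4 5 6; do
  printf -- '---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: redis-pv%03d\nspec:\n  capacity:\n    storage: 200M\n  accessModes:\n    - ReadWriteMany\n  nfs:\n    server: %s\n    path: /nfs-data/pv/redis%d\n' "$i" "$NFS_SERVER" "$i" >> "$out"
done
grep -c 'kind: PersistentVolume' "$out"   # prints 6
```

The result can then be applied in one shot with `kubectl apply -f redis-pvs.yaml`, exactly like the hand-written file.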
# We now have six Redis pods, but the cluster itself still has to be initialized.
# First, start a management container:
[17:13:06 root@master1 storage]#kubectl run -i --tty ubuntu --image=10.0.0.55:80/myapps/tomcat:v1.0 /bin/bash
root@ubuntu:/usr/local/tomcat# apt update
root@ubuntu:/usr/local/tomcat# apt -y install redis-server dnsutils
# Resolve a cluster member through the headless Service
root@ubuntu:/usr/local/tomcat# dig +short redis-cluster-0.redis-service.default.svc.cluster.local
10.244.3.11
# Create the cluster from the management container
root@ubuntu:/usr/local/tomcat# redis-cli --cluster create 10.244.3.11:6379 10.244.4.8:6379 10.244.3.12:6379 10.244.4.9:6379 10.244.3.13:6379 10.244.4.10:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.244.3.13:6379 to 10.244.3.11:6379
Adding replica 10.244.4.10:6379 to 10.244.4.8:6379
Adding replica 10.244.4.9:6379 to 10.244.3.12:6379
M: f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.11:6379
   slots:[0-5460] (5461 slots) master
M: 31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379
   slots:[5461-10922] (5462 slots) master
M: 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379
   slots:[10923-16383] (5461 slots) master
S: 28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379
   replicates 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83
S: a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379
   replicates f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa
S: d665b308142461fdc85354977fb3c2204ae52a3c 10.244.4.10:6379
   replicates 31073abcd14ffc325a13b147239511a14ac8bf8c
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 10.244.3.11:6379)
M: f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d665b308142461fdc85354977fb3c2204ae52a3c 10.244.4.10:6379
   slots: (0 slots) slave
   replicates 31073abcd14ffc325a13b147239511a14ac8bf8c
M: 31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379
   slots: (0 slots) slave
   replicates f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa
S: 28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379
   slots: (0 slots) slave
   replicates 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# The cluster is now up; connect to any pod and verify
[18:16:37 root@master1 ~]#kubectl exec -it redis-cluster-0 -- bash
root@redis-cluster-0:/data# redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:160
cluster_stats_messages_pong_sent:159
cluster_stats_messages_sent:319
cluster_stats_messages_ping_received:154
cluster_stats_messages_pong_received:160
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:319
root@redis-cluster-0:/data# redis-cli cluster nodes
d665b308142461fdc85354977fb3c2204ae52a3c 10.244.4.10:6379@16379 slave 31073abcd14ffc325a13b147239511a14ac8bf8c 0 1639909673000 2 connected
31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379@16379 master - 0 1639909672000 2 connected 5461-10922
f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.11:6379@16379 myself,master - 0 1639909671000 1 connected 0-5460
73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379@16379 master - 0 1639909672000 3 connected 10923-16383
a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379@16379 slave f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 0 1639909673387 1 connected
28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379@16379 slave 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 0 1639909672884 3 connected
# Inspect the persisted data on the NFS server
[16:44:04 root@nfs nfs-data]#tree ./pv/
./pv/
├── redis1
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── redis2
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── redis3
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── redis4
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── redis5
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
└── redis6
    ├── appendonly.aof
    ├── dump.rdb
    └── nodes.conf
# Create a regular ClusterIP Service for client access; the StatefulSet's headless Service has no cluster IP and only provides per-Pod DNS records inside the K8s cluster
[18:33:22 root@master1 storage]#cat 30-storage-statefulset-redis-access-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: statefulset-redis-access-service
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis-port
    protocol: TCP
    port: 6379
    targetPort: 6379
[18:33:24 root@master1 storage]#kubectl apply -f 30-storage-statefulset-redis-access-service.yaml
service/statefulset-redis-access-service created
[18:33:29 root@master1 storage]#kubectl get svc
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes                         ClusterIP   10.96.0.1        <none>        443/TCP    7d19h
redis-service                      ClusterIP   None             <none>        6379/TCP   105m
statefulset-redis-access-service   ClusterIP   10.105.138.153   <none>        6379/TCP   4s
# Other applications can now reach Redis through this Service
# Check the replication status
[18:36:28 root@master1 storage]#nodelist=$(kubectl get pod -o wide | grep -v NAME | grep cluster | awk '{print $1}')
[18:37:08 root@master1 storage]#for i in $nodelist; do role=$(kubectl exec -it $i -- redis-cli info Replication | grep slave); echo "$i | $role"; done
redis-cluster-0 | connected_slaves:1 slave0:ip=10.244.3.13,port=6379,state=online,offset=896,lag=1
redis-cluster-1 | connected_slaves:1 slave0:ip=10.244.4.10,port=6379,state=online,offset=896,lag=1
redis-cluster-2 | connected_slaves:1 slave0:ip=10.244.4.9,port=6379,state=online,offset=896,lag=0
redis-cluster-3 | role:slave slave_repl_offset:896 slave_priority:100 slave_read_only:1 connected_slaves:0
redis-cluster-4 | role:slave slave_repl_offset:910 slave_priority:100 slave_read_only:1 connected_slaves:0
redis-cluster-5 | role:slave slave_repl_offset:896 slave_priority:100 slave_read_only:1 connected_slaves:0
# Alternatively, check from inside one of the pods
[18:37:35 root@master1 storage]#kubectl exec -it redis-cluster-0 -- redis-cli ROLE
1) "master"
2) (integer) 1162
3) 1) 1) "10.244.3.13"
      2) "6379"
      3) "1162"
# Failover test: manually delete a master pod, then observe
[18:40:16 root@master1 storage]#kubectl delete pod redis-cluster-0 ; kubectl get pod redis-cluster-0 -o wide
pod "redis-cluster-0" deleted
NAME              READY   STATUS              RESTARTS   AGE   IP       NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   0/1     ContainerCreating   0          0s    <none>   node1.noisedu.cn   <none>           <none>
[18:43:11 root@master1 storage]#kubectl get pod redis-cluster-0 -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running   0          10s   10.244.3.15   node1.noisedu.cn   <none>           <none>
[18:43:21 root@master1 storage]#nodelist=$(kubectl get pod -o wide | grep -v NAME | grep cluster | awk '{print $1}')
[18:43:33 root@master1 storage]#for i in $nodelist; do role=$(kubectl exec -it $i -- redis-cli info Replication | grep slave); echo "$i | $role"; done
redis-cluster-0 | connected_slaves:1 slave0:ip=10.244.3.13,port=6379,state=online,offset=28,lag=1
redis-cluster-1 | connected_slaves:1 slave0:ip=10.244.4.10,port=6379,state=online,offset=1428,lag=1
redis-cluster-2 | connected_slaves:1 slave0:ip=10.244.4.9,port=6379,state=online,offset=1428,lag=1
redis-cluster-3 | role:slave slave_repl_offset:1428 slave_priority:100 slave_read_only:1 connected_slaves:0
redis-cluster-4 | role:slave slave_repl_offset:28 slave_priority:100 slave_read_only:1 connected_slaves:0
redis-cluster-5 | role:slave slave_repl_offset:1428 slave_priority:100 slave_read_only:1 connected_slaves:0
# Because of the StatefulSet's stable identity, redis-cluster-0 was recreated automatically and kept its former replica (slave0); behavior is the same as before, no impact.
# Dynamic scale-out: add 2 PVs to the original 6 NFS-backed PVs (8 total) and grow the Redis cluster from 6 nodes to 8.
# Create the new NFS export directories
[18:35:07 root@nfs nfs-data]#mkdir /nfs-data/pv/redis{7..8}
[18:47:00 root@nfs nfs-data]#chmod 777 /nfs-data/pv/*
[18:47:13 root@nfs nfs-data]#echo "/nfs-data/pv/redis7 *(rw,all_squash)" >> /etc/exports
[18:48:00 root@nfs nfs-data]#echo "/nfs-data/pv/redis8 *(rw,all_squash)" >> /etc/exports
[18:48:04 root@nfs nfs-data]#systemctl restart nfs-kernel-server.service
[18:48:19 root@nfs nfs-data]#exportfs
/nfs-data/pv/redis1    <world>
/nfs-data/pv/redis2    <world>
/nfs-data/pv/redis3    <world>
/nfs-data/pv/redis4    <world>
/nfs-data/pv/redis5    <world>
/nfs-data/pv/redis6    <world>
/nfs-data/pv/redis7    <world>
/nfs-data/pv/redis8    <world>
# Prepare the additional PV resource file
[19:06:08 root@master1 storage]#cat 31-storage-statefulset-redis-pv-add.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv007
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.58
    path: /nfs-data/pv/redis7
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv008
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.58
    path: /nfs-data/pv/redis8
[19:06:14 root@master1 storage]#kubectl apply -f 31-storage-statefulset-redis-pv-add.yaml
persistentvolume/redis-pv007 created
persistentvolume/redis-pv008 created
[19:06:22 root@master1 storage]#kubectl get pv -o wide
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                STORAGECLASS   REASON   AGE    VOLUMEMODE
redis-pv001   200M       RWX            Retain           Bound       default/redis-data-redis-cluster-1                           137m   Filesystem
redis-pv002   200M       RWX            Retain           Bound       default/redis-data-redis-cluster-3                           137m   Filesystem
redis-pv003   200M       RWX            Retain           Bound       default/redis-data-redis-cluster-4                           137m   Filesystem
redis-pv004   200M       RWX            Retain           Bound       default/redis-data-redis-cluster-5                           137m   Filesystem
redis-pv005   200M       RWX            Retain           Bound       default/redis-data-redis-cluster-0                           137m   Filesystem
redis-pv006   200M       RWX            Retain           Bound       default/redis-data-redis-cluster-2                           137m   Filesystem
redis-pv007   200M       RWX            Retain           Available                                                                8s     Filesystem
redis-pv008   200M       RWX            Retain           Available                                                                8s     Filesystem
# Consume the new PVs by raising the StatefulSet's replicas field to 8 (here via kubectl patch)
[19:06:30 root@master1 storage]#kubectl patch statefulsets redis-cluster -p '{"spec":{"replicas":8}}'
statefulset.apps/redis-cluster patched
[19:06:47 root@master1 storage]#kubectl get pod -o wide
NAME              READY   STATUS              RESTARTS      AGE    IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running             0             23m    10.244.3.15   node1.noisedu.cn   <none>           <none>
redis-cluster-1   1/1     Running             0             131m   10.244.4.8    node2.noisedu.cn   <none>           <none>
redis-cluster-2   1/1     Running             0             131m   10.244.3.12   node1.noisedu.cn   <none>           <none>
redis-cluster-3   1/1     Running             0             130m   10.244.4.9    node2.noisedu.cn   <none>           <none>
redis-cluster-4   1/1     Running             0             130m   10.244.3.13   node1.noisedu.cn   <none>           <none>
redis-cluster-5   1/1     Running             0             130m   10.244.4.10   node2.noisedu.cn   <none>           <none>
redis-cluster-6   1/1     Running             0             6s     10.244.4.11   node2.noisedu.cn   <none>           <none>
redis-cluster-7   0/1     ContainerCreating   0             2s     <none>        node1.noisedu.cn   <none>           <none>
ubuntu            1/1     Running             1 (37m ago)   111m   10.244.3.14   node1.noisedu.cn   <none>           <none>
[19:06:53 root@master1 storage]#kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS      AGE    IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running   0             23m    10.244.3.15   node1.noisedu.cn   <none>           <none>
redis-cluster-1   1/1     Running   0             131m   10.244.4.8    node2.noisedu.cn   <none>           <none>
redis-cluster-2   1/1     Running   0             131m   10.244.3.12   node1.noisedu.cn   <none>           <none>
redis-cluster-3   1/1     Running   0             130m   10.244.4.9    node2.noisedu.cn   <none>           <none>
redis-cluster-4   1/1     Running   0             130m   10.244.3.13   node1.noisedu.cn   <none>           <none>
redis-cluster-5   1/1     Running   0             130m   10.244.4.10   node2.noisedu.cn   <none>           <none>
redis-cluster-6   1/1     Running   0             8s     10.244.4.11   node2.noisedu.cn   <none>           <none>
redis-cluster-7   1/1     Running   0             4s     10.244.3.16   node1.noisedu.cn   <none>           <none>
ubuntu            1/1     Running   1 (37m ago)   111m   10.244.3.14   node1.noisedu.cn   <none>           <none>
[19:06:55 root@master1 storage]#kubectl get pv,pvc -o wide
NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE    VOLUMEMODE
persistentvolume/redis-pv001  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-1                           138m   Filesystem
persistentvolume/redis-pv002  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-3                           138m   Filesystem
persistentvolume/redis-pv003  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-4                           138m   Filesystem
persistentvolume/redis-pv004  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-5                           138m   Filesystem
persistentvolume/redis-pv005  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-0                           138m   Filesystem
persistentvolume/redis-pv006  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-2                           138m   Filesystem
persistentvolume/redis-pv007  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-6                           51s    Filesystem
persistentvolume/redis-pv008  200M       RWX            Retain           Bound    default/redis-data-redis-cluster-7                           51s    Filesystem
NAME                                               STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE    VOLUMEMODE
persistentvolumeclaim/redis-data-redis-cluster-0   Bound    redis-pv005   200M       RWX                           131m   Filesystem
persistentvolumeclaim/redis-data-redis-cluster-1   Bound    redis-pv001   200M       RWX                           131m   Filesystem
persistentvolumeclaim/redis-data-redis-cluster-2   Bound    redis-pv006   200M       RWX                           131m   Filesystem
persistentvolumeclaim/redis-data-redis-cluster-3   Bound    redis-pv002   200M       RWX                           131m   Filesystem
persistentvolumeclaim/redis-data-redis-cluster-4   Bound    redis-pv003   200M       RWX                           131m   Filesystem
persistentvolumeclaim/redis-data-redis-cluster-5   Bound    redis-pv004   200M       RWX                           131m   Filesystem
persistentvolumeclaim/redis-data-redis-cluster-6   Bound    redis-pv007   200M       RWX                           26s    Filesystem
persistentvolumeclaim/redis-data-redis-cluster-7   Bound    redis-pv008   200M       RWX                           22s    Filesystem
# Join the new nodes to the Redis cluster from the management container: redis-cli --cluster add-node <new-node-ip>:<port> <existing-node-ip>:<port>
root@ubuntu:/usr/local/tomcat# redis-cli --cluster add-node 10.244.3.16:6379 10.244.3.15:6379
>>> Adding node 10.244.3.16:6379 to cluster 10.244.3.15:6379
>>> Performing Cluster Check (using node 10.244.3.15:6379)
M: f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.15:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d665b308142461fdc85354977fb3c2204ae52a3c 10.244.4.10:6379
   slots: (0 slots) slave
   replicates 31073abcd14ffc325a13b147239511a14ac8bf8c
M: 31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379
   slots: (0 slots) slave
   replicates 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83
S: a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379
   slots: (0 slots) slave
   replicates f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.244.3.16:6379 to make it join the cluster.
[OK] New node added correctly.
root@ubuntu:/usr/local/tomcat# redis-cli --cluster add-node 10.244.4.11:6379 10.244.3.15:6379
>>> Adding node 10.244.4.11:6379 to cluster 10.244.3.15:6379
>>> Performing Cluster Check (using node 10.244.3.15:6379)
M: f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.15:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d665b308142461fdc85354977fb3c2204ae52a3c 10.244.4.10:6379
   slots: (0 slots) slave
   replicates 31073abcd14ffc325a13b147239511a14ac8bf8c
M: 31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379
   slots: (0 slots) slave
   replicates 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83
M: 499c340ac70bb084c30da5306ce1e69f7e449f20 10.244.3.16:6379
   slots: (0 slots) master
S: a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379
   slots: (0 slots) slave
   replicates f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.244.4.11:6379 to make it join the cluster.
[OK] New node added correctly.
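The next step reshards 4096 slots onto one of the new masters. That figure comes straight from the slot arithmetic: Redis Cluster always has 16384 hash slots, and after the scale-out there are 4 masters. A quick check (plain shell arithmetic, nothing cluster-specific):

```shell
# 16384 hash slots spread over 4 masters after the scale-out
total_slots=16384
masters=4
echo $(( total_slots / masters ))   # prints 4096 -- slots per master
echo $(( total_slots % masters ))   # prints 0 -- divides evenly, no leftover slots
```

So moving 4096 slots to the new master leaves all four masters perfectly balanced.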
# Inspect the cluster
[19:27:09 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:8
cluster_size:3
cluster_current_epoch:7
cluster_my_epoch:1
cluster_stats_messages_ping_sent:4780
cluster_stats_messages_pong_sent:4833
cluster_stats_messages_sent:9613
cluster_stats_messages_ping_received:4831
cluster_stats_messages_pong_received:4780
cluster_stats_messages_meet_received:2
cluster_stats_messages_received:9613
[19:27:29 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli CLUSTER NODES
d665b308142461fdc85354977fb3c2204ae52a3c 10.244.4.10:6379@16379 slave 31073abcd14ffc325a13b147239511a14ac8bf8c 0 1639913252000 2 connected
31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379@16379 master - 0 1639913251676 2 connected 5461-10922
73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379@16379 master - 0 1639913252583 3 connected 10923-16383
a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 10.244.4.11:6379@16379 master - 0 1639913252684 0 connected
28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379@16379 slave 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 0 1639913252583 3 connected
f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.15:6379@16379 myself,master - 0 1639913252000 1 connected 0-5460
499c340ac70bb084c30da5306ce1e69f7e449f20 10.244.3.16:6379@16379 master - 0 1639913251575 7 connected
a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379@16379 slave f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 0 1639913250566 1 connected
# Connect to a master in the cluster and reshard 4096 slots to 10.244.4.11:6379 (the interactive reshard dialog is omitted here)
[19:37:12 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli --cluster reshard 10.244.3.15:6379
# Verify that 10.244.4.11:6379 now holds 4096 slots
[19:37:12 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli CLUSTER NODES
d665b308142461fdc85354977fb3c2204ae52a3c 10.244.4.10:6379@16379 slave 31073abcd14ffc325a13b147239511a14ac8bf8c 0 1639913864528 2 connected
31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379@16379 master - 0 1639913864023 2 connected 6827-10922
73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379@16379 master - 0 1639913865543 3 connected 12288-16383
a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 10.244.4.11:6379@16379 master - 0 1639913865000 8 connected 0-1364 5461-6826 10923-12287
28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379@16379 slave 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 0 1639913865000 3 connected
f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.15:6379@16379 myself,master - 0 1639913862000 1 connected 1365-5460
499c340ac70bb084c30da5306ce1e69f7e449f20 10.244.3.16:6379@16379 master - 0 1639913865000 7 connected
a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379@16379 slave f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 0 1639913864000 1 connected
# Make redis-cluster-7 a replica of the new master redis-cluster-6
[19:39:37 root@master1 ~]#kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS      AGE    IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running   0             56m    10.244.3.15   node1.noisedu.cn   <none>           <none>
redis-cluster-1   1/1     Running   0             163m   10.244.4.8    node2.noisedu.cn   <none>           <none>
redis-cluster-2   1/1     Running   0             163m   10.244.3.12   node1.noisedu.cn   <none>           <none>
redis-cluster-3   1/1     Running   0             163m   10.244.4.9    node2.noisedu.cn   <none>           <none>
redis-cluster-4   1/1     Running   0             163m   10.244.3.13   node1.noisedu.cn   <none>           <none>
redis-cluster-5   1/1     Running   0             163m   10.244.4.10   node2.noisedu.cn   <none>           <none>
redis-cluster-6   1/1     Running   0             32m    10.244.4.11   node2.noisedu.cn   <none>           <none>
redis-cluster-7   1/1     Running   0             32m    10.244.3.16   node1.noisedu.cn   <none>           <none>
ubuntu            1/1     Running   1 (69m ago)   144m   10.244.3.14   node1.noisedu.cn   <none>           <none>
[19:39:40 root@master1 ~]#kubectl exec -it redis-cluster-7 -- redis-cli cluster replicate a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b
OK
# Verify that the replica was set
[19:40:41 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli CLUSTER NODES
d665b308142461fdc85354977fb3c2204ae52a3c 10.244.4.10:6379@16379 slave 31073abcd14ffc325a13b147239511a14ac8bf8c 0 1639914075517 2 connected
31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379@16379 master - 0 1639914075618 2 connected 6827-10922
73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379@16379 master - 0 1639914074000 3 connected 12288-16383
a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 10.244.4.11:6379@16379 master - 0 1639914074607 8 connected 0-1364 5461-6826 10923-12287
28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379@16379 slave 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 0 1639914075619 3 connected
f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.15:6379@16379 myself,master - 0 1639914073000 1 connected 1365-5460
499c340ac70bb084c30da5306ce1e69f7e449f20 10.244.3.16:6379@16379 slave a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 0 1639914074505 8 connected
a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379@16379 slave f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 0 1639914074505 1 connected
# Data test; note the -c flag, which enables cluster mode (follows redirects)
[19:41:15 root@master1 ~]#kubectl exec -it redis-cluster-7 -- redis-cli -c
127.0.0.1:6379> set key1 test1
-> Redirected to slot [9189] located at 10.244.4.8:6379
OK
10.244.4.8:6379> exit
[19:43:06 root@master1 ~]#kubectl exec -it redis-cluster-6 -- redis-cli -c
127.0.0.1:6379> get key1
-> Redirected to slot [9189] located at 10.244.4.8:6379
"test1"
10.244.4.8:6379> exit
# Dynamic scale-in: a slave can simply be deleted even if it holds data, but a master's slots (and data) must first be moved to other nodes.
# We stored key1 in slot 9189, whose master is 10.244.4.8 (redis-cluster-1) with slave 10.244.4.10 (redis-cluster-5); we will remove exactly that master/slave pair.
# Remove the slave node first
[19:43:21 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli --cluster del-node 10.244.4.10:6379 d665b308142461fdc85354977fb3c2204ae52a3c
>>> Removing node d665b308142461fdc85354977fb3c2204ae52a3c from cluster 10.244.4.10:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
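Before running `del-node` on a master it must own zero slots. In the `CLUSTER NODES` text format a master that still holds slots has an extra slot-range column, so an empty master can be spotted mechanically. A sketch that parses a hard-coded sample (in practice, pipe in real `redis-cli CLUSTER NODES` output; the node IDs here are shortened for readability):

```shell
# Sample CLUSTER NODES output: one master already emptied, one still holding slots.
nodes='31073abc 10.244.4.8:6379@16379 master - 0 0 2 connected
73b2e718 10.244.3.12:6379@16379 master - 0 0 3 connected 12288-16383'

# Masters are rows whose flags column contains "master"; with exactly 8
# fields there is no slot column, i.e. the node is safe to del-node.
empty_masters=$(echo "$nodes" | awk '$3 ~ /master/ && NF == 8 { print $2 }')
echo "$empty_masters"   # prints 10.244.4.8:6379@16379
```

The same one-liner works against live output, e.g. `redis-cli CLUSTER NODES | awk '$3 ~ /master/ && NF == 8 { print $2 }'`.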
[19:51:50 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli CLUSTER NODES
31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379@16379 master - 0 1639914715602 2 connected 6827-10922
73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379@16379 master - 0 1639914714000 3 connected 12288-16383
a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 10.244.4.11:6379@16379 master - 0 1639914715502 8 connected 0-1364 5461-6826 10923-12287
28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379@16379 slave 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 0 1639914714000 3 connected
f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.15:6379@16379 myself,master - 0 1639914715000 1 connected 1365-5460
499c340ac70bb084c30da5306ce1e69f7e449f20 10.244.3.16:6379@16379 slave a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 0 1639914714493 8 connected
a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379@16379 slave f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 0 1639914715602 1 connected
# Removing a master takes two steps: 1) reshard all of its slots away, 2) del-node
[19:51:55 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli --cluster reshard 10.244.4.8:6379
# Confirm that no slots remain on 10.244.4.8 (redis-cluster-1)
[19:56:44 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli CLUSTER NODES
31073abcd14ffc325a13b147239511a14ac8bf8c 10.244.4.8:6379@16379 master - 0 1639915031209 2 connected
73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379@16379 master - 0 1639915032720 3 connected 12288-16383
a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 10.244.4.11:6379@16379 master - 0 1639915031108 8 connected 0-1364 5461-6826 10923-12287
28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379@16379 slave 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 0 1639915031512 3 connected
f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.15:6379@16379 myself,master - 0 1639915031000 9 connected 1365-5460 6827-10922
499c340ac70bb084c30da5306ce1e69f7e449f20 10.244.3.16:6379@16379 slave a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 0 1639915031511 8 connected
a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379@16379 slave f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 0 1639915032216 9 connected
# Delete the node
[19:57:13 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli --cluster del-node 10.244.4.8:6379 31073abcd14ffc325a13b147239511a14ac8bf8c
>>> Removing node 31073abcd14ffc325a13b147239511a14ac8bf8c from cluster 10.244.4.8:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[19:58:24 root@master1 ~]#kubectl exec -it redis-cluster-0 -- redis-cli CLUSTER NODES
73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 10.244.3.12:6379@16379 master - 0 1639915107000 3 connected 12288-16383
a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 10.244.4.11:6379@16379 master - 0 1639915107535 8 connected 0-1364 5461-6826 10923-12287
28dbde64584a8798d27ec803d01767db483bd9da 10.244.4.9:6379@16379 slave 73b2e718bfdd9e6e7eabcf2138fc6e64be991f83 0 1639915107000 3 connected
f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 10.244.3.15:6379@16379 myself,master - 0 1639915106000 9 connected 1365-5460 6827-10922
499c340ac70bb084c30da5306ce1e69f7e449f20 10.244.3.16:6379@16379 slave a74f43d67cefc03642e2d5a6784f3d3f6afa4a6b 0 1639915106528 8 connected
a1522d36c2d8bdc67385b8798318f570bb5370b0 10.244.3.13:6379@16379 slave f9a01f75bb5ad07c8144b96dd15c2c35da4cdafa 0 1639915107535 9 connected
# Scale-in is now complete at the Redis level. Note that the Kubernetes pods are unchanged; only the Redis cluster's membership shrank.
[19:58:27 root@master1 ~]#kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS      AGE    IP            NODE               NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running   0             75m    10.244.3.15   node1.noisedu.cn   <none>           <none>
redis-cluster-1   1/1     Running   0             3h3m   10.244.4.8    node2.noisedu.cn   <none>           <none>
redis-cluster-2   1/1     Running   0             3h2m   10.244.3.12   node1.noisedu.cn   <none>           <none>
redis-cluster-3   1/1     Running   0             3h2m   10.244.4.9    node2.noisedu.cn   <none>           <none>
redis-cluster-4   1/1     Running   0             3h2m   10.244.3.13   node1.noisedu.cn   <none>           <none>
redis-cluster-5   1/1     Running   0             3h2m   10.244.4.10   node2.noisedu.cn   <none>           <none>
redis-cluster-6   1/1     Running   0             52m    10.244.4.11   node2.noisedu.cn   <none>           <none>
redis-cluster-7   1/1     Running   0             51m    10.244.3.16   node1.noisedu.cn   <none>           <none>
ubuntu            1/1     Running   1 (88m ago)   163m   10.244.3.14   node1.noisedu.cn   <none>           <none>
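If the pod count should shrink as well, keep in mind that a StatefulSet always removes the highest ordinals first (redis-cluster-7, then redis-cluster-6). Those two pods currently form a live master/replica pair holding slots, so the reshard/del-node steps would have to empty them (rather than redis-cluster-1/-5 as above) before scaling. A sketch, assuming that has been done:

```shell
# Scale the StatefulSet back to 6 pods; ordinals 7 and 6 are removed first
kubectl scale statefulset redis-cluster --replicas=6

# PVCs created from volumeClaimTemplates are NOT deleted on scale-down;
# remove them by hand if the underlying PVs should be released
kubectl delete pvc redis-data-redis-cluster-6 redis-data-redis-cluster-7
```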