CentOS 7: installing Redis Cluster 6.0.9 on a single machine (pseudo-cluster with 3 masters and 3 replicas)
This installation builds on a single-instance Redis, so make sure Redis is already installed on your Linux machine. Everything in this article runs on one server (3 masters, 3 replicas), with each instance on a different port. Because every instance uses its own directory, it does not interfere with any other Redis running on the machine. Machine IP: 192.168.77.129.
- First create the cluster directory that will hold the per-node configuration files
cd /usr/local/redis6.0.9
mkdir redis-cluster
cd redis-cluster
mkdir 7291 7292 7293 7294 7295 7296
- Copy the configuration file into the 7291 directory
cp /usr/local/redis6.0.9/redis.conf /usr/local/redis6.0.9/redis-cluster/7291
- Edit the configuration file under 7291
port 7291
daemonize yes
protected-mode no
# each instance must use its own working directory
dir /usr/local/redis6.0.9/redis-cluster/7291/
# enable cluster mode; nodes-7291.conf is generated and maintained by Redis itself
cluster-enabled yes
cluster-config-file nodes-7291.conf
cluster-node-timeout 5000
appendonly yes
pidfile /var/run/redis_7291.pid
- Copy redis.conf from 7291 into the other 5 directories
cd /usr/local/redis6.0.9/redis-cluster/7291
cp redis.conf ../7292
cp redis.conf ../7293
cp redis.conf ../7294
cp redis.conf ../7295
cp redis.conf ../7296
- Batch-replace the port number in the copies
cd /usr/local/redis6.0.9/redis-cluster
sed -i 's/7291/7292/g' 7292/redis.conf
sed -i 's/7291/7293/g' 7293/redis.conf
sed -i 's/7291/7294/g' 7294/redis.conf
sed -i 's/7291/7295/g' 7295/redis.conf
sed -i 's/7291/7296/g' 7296/redis.conf
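The five sed commands can also be written as a single loop; a minimal sketch, assuming the same directory layout as above:
for port in 7292 7293 7294 7295 7296; do
  sed -i "s/7291/${port}/g" ${port}/redis.conf
done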
- Start all nodes
cd /usr/local/redis6.0.9/
./bin/redis-server redis-cluster/7291/redis.conf
./bin/redis-server redis-cluster/7292/redis.conf
./bin/redis-server redis-cluster/7293/redis.conf
./bin/redis-server redis-cluster/7294/redis.conf
./bin/redis-server redis-cluster/7295/redis.conf
./bin/redis-server redis-cluster/7296/redis.conf
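The six start commands can likewise be wrapped in a loop (a sketch, assuming the layout above):
for port in 7291 7292 7293 7294 7295 7296; do
  ./bin/redis-server redis-cluster/${port}/redis.conf
done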
- Check the processes
ps -ef|grep redis
root 7212 1 0 00:58 ? 00:00:05 ./bin/redis-server 0.0.0.0:7291 [cluster]
root 7218 1 0 00:58 ? 00:00:02 ./bin/redis-server 0.0.0.0:7292 [cluster]
root 7224 1 0 00:58 ? 00:00:03 ./bin/redis-server 0.0.0.0:7293 [cluster]
root 7230 1 0 00:58 ? 00:00:01 ./bin/redis-server 0.0.0.0:7294 [cluster]
root 7236 1 0 00:58 ? 00:00:01 ./bin/redis-server 0.0.0.0:7295 [cluster]
root 7242 1 0 00:58 ? 00:00:01 ./bin/redis-server 0.0.0.0:7296 [cluster]
- Create the cluster
Note: use the machine's real IP address, not 127.0.0.1
redis-cli --cluster create 192.168.77.129:7291 192.168.77.129:7292 192.168.77.129:7293 192.168.77.129:7294 192.168.77.129:7295 192.168.77.129:7296 --cluster-replicas 1
redis-cli proposes an allocation plan that splits the 6 nodes into 3 masters and 3 replicas; if it looks right, type yes to confirm
Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: dfb0877d35ae48b8b27586f6915b3cd8ca28c54d 192.168.77.129:7291
slots:[0-5460] (5461 slots) master
M: 7ffee2fab13d1dffea594b6ffb4059ef0fb5cacc 192.168.77.129:7292
slots:[5461-10922] (5462 slots) master
M: 6982a6b6eb49a9a9c2d1023dd8ac5d0f1037ab11 192.168.77.129:7293
slots:[10923-16383] (5461 slots) master
S: 80948a8bbf317d09886654d96e0e049c20a72a29 192.168.77.129:7294
replicates 6982a6b6eb49a9a9c2d1023dd8ac5d0f1037ab11
S: 9aed1daaed2f9815c6918b89c7f8fb9659495580 192.168.77.129:7295
replicates dfb0877d35ae48b8b27586f6915b3cd8ca28c54d
S: 1031d08509a33a346987337f7bb23b4e2e886d69 192.168.77.129:7296
replicates 7ffee2fab13d1dffea594b6ffb4059ef0fb5cacc
Can I set the above configuration? (type 'yes' to accept):
- Note the slot distribution:
7291 slots:[0-5460] (5461 slots) master
7292 slots:[5461-10922] (5462 slots) master
7293 slots:[10923-16383] (5461 slots) master
Type yes
Nodes configuration updated
Assign a different config epoch to each node
Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
Performing Cluster Check (using node 192.168.77.129:7291)
M: dfb0877d35ae48b8b27586f6915b3cd8ca28c54d 192.168.77.129:7291
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 1031d08509a33a346987337f7bb23b4e2e886d69 192.168.77.129:7296
slots: (0 slots) slave
replicates 7ffee2fab13d1dffea594b6ffb4059ef0fb5cacc
M: 6982a6b6eb49a9a9c2d1023dd8ac5d0f1037ab11 192.168.77.129:7293
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 80948a8bbf317d09886654d96e0e049c20a72a29 192.168.77.129:7294
slots: (0 slots) slave
replicates 6982a6b6eb49a9a9c2d1023dd8ac5d0f1037ab11
S: 9aed1daaed2f9815c6918b89c7f8fb9659495580 192.168.77.129:7295
slots: (0 slots) slave
replicates dfb0877d35ae48b8b27586f6915b3cd8ca28c54d
M: 7ffee2fab13d1dffea594b6ffb4059ef0fb5cacc 192.168.77.129:7292
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
Check for open slots...
Check slots coverage...
[OK] All 16384 slots covered.
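Once all 16384 slots are covered, the cluster's health can be double-checked from any node; cluster_state and cluster_known_nodes are standard fields of CLUSTER INFO and should read ok and 6 respectively:
./bin/redis-cli -p 7291 cluster info | grep -E 'cluster_state|cluster_known_nodes'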
- Write values in bulk
cd /usr/local/redis6.0.9/redis-cluster/
vim setkey.sh
#!/bin/bash
# write 20000 keys through one node; -c follows MOVED redirections to the owning master,
# -x reads the value for SET from stdin
for ((i=0;i<20000;i++))
do
echo -en "helloworld" | redis-cli -h 192.168.77.129 -p 7291 -c -x set name$i >> redis.log
done
chmod +x setkey.sh
./setkey.sh
Connect with redis-cli and check how the data is distributed across the masters
[root@192 redis6.0.9]# ./bin/redis-cli -p 7291
127.0.0.1:7291> dbsize
(integer) 6652
127.0.0.1:7291> exit
[root@192 redis6.0.9]# ./bin/redis-cli -p 7292
127.0.0.1:7292> dbsize
(integer) 6683
127.0.0.1:7292> exit
[root@192 redis6.0.9]# ./bin/redis-cli -p 7293
127.0.0.1:7293> dbsize
(integer) 6665
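The three counts (6652 + 6683 + 6665) add up to exactly the 20000 keys written by the script. The same check can be done in one loop, for example:
for port in 7291 7292 7293; do ./bin/redis-cli -p ${port} dbsize; done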
- How to reshard after adding a new node:
A new node that joins the cluster via add-node owns no slots at first
redis-cli --cluster reshard <target-node-ip>:<port>
You are then asked how many slots to move; redis-cli builds a reshard plan, and once you confirm it, the data is migrated
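As a concrete sketch, assuming a new empty node were already running on port 7297 (hypothetical), it could be added and then handed 1000 slots like this; the two node IDs are placeholders read from the output of cluster nodes:
redis-cli --cluster add-node 192.168.77.129:7297 192.168.77.129:7291
redis-cli --cluster reshard 192.168.77.129:7291 --cluster-from <source-node-id> --cluster-to <new-node-id> --cluster-slots 1000 --cluster-yes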
Cluster manager commands
redis-cli --cluster help
Cluster Manager Commands:
create host1:port1 ... hostN:portN
--cluster-replicas <arg>
check host:port
--cluster-search-multiple-owners
info host:port
fix host:port
--cluster-search-multiple-owners
reshard host:port
--cluster-from <arg>
--cluster-to <arg>
--cluster-slots <arg>
--cluster-yes
--cluster-timeout <arg>
--cluster-pipeline <arg>
--cluster-replace
rebalance host:port
--cluster-weight <node1=w1...nodeN=wN>
--cluster-use-empty-masters
--cluster-timeout <arg>
--cluster-simulate
--cluster-pipeline <arg>
--cluster-threshold <arg>
--cluster-replace
add-node new_host:new_port existing_host:existing_port
--cluster-slave
--cluster-master-id <arg>
del-node host:port node_id
call host:port command arg arg .. arg
set-timeout host:port milliseconds
import host:port
--cluster-from <arg>
--cluster-copy
--cluster-replace
help
For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
Cluster commands
cluster info: print information about the cluster
cluster nodes: list all nodes currently known to the cluster, along with their details
cluster meet <ip> <port>: add the node at the given ip and port to the cluster, making it a member
cluster forget <node_id>: remove the node identified by node_id from the cluster (it must hold no slots)
cluster replicate <node_id>: make the current node a replica of the node identified by node_id
cluster saveconfig: save the node's cluster configuration file to disk
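The node IDs needed by cluster forget and cluster replicate come from cluster nodes. Purely as an illustration (7296 currently replicates the 7292 master, so this would change the topology), the following would repoint it to the 7291 master created above, using the ID from the earlier output:
./bin/redis-cli -p 7291 cluster nodes
./bin/redis-cli -p 7296 cluster replicate dfb0877d35ae48b8b27586f6915b3cd8ca28c54d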
Slot commands
cluster addslots <slot> [slot ...]: assign one or more slots to the current node
cluster delslots <slot> [slot ...]: remove the assignment of one or more slots from the current node
cluster flushslots: remove all slots assigned to the current node, leaving it with no slots at all
cluster setslot <slot> node <node_id>: assign the slot to the node identified by node_id; if the slot is already assigned to another node, that node drops it first and then the new assignment is made
cluster setslot <slot> migrating <node_id>: mark the slot on this node as migrating to the node identified by node_id
cluster setslot <slot> importing <node_id>: mark the slot on this node as importing from the node identified by node_id
cluster setslot <slot> stable: clear any importing or migrating state from the slot
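These setslot subcommands are what redis-cli --cluster reshard drives under the hood. Moving a single slot by hand roughly follows this handshake (a sketch; <slot>, the node IDs and the key names are placeholders):
# on the target node
CLUSTER SETSLOT <slot> IMPORTING <source-node-id>
# on the source node
CLUSTER SETSLOT <slot> MIGRATING <target-node-id>
# still on the source node: list the remaining keys in the slot and move them
CLUSTER GETKEYSINSLOT <slot> 100
MIGRATE <target-ip> <target-port> "" 0 5000 KEYS key1 key2
# finally, on both nodes (ideally on every master)
CLUSTER SETSLOT <slot> NODE <target-node-id>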
Key commands
cluster keyslot <key>: compute which slot the key maps to
cluster countkeysinslot <slot>: return the number of keys currently stored in the slot
cluster getkeysinslot <slot> <count>: return up to count keys from the slot
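For example, using one of the keys written earlier (the slot number depends on the key; Redis computes it as CRC16(key) mod 16384, and countkeysinslot/getkeysinslot report only keys held locally, so query the node that owns the slot):
./bin/redis-cli -p 7291 cluster keyslot name1
./bin/redis-cli -p 7291 cluster countkeysinslot 0
./bin/redis-cli -p 7291 cluster getkeysinslot 0 10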
