redis-cluster: study notes based on the wiki — setup, adding/removing nodes, failover
Introduction:
Redis Cluster is the official clustering solution. It uses a decentralized architecture: every node stores data along with the full cluster state, and every node maintains connections to all the other nodes.
Architecture highlights:
- All nodes are interconnected. Each Redis server opens two ports: one serving clients and one for the internal cluster bus; the bus port is the client port plus 10000 (e.g., 6379 and 16379).
- A node is marked as failed only when more than half of the cluster's master nodes agree it is unreachable.
- Clients connect directly to the Redis servers with no proxy in between; connecting to any node in the cluster is enough: redis-cli -c -h <host> -p <port>
- The cluster pre-divides the key space into 16384 slots; when a key-value pair is stored, CRC16(key) mod 16384 decides which slot it goes to.
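The slot mapping above can be illustrated in Python. This is a re-implementation for illustration only (Redis Cluster uses the CRC16-CCITT/XModem variant; the hash-tag rule of hashing only the text inside the first `{...}` pair is standard Redis Cluster behavior):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): poly 0x1021, init 0 - the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: bytes) -> int:
    # Redis hashes only the text between the first {...} pair if one exists
    # ("hash tags"), so related keys can be forced into the same slot.
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key) % 16384

print(keyslot(b"foo"))                                  # slot for key "foo"
print(keyslot(b"{user1}:a") == keyslot(b"{user1}:b"))   # hash tags share a slot -> True
```

The result matches what `CLUSTER KEYSLOT <key>` returns on a live cluster.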
Approach:
Deploying redis_cluster is easier than Sentinel; the usual recommendation is 3 masters and 3 replicas, which is what follows.
1. Write configs for the 6 instances. Make sure each config carries its own instance-specific settings (dir, logfile, pidfile), and append the cluster-* settings at the end, including cluster-config-file.
2. Start the 6 instances.
3. To tie them into a cluster, use the (redis 5.x) command redis-cli --cluster followed by the 6 instances' ip:port pairs; by default the first three become masters and the last three replicas.
4. If a master goes down, its corresponding replica is promoted to master and keeps serving.
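Step 1 above can be sketched as a small config generator. This is an illustrative sketch, not a drop-in deployment script: the paths mirror the directory plan below, and the template contains only the settings this document uses.

```python
import os
import tempfile

# Illustrative template holding the per-instance settings this document uses.
TEMPLATE = """daemonize yes
port {port}
pidfile /var/run/redis_{port}.pid
logfile "/var/log/redis4/redis_{port}.log"
dir /database/redis4/redis_{port}/
appendonly yes
cluster-enabled yes
cluster-config-file nodes_{port}.conf
cluster-node-timeout 10000
cluster-require-full-coverage no
"""

def write_configs(conf_root, ports=(6379, 26379)):
    """Render one config file per instance port under conf_root."""
    paths = []
    for port in ports:
        inst_dir = os.path.join(conf_root, str(port))
        os.makedirs(inst_dir, exist_ok=True)
        path = os.path.join(inst_dir, "redis_%d.conf" % port)
        with open(path, "w") as f:
            f.write(TEMPLATE.format(port=port))
        paths.append(path)
    return paths

if __name__ == "__main__":
    # Write into a temp dir for demonstration; on a real node this would be
    # /usr/local/redis4/conf/redis_cluster/ as planned below.
    for p in write_configs(tempfile.mkdtemp()):
        print(p)
```

Run the same generator on each of the three hosts (one master port, one replica port per host) to produce the six configs.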
Node plan
| 3 masters, 3 replicas | master | slave |
|---|---|---|
| node-1 | 192.168.0.142 6379 | 192.168.0.142 26379 |
| node-2 | 192.168.0.143 6379 | 192.168.0.143 26379 |
| node-3 | 192.168.0.144 6379 | 192.168.0.144 26379 |
| cluster bus port | 16379 | 36379 |
Directory plan
| Install directory | /usr/local/redis4 |
|---|---|
| Config directory | /usr/local/redis4/conf/redis_cluster/ |
| Data directory | /database/redis4/ |
| Log directory | /var/log/redis4/ |
| pidfile | /var/run/ |
| Service user | root |
Single-node deployment
Version: redis 4.0.10
The following uses the .142 host as the example; the other two hosts must be configured the same way.
| Install directory | /usr/local/redis4 |
|---|---|
| Config directory | /usr/local/redis4/conf |
| Data directory | /database/redis4/redis_6379 |
| Log file | /var/log/redis4/redis_6379.log |
| Service user | root |
Redis instance file naming rules
| Type | Rule | Example |
|---|---|---|
| Instance config file name | redis_<port>.conf | redis_6379.conf |
| Instance data directory name | redis_<port> | redis_6379 |
| Instance log file name | redis_<port>.log | redis_6379.log |
Server tuning
These are OS-level optimizations to apply before installing the service; use them as your servers allow. Without them the service still starts and runs normally, but logs WARNING messages at startup.
1. Disable Linux transparent huge pages (THP)
About transparent huge pages:
Pros: with memory paging, a program with a large memory footprint needs many 4K-page mappings to physical memory by default, whereas a single 2M page covers the same range in one mapping.
Cons: because THP uses 2M pages, each page fault allocates 2M instead of 4K; if the program's memory access is not contiguous, this over-allocates physical memory.
# Check whether THP is in use; 0 means disabled
[root@node-1 opt]# grep HugePage /proc/meminfo
AnonHugePages: 0 kB
HugePages_Total: 0
# Disable (no reboot required)
#echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
#echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
2. /proc/sys/net/core/somaxconn
Upper bound on the socket listen() backlog (the system default is 128, which caps the queue of pending TCP connections; Redis's config default for tcp-backlog is 511).
Edit /etc/sysctl.conf and add:
net.core.somaxconn = 1024
Apply it with: sysctl -p
3. Memory overcommit policy
Edit /etc/sysctl.conf and add:
vm.overcommit_memory = 1
Apply it with: sysctl -p
Installation
Unpack
tar -xf redis-4.0.10.tar.gz
Enter the directory, then build and install
#cd redis-4.0.10
#make PREFIX=/usr/local/redis4
Note: make test requires tcl 8.5 or newer, so first run: yum install tcl -y
#make test  # make test catches errors the make step may have missed and confirms the build is sound.
#make install PREFIX=/usr/local/redis4
If the build fails with "gcc: command not found":
yum install -y gcc epel-release jemalloc-devel
cd deps/
make hiredis jemalloc linenoise lua
cd ..
make PREFIX=/usr/local/redis4
echo $?
Create directories (config, data, log)
[root@node-1 redis-4.0.10]# mkdir -p /usr/local/redis4/conf /database/redis4 /var/log/redis4
# create one instance's data directory
[root@node-1 redis-4.0.10]# mkdir -p /database/redis4/redis_6379
Edit the config file
[root@node-1 redis-4.0.10]# cp redis.conf /usr/local/redis4/conf/
[root@node-1 redis-4.0.10]# vim /usr/local/redis4/conf/redis.conf
daemonize yes  # run in the background (yes/no)
pidfile /var/run/redis_6379.pid  # pid file path
port 6379  # instance port
bind 192.168.0.142
logfile "/var/log/redis4/redis_6379.log"  # log file path
dbfilename dump.rdb  # dump file name
dir /database/redis4/redis_6379/  # data directory
appendonly yes  # enable AOF persistence (yes/no)
Start the service
/usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis.conf
Cluster deployment
The following uses the .142 host as the example; the other two hosts must be configured the same way.
Prepare the environment
1. Add the epel repository
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
2. Install the ruby environment
# redis-trib.rb is the official Redis Cluster management tool. No extra download is needed — it ships in the source package's src directory — but it is written in ruby, so the ruby dependencies must be prepared.
yum -y install ruby ruby-devel rubygems rpm-build
gem install redis -v 3.3.5  # if this errors that the ruby version is too old, see the end of this article
Note: do not use version 4.0 of the ruby redis gem, or resharding will fail.
If gem install redis -v 3.3.5 produces no response,
install gem manually, then rerun gem install redis -v 3.3.5. Reference: https://blog.csdn.net/wangshuminjava/article/details/80284810
2.1 Installation reference: http://www.linuxe.cn/post-375.html
3. Create the data directories
mkdir -p /database/redis4/redis_26379
4. Create the config directories
Both the master and replica config files need the cluster-* lines below; remember to substitute each node's own port into dir, logfile, pidfile, the config file name, and so on.
cd /usr/local/redis4/conf
mkdir redis_cluster/{6379,26379} -p
cp /usr/local/redis4/conf/redis_6379.conf /usr/local/redis4/conf/redis_cluster/6379/redis_6379.conf
cp /usr/local/redis4/conf/redis_6379.conf /usr/local/redis4/conf/redis_cluster/26379/redis_26379.conf
Edit redis_6379.conf and redis_26379.conf and add the settings below.
The dir, logfile, and pidfile paths and ports must be adjusted as well.
cluster-enabled yes  # enable cluster mode
cluster-config-file nodes6379.conf  # cluster state file, maintained by Redis itself — no manual editing needed. Every node keeps one to persist cluster information; make sure the name does not collide with another running instance's.
cluster-node-timeout 10000  # node interconnect timeout, in milliseconds
cluster-require-full-coverage no
# When yes, the cluster serves requests only while every node is ok, so one downed node makes the whole cluster unavailable. Setting it to no is recommended: the cluster keeps serving even if a node drops out.
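Combined with the base settings from the single-node section, one complete instance config (the 6379 instance on .142) would look roughly like this sketch:

```
daemonize yes
port 6379
bind 192.168.0.142
pidfile /var/run/redis_6379.pid
logfile "/var/log/redis4/redis_6379.log"
dir /database/redis4/redis_6379/
appendonly yes
cluster-enabled yes
cluster-config-file nodes6379.conf
cluster-node-timeout 10000
cluster-require-full-coverage no
```

The 26379 replica instance uses the same layout with its own port, paths, and nodes file name.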
5. Add the tools to PATH
cp -a /root/redis-4.0.10/src/redis-trib.rb /usr/local/redis4/bin/
vim /etc/profile
PATH=$PATH:/usr/local/redis4/bin
source /etc/profile
Start all nodes
/usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis_cluster/6379/redis_6379.conf
/usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis_cluster/26379/redis_26379.conf
Create the cluster with redis-trib.rb, the cluster management tool Redis provides (run on 142 only)
redis-trib.rb create --replicas 1 192.168.0.142:6379 192.168.0.143:6379 192.168.0.144:6379 192.168.0.144:26379 192.168.0.143:26379 192.168.0.142:26379
The --replicas argument sets how many replicas each master gets; here it is 1.
#Can I set the above configuration? (type 'yes' to accept): yes  # one interactive prompt asks you to confirm the node layout — answer yes
.....
.....
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
All 16384 slots are assigned, so the cluster was created successfully. Note: the node addresses given to redis-trib.rb must be nodes holding no slots and no data, otherwise it refuses to create the cluster.
Check the cluster state
[root@node-1 conf]# redis-trib.rb check 192.168.0.142:6379  # any node will do
>>> Performing Cluster Check (using node 192.168.0.142:6379)
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379
slots: (0 slots) slave
replicates 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
M: 0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379
slots: (0 slots) slave
replicates 0cbfe1938a16594e35dbff487a49fe224da270b9
S: 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379
slots: (0 slots) slave
replicates 4311abf4f943795c0d117babb714b27b8ed1a80e
M: 4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
View cluster info
[root@node-1 conf]# redis-trib.rb info 192.168.0.142:6379  # any node will do
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 5461 slots | 1 slaves.
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 5462 slots | 1 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
Adding nodes
Adding a master node
1. Repeat the environment-preparation steps, adjusting the port.
2. Join the node to the cluster.
Option 1: redis-trib.rb add-node 192.168.0.142:6380 (new master ip:port) 192.168.0.142:6379 (ip:port of an existing node)
Option 2: connect to any node:
[root@node-1 6381]# redis-cli -c -h 192.168.0.142 -p 6379
192.168.0.142:6379> cluster meet 192.168.0.142 6380
OK
Assign slots to the new node
Before the reshard
# gather the node IDs
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380@16380 master - 0 1596789801413 0 connected
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596789800000 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596789800000 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596789796000 7 connected 0-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596789802416 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596789800411 6 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596789799408 3 connected 11672-16383
# 6380 is still empty
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 6961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
Run the reshard
[root@node-1 conf]# redis-trib.rb reshard 192.168.0.142:6380
>>> Performing Cluster Check (using node 192.168.0.142:6380)
M: 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380
slots: (0 slots) master
0 additional replica(s)
S: 5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379
slots: (0 slots) slave
replicates 0cbfe1938a16594e35dbff487a49fe224da270b9
M: 4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379
slots:11672-16383 (4712 slots) master
1 additional replica(s)
S: 5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379
slots: (0 slots) slave
replicates 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
M: 0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379
slots:6212-10922 (4711 slots) master
1 additional replica(s)
S: 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379
slots: (0 slots) slave
replicates 4311abf4f943795c0d117babb714b27b8ed1a80e
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
slots:0-6211,10923-11671 (6961 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2000 # first prompt: how many slots to move?
What is the receiving node ID? 2e9f699fde48fcfbc566a8f14d21be85c66dc062 # the new node's ID
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:6b5387c7a4e647212b6943cb42b38abfaa45c4a3
Source node #2:done
# "all" takes slots from every master;
# or list the IDs of the masters to take slots from, then finish with "done"
Ready to move 2000 slots.
Source nodes:
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
slots:0-6211,10923-11671 (6961 slots) master
1 additional replica(s)
Destination node:
M: 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380
slots: (0 slots) master
0 additional replica(s)
Resharding plan:
Moving slot 0 from 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
Moving slot 1 from 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
Moving slot 2 from 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
..............
..............
Do you want to proceed with the proposed reshard plan (yes/no)? yes # confirm the reshard plan
..............
................
Reshard complete
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 2000 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
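The first reshard prompt asks how many slots to move; the walkthrough above moves 2000 as an arbitrary chunk. For an even spread across all masters, the number can instead be computed from the current layout. A small illustrative sketch (slot counts taken from the `info` output above):

```python
def slots_for_new_master(current_masters):
    """How many slots an empty new master should receive for an even spread.

    current_masters maps a master's address to the slot count it holds now.
    """
    total = sum(current_masters.values())   # 16384 in a healthy cluster
    n_masters = len(current_masters) + 1    # existing masters plus the new one
    return total // n_masters

masters = {"142:6379": 6961, "143:6379": 4711, "144:6379": 4712}
print(slots_for_new_master(masters))   # -> 4096
```

Answering the prompt with this value (and `all` as the source) rebalances the cluster in one pass.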
Adding a replica node
Before adding
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 2000 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
Add the node
Repeat the environment-preparation steps to create the new replica instance.
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380@16380 master - 0 1596793698059 8 connected 0-1999
# add the node
Format: redis-trib.rb add-node --slave --master-id <master node id> <new node ip:port> <ip:port of an existing cluster node>
[root@node-1 conf]# redis-trib.rb add-node --slave --master-id 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:26380 192.168.0.142:6379
>>> Adding node 192.168.0.142:26380 to cluster 192.168.0.142:6379
>>> Performing Cluster Check (using node 192.168.0.142:6379)
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
slots:2000-6211,10923-11671 (4961 slots) master
1 additional replica(s)
M: 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380
slots:0-1999 (2000 slots) master
0 additional replica(s)
S: 5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379
slots: (0 slots) slave
replicates 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
M: 0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379
slots:6212-10922 (4711 slots) master
1 additional replica(s)
S: 5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379
slots: (0 slots) slave
replicates 0cbfe1938a16594e35dbff487a49fe224da270b9
S: 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379
slots: (0 slots) slave
replicates 4311abf4f943795c0d117babb714b27b8ed1a80e
M: 4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379
slots:11672-16383 (4712 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.0.142:26380 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.0.142:6380.
[OK] New node added correctly.
Addition complete
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 2000 slots | 1 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
Removing nodes
Removing a master node
1. First use reshard to move all of the master's slots away (currently the departing master's slots can only be migrated onto a single node).
redis-trib.rb reshard 192.168.0.183:6379
...
(migration output)
...
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 6712 slots | 2 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
2. Remove the node itself
Format: redis-trib.rb del-node <ip:port of any node> <id of the node to remove>
[root@node-1 conf]# redis-trib.rb del-node 192.168.0.142:6380 2e9f699fde48fcfbc566a8f14d21be85c66dc062
>>> Removing node 2e9f699fde48fcfbc566a8f14d21be85c66dc062 from cluster 192.168.0.142:6380
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes  # the node is gone
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797414173 7 connected
33e74c38f9ff08c725702ba0024b916e3f944a20 192.168.0.142:26380@36380 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797411000 9 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797411166 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797406000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797413170 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797412167 9 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596797410000 9 connected 0-1999 11672-16383
Removing a replica node
[root@node-1 conf]# redis-trib.rb del-node 192.168.0.142:26380 33e74c38f9ff08c725702ba0024b916e3f944a20
>>> Removing node 33e74c38f9ff08c725702ba0024b916e3f944a20 from cluster 192.168.0.142:26380
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes  # the node is gone
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797623000 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797622000 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797619000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797623622 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797622619 9 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596797621000 9 connected 0-1999 11672-16383
Failover
Command: CLUSTER FAILOVER  # manual failover; must be run on a replica, which is then promoted to master
e.g.:
192.168.0.144:6380> CLUSTER failover
OK
Before the failover
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797623000 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797622000 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797619000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797623622 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797622619 9 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596797621000 9 connected 0-1999 11672-16383
Run the failover
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 26379
192.168.0.142:26379>
192.168.0.142:26379> cluster failover
OK
192.168.0.142:26379>
Failover complete
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797862201 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797861198 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797852000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797864205 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 master - 0 1596797861198 10 connected 0-1999 11672-16383
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 slave 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 0 1596797863203 10 connected
Additional notes
Commands
Related commands: https://blog.csdn.net/hguisu/article/details/82979050
Master/replica pairing went wrong (what was done)
1. Deleted the problem master (move its slots away first, then delete).
2. Re-joined it with meet; assigning slots to it then failed with
(error) ERR Slot 2000 is already busy — the nodes' data had to be cleared.
3. So: wipe the data and recreate the cluster.
  1. flushall to clear each node's data
  2. run cluster reset
...
[root@node-3 ~]# redis-cli -h 192.168.0.142 -p 6379 cluster reset
[root@node-3 ~]# redis-cli -h 192.168.0.143 -p 26379 cluster reset
...
4. Create the cluster
[root@node-1 ~]# redis-trib.rb create --replicas 1 192.168.0.142:6379 192.168.0.143:6379 192.168.0.144:6379 192.168.0.143:26379 192.168.0.144:26379 192.168.0.142:26379
To explain: --replicas sets the replica layout, and the 1 means each master gets one replica; the 6 nodes' ip:port pairs follow. With one replica per master the six nodes split 6/2: the first three automatically become masters, the last three their replicas. Running the command prints the proposed plan.
redis-cli --cluster create  # requires redis 5.x
5. Check the cluster
[root@node-1 ~]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
Commands:
Cluster commands:
1. View node/cluster info: cluster nodes
redis-cli -c -p 6379 cluster nodes
redis-cli -c -p 6379 cluster info
2. Create a cluster:
before 5.x: redis-trib.rb create --replicas 1
5.x: redis-cli --cluster create
Node commands:
1. Join a node:
option 1: cluster meet
option 2: redis-trib.rb add-node
e.g., adding node 10.80.82.74:7029:
redis-trib.rb add-node 10.80.82.74:7029 10.25.157.78:7022
2. Remove a node:
CLUSTER FORGET <node_id>
redis-trib.rb del-node
3. Set master/replica relations:
CLUSTER REPLICATE <node_id>  # make the current node a replica of the node identified by node_id
4. Persist node config to disk:
CLUSTER SAVECONFIG  # save the node's config file to disk
-------- Errors --------
Problem: ruby version too old
Fix:
1. Install curl and update nss
yum install curl -y
yum -y update nss
2. Install RVM
gpg2 --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3
curl -L get.rvm.io | bash -s stable
source /usr/local/rvm/scripts/rvm
3. Check the current ruby version
ruby --version
4. Remove the current ruby version
ruby installed via yum must be removed with: yum erase ruby-*
otherwise:
rvm remove <version>
5. List the ruby versions available in RVM
rvm list known
6. Install a ruby version and make it the default
rvm install "ruby-2.3.3"
rvm use 2.3.3 --default
7. Install the redis gem
gem install redis -v 3.3.5
Cluster will not come back after a restart
Approach
With a clustered deployment, after a power loss or server reboot the cluster sometimes will not start properly. Use redis-trib's fix command to repair it. If fix is not enough, clear the nodes' data and rebuild the cluster — back everything up first.
Symptoms:
- cluster state check: cluster info
cluster_state:fail
cluster_size:0
- redis-trib check:
redis-trib.rb check ip:port
[ERR] Not all 16384 slots are covered by nodes.
This usually means a master node was removed without migrating its slots away, so the slot total falls short of 16384 — the slot distribution is simply wrong. Be careful when deleting a node: check first whether it is a master.
The official recommendation is redis-trib.rb fix to repair the cluster.
If cluster nodes shows slots are missing, run the fix:
[root@node01 src]# ./redis-trib.rb fix ip:port
After the repair, run check again to verify:
[root@node01 src]# ./redis-trib.rb check ip:port
Any node in the cluster can be given — all related nodes are checked automatically. Inspect the output to confirm every master holds slots; if the distribution is uneven, rebalance with:
[root@node01 src]# ./redis-trib.rb reshard ip:port
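The "not all 16384 slots are covered" condition can also be checked by parsing `cluster nodes` output yourself. A minimal illustrative sketch — the sample lines imitate the output format shown throughout this document, with slot ranges as the trailing fields of each master line:

```python
def covered_slots(cluster_nodes_output):
    """Collect every slot claimed by a master in `redis-cli cluster nodes` output."""
    covered = set()
    for line in cluster_nodes_output.strip().splitlines():
        fields = line.split()
        # Fields 0-7 are id, addr, flags, master, ping, pong, epoch, link-state;
        # slot ranges (e.g. "0-5460" or a single "12000") follow on master lines.
        for token in fields[8:]:
            if token.startswith("["):   # slot being migrated, e.g. [93->-<id>]
                continue
            lo, _, hi = token.partition("-")
            covered.update(range(int(lo), int(hi or lo) + 1))
    return covered

sample = (
    "id1 192.168.0.142:6379@16379 myself,master - 0 0 1 connected 0-5460\n"
    "id2 192.168.0.143:6379@16379 master - 0 0 2 connected 5461-10922\n"
    "id3 192.168.0.144:6379@16379 master - 0 0 3 connected 10923-16382\n"
)
missing = set(range(16384)) - covered_slots(sample)
print(sorted(missing))   # -> [16383]: one slot is uncovered in this sample
```

An empty `missing` set corresponds to redis-trib's "[OK] All 16384 slots covered." message.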
If fix cannot repair it:
1. Stop redis.
2. Delete the aof, rdb, and nodes files — back them up first.
3. Start each redis node again.
4. Recreate the cluster:
redis-trib.rb create --replicas 1 ip1:port1 ip2:port2 ip3:port3 ip4:port4 ip5:port5 ip6:port6
redis 5: redis-cli --cluster create ip1:port1 ip2:port2 ip3:port3 ip4:port4 ip5:port5 ip6:port6 --cluster-replicas 1
5. After creation, connect and check the cluster state:
redis-cli -c -h ip -p port
cluster info
cluster nodes
The 3-master, 3-replica cluster is now up and running:
[root@node-1 redis4]# ./bin/redis-cli -c -h 192.168.0.142 -p 6389
192.168.0.142:6389> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:7853799
cluster_stats_messages_pong_sent:8449966
cluster_stats_messages_sent:16303765
cluster_stats_messages_ping_received:8449961
cluster_stats_messages_pong_received:7853799
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:16303765
192.168.0.142:6389> cluster nodes
5f97796413bceedac56cd0e460dadfac1fa08618 192.168.0.142:26389@36389 slave 26d30398013ea3f7de23088cda80cb7bc32c1236 0 1607832937093 6 connected
5d585fe51101563ff9d8b8188075a60e50844744 192.168.0.143:26389@36389 slave 96f4585dd82537389c65b065d0725b1347f29f29 0 1607832937093 5 connected
26d30398013ea3f7de23088cda80cb7bc32c1236 192.168.0.144:6389@16389 master - 0 1607832939096 3 connected 10923-16383
dc501c99dd540f27a7746a13d54712c3fac2d25d 192.168.0.143:6389@16389 master - 0 1607832939096 2 connected 5461-10922
96f4585dd82537389c65b065d0725b1347f29f29 192.168.0.142:6389@16389 myself,master - 0 1607832935000 1 connected 0-5460
30f4897467be07590dc04fa720b550bffc1f475d 192.168.0.144:26389@36389 slave dc501c99dd540f27a7746a13d54712c3fac2d25d 0 1607832940097 4 connected
