11. 3-Master, 3-Slave Redis Cluster

3-Master, 3-Slave Redis Cluster Configuration

Start six Redis instances

docker run -d --name redis-node-1 --net host --privileged=true -v /Users/huzd/docker/redis/share/redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381

docker run -d --name redis-node-2 --net host --privileged=true -v /Users/huzd/docker/redis/share/redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382

docker run -d --name redis-node-3 --net host --privileged=true -v /Users/huzd/docker/redis/share/redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383

docker run -d --name redis-node-4 --net host --privileged=true -v /Users/huzd/docker/redis/share/redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384

docker run -d --name redis-node-5 --net host --privileged=true -v /Users/huzd/docker/redis/share/redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385

docker run -d --name redis-node-6 --net host --privileged=true -v /Users/huzd/docker/redis/share/redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386
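
Before creating the cluster, it's worth confirming that all six containers are actually up; a container that exits immediately usually points to a bad volume path or a port conflict. A quick check:

docker ps --filter "name=redis-node"

All six containers should show a status of Up.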

Configure the Redis cluster

Enter the redis-node-1 container (for example with docker exec -it redis-node-1 /bin/bash) and run the command below. The --cluster-replicas 1 option tells Redis to pair each master with one replica, so the six nodes become three masters and three replicas:

redis-cli --cluster create 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385 127.0.0.1:6386 --cluster-replicas 1

Output:

root@docker-desktop:/data# redis-cli --cluster create 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385 127.0.0.1:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6385 to 127.0.0.1:6381
Adding replica 127.0.0.1:6386 to 127.0.0.1:6382
Adding replica 127.0.0.1:6384 to 127.0.0.1:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: ea434ea560d21bb578c53e6010ff2097662f165e 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
M: bef5037d60f216c64c48156f30dfca91c563fbf6 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
M: 0afe61850004f50235a03e1f0f7ff29491df3769 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
S: d152a771c0f2a77edd3a1f54bf7d175e23ae4155 127.0.0.1:6384
   replicates bef5037d60f216c64c48156f30dfca91c563fbf6
S: 09c9d40e39890e6e1d85bbfc886ea15e1c63c16a 127.0.0.1:6385
   replicates 0afe61850004f50235a03e1f0f7ff29491df3769
S: 5bf74f0b838d8e25a133bb642738491027af3165 127.0.0.1:6386
   replicates ea434ea560d21bb578c53e6010ff2097662f165e
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: ea434ea560d21bb578c53e6010ff2097662f165e 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d152a771c0f2a77edd3a1f54bf7d175e23ae4155 127.0.0.1:6384
   slots: (0 slots) slave
   replicates bef5037d60f216c64c48156f30dfca91c563fbf6
S: 09c9d40e39890e6e1d85bbfc886ea15e1c63c16a 127.0.0.1:6385
   slots: (0 slots) slave
   replicates 0afe61850004f50235a03e1f0f7ff29491df3769
M: 0afe61850004f50235a03e1f0f7ff29491df3769 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 5bf74f0b838d8e25a133bb642738491027af3165 127.0.0.1:6386
   slots: (0 slots) slave
   replicates ea434ea560d21bb578c53e6010ff2097662f165e
M: bef5037d60f216c64c48156f30dfca91c563fbf6 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker-desktop:/data# 

Check the cluster status

root@docker-desktop:/data# redis-cli -p 6381
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:203
cluster_stats_messages_pong_sent:184
cluster_stats_messages_sent:387
cluster_stats_messages_ping_received:179
cluster_stats_messages_pong_received:203
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:387
127.0.0.1:6381> 
127.0.0.1:6381> cluster nodes
d152a771c0f2a77edd3a1f54bf7d175e23ae4155 127.0.0.1:6384@16384 slave bef5037d60f216c64c48156f30dfca91c563fbf6 0 1656597697359 2 connected
09c9d40e39890e6e1d85bbfc886ea15e1c63c16a 127.0.0.1:6385@16385 slave 0afe61850004f50235a03e1f0f7ff29491df3769 0 1656597696000 3 connected
0afe61850004f50235a03e1f0f7ff29491df3769 127.0.0.1:6383@16383 master - 0 1656597696000 3 connected 10923-16383
5bf74f0b838d8e25a133bb642738491027af3165 127.0.0.1:6386@16386 slave ea434ea560d21bb578c53e6010ff2097662f165e 0 1656597695000 1 connected
bef5037d60f216c64c48156f30dfca91c563fbf6 127.0.0.1:6382@16382 master - 0 1656597696330 2 connected 5461-10922
ea434ea560d21bb578c53e6010ff2097662f165e 127.0.0.1:6381@16381 myself,master - 0 1656597697000 1 connected 0-5460
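
Because every container was started with --net host, these checks do not have to be typed in an interactive container shell; running redis-cli through docker exec should work just as well, for example:

docker exec -it redis-node-1 redis-cli -p 6381 cluster info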

Reading and writing values in the cluster

127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 127.0.0.1:6383
127.0.0.1:6381> set k2 v2
OK
127.0.0.1:6381> set k3 v3
OK
127.0.0.1:6381> get k2
"v2"
127.0.0.1:6381> get k3
"v3"
127.0.0.1:6381> 

Why the MOVED error occurs

The write fails because we connected to a single node in standalone mode. Redis Cluster hashes every key to one of 16384 slots; when the slot a key hashes to is not owned by the node we are connected to, the server replies with a MOVED error instead of storing the value. The fix is to connect in cluster mode, which follows these redirects automatically:

redis-cli -p 6381 -c
# -c enables cluster mode, so the client follows MOVED redirects automatically
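
To see which slot a key hashes to, ask any node with CLUSTER KEYSLOT. For k1 the result matches the slot in the MOVED error above, which lies in the 10923-16383 range owned by 127.0.0.1:6383:

127.0.0.1:6381> cluster keyslot k1
(integer) 12706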

Example:

127.0.0.1:6383> flushall
OK
127.0.0.1:6383> set k1 v1
OK
127.0.0.1:6383> set k2 v2  # the client is redirected to another node
-> Redirected to slot [449] located at 127.0.0.1:6381
OK
127.0.0.1:6381> set k3 v3
OK
127.0.0.1:6381> set k4 v4
-> Redirected to slot [8455] located at 127.0.0.1:6382
OK
127.0.0.1:6382> 
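
One side effect of slot-based routing: multi-key commands such as mset succeed only when all keys hash to the same slot, otherwise the server returns a CROSSSLOT error. Wrapping part of each key name in braces (a hash tag) makes Redis hash only that part, forcing the keys into one slot. A minimal sketch, using hypothetical {user}-tagged keys:

127.0.0.1:6381> mset k1 v1 k2 v2
(error) CROSSSLOT Keys in request don't hash to the same slot
127.0.0.1:6381> mset {user}k1 v1 {user}k2 v2
OK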

Checking the cluster with --cluster check

Command:

redis-cli --cluster check 127.0.0.1:6381

Example:

root@docker-desktop:/data# redis-cli --cluster check 127.0.0.1:6381
127.0.0.1:6381 (ea434ea5...) -> 3 keys | 5461 slots | 1 slaves.
127.0.0.1:6383 (0afe6185...) -> 1 keys | 5461 slots | 1 slaves.
127.0.0.1:6382 (bef5037d...) -> 2 keys | 5462 slots | 1 slaves.
[OK] 6 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: ea434ea560d21bb578c53e6010ff2097662f165e 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d152a771c0f2a77edd3a1f54bf7d175e23ae4155 127.0.0.1:6384
   slots: (0 slots) slave
   replicates bef5037d60f216c64c48156f30dfca91c563fbf6
S: 09c9d40e39890e6e1d85bbfc886ea15e1c63c16a 127.0.0.1:6385
   slots: (0 slots) slave
   replicates 0afe61850004f50235a03e1f0f7ff29491df3769
M: 0afe61850004f50235a03e1f0f7ff29491df3769 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 5bf74f0b838d8e25a133bb642738491027af3165 127.0.0.1:6386
   slots: (0 slots) slave
   replicates ea434ea560d21bb578c53e6010ff2097662f165e
M: bef5037d60f216c64c48156f30dfca91c563fbf6 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@docker-desktop:/data#
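
redis-cli also offers a --cluster info subcommand that prints only the per-master key and slot summary (the first lines of the check output above), which is handy for a quick glance:

redis-cli --cluster info 127.0.0.1:6381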