[redisson] Virtual IP bug in a Redis cluster

2024-07-18

The servers are two Linux machines running a Redis cluster, at IPs ending in .25 and .26. When using Redisson for batch queries, we sometimes get "Unable to send command! Node source: NodeSource [slot=null, addr=null, redisClient=[addr=redis://172.16.1.247:7002], redirect=null", which makes the program fail. The .247 address is not in our configuration; it is a virtual IP, not a real one. We don't know why this happens, and the error does not occur every time.

Redis version

3.X

Redisson version

3.9.0

Redisson configuration
# Redis cluster config
spring.redis.cluster.nodes=\
  172.16.1.25:7001\
  ,172.16.1.25:7002\
  ,172.16.1.25:7003\
  ,172.16.1.26:7004\
  ,172.16.1.26:7005\
  ,172.16.1.26:7006

@Configuration
public class RedissonConfig {
    @Value("${spring.redis.cluster.nodes}")
    String nodes;
    @Bean(name = "redisClusterClient", destroyMethod = "shutdown")
    RedissonClient redisClusterClientInit() {
        Config config = new Config();
        config.setCodec(StringCodec.INSTANCE);
        ClusterServersConfig clusterConfig = config.useClusterServers();
        for (String node : nodes.split(",")) {
            clusterConfig.addNodeAddress("redis://" + node);
        }
        return Redisson.create(config);
    }
}

The error stack trace is as follows:

08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO - Caused by: org.redisson.client.WriteRedisConnectionException: Unable to send command! Node source: NodeSource [slot=null, addr=null, redisClient=[addr=redis://172.16.1.247:7002], redirect=null, entry=MasterSlaveEntry [masterEntry=[freeSubscribeConnectionsAmount=1, freeSubscribeConnectionsCounter=50, freeConnectionsAmount=32, freeConnectionsCounter=64, freezed=false, freezeReason=null, client=[addr=redis://172.16.1.26:7004], nodeType=MASTER, firstFail=0]]], connection: RedisConnection@1417226931 [redisClient=[addr=redis://172.16.1.247:7002], channel=[id: 0x6e92dd06, L:0.0.0.0/0.0.0.0:2354]], command: (SCAN), command params: [253940, MATCH, ne:alarm:*:*:*, COUNT, 10] after 3 retry attempts
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at org.redisson.command.CommandAsyncService.checkWriteFuture(CommandAsyncService.java:837) ~[redisson-3.9.0.jar!/:na]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at org.redisson.command.CommandAsyncService.access$200(CommandAsyncService.java:92) ~[redisson-3.9.0.jar!/:na]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at org.redisson.command.CommandAsyncService$11$1.operationComplete(CommandAsyncService.java:794) ~[redisson-3.9.0.jar!/:na]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at org.redisson.command.CommandAsyncService$11$1.operationComplete(CommandAsyncService.java:791) ~[redisson-3.9.0.jar!/:na]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511) ~[netty-common-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:485) ~[netty-common-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424) ~[netty-common-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121) ~[netty-common-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1391) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[netty-common-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) ~[netty-common-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:446) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) ~[netty-common-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.29.Final.jar!/:4.1.29.Final]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO - Caused by: java.nio.channels.ClosedChannelException: null
08-11-2018 12:05:41 CST azkaban-redis-alarm-to-mysql INFO -     at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source) ~[netty-transport-4.1.29.Final.jar!/:4.1.29.Final]

Answers


Have you set the cluster-announce-ip, cluster-announce-port, and cluster-announce-bus-port parameters on the Redis server?
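For reference, these three directives belong in redis.conf and were only added in Redis 4.0 (they exist for NAT/Docker-style setups where the auto-detected address is not the one other nodes and clients should use); Redis 3.x does not support them. A sketch with placeholder values:

```conf
# redis.conf (Redis 4.0+): announce a fixed, routable address to other
# cluster nodes and to clients, instead of the auto-detected one.
# The values below are placeholders for illustration.
cluster-announce-ip 172.16.1.25
cluster-announce-port 7001
cluster-announce-bus-port 17001
```

You can inspect which addresses a cluster currently announces with `redis-cli -h 172.16.1.25 -p 7001 cluster nodes`; if a virtual IP such as 172.16.1.247 appears there, the stray address originates on the server side.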


I don't see those three parameters in my configuration. I'm running Redis 3.x — can you say specifically which file they go in?


I'm hitting the same problem — how do you configure this?


I changed the configuration and the problem has not occurred since. The initialization code is as follows:

package com.umltech.dubbo.provider.conf;

import cn.hutool.log.StaticLog;
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.client.codec.StringCodec;
import org.redisson.config.ClusterServersConfig;
import org.redisson.config.Config;
import org.redisson.config.ReadMode;
import org.redisson.config.SingleServerConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import java.util.HashMap;
import java.util.Map;

/**
 * Redisson configuration
 *
 * @author 孙宇
 */
@Configuration
public class RedissonConfig {

    @Value("${spring.redis.cluster.nodes}")
    String nodes;

    @Value("${spring.redis.url}")
    String node;

    RedissonClient redisClusterClient;

    Map<Integer, RedissonClient> redisSingleClientMap = new HashMap<>();

    /**
     * Get the Redis cluster client
     *
     * @return {@link RedissonClient}
     */
    public RedissonClient getRedisClusterClient() {
        return redisClusterClient;
    }

    /**
     * Get the map of single-node Redis clients, keyed by database index
     *
     * @return map of db index to {@link RedissonClient}
     */
    public Map<Integer, RedissonClient> getRedisSingleClientMap() {
        return redisSingleClientMap;
    }

    @PostConstruct
    public void redisClientInit() {
        Config config = new Config();
        config.setCodec(StringCodec.INSTANCE);
        ClusterServersConfig clusterConfig = config.useClusterServers();
        for (String node : nodes.split(",")) {
            clusterConfig.addNodeAddress("redis://" + node);
        }
        clusterConfig.setTimeout(10000); // time to wait for a command reply; the clock starts once the command has been sent successfully
        clusterConfig.setScanInterval(1000); // interval, in milliseconds, between scans of cluster node state
        clusterConfig.setReadMode(ReadMode.MASTER_SLAVE); // node selection for reads: SLAVE - slaves only; MASTER - masters only; MASTER_SLAVE - both masters and slaves
        clusterConfig.setRetryAttempts(10); // if a command still cannot be sent to a node after this many retries, an error is thrown; once a send succeeds within the limit, the timeout clock starts
        clusterConfig.setFailedSlaveReconnectionInterval(500); // interval, in milliseconds, before attempting to re-establish a broken node connection
        redisClusterClient = Redisson.create(config);
        StaticLog.info("created Redis cluster client");

        for (int i = 0; i <= 15; i++) {
            config = new Config();
            config.setCodec(StringCodec.INSTANCE);
            SingleServerConfig singleConfig = config.useSingleServer();
            singleConfig.setAddress("redis://" + node);
            singleConfig.setDatabase(i);
            singleConfig.setTimeout(10000); // time to wait for a command reply; the clock starts once the command has been sent successfully
            singleConfig.setRetryAttempts(10); // if a command still cannot be sent after this many retries, an error is thrown; once a send succeeds within the limit, the timeout clock starts
            redisSingleClientMap.put(i, Redisson.create(config));
            StaticLog.info("created Redis single client for db {}", i);
        }
    }

    @PreDestroy
    public void redisSingleClientShutdown() {
        StaticLog.info("shutting down Redis cluster client");
        redisClusterClient.shutdown();
        redisSingleClientMap.forEach((db, redissonClient) -> {
            StaticLog.info("shutting down Redis single client for db {}", db);
            redissonClient.shutdown();
        });
    }

}

The utility classes are as follows.

package com.umltech.dubbo.provider.util;

import cn.hutool.core.collection.CollectionUtil;
import com.umltech.dubbo.provider.conf.RedissonConfig;
import com.umltech.dubbo.provider.pojo.StringLiveBean;
import org.redisson.api.BatchOptions;
import org.redisson.api.RBatch;
import org.redisson.api.RListAsync;
import org.redisson.api.RMapAsync;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

/**
 * Utility class for Redis cluster operations
 *
 * @author 孙宇
 */
@Component
public class RedisClusterUtil {

    @Autowired
    RedissonConfig redissonConfig;

    BatchOptions batchOptions = BatchOptions.defaults();
    BatchOptions skipResultBatchOptions = BatchOptions.defaults().skipResult(); // tell Redis not to return results (reduces network usage)

    public boolean ping() {
        return redissonConfig.getRedisClusterClient().getClusterNodesGroup().pingAll();
    }

    public String get(String key) {
        Object v = redissonConfig.getRedisClusterClient().getBucket(key).get();
        if (v != null) {
            return String.valueOf(v);
        }
        return null;
    }

    public void set(String key, String value) {
        redissonConfig.getRedisClusterClient().getBucket(key).set(value);
    }

    public void set(String key, String value, long timeToLive, TimeUnit timeUnit) {
        redissonConfig.getRedisClusterClient().getBucket(key).set(value, timeToLive, timeUnit);
    }

    public long del(String... key) {
        return del(Arrays.asList(key));
    }

    public long del(List<String> keys) {
        return redissonConfig.getRedisClusterClient().getKeys().delete(keys.toArray(new String[]{}));
    }

    public long delByPattern(String pattern) {
        return redissonConfig.getRedisClusterClient().getKeys().deleteByPattern(pattern);
    }

    public List<String> keys(String pattern) {
        List<String> l = new ArrayList<>();
        redissonConfig.getRedisClusterClient().getKeys().getKeysByPattern(pattern, 10000).forEach(key -> l.add(key));
        return l;
    }

    public List<String> keys(String pattern, int count) {
        int c = 10000;
        if (count > 0) {
            c = count;
        }
        List<String> l = new ArrayList<>();
        redissonConfig.getRedisClusterClient().getKeys().getKeysByPattern(pattern, c).forEach(key -> l.add(key));
        return l;
    }

    public boolean exists(String key) {
        return redissonConfig.getRedisClusterClient().getBucket(key).isExists();
    }

    public boolean expire(String key, long timeToLive, TimeUnit timeUnit) {
        return redissonConfig.getRedisClusterClient().getBucket(key).expire(timeToLive, timeUnit);
    }

    public long ttl(String key) {
        return redissonConfig.getRedisClusterClient().getBucket(key).remainTimeToLive();
    }

    public long incr(String key) {
        return redissonConfig.getRedisClusterClient().getAtomicLong(key).incrementAndGet();
    }

    public Map<String, String> mget(String... key) {
        return mget(Arrays.asList(key));
    }

    public Map<String, String> mget(List<String> keys) {
        Map<String, String> m = new HashMap<>();
        CollectionUtil.split(keys, 500).forEach(ks -> {
            RBatch batch = redissonConfig.getRedisClusterClient().createBatch(batchOptions);
            ks.forEach(key -> batch.getBucket(key).getAsync());
            AtomicInteger ai = new AtomicInteger();
            batch.execute().forEach((Consumer<Object>) o -> {
                if (o != null) {
                    m.put(ks.get(ai.getAndIncrement()), String.valueOf(o));
                } else {
                    m.put(ks.get(ai.getAndIncrement()), null);
                }
            });
        });
        return m;
    }

    public List<String> mget_list(List<String> keys) {
        List<String> l = new ArrayList<>();
        CollectionUtil.split(keys, 500).forEach(ks -> {
            RBatch batch = redissonConfig.getRedisClusterClient().createBatch(batchOptions);
            ks.forEach(key -> batch.getBucket(key).getAsync());
            batch.execute().forEach((Consumer<Object>) o -> {
                if (o != null) {
                    l.add(String.valueOf(o));
                } else {
                    l.add(null);
                }
            });
        });
        return l;
    }

    public void mset(Map<String, String> keyValueMap) {
        RBatch batch = redissonConfig.getRedisClusterClient().createBatch(skipResultBatchOptions);
        keyValueMap.forEach((k, v) -> batch.getBucket(k).setAsync(v));
        batch.execute();
    }

    public void mset(List<StringLiveBean> stringLiveBeanList) {
        RBatch batch = redissonConfig.getRedisClusterClient().createBatch(skipResultBatchOptions);
        stringLiveBeanList.forEach(stringLiveBean -> batch.getBucket(stringLiveBean.getKey()).setAsync(stringLiveBean.getValue(), stringLiveBean.getTimeToLive(), stringLiveBean.getTimeUnit()));
        batch.execute();
    }

    public long hdel(String key, List<String> fields) {
        return redissonConfig.getRedisClusterClient().getMap(key).fastRemove(fields.toArray(new Object[]{}));
    }

    public boolean hexists(String key, String field) {
        return redissonConfig.getRedisClusterClient().getMap(key).containsKey(field);
    }

    public String hget(String key, String field) {
        Object v = redissonConfig.getRedisClusterClient().getMap(key).get(field);
        if (v != null) {
            return String.valueOf(v);
        }
        return null;
    }

    public Map<Object, Object> hgetall(String key) {
        return redissonConfig.getRedisClusterClient().getMap(key).readAllMap();
    }

    public Object hincrby(String key, String field, Number increment) {
        return redissonConfig.getRedisClusterClient().getMap(key).addAndGet(field, increment);
    }

    public Object hincrby(String key, String field, Number increment, long timeToLive, TimeUnit timeUnit) {
        RBatch batch = redissonConfig.getRedisClusterClient().createBatch(batchOptions);
        RMapAsync<Object, Object> map = batch.getMap(key);
        map.addAndGetAsync(field, increment);
        map.expireAsync(timeToLive, timeUnit);
        return batch.execute().getResponses().get(0);
    }

    public Set<Object> hkeys(String key) {
        return redissonConfig.getRedisClusterClient().getMap(key).readAllKeySet();
    }

    public int hlen(String key) {
        return redissonConfig.getRedisClusterClient().getMap(key).size();
    }

    public Map<Object, Object> hmget(String key, Set<Object> fields) {
        return redissonConfig.getRedisClusterClient().getMap(key).getAll(fields);
    }

    public List<String> hmget_list(String key, List<String> fields) {
        // hmget returns Map<Object, Object>, so build a Set<Object> explicitly and convert values
        Map<Object, Object> m = hmget(key, new HashSet<Object>(fields));
        List<String> l = new ArrayList<>();
        fields.forEach(k -> l.add(m.get(k) == null ? null : String.valueOf(m.get(k))));
        return l;
    }

    public void hmset(String key, Map<String, String> fieldValueMap) {
        redissonConfig.getRedisClusterClient().getMap(key).putAll(fieldValueMap);
    }

    public Object hset(String key, String field, String value) {
        return redissonConfig.getRedisClusterClient().getMap(key).put(field, value);
    }

    public Collection<Object> hvals(String key) {
        return redissonConfig.getRedisClusterClient().getMap(key).readAllValues();
    }

    public int llen(String key) {
        return redissonConfig.getRedisClusterClient().getList(key).size();
    }

    public boolean zadd(String key, double score, Object member) {
        return redissonConfig.getRedisClusterClient().getScoredSortedSet(key).add(score, member);
    }

    public int zadd(String key, Map<Object, Double> memberScoreMap) {
        return redissonConfig.getRedisClusterClient().getScoredSortedSet(key).addAll(memberScoreMap);
    }

    public int zcount(String key, double startScore, boolean startScoreInclusive, double endScore, boolean endScoreInclusive) {
        return redissonConfig.getRedisClusterClient().getScoredSortedSet(key).count(startScore, startScoreInclusive, endScore, endScoreInclusive);
    }

    public Collection<Object> zrangebyscore(String key, boolean reverse, double startScore, boolean startScoreInclusive, double endScore, boolean endScoreInclusive) {
        if (reverse) {
            return redissonConfig.getRedisClusterClient().getScoredSortedSet(key).valueRangeReversed(startScore, startScoreInclusive, endScore, endScoreInclusive);
        } else {
            return redissonConfig.getRedisClusterClient().getScoredSortedSet(key).valueRange(startScore, startScoreInclusive, endScore, endScoreInclusive);
        }
    }

    public Collection<Object> zrangebyscore(String key, boolean reverse, double startScore, boolean startScoreInclusive, double endScore, boolean endScoreInclusive, int offset, int count) {
        if (reverse) {
            return redissonConfig.getRedisClusterClient().getScoredSortedSet(key).valueRangeReversed(startScore, startScoreInclusive, endScore, endScoreInclusive, offset, count);
        } else {
            return redissonConfig.getRedisClusterClient().getScoredSortedSet(key).valueRange(startScore, startScoreInclusive, endScore, endScoreInclusive, offset, count);
        }
    }

    public boolean rpush(String key, String value) {
        return redissonConfig.getRedisClusterClient().getList(key).add(value);
    }

    public boolean rpush(String key, String value, long timeToLive, TimeUnit timeUnit) {
        RBatch batch = redissonConfig.getRedisClusterClient().createBatch(batchOptions);
        RListAsync<Object> list = batch.getList(key);
        list.addAsync(value);
        list.expireAsync(timeToLive, timeUnit);
        return (Boolean) batch.execute().getResponses().get(0);
    }

}

package com.umltech.dubbo.provider.util;

import com.umltech.dubbo.provider.conf.RedissonConfig;
import com.umltech.dubbo.provider.pojo.StringLiveBean;
import org.redisson.api.BatchOptions;
import org.redisson.api.RBatch;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.*;
import java.util.concurrent.TimeUnit;

/**
 * Utility class for single-node Redis operations
 *
 * @author 孙宇
 */
@Component
public class RedisSingleUtil {

    @Autowired
    RedissonConfig redissonConfig;

    BatchOptions skipResultBatchOptions = BatchOptions.defaults().skipResult(); // tell Redis not to return results (reduces network usage)

    public boolean ping(int db) {
        return redissonConfig.getRedisSingleClientMap().get(db).getNodesGroup().pingAll();
    }

    public String get(int db, String key) {
        Object v = redissonConfig.getRedisSingleClientMap().get(db).getBucket(key).get();
        if (v != null) {
            return String.valueOf(v);
        }
        return null;
    }

    public void set(int db, String key, String value) {
        redissonConfig.getRedisSingleClientMap().get(db).getBucket(key).set(value);
    }

    public void set(int db, String key, String value, long timeToLive, TimeUnit timeUnit) {
        redissonConfig.getRedisSingleClientMap().get(db).getBucket(key).set(value, timeToLive, timeUnit);
    }

    public long del(int db, String... key) {
        return del(db, Arrays.asList(key));
    }

    public long del(int db, List<String> keys) {
        return redissonConfig.getRedisSingleClientMap().get(db).getKeys().delete(keys.toArray(new String[]{}));
    }

    public long delByPattern(int db, String pattern) {
        return redissonConfig.getRedisSingleClientMap().get(db).getKeys().deleteByPattern(pattern);
    }

    public List<String> keys(int db, String pattern) {
        List<String> l = new ArrayList<>();
        redissonConfig.getRedisSingleClientMap().get(db).getKeys().getKeysByPattern(pattern, 10000).forEach(key -> l.add(key));
        return l;
    }

    public List<String> keys(int db, String pattern, int count) {
        int c = 10000;
        if (count > 0) {
            c = count;
        }
        List<String> l = new ArrayList<>();
        redissonConfig.getRedisSingleClientMap().get(db).getKeys().getKeysByPattern(pattern, c).forEach(key -> l.add(key));
        return l;
    }

    public boolean exists(int db, String key) {
        return redissonConfig.getRedisSingleClientMap().get(db).getBucket(key).isExists();
    }

    public boolean expire(int db, String key, long timeToLive, TimeUnit timeUnit) {
        return redissonConfig.getRedisSingleClientMap().get(db).getBucket(key).expire(timeToLive, timeUnit);
    }

    public long ttl(int db, String key) {
        return redissonConfig.getRedisSingleClientMap().get(db).getBucket(key).remainTimeToLive();
    }

    public long incr(int db, String key) {
        return redissonConfig.getRedisSingleClientMap().get(db).getAtomicLong(key).incrementAndGet();
    }

    public Map<String, String> mget(int db, String... key) {
        return redissonConfig.getRedisSingleClientMap().get(db).getBuckets().get(key);
    }

    public Map<String, String> mget(int db, List<String> keys) {
        return redissonConfig.getRedisSingleClientMap().get(db).getBuckets().get(keys.toArray(new String[]{}));
    }

    public List<String> mget_list(int db, List<String> keys) {
        Map<String, String> m = mget(db, keys);
        List<String> l = new ArrayList<>();
        keys.forEach(k -> l.add(m.get(k)));
        return l;
    }

    public void mset(int db, Map<String, String> keyValueMap) {
        redissonConfig.getRedisSingleClientMap().get(db).getBuckets().set(keyValueMap);
    }

    public void mset(int db, List<StringLiveBean> stringLiveBeanList) {
        RBatch batch = redissonConfig.getRedisSingleClientMap().get(db).createBatch(skipResultBatchOptions);
        stringLiveBeanList.forEach(stringLiveBean -> batch.getBucket(stringLiveBean.getKey()).setAsync(stringLiveBean.getValue(), stringLiveBean.getTimeToLive(), stringLiveBean.getTimeUnit()));
        batch.execute();
    }

    public long hdel(int db, String key, List<String> fields) {
        return redissonConfig.getRedisSingleClientMap().get(db).getMap(key).fastRemove(fields.toArray(new Object[]{}));
    }

    public boolean hexists(int db, String key, String field) {
        return redissonConfig.getRedisSingleClientMap().get(db).getMap(key).containsKey(field);
    }

    public String hget(int db, String key, String field) {
        Object v = redissonConfig.getRedisSingleClientMap().get(db).getMap(key).get(field);
        if (v != null) {
            return String.valueOf(v);
        }
        return null;
    }

    public Map<Object, Object> hgetall(int db, String key) {
        return redissonConfig.getRedisSingleClientMap().get(db).getMap(key).readAllMap();
    }

    public Object hincrby(int db, String key, String field, Number delta) {
        return redissonConfig.getRedisSingleClientMap().get(db).getMap(key).addAndGet(field, delta);
    }

    public Set<Object> hkeys(int db, String key) {
        return redissonConfig.getRedisSingleClientMap().get(db).getMap(key).readAllKeySet();
    }

    public int hlen(int db, String key) {
        return redissonConfig.getRedisSingleClientMap().get(db).getMap(key).size();
    }

    public Map<Object, Object> hmget(int db, String key, Set<Object> fields) {
        return redissonConfig.getRedisSingleClientMap().get(db).getMap(key).getAll(fields);
    }

    public List<String> hmget_list(int db, String key, List<String> fields) {
        // hmget returns Map<Object, Object>, so build a Set<Object> explicitly and convert values
        Map<Object, Object> m = hmget(db, key, new HashSet<Object>(fields));
        List<String> l = new ArrayList<>();
        fields.forEach(k -> l.add(m.get(k) == null ? null : String.valueOf(m.get(k))));
        return l;
    }

    public void hmset(int db, String key, Map<String, String> fieldValueMap) {
        redissonConfig.getRedisSingleClientMap().get(db).getMap(key).putAll(fieldValueMap);
    }

    public Object hset(int db, String key, String field, String value) {
        return redissonConfig.getRedisSingleClientMap().get(db).getMap(key).put(field, value);
    }

    public Collection<Object> hvals(int db, String key) {
        return redissonConfig.getRedisSingleClientMap().get(db).getMap(key).readAllValues();
    }

    public int llen(int db, String key) {
        return redissonConfig.getRedisSingleClientMap().get(db).getList(key).size();
    }

    public boolean zadd(int db, String key, double score, Object member) {
        return redissonConfig.getRedisSingleClientMap().get(db).getScoredSortedSet(key).add(score, member);
    }

    public int zadd(int db, String key, Map<Object, Double> memberScoreMap) {
        return redissonConfig.getRedisSingleClientMap().get(db).getScoredSortedSet(key).addAll(memberScoreMap);
    }

    public int zcount(int db, String key, double startScore, boolean startScoreInclusive, double endScore, boolean endScoreInclusive) {
        return redissonConfig.getRedisSingleClientMap().get(db).getScoredSortedSet(key).count(startScore, startScoreInclusive, endScore, endScoreInclusive);
    }

    public Collection<Object> zrangebyscore(int db, String key, boolean reverse, double startScore, boolean startScoreInclusive, double endScore, boolean endScoreInclusive) {
        if (reverse) {
            return redissonConfig.getRedisSingleClientMap().get(db).getScoredSortedSet(key).valueRangeReversed(startScore, startScoreInclusive, endScore, endScoreInclusive);
        } else {
            return redissonConfig.getRedisSingleClientMap().get(db).getScoredSortedSet(key).valueRange(startScore, startScoreInclusive, endScore, endScoreInclusive);
        }
    }

    public Collection<Object> zrangebyscore(int db, String key, boolean reverse, double startScore, boolean startScoreInclusive, double endScore, boolean endScoreInclusive, int offset, int count) {
        if (reverse) {
            return redissonConfig.getRedisSingleClientMap().get(db).getScoredSortedSet(key).valueRangeReversed(startScore, startScoreInclusive, endScore, endScoreInclusive, offset, count);
        } else {
            return redissonConfig.getRedisSingleClientMap().get(db).getScoredSortedSet(key).valueRange(startScore, startScoreInclusive, endScore, endScoreInclusive, offset, count);
        }
    }

}

After that, the utility classes are used directly.
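Note that the cluster mget above depends on splitting the key list into chunks of 500 (via hutool's CollectionUtil.split) so each RBatch stays bounded. A minimal standalone sketch of that partitioning in plain Java (no Redisson or hutool dependency; the class and method names here are illustrative, not from the project):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BatchSplit {

    // Split a list into consecutive chunks of at most chunkSize elements,
    // mirroring what CollectionUtil.split does for the 500-key batches above.
    public static <T> List<List<T>> partition(List<T> list, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < list.size(); i += chunkSize) {
            chunks.add(list.subList(i, Math.min(i + chunkSize, list.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<String> keys = Collections.nCopies(1200, "key");
        List<List<String>> chunks = partition(keys, 500);
        System.out.println(chunks.size());        // 3 chunks: 500, 500, 200
        System.out.println(chunks.get(2).size()); // 200
    }
}
```

Keeping batches to a few hundred commands bounds the size of each pipelined request, which matters when a retry has to resend the whole batch.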


My Redis is a single-node deployment, but I run into this problem too.


@wmaozhi Please open a new thread and describe your situation following the template.