Installing Redis with Docker
docker pull redis
docker run -itd --name redis -p 6379:6379 -v /home/admin/redis/redis.conf:/etc/redis/redis.conf -v /home/admin/redis/data:/data redis redis-server /etc/redis/redis.conf
The Redis Docker image ships without a configuration file, so one has to be supplied before the container is created.
The trailing command below makes redis-server load the given configuration file:
redis-server /etc/redis/redis.conf
See the appendix for the contents of redis.conf.
- Enter the redis container and work inside it
- Type redis-cli to open the redis command line
- Multi-byte values (e.g. Chinese) may come back garbled in redis-cli
Starting the client as follows avoids this:
redis-cli --raw
Common commands
- Switch database
Databases are numbered 0 through 15 by default
- select num
select 3
- Check the number of keys in the current database
- dbsize
dbsize
- List all keys
- keys *
keys *
- Clear the current database
- flushdb
flushdb
- Clear all databases
- flushall
flushall
- Check whether a key exists
- exists key
exists name
- Set a key's time to live
- expire key second
Make the key name expire after 5s
expire name 5
- Check a key's remaining time to live
- ttl key
ttl name
- Move a key to another database
- move key dbindex
Move name to database 1
move name 1
- Check the type of the value stored at a key
- type key
type name
- Delete a key
- del key
del name
Data types and operations
- string
- list
- set
- hash
- zset (sorted set)
Special data types
- geospatial (geolocation)
- hyperloglog (cardinality estimation)
- bitmap (bit operations)
string operations
- Set a value
- set key value
set name zhangsan
- Get a value
- get key
get name
- Delete a key
- del key
del name
- Append to the end of a string
- append key value
append name zhangsan
- Get the length of a value
- strlen key
strlen name
- Increment a numeric string by 1
- incr key
incr age
- Decrement a numeric string by 1
- decr key
decr age
- Increment a numeric string by a given amount
- incrby key num
incrby age 10
- Decrement a numeric string by a given amount
- decrby key num
decrby age 5
- Get a substring
- getrange key start end
Get characters 0 through 3 of the value stored at name
getrange name 0 3
Get the whole string
getrange name 0 -1
- Overwrite part of a string from a given offset
- setrange key index string
Overwrite the value of name starting at offset 1 with abc
setrange name 1 abc
- Set a key and its expiry time in one step
- setex key second value
setex name 5 zhangsan
- Set a key only if it does not exist
- setnx key value
If name exists nothing happens; otherwise name is set to zhangsan
setnx name zhangsan
- Set several keys at once
- mset key value …
mset k1 v1 k2 v2 k3 v3
- Get several values at once
- mget key1 key2 …
mget k1 k2 k3
- Set several keys at once, failing if any key already exists
- msetnx key value …
If k1 already exists the whole operation fails (it is atomic)
msetnx k1 v1 k2 abc
- A handy key-naming scheme
- user:{id}:{filed}
mset user:1:name zhangsan user:1:age 4
Fetch the values:
mget user:1:name user:1:age
- Get the old value while setting a new one
- getset key value
Returns the old value zhangsan and sets the new value lisi; returns nil if there was no old value
getset name lisi
list operations
- Push elements onto the left
- lpush key value …
Pushed from the left, so position 0 holds def:
def, abc
lpush list abc
lpush list def
Push several elements at once:
lpush list a b c d e
- Push elements onto the right
- rpush key value
Pushed from the right, so position 0 holds abc:
abc, def
rpush list abc
rpush list def
- View elements
- lrange key start stop
lrange list 0 -1
- Pop an element from the left
- lpop key
lpop list
- Pop an element from the right
- rpop key
rpop list
- Get an element by index
- lindex key index
lindex list 0
- Get the length of a list
- llen key
llen list
- Remove a given number of matching values
- lrem key count value
Remove 3 occurrences of a, scanning from the left:
lrem list 3 a
- Trim the list in place
- ltrim key start stop
Keep elements 0 through 5 and delete the rest:
ltrim list 0 5
- Pop the last element and push it onto another list
- rpoplpush key1 key2
rpoplpush list li
- Replace the value at a given index
- lset key index value
Raises an error if the index does not exist:
lset list 0 abc
- Insert before or after a given value
- linsert key before|after pivot value
Scanning left to right, insert b before the first a:
linsert list before a b
Scanning left to right, insert b after the first a:
linsert list after a b
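Because elements can be pushed and popped at either end, a list doubles as a stack or a queue; a small illustration (key names s and q are arbitrary):
# stack (LIFO): push and pop at the same end; returns c first
lpush s a b c
lpop s
# queue (FIFO): push at the left, pop at the right; returns a first
lpush q a b c
rpop q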
set operations
Values in a set are unordered and cannot repeat
- Add elements
- sadd key value …
sadd st a b c
- View elements
- smembers key
smembers st
- Get the number of elements in the set
- scard key
scard st
- Remove elements
- srem key member …
srem st a b c
- Get any number of random elements
- srandmember key [count]
srandmember st
srandmember st 2
- Remove any number of random elements
- spop key [count]
spop st 3
- Move an element from one set to another
- smove source destination member
Move element a from set st to set st2:
smove st st2 a
- Difference of sets
- sdiff key1 key2 …
Elements of st1 that do not appear in st2, st3, …:
sdiff st1 st2
- Intersection of sets
- sinter key1 key2 …
Elements shared by st1, st2, …:
sinter st1 st2
- Union of sets
- sunion key1 key2 …
sunion st1 st2
hash operations
- Add data
- hset key field value
Add a field name with value zhangsan to the hash hs:
hset hs name zhangsan
- Query data
- hget key field
hget hs name
- Add several fields at once
- hmset key field value [field value …]
hmset hs name zhangsan age 4
- Query several fields at once
- hmget key field [field …]
hmget hs name age
- Get all fields and values of a hash
- hgetall key
hgetall hs
- Delete fields from a hash
- hdel key field [field …]
hdel hs name
- Get the number of fields in a hash
- hlen key
hlen hs
- Check whether a field exists in a hash
- hexists key field
hexists hs name
- Get only the fields
- hkeys key
hkeys hs
- Get only the values
- hvals key
hvals hs
- Increment a numeric field by a given amount
- hincrby key field increment
Increase num by 5:
hincrby hs num 5
Decrease num by 5:
hincrby hs num -5
- Set a field only if it does not exist
- hsetnx key field value
hsetnx hs name lisi
zset operations
- Add data
- zadd key score member …
zadd zs 1 a 2 b 3 c 4 d
- View data (sorted by score, ascending)
- zrangebyscore key min max [WITHSCORES] [LIMIT offset count]
-inf and +inf stand for negative and positive infinity
In zs, list members with scores in [-∞, +∞], ascending:
zrangebyscore zs -inf +inf
Same, but include the scores:
zrangebyscore zs -inf +inf withscores
min and max form a closed interval [min, max]; prefix with ( for an open interval (min, max):
zrangebyscore zs (1 (3 withscores
- View data (sorted by score, descending)
- zrevrangebyscore key max min [WITHSCORES] [LIMIT offset count]
zrevrangebyscore zs +inf -inf withscores limit 0 2
- View data in a given index range
- zrange key start stop [WITHSCORES]
zrange zs 0 1
- View data in a given index range, reversed
- zrevrange key start stop [WITHSCORES]
zrevrange zs 0 -1
- Delete data
- zrem key member [member …]
zrem zs a
- Get the number of elements in the set
- zcard key
zcard zs
- Count elements whose scores fall in a given range
- zcount key min max
zcount zs 2 3
geospatial operations
- Add data
- geoadd key longitude latitude member [longitude latitude member …]
geoadd china:city 116.405285 39.904989 beijing
geoadd china:city 121.472644 31.231706 shanghai
geoadd china:city 106.504962 29.533155 chongqing
geoadd china:city 120.153576 30.287459 hangzhou
- Get the longitude and latitude of given cities
- geopos key member [member …]
geopos china:city beijing
- Get the straight-line distance between two cities
- geodist key member1 member2 [unit]
unit:
- m: meters
- km: kilometers
- mi: miles
- ft: feet
geodist china:city beijing shanghai
geodist china:city beijing shanghai km
- Find cities near a point
- georadius key longitude latitude radius m|km|ft|mi [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count] [ASC|DESC] [STORE key] [STOREDIST key]
Find cities within 800 km of longitude 110, latitude 30:
georadius china:city 110 30 800 km
- withcoord: also return each element's longitude and latitude
- withdist: also return each element's distance from the center, in the same unit as the query radius
- withhash: also return each element's geohash-encoded sorted-set score as a 52-bit signed integer; meant for low-level use or debugging, rarely useful otherwise
- sort options
- asc: return elements from nearest to farthest from the center
- desc: return elements from farthest to nearest from the center
- store key: save the matching elements' geolocation data to the given key
- storedist key: save the matching elements' distances from the center to the given key
Find cities within 800 km of longitude 110, latitude 30, with coordinates and distances, nearest first, returning the first two:
georadius china:city 110 30 800 km withcoord withdist count 2 asc
- Get the geohash of cities
- geohash key member [member …]
Returns 11-character strings: the two-dimensional coordinates reduced to strings, where more similar strings mean closer positions:
geohash china:city beijing shanghai
- geospatial is built on zset underneath, so zset commands work on it:
zrange china:city 0 -1
zrem china:city beijing
hyperloglog operations
- Add data
- pfadd key element [element …]
pfadd pf a b c d e
- Count the elements
- pfcount key [key …]
pfcount pf
- Merge sets
- pfmerge destkey sourcekey [sourcekey …]
Merge the data of pf1 and pf2 into pf3, deduplicated:
pfmerge pf3 pf1 pf2
bitmap operations
- Set a bit
- setbit key offset value
setbit bm 1 0
setbit bm 2 1
setbit bm 3 1
- Get a bit
- getbit key offset
getbit bm 3
- Count the bits set to 1
- bitcount key [start end]
bitcount bm
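A typical bitmap use is check-in tracking with one bit per day; the key name sign below is just an illustration:
# days 0-6 of one week, 1 = checked in
setbit sign 0 1
setbit sign 1 1
setbit sign 3 1
# checked in on day 3?
getbit sign 3
# number of days checked in this week
bitcount sign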
Transactions
Running a transaction
A redis transaction has three phases:
- Open the transaction (multi)
- Queue the commands
- Run the transaction (exec) / abandon it (discard)
multi
set k1 v1
set k2 v2
set k3 v3
get k2
exec
Errors
- Queue-time errors (a malformed command): none of the commands in the transaction are executed
multi
set k1
set k2 v2
exec
- Runtime errors (a command that fails on execution, e.g. applied to the wrong type): the queued commands still run, and only the failing command throws an error
multi
incr k1
set k2 1
incr k2
exec
Monitoring (watch)
- Pessimistic locking
Assumes something will go wrong at any moment, so a lock is taken no matter what
- Optimistic locking
Assumes nothing will ever go wrong, so no lock is taken; when updating, check whether anyone has modified the data in the meantime
Testing concurrent modification with the optimistic lock:
- Terminal 1
set money 100
set out 0
watch money
multi
decrby money 20
incrby out 20
- Terminal 2
multi
set money 50
exec
- Terminal 1
exec
Terminal 1's exec now fails.
watch serves as redis's optimistic lock: while a key is watched, redis tracks changes to it, and if the data is updated before exec, the transaction fails.
unwatch releases the watch.
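A minimal sketch of the retry loop this implies, written with the Jedis client introduced in the next section (assumes a local redis where money and out already hold numeric strings, as in the terminal example; in Jedis, Transaction.exec() returning null signals that a watched key changed):
Jedis jedis = new Jedis("127.0.0.1", 6379);
while (true) {
    jedis.watch("money");                  // start watching the key
    Transaction tx = jedis.multi();        // open the transaction
    tx.decrBy("money", 20);                // queue the updates
    tx.incrBy("out", 20);
    if (tx.exec() != null) {               // non-null result: committed
        break;
    }
    // null: another client changed money while we were queuing; retry
}
jedis.close();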
Using Jedis
Environment setup
pom.xml
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.71</version>
</dependency>
Basic usage
Jedis method names mirror the redis commands
Jedis jedis = new Jedis("127.0.0.1", 6379);
jedis.flushDB();
jedis.set("username", "zhangsan");
System.out.println(jedis.get("username"));
jedis.lpush("li", "1");
jedis.lpush("li", "2");
System.out.println(jedis.lrange("li", 0, -1));
Transactions
Jedis jedis = new Jedis("127.0.0.1", 6379);
JSONObject jsonObject = new JSONObject();
jsonObject.put("username", "zhangsan");
jedis.watch("user");
// open the transaction
Transaction transaction = jedis.multi();
try {
transaction.set("user", jsonObject.toJSONString());
// run the transaction
transaction.exec();
} catch (Exception e) {
// abandon the transaction
transaction.discard();
e.printStackTrace();
} finally {
jedis.close();
}
System.out.println(jedis.get("user"));
Spring Boot Redis integration
Environment setup
pom.xml
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
application.properties
spring.redis.host=localhost
spring.redis.port=6379
Basic operations
Database-level operations
// get redis's connection object
RedisConnection connection = redisTemplate.getConnectionFactory().getConnection();
connection.flushAll();
connection.flushDb();
Operations per data type
// handles for each data type
redisTemplate.opsForValue();
redisTemplate.opsForList();
redisTemplate.opsForSet();
redisTemplate.opsForHash();
redisTemplate.opsForZSet();
redisTemplate.opsForGeo();
redisTemplate.opsForHyperLogLog();
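A quick usage sketch of these handles (assuming a RedisTemplate<String, Object> injected elsewhere, e.g. via @Autowired):
// string ops: write then read a value
redisTemplate.opsForValue().set("username", "zhangsan");
Object name = redisTemplate.opsForValue().get("username");
// list ops: push an element, then read the whole list back
redisTemplate.opsForList().rightPush("li", "1");
List<Object> li = redisTemplate.opsForList().range("li", 0, -1);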
Customizing RedisTemplate
@Configuration
public class RedisConfig {
@Bean
public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
redisTemplate.setConnectionFactory(redisConnectionFactory);
Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
objectMapper.activateDefaultTyping(LaissezFaireSubTypeValidator.instance, ObjectMapper.DefaultTyping.NON_FINAL);
jackson2JsonRedisSerializer.setObjectMapper(objectMapper);
// string serializer
StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
// keys use the string serializer
redisTemplate.setKeySerializer(stringRedisSerializer);
// hash keys use the string serializer
redisTemplate.setHashKeySerializer(RedisSerializer.string());
// values use the jackson serializer
redisTemplate.setValueSerializer(jackson2JsonRedisSerializer);
// hash values use the jackson serializer
redisTemplate.setHashValueSerializer(jackson2JsonRedisSerializer);
redisTemplate.afterPropertiesSet();
return redisTemplate;
}
}
A RedisUtil helper class
@Component
public class RedisUtil {
@Autowired
private RedisTemplate<String, Object> redisTemplate;
/**
* Set a key's time to live
*
* @param key  the key
* @param time time in seconds
* @return
*/
public boolean expire(String key, long time) {
try {
if (time > 0) {
redisTemplate.expire(key, time, TimeUnit.SECONDS);
}
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Get a key's remaining time to live
*
* @param key the key; must not be null
* @return time in seconds; 0 means the key never expires
*/
public long getExpire(String key) {
return redisTemplate.getExpire(key, TimeUnit.SECONDS);
}
/**
* Check whether a key exists
*
* @param key the key
* @return true if it exists, false otherwise
*/
public boolean hasKey(String key) {
try {
return redisTemplate.hasKey(key);
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Delete cached keys
*
* @param key one or more keys
*/
@SuppressWarnings("unchecked")
public void del(String... key) {
if (key != null && key.length > 0) {
if (key.length == 1) {
redisTemplate.delete(key[0]);
} else {
redisTemplate.delete(CollectionUtils.arrayToList(key));
}
}
}
//============================String=============================
/**
* Plain cache read
*
* @param key the key
* @return the value
*/
public Object get(String key) {
return key == null ? null : redisTemplate.opsForValue().get(key);
}
/**
* Plain cache write
*
* @param key   the key
* @param value the value
* @return true on success, false on failure
*/
public boolean set(String key, Object value) {
try {
redisTemplate.opsForValue().set(key, value);
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Plain cache write with an expiry time
*
* @param key   the key
* @param value the value
* @param time  time in seconds; must be greater than 0, otherwise the key never expires
* @return true on success, false on failure
*/
public boolean set(String key, Object value, long time) {
try {
if (time > 0) {
redisTemplate.opsForValue().set(key, value, time, TimeUnit.SECONDS);
} else {
set(key, value);
}
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Increment
*
* @param key   the key
* @param delta amount to add (greater than 0)
* @return
*/
public long incr(String key, long delta) {
if (delta < 0) {
throw new RuntimeException("increment delta must be greater than 0");
}
return redisTemplate.opsForValue().increment(key, delta);
}
/**
* Decrement
*
* @param key   the key
* @param delta amount to subtract (greater than 0)
* @return
*/
public long decr(String key, long delta) {
if (delta < 0) {
throw new RuntimeException("decrement delta must be greater than 0");
}
return redisTemplate.opsForValue().increment(key, -delta);
}
//================================Map=================================
/**
* HashGet
*
* @param key  the key; must not be null
* @param item the field; must not be null
* @return the value
*/
public Object hget(String key, String item) {
return redisTemplate.opsForHash().get(key, item);
}
/**
* Get all fields and values of a hash
*
* @param key the key
* @return the field-value pairs
*/
public Map<Object, Object> hmget(String key) {
return redisTemplate.opsForHash().entries(key);
}
/**
* HashSet
*
* @param key the key
* @param map the field-value pairs
* @return true on success, false on failure
*/
public boolean hmset(String key, Map<String, Object> map) {
try {
redisTemplate.opsForHash().putAll(key, map);
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* HashSet with an expiry time
*
* @param key  the key
* @param map  the field-value pairs
* @param time time in seconds
* @return true on success, false on failure
*/
public boolean hmset(String key, Map<String, Object> map, long time) {
try {
redisTemplate.opsForHash().putAll(key, map);
if (time > 0) {
expire(key, time);
}
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Put a value into a hash, creating the hash if it does not exist
*
* @param key   the key
* @param item  the field
* @param value the value
* @return true on success, false on failure
*/
public boolean hset(String key, String item, Object value) {
try {
redisTemplate.opsForHash().put(key, item, value);
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Put a value into a hash, creating the hash if it does not exist
*
* @param key   the key
* @param item  the field
* @param value the value
* @param time  time in seconds; note: if the hash already has an expiry, it is replaced
* @return true on success, false on failure
*/
public boolean hset(String key, String item, Object value, long time) {
try {
redisTemplate.opsForHash().put(key, item, value);
if (time > 0) {
expire(key, time);
}
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Delete fields from a hash
*
* @param key  the key; must not be null
* @param item one or more fields; must not be null
*/
public void hdel(String key, Object... item) {
redisTemplate.opsForHash().delete(key, item);
}
/**
* Check whether a field exists in a hash
*
* @param key  the key; must not be null
* @param item the field; must not be null
* @return true if it exists, false otherwise
*/
public boolean hHasKey(String key, String item) {
return redisTemplate.opsForHash().hasKey(key, item);
}
/**
* Increment a hash field, creating it if it does not exist, and return the new value
*
* @param key  the key
* @param item the field
* @param by   amount to add (greater than 0)
* @return
*/
public double hincr(String key, String item, double by) {
return redisTemplate.opsForHash().increment(key, item, by);
}
/**
* Decrement a hash field
*
* @param key  the key
* @param item the field
* @param by   amount to subtract (greater than 0)
* @return
*/
public double hdecr(String key, String item, double by) {
return redisTemplate.opsForHash().increment(key, item, -by);
}
//============================set=============================
/**
* Get all values of a set
*
* @param key the key
* @return
*/
public Set<Object> sGet(String key) {
try {
return redisTemplate.opsForSet().members(key);
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
/**
* Check whether a value exists in a set
*
* @param key   the key
* @param value the value
* @return true if it exists, false otherwise
*/
public boolean sHasKey(String key, Object value) {
try {
return redisTemplate.opsForSet().isMember(key, value);
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Put values into a set
*
* @param key    the key
* @param values one or more values
* @return the number of values added
*/
public long sSet(String key, Object... values) {
try {
return redisTemplate.opsForSet().add(key, values);
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}
/**
* Put values into a set with an expiry time
*
* @param key    the key
* @param time   time in seconds
* @param values one or more values
* @return the number of values added
*/
public long sSetAndTime(String key, long time, Object... values) {
try {
Long count = redisTemplate.opsForSet().add(key, values);
if (time > 0) {
expire(key, time);
}
return count;
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}
/**
* Get the size of a set
*
* @param key the key
* @return
*/
public long sGetSetSize(String key) {
try {
return redisTemplate.opsForSet().size(key);
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}
/**
* Remove the given values from a set
*
* @param key    the key
* @param values one or more values
* @return the number of values removed
*/
public long setRemove(String key, Object... values) {
try {
Long count = redisTemplate.opsForSet().remove(key, values);
return count;
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}
//===============================list=================================
/**
* Get a range of a list
*
* @param key   the key
* @param start start index
* @param end   end index; 0 to -1 means all values
* @return
*/
public List<Object> lGet(String key, long start, long end) {
try {
return redisTemplate.opsForList().range(key, start, end);
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
/**
* Get the length of a list
*
* @param key the key
* @return
*/
public long lGetListSize(String key) {
try {
return redisTemplate.opsForList().size(key);
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}
/**
* Get a list element by index
*
* @param key   the key
* @param index the index; for index >= 0, 0 is the head, 1 the second element, and so on; for index < 0, -1 is the tail, -2 the second to last, and so on
* @return
*/
public Object lGetIndex(String key, long index) {
try {
return redisTemplate.opsForList().index(key, index);
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
/**
* Push a value onto a list
*
* @param key   the key
* @param value the value
* @return
*/
public boolean lSet(String key, Object value) {
try {
redisTemplate.opsForList().rightPush(key, value);
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Push a value onto a list with an expiry time
*
* @param key   the key
* @param value the value
* @param time  time in seconds
* @return
*/
public boolean lSet(String key, Object value, long time) {
try {
redisTemplate.opsForList().rightPush(key, value);
if (time > 0) {
expire(key, time);
}
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Push a whole list
*
* @param key   the key
* @param value the values
* @return
*/
public boolean lSet(String key, List<Object> value) {
try {
redisTemplate.opsForList().rightPushAll(key, value);
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Push a whole list with an expiry time
*
* @param key   the key
* @param value the values
* @param time  time in seconds
* @return
*/
public boolean lSet(String key, List<Object> value, long time) {
try {
redisTemplate.opsForList().rightPushAll(key, value);
if (time > 0) {
expire(key, time);
}
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Update a list element by index
*
* @param key   the key
* @param index the index
* @param value the value
* @return
*/
public boolean lUpdateIndex(String key, long index, Object value) {
try {
redisTemplate.opsForList().set(key, index, value);
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
/**
* Remove count occurrences of value from a list
*
* @param key   the key
* @param count how many to remove
* @param value the value
* @return the number removed
*/
public long lRemove(String key, long count, Object value) {
try {
Long remove = redisTemplate.opsForList().remove(key, count, value);
return remove;
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}
/**
* Get keys matching a pattern
*
* @param pattern
* @return
*/
public Set keys(String pattern) {
return redisTemplate.keys(pattern);
}
}
redis.conf settings
Units
# units are case insensitive
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
Network
# bind ip
bind 127.0.0.1
# protected mode
protected-mode yes
# change the port
port 6379
General
# whether to run as a daemon
daemonize no
# when running in the background, a pidfile must be specified
pidfile /var/run/redis_6379.pid
# log levels
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# log file location and name
logfile ""
# number of databases
databases 16
# whether to show the logo
always-show-logo yes
Snapshotting
If at least the given number of write operations happen within the given time window, the data is persisted to a file (.rdb / .aof)
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000
# whether to keep accepting writes if a background save fails
stop-writes-on-bgsave-error yes
# whether to compress rdb files; costs some cpu
rdbcompression yes
# checksum rdb files when saving
rdbchecksum yes
# directory where rdb files are stored
dir ./
Security
# set a password
requirepass 123456
# or via commands:
# get the password
config get requirepass
# set the password
config set requirepass 123456
# authenticate with the password
auth 123456
Limits
# maximum number of clients
maxclients 10000
# maximum memory
maxmemory <bytes>
# eviction policy once the memory limit is reached
# noeviction: evict nothing; if more memory is needed, just return an error. (the default)
# allkeys-lru: across all keys, evict the least recently used (LRU) keys first.
# volatile-lru: only among keys with an expire set, evict the least recently used (LRU) keys first.
# allkeys-random: across all keys, evict random keys.
# volatile-random: only among keys with an expire set, evict random keys.
# volatile-ttl: only among keys with an expire set, evict keys with the shortest time to live (TTL) first.
maxmemory-policy noeviction
APPEND ONLY (aof mode)
# disabled by default; rdb persistence is used instead
appendonly no
# name of the persistence file
appendfilename "appendonly.aof"
# fsync on every write
# appendfsync always
# fsync once per second
appendfsync everysec
# never fsync explicitly
# appendfsync no
- With aof enabled, every operation is recorded step by step as a log
- If the aof file gets corrupted, the bundled tool can repair it
# enable aof
appendonly yes
# once started, redis will create the appendonly.aof file
# if the aof file is corrupted, redis will fail to start; repair it with
redis-check-aof --fix appendonly.aof
Redis publish/subscribe
- Subscribe to channels
- subscribe channel [channel …]
subscribe msg
- Publish a message to a channel
- publish channel message
- Subscribe to channels matching one or more patterns
- psubscribe pattern [pattern …]
# subscribe to channels starting with m
psubscribe m*
# publish a message from another terminal
publish msg "hello, world"
- Inspect the state of the pub/sub system
- pubsub subcommand [argument [argument …]]
- Unsubscribe from all channels matching the given patterns
- punsubscribe [pattern [pattern …]]
- Unsubscribe only from the given channels
- unsubscribe [channel [channel …]]
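A minimal Jedis sketch of a subscriber (jedis.subscribe blocks; the channel msg matches the example above):
Jedis jedis = new Jedis("127.0.0.1", 6379);
jedis.subscribe(new JedisPubSub() {
    @Override
    public void onMessage(String channel, String message) {
        // invoked once per message published to the channel
        System.out.println(channel + ": " + message);
    }
}, "msg");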
Master-replica replication
Environment setup
Multiple redis instances can be set up with docker:
docker run -itd --name redis01 -p 8901:8901 -v /home/admin/redis/redis01.conf:/etc/redis/redis.conf -v /home/admin/redis/data01:/data redis redis-server /etc/redis/redis.conf
docker run -itd --name redis02 -p 8902:8902 -v /home/admin/redis/redis02.conf:/etc/redis/redis.conf -v /home/admin/redis/data02:/data redis redis-server /etc/redis/redis.conf
Since the port changes, pass -p to the redis-cli tool as well (-h sets the ip; it defaults to 127.0.0.1 and needs no change)
redis-cli -p 8901
- View the current instance's replication info
info replication
# role
role:master
# number of replicas
connected_slaves:0
Configuring a replica
- slaveof host port
slaveof 127.0.0.1 6379
Afterwards the master's info shows:
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=8901,state=online,offset=56,lag=0
slave1:ip=127.0.0.1,port=8902,state=online,offset=56,lag=0
Notes
- The master handles writes; replicas can only read
- If the master goes down, replicas can still serve their data
- If a replica disconnects and reconnects to the master, its data is synchronized immediately
- A replica configured by command reverts to master after a restart
To make it permanent, set it in the configuration file:
replicaof 127.0.0.1 6379
With docker, edit the external redis.conf and restart the container for it to take effect
Chained replication
M -> S -> S
While the master is healthy, the middle node acts as a replica; if the master goes down, it can be promoted to master and accept writes
A replica can run
slaveof no one
to turn itself into a master
Sentinel mode
Multi-sentinel mode: several sentinels monitor the redis servers and also monitor each other
- When sentinel 1 detects that the master is down, it does not fail over immediately; this is only sentinel 1's subjective view, called subjectively down
- When enough other sentinels also find the master unreachable, the sentinels vote, and the winning sentinel initiates the failover
- After the failover, publish/subscribe is used to make all replicas switch to the new master; the old master is now objectively down
Hands-on
Configuration
- Create a file sentinel.conf
- sentinel monitor [master-group-name] [ip] [port] [quorum]
- [master-group-name] is the name you choose for the monitored master
- [ip] is the ip of the master to connect to
- [port] is the redis port
- [quorum] is the number of sentinels that must agree a redis server is down, i.e. the minimum number of votes
sentinel monitor master 127.0.0.1 6379 1
- Configure redis.conf
- Set bind to 0.0.0.0 so redis accepts connections from any ip
- Set replicaof on the replicas to make those redis servers replicas
bind 0.0.0.0
replicaof 127.0.0.1 6379
With docker, these two settings are all that is needed
If running redis directly instead:
- Start several redis instances
redis-server redis01.conf
redis-server redis02.conf
redis-server redis03.conf
- Adjust these settings to avoid file clashes
port 8379
pidfile /var/run/redis_8379.pid
logfile "8379.log"
dbfilename dump8379.rdb
Startup
- Start the master and replica redis servers
docker start redis01
docker start redis02
docker start redis03
- Start the sentinel
redis-sentinel sentinel.conf
- Test
redis-cli -p 8379
info replication
Shut down the master, wait a few seconds, then check the replicas' info: one replica has been promoted to master and the others now follow it
Full configuration
# port the sentinel instance runs on, default 26379
port 26379
# sentinel working directory
dir /tmp
# ip and port of the redis master node this sentinel monitors
# master-name is a name of your choosing for the master node; it may only contain letters A-z, digits 0-9 and the characters ".-_"
# quorum: when this many sentinels consider the master unreachable, the master is objectively considered down
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
sentinel monitor mymaster 127.0.0.1 6379 2
# when requirepass foobared is enabled on the redis instances, every client connecting to them must supply the password
# set the password sentinel uses to connect to the master and replicas; note the master and replicas must use the same password
# sentinel auth-pass <master-name> <password>
sentinel auth-pass mymaster MySUPER--secret-0123passw0rd
# after this many milliseconds without a reply from the master, the sentinel subjectively considers it down; default 30 seconds
# sentinel down-after-milliseconds <master-name> <milliseconds>
sentinel down-after-milliseconds mymaster 30000
# how many replicas may synchronize with the new master at the same time during a failover; the smaller the number, the longer the failover takes, but the larger it is, the more replicas become unavailable because of the replication. Setting it to 1 guarantees that only one replica at a time is unable to serve requests.
# sentinel parallel-syncs <master-name> <numslaves>
sentinel parallel-syncs mymaster 1
# failover timeout; failover-timeout applies to:
# 1. the interval between two failovers of the same master by the same sentinel
# 2. the time counted from when a replica starts syncing from a wrong master until it is corrected to sync from the right one
# 3. the time needed to cancel an in-progress failover
# 4. the maximum time allowed, during a failover, to reconfigure all replicas to point to the new master; past this timeout the replicas are still eventually reconfigured correctly, just no longer according to the parallel-syncs rule
# default three minutes
# sentinel failover-timeout <master-name> <milliseconds>
sentinel failover-timeout mymaster 180000
# SCRIPTS EXECUTION
# scripts to run when a given event occurs, e.g. to e-mail an administrator when the system misbehaves.
# rules for script results:
# if the script exits with 1, it is retried later, currently up to 10 times by default
# if the script exits with 2 or higher, it is not retried
# if the script is terminated by a system interrupt signal while running, it behaves as if it had exited with 1
# a script may run for at most 60s; past that it is killed with SIGKILL and retried
# notification script: called whenever sentinel raises a warning-level event (for instance the subjective or objective failure of a redis instance); the script should notify the administrator, e.g. by e-mail or SMS. It is called with two arguments: the event type and the event description. If a script path is configured in sentinel.conf, the script must exist at that path and be executable, otherwise sentinel will fail to start.
# notification script
# sentinel notification-script <master-name> <script-path>
sentinel notification-script mymaster /var/redis/notify.sh
# client reconfiguration script
# called when a master changes because of a failover, to notify clients that the master address has changed.
# the script is called with the following arguments:
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# currently <state> is always "failover",
# <role> is either "leader" or "observer".
# from-ip, from-port, to-ip, to-port address the old master and the new master (the former replica)
# the script should be generic and callable multiple times, not target-specific.
# sentinel client-reconfig-script <master-name> <script-path>
sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
Redis cache penetration and avalanche
Cache penetration
The requested data is not in the redis cache, so the database is queried, but the data is not there either. If large numbers of users query data that does not exist, the database comes under heavy pressure; this is cache penetration.
Solutions
- Bloom filter
A bloom filter is a data structure that stores hashes of all the parameters that could be queried; requests are checked against it in the controller layer first and dropped if they cannot match, shielding the underlying database from the query load (a sketch follows after this list)
- Cache empty objects
On a storage-layer miss, cache the empty result anyway with an expiry time; later reads of that key are then served from the cache (a sketch follows after this list)
Problems:
- Storing empty values means the cache needs more space for the extra keys
- Even with an expiry on the empty value, if the database gains real data for that key before the cache catches up, reads will still return the empty value, which hurts business logic that needs consistency
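A bloom-filter gate can be sketched with Guava's com.google.common.hash.BloomFilter (the capacity 1000000 and false-positive rate 0.01 below are made-up example numbers):
BloomFilter<String> filter = BloomFilter.create(
        Funnels.stringFunnel(StandardCharsets.UTF_8), 1000000, 0.01);
filter.put("user:1");                  // preload every key that can legally be queried
if (!filter.mightContain("user:999")) {
    // definitely absent: reject before touching the cache or the database
}
And a sketch of caching empty objects with Jedis (queryDatabase is a hypothetical lookup; 60s is an arbitrary short expiry):
Jedis jedis = new Jedis("127.0.0.1", 6379);
String key = "user:999";
String value = jedis.get(key);
if (value == null) {
    value = queryDatabase(key);        // hypothetical database lookup
    if (value == null) {
        jedis.setex(key, 60, "");      // cache the miss briefly to absorb repeat queries
    } else {
        jedis.set(key, value);
    }
}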
Cache breakdown
At the instant a key expires, a large number of concurrent requests hit it. Such keys are usually hot data; because the cache has just expired, the requests all go to the database at once to fetch the latest value and write it back to the cache, putting the database under sudden heavy load.
Solutions
- Never expire hot keys
With no expiry time there is no post-expiry burst of concurrency on a hot key, but storage may fill up
- Mutual exclusion lock
Distributed lock: guarantee that for each key only one thread at a time queries the backend; the other threads simply wait because they cannot acquire the lock. This shifts the pressure of the high concurrency onto the distributed lock, which becomes the real bottleneck (a sketch follows below)
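A sketch of that mutex with Jedis (assuming Jedis 3.x's SetParams; the lock key name and the 10s expiry are illustrative):
String lockKey = "lock:user:999";
// SET ... NX EX acquires the lock atomically, with a timeout so it cannot leak
String ok = jedis.set(lockKey, "1", SetParams.setParams().nx().ex(10));
if ("OK".equals(ok)) {
    try {
        // only this thread queries the database and rebuilds the cache
    } finally {
        jedis.del(lockKey);            // release the lock
    }
} else {
    // lost the race: back off briefly, then re-read the cache
}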
Cache avalanche
A period in which many cached keys expire at once, or redis itself goes down.
On a sale day like Singles' Day, masses of products are loaded into the cache together, say for one hour. An hour later the whole batch expires at the same moment, every lookup for those products falls through to the database, and the database sees a periodic spike in load.
Solutions
- Redis high availability
Add more redis servers
- Rate limiting and degradation
After the cache expires, use locks or queues to bound the number of threads that read the database and rebuild the cache, for example allowing only one thread per key to query and write while the others wait
- Cache warming
Before going live, pre-access the likely data so that the heavily accessed keys are already loaded into the cache; just before a burst of traffic, trigger the loading of the various keys by hand and give them different expiry times so that expirations are spread out evenly
Appendix
redis.conf for version 5.0.7 (a mismatch with your own redis version may cause errors)
# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################## MODULES #####################################
# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
#
# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes
# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300
################################# GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised no
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
################################# REPLICATION #################################
# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# +------------------+ +---------------+
# | Master | ---> | Replica |
# | (receive writes) | | (exact copy) |
# +------------------+ +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition replicas automatically try to reconnect to masters
# and resynchronize with them.
#
# replicaof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>
# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
# SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
# COMMAND, POST, HOST: and LATENCY.
#
replica-serve-stale-data yes
# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
#
# repl-ping-replica-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
#
# repl-timeout 60
# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a replica connected.
#
# repl-backlog-size 1mb
# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.
# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
# IP: The address is auto detected by checking the peer address
# of the socket used by the replica to connect with the master.
#
# Port: The port is communicated by the replica during the replication
# handshake, and is normally the port that the replica is using to
# listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.
################################### CLIENTS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
############################## MEMORY MANAGEMENT ################################
# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5
# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes
############################# LAZY FREEING ####################################
# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
# in order to make room for new data, without going over the specified
# memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
# EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
# already exist. For example the RENAME command may delete the old key
# content when it is replaced with another one. Similarly SUNIONSTORE
# or SORT with STORE option may delete existing keys. The SET command
# itself removes any old content of the specified key in order to replace
# it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
# its master, the content of the whole database is removed in order to
# load the RDB file just transferred.
#
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives:
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
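#
# As a quick illustration (redis-cli commands, not configuration directives;
# "bigkey" is a hypothetical key name), the same non-blocking behavior can
# also be requested explicitly:
#
#   UNLINK bigkey      # like DEL, but the value is freed by a background thread
#   FLUSHDB ASYNC      # flush the current database without blocking
#   FLUSHALL ASYNC     # flush all databases without blocking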
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result in a few minutes of writes being lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled, on startup Redis will load the AOF, that is, the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
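#
# A minimal sketch of turning AOF on at runtime (redis-cli commands; editing
# this file instead requires a server restart to take effect):
#
#   CONFIG SET appendonly yes   # enable AOF on a running server
#   CONFIG REWRITE              # optionally persist the change back to this file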
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# For more details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync no". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file by implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the base size by the specified percentage, the rewrite is
# triggered. You also need to specify a minimal size for the AOF file to be
# rewritten; this is useful to avoid rewriting the AOF file even if the
# percentage increase is reached but the file is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
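#
# Worked example with the values above: if the AOF was 70mb after the last
# rewrite, a 100% growth triggers the next automatic BGREWRITEAOF at about
# 140mb; an AOF that starts at 10mb must first reach the 64mb minimum before
# any automatic rewrite happens.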
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user is required
# to fix the AOF file using the "redis-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle, the
# server will still exit with an error. This option only applies when Redis
# tries to read more data from the AOF file but not enough bytes are found.
aof-load-truncated yes
# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
# [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
aof-use-rdb-preamble yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that has not yet called any write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
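#
# Illustrative session (two redis-cli connections are assumed): a script
# stuck in an infinite loop can be stopped once the limit is exceeded:
#
#   EVAL "while true do end" 0   # client 1: blocks past lua-time-limit
#   SCRIPT KILL                  # client 2: works if no write was performed yet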
################################ REDIS CLUSTER ###############################
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node, enable cluster support by uncommenting the following:
#
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
# A replica of a failing master will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
# in order to try to give an advantage to the replica with the best
# replication offset (more data from the master processed).
# Replicas will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the replica will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large replica-validity-factor may allow replicas with data that is too old
# to failover a master, while a value that is too small may prevent the cluster
# from being able to elect a replica at all.
#
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10
# Cluster replicas are able to migrate to orphaned masters, that is, masters
# that are left without working replicas. This improves the cluster's ability
# to resist failures, as otherwise an orphaned master can't be failed over
# if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least one hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# is no longer covered) the whole cluster eventually becomes unavailable.
# It automatically becomes available again as soon as all the slots are covered.
#
# However sometimes you want the subset of the cluster which is working
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# This option, when set to yes, prevents replicas from trying to failover their
# master during master failures. However the master can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted except
# in the case of a total DC failure.
#
# cluster-replica-no-failover no
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
########################## CLUSTER DOCKER/NAT support ########################
# In certain deployments, Redis Cluster node address discovery fails because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following three options are used for this purpose:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# client port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usual.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to the length of the slow log. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
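#
# The slow log itself is inspected from redis-cli, for example:
#
#   SLOWLOG GET 10   # show the 10 most recent slow entries
#   SLOWLOG LEN      # number of entries currently stored
#   SLOWLOG RESET    # discard all entries and reclaim their memory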
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal to or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
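#
# Example runtime usage (a sketch; "command" is one of the event names the
# monitor may record, depending on the workload):
#
#   CONFIG SET latency-monitor-threshold 100
#   LATENCY LATEST            # latest and maximum spike per recorded event
#   LATENCY HISTORY command   # time series of spikes for one event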
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
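#
# A minimal end-to-end sketch of Example 2 above (two redis-cli connections,
# database 0 assumed; expired events are generated when Redis actually
# notices the expiry, so delivery may lag slightly):
#
#   CONFIG SET notify-keyspace-events Ex
#   SUBSCRIBE __keyevent@0__:expired     # client 1
#   SET foo bar EX 1                     # client 2: subscriber receives "foo"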
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
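#
# The effect can be observed with OBJECT ENCODING ("myhash" is a hypothetical
# key; encoding names may differ in newer Redis versions):
#
#   HSET myhash field value
#   OBJECT ENCODING myhash   # "ziplist" while below both limits above
#   (an entry longer than 64 bytes flips the encoding to "hashtable")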
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb <-- good
# -1: max size: 4 Kb <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2
# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
# going from either the head or tail"
# So: [head]->node->node->...->node->[tail]
# [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
# 2 here means: don't compress head or head->next or tail->prev or tail,
# but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit on the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
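#
# Illustration with OBJECT ENCODING ("myset" is a hypothetical key):
#
#   SADD myset 1 2 3
#   OBJECT ENCODING myset   # "intset" while all members are integers
#   SADD myset abc
#   OBJECT ENCODING myset   # "hashtable" once a non-integer member appears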
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When a HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run against a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with a 2 millisecond delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reaches 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously exceeds
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but only after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
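#
# These limits can also be changed at runtime; the whole class/limit triple
# is passed to CONFIG SET as one quoted argument, for example:
#
#   CONFIG SET client-output-buffer-limit "pubsub 64mb 16mb 90"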
# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to prevent a protocol desynchronization (for
# instance due to a bug in the client) from leading to unbounded memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as huge multi/exec requests or the like.
#
# client-query-buffer-limit 1gb
# In the Redis protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
#
# proto-max-bulk-len 512mb
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid processing too many clients for each background task invocation,
# which helps prevent latency spikes.
#
# Since the default HZ value is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporarily rise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes
# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performance and how the keys' LFU values change over
# time, which can be inspected via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key; its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
# +--------+------------+------------+------------+------------+------------+
# | 0 | 104 | 255 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 1 | 18 | 49 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 10 | 10 | 18 | 142 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 100 | 8 | 11 | 49 | 143 | 255 |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
# redis-benchmark -n 1000000 incr foo
# redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented, if it has a value
# <= 10).
#
# The default value for the lfu-decay-time is 1. A special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1
########################### ACTIVE DEFRAGMENTATION #######################
#
# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing it to reclaim memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the
# keys, will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
# to use the copy of Jemalloc we ship with the source code of Redis.
# This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
# issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
# needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.
# Enable active defragmentation
# activedefrag yes
# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10
# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100
# Minimal effort for defrag in CPU percentage
# active-defrag-cycle-min 5
# Maximal effort for defrag in CPU percentage
# active-defrag-cycle-max 75
# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000