The approach has been used in production, but this component is shared purely for personal learning and exchange. GitHub: https://github.com/axinSoochow/redis-caffeine-cache-starter
My own skills are limited, so feel free to (gently) point out problems in the comments.
Caching means moving data from a slower medium to a faster one, e.g. from disk to memory.
Data is normally persisted on disk, for example in a database. If every read goes to the database, disk I/O limits the speed, which is why in-memory caches such as Redis exist: once the data has been loaded into memory, later requests can be answered directly from memory, which improves speed considerably.
However, Redis is usually deployed as a separate cluster, so every access still pays network I/O. Connection pools remove the cost of establishing connections, but transferring the data still has a cost. Hence in-process caches such as Caffeine: when the in-process cache already holds a suitable entry, it can be used directly without a round trip to Redis. Together they form a two-level cache: the in-process cache is the first level, and the remote cache (e.g. Redis) is the second level.
Redis stores the hot data; anything missing from Redis is fetched straight from the database.
Given that we already have Redis, why bother learning about in-process caches such as Guava or Caffeine?
In short, Redis alone satisfies most needs, but once you are chasing higher performance and higher availability, multi-level caching becomes something you have to understand.
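To make the read path concrete: a lookup tries Caffeine first, then Redis, then the database, back-filling the faster levels on the way out. The snippet below is a minimal hand-rolled sketch of that flow, not the component's API; the class name and the loadFromDb placeholder are purely illustrative.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.data.redis.core.RedisTemplate;

// Illustrative two-level read path (L1 = Caffeine, L2 = Redis); not the starter's actual API.
public class TwoLevelReadSketch {

    private final Cache<String, Object> caffeineCache =
            Caffeine.newBuilder().maximumSize(1_000).build();      // L1: in-process cache
    private final RedisTemplate<Object, Object> redisTemplate;     // L2: remote cache

    public TwoLevelReadSketch(RedisTemplate<Object, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public Object get(String key) {
        Object value = caffeineCache.getIfPresent(key);            // 1. try the local cache
        if (value != null) {
            return value;
        }
        value = redisTemplate.opsForValue().get(key);              // 2. try Redis
        if (value == null) {
            value = loadFromDb(key);                               // 3. fall back to the database
            redisTemplate.opsForValue().set(key, value);           //    populate L2
        }
        caffeineCache.put(key, value);                             //    back-fill L1
        return value;
    }

    private Object loadFromDb(String key) {
        return "value-from-db-for-" + key;                         // placeholder for the real DB query
    }
}
```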
Data read flow | Description |
---|---|
*(flow diagram omitted)* | When neither Redis nor the local cache contains the value, the update process is triggered; the entire update runs under a lock. |
Cache invalidation flow | Description |
---|---|
*(flow diagram omitted)* | Triggered whenever a cached key is updated or deleted in Redis: after the Redis entry is cleared, an invalidation message is published so that every node clears the key from its local cache. |
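On the receiving side, every node subscribes to a Redis topic and clears the affected key from its own Caffeine cache whenever a CacheMessage arrives. The starter ships its own listener for this; the sketch below only illustrates the idea, and apart from RedisCaffeineCacheManager.clearLocal (shown later) the names and getters are assumptions.

```java
import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.connection.MessageListener;
import org.springframework.data.redis.core.RedisTemplate;

// Illustrative listener: evicts the local Caffeine entry when another node publishes a CacheMessage.
// Assumes it lives alongside CacheMessage / RedisCaffeineCacheManager and that CacheMessage exposes
// getCacheName() / getKey(); the starter's real listener may differ.
public class CacheMessageListenerSketch implements MessageListener {

    private final RedisTemplate<Object, Object> redisTemplate;
    private final RedisCaffeineCacheManager cacheManager;

    public CacheMessageListenerSketch(RedisTemplate<Object, Object> redisTemplate,
                                      RedisCaffeineCacheManager cacheManager) {
        this.redisTemplate = redisTemplate;
        this.cacheManager = cacheManager;
    }

    @Override
    public void onMessage(Message message, byte[] pattern) {
        // deserialize the CacheMessage that push(...) published on the topic
        CacheMessage cacheMessage =
                (CacheMessage) redisTemplate.getValueSerializer().deserialize(message.getBody());
        // a null cacheName means "clear everything" (see clearLocal below)
        cacheManager.clearLocal(cacheMessage.getCacheName(), cacheMessage.getKey());
    }
}
```

A listener like this would typically be registered on the configured topic through a RedisMessageListenerContainer.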
The component is built on top of the Spring Cache framework. To use the distributed two-level cache in a project, all you need to do is add cacheManager = "L2_CacheManager" to the cache annotation, or reference the corresponding constant exposed by CacheRedisCaffeineAutoConfiguration.
//This method uses the distributed two-level cache for its lookups
@Cacheable(cacheNames = CacheNames.CACHE_12HOUR, cacheManager = "L2_CacheManager")
public Config getAllValidateConfig() {
    // ... load the config from the database and return it
}
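CacheNames is simply a constants holder for the cache buckets the starter knows about (15 minutes, 30 minutes, and so on). A plausible sketch is shown below; the literal values are assumptions, the real ones are defined in the repository.

```java
// Plausible shape of the CacheNames constants class; the literal values are assumptions.
public interface CacheNames {
    String CACHE_15MINS  = "cache:15m";
    String CACHE_30MINS  = "cache:30m";
    String CACHE_60MINS  = "cache:60m";
    String CACHE_180MINS = "cache:180m";
    String CACHE_12HOUR  = "cache:12h";
}
```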
If you want to keep the plain distributed (Redis-only) cache and also use the distributed two-level cache component, you need to register a @Primary CacheManager bean with Spring:
@Primary
@Bean("defaultCacheManager")
public RedisCacheManager cacheManager(RedisConnectionFactory factory) {
    // Create a default configuration; the config object is used to customise the cache
    RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();
    // Set the default TTL for entries (as a Duration) and do not cache null values
    config = config.entryTtl(Duration.ofMinutes(2)).disableCachingNullValues();
    // Initial set of cache names
    Set<String> cacheNames = new HashSet<>();
    cacheNames.add(CacheNames.CACHE_15MINS);
    cacheNames.add(CacheNames.CACHE_30MINS);
    // Apply a different configuration to each cache name
    Map<String, RedisCacheConfiguration> configMap = new HashMap<>();
    configMap.put(CacheNames.CACHE_15MINS, config.entryTtl(Duration.ofMinutes(15)));
    configMap.put(CacheNames.CACHE_30MINS, config.entryTtl(Duration.ofMinutes(30)));
    // Build a cacheManager with the custom configurations
    RedisCacheManager cacheManager = RedisCacheManager.builder(factory)
            // Note the call order: set the initial cache names first, then the per-cache configurations
            .initialCacheNames(cacheNames)
            .withInitialCacheConfigurations(configMap)
            .build();
    return cacheManager;
}
Then:
//This method uses the distributed two-level cache
@Cacheable(cacheNames = CacheNames.CACHE_12HOUR, cacheManager = "L2_CacheManager")
public Config getAllValidateConfig() {
    // ...
}
//This method uses the plain distributed (Redis) cache
@Cacheable(cacheNames = CacheNames.CACHE_12HOUR)
public Config getAllValidateConfig2() {
    // ...
}
The core of the implementation is simply implementing the org.springframework.cache.CacheManager interface and extending org.springframework.cache.support.AbstractValueAdaptingCache, which plugs the cache's read and write logic into the Spring Cache framework.
RedisCaffeineCacheManager mainly manages the cache instances: for each CacheName it creates the corresponding Cache bean and puts it into a map.
package com.axin.idea.rediscaffeinecachestarter.support;
import com.axin.idea.rediscaffeinecachestarter.CacheRedisCaffeineProperties;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.stats.CacheStats;
import lombok.extern.slf4j.Slf4j;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.util.CollectionUtils;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
@Slf4j
public class RedisCaffeineCacheManager implements CacheManager {
private final Logger logger = LoggerFactory.getLogger(RedisCaffeineCacheManager.class);
private static ConcurrentMap<String, Cache> cacheMap = new ConcurrentHashMap<>();
private CacheRedisCaffeineProperties cacheRedisCaffeineProperties;
private RedisTemplate<Object, Object> stringKeyRedisTemplate;
private boolean dynamic = true;
private Set<String> cacheNames;
{
cacheNames = new HashSet<>();
cacheNames.add(CacheNames.CACHE_15MINS);
cacheNames.add(CacheNames.CACHE_30MINS);
cacheNames.add(CacheNames.CACHE_60MINS);
cacheNames.add(CacheNames.CACHE_180MINS);
cacheNames.add(CacheNames.CACHE_12HOUR);
}
public RedisCaffeineCacheManager(CacheRedisCaffeineProperties cacheRedisCaffeineProperties,
RedisTemplate<Object, Object> stringKeyRedisTemplate) {
super();
this.cacheRedisCaffeineProperties = cacheRedisCaffeineProperties;
this.stringKeyRedisTemplate = stringKeyRedisTemplate;
this.dynamic = cacheRedisCaffeineProperties.isDynamic();
}
//——————————————————————— cache utility methods ——————————————————————
/**
 * Clear the in-process caches on all nodes (publishes an invalidation message on the Redis topic)
 */
public void clearAllCache() {
stringKeyRedisTemplate.convertAndSend(cacheRedisCaffeineProperties.getRedis().getTopic(), new CacheMessage(null, null));
}
/**
 * Return statistics for every in-process (first-level) cache
 * result: {"cache name": statistics}
 * @return
 */
public static Map<String, CacheStats> getCacheStats() {
if (CollectionUtils.isEmpty(cacheMap)) {
return null;
}
Map<String, CacheStats> result = new LinkedHashMap<>();
for (Cache cache : cacheMap.values()) {
RedisCaffeineCache caffeineCache = (RedisCaffeineCache) cache;
result.put(caffeineCache.getName(), caffeineCache.getCaffeineCache().stats());
}
return result;
}
//—————————————————————————— core —————————————————————————
@Override
public Cache getCache(String name) {
Cache cache = cacheMap.get(name);
if(cache != null) {
return cache;
}
if(!dynamic && !cacheNames.contains(name)) {
return null;
}
cache = new RedisCaffeineCache(name, stringKeyRedisTemplate, caffeineCache(name), cacheRedisCaffeineProperties);
Cache oldCache = cacheMap.putIfAbsent(name, cache);
logger.debug("create cache instance, the cache name is : {}", name);
return oldCache == null ? cache : oldCache;
}
@Override
public Collection<String> getCacheNames() {
return this.cacheNames;
}
public void clearLocal(String cacheName, Object key) {
//cacheName == null means clear all in-process caches
if (cacheName == null) {
log.info("clearing all local caches");
cacheMap = new ConcurrentHashMap<>();
return;
}
Cache cache = cacheMap.get(cacheName);
if(cache == null) {
return;
}
RedisCaffeineCache redisCaffeineCache = (RedisCaffeineCache) cache;
redisCaffeineCache.clearLocal(key);
}
/**
 * Instantiate the local first-level (Caffeine) cache
 * @param name
 * @return
 */
private com.github.benmanes.caffeine.cache.Cache<Object, Object> caffeineCache(String name) {
Caffeine<Object, Object> cacheBuilder = Caffeine.newBuilder();
CacheRedisCaffeineProperties.CacheDefault cacheConfig;
switch (name) {
case CacheNames.CACHE_15MINS:
cacheConfig = cacheRedisCaffeineProperties.getCache15m();
break;
case CacheNames.CACHE_30MINS:
cacheConfig = cacheRedisCaffeineProperties.getCache30m();
break;
case CacheNames.CACHE_60MINS:
cacheConfig = cacheRedisCaffeineProperties.getCache60m();
break;
case CacheNames.CACHE_180MINS:
cacheConfig = cacheRedisCaffeineProperties.getCache180m();
break;
case CacheNames.CACHE_12HOUR:
cacheConfig = cacheRedisCaffeineProperties.getCache12h();
break;
default:
cacheConfig = cacheRedisCaffeineProperties.getCacheDefault();
}
long expireAfterAccess = cacheConfig.getExpireAfterAccess();
long expireAfterWrite = cacheConfig.getExpireAfterWrite();
int initialCapacity = cacheConfig.getInitialCapacity();
long maximumSize = cacheConfig.getMaximumSize();
long refreshAfterWrite = cacheConfig.getRefreshAfterWrite();
log.debug("本地快取初始化:");
if (expireAfterAccess > 0) {
log.debug("設定本地快取存取後過期時間,{}秒", expireAfterAccess);
cacheBuilder.expireAfterAccess(expireAfterAccess, TimeUnit.SECONDS);
}
if (expireAfterWrite > 0) {
log.debug("設定本地快取寫入後過期時間,{}秒", expireAfterWrite);
cacheBuilder.expireAfterWrite(expireAfterWrite, TimeUnit.SECONDS);
}
if (initialCapacity > 0) {
log.debug("設定快取初始化大小{}", initialCapacity);
cacheBuilder.initialCapacity(initialCapacity);
}
if (maximumSize > 0) {
log.debug("設定本地快取最大值{}", maximumSize);
cacheBuilder.maximumSize(maximumSize);
}
if (refreshAfterWrite > 0) {
cacheBuilder.refreshAfterWrite(refreshAfterWrite, TimeUnit.SECONDS);
}
cacheBuilder.recordStats();
return cacheBuilder.build();
}
}
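As a small usage example, the static getCacheStats() above can be polled, for instance from a scheduled task, to keep an eye on local-cache hit rates; the wrapper below is only an illustration.

```java
import java.util.Map;

import com.axin.idea.rediscaffeinecachestarter.support.RedisCaffeineCacheManager;
import com.github.benmanes.caffeine.cache.stats.CacheStats;

// Illustrative consumer of RedisCaffeineCacheManager.getCacheStats(), e.g. for periodic monitoring.
public class CacheStatsLogger {

    public void logStats() {
        Map<String, CacheStats> stats = RedisCaffeineCacheManager.getCacheStats();
        if (stats == null) {
            return; // no cache instance has been created yet
        }
        stats.forEach((name, s) ->
                System.out.printf("cache=%s hitRate=%.2f evictions=%d%n",
                        name, s.hitRate(), s.evictionCount()));
    }
}
```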
The heart of RedisCaffeineCache is the get method and the put method.
package com.axin.idea.rediscaffeinecachestarter.support;
import com.axin.idea.rediscaffeinecachestarter.CacheRedisCaffeineProperties;
import com.github.benmanes.caffeine.cache.Cache;
import lombok.Getter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cache.support.AbstractValueAdaptingCache;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.util.StringUtils;
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
public class RedisCaffeineCache extends AbstractValueAdaptingCache {
private final Logger logger = LoggerFactory.getLogger(RedisCaffeineCache.class);
private String name;
private RedisTemplate<Object, Object> redisTemplate;
@Getter
private Cache<Object, Object> caffeineCache;
private String cachePrefix;
/**
 * Default key TTL: 3600 seconds
 */
private long defaultExpiration = 3600;
private Map<String, Long> defaultExpires = new HashMap<>();
{
defaultExpires.put(CacheNames.CACHE_15MINS, TimeUnit.MINUTES.toSeconds(15));
defaultExpires.put(CacheNames.CACHE_30MINS, TimeUnit.MINUTES.toSeconds(30));
defaultExpires.put(CacheNames.CACHE_60MINS, TimeUnit.MINUTES.toSeconds(60));
defaultExpires.put(CacheNames.CACHE_180MINS, TimeUnit.MINUTES.toSeconds(180));
defaultExpires.put(CacheNames.CACHE_12HOUR, TimeUnit.HOURS.toSeconds(12));
}
private String topic;
private Map<String, ReentrantLock> keyLockMap = new ConcurrentHashMap();
protected RedisCaffeineCache(boolean allowNullValues) {
super(allowNullValues);
}
public RedisCaffeineCache(String name, RedisTemplate<Object, Object> redisTemplate,
Cache<Object, Object> caffeineCache, CacheRedisCaffeineProperties cacheRedisCaffeineProperties) {
super(cacheRedisCaffeineProperties.isCacheNullValues());
this.name = name;
this.redisTemplate = redisTemplate;
this.caffeineCache = caffeineCache;
this.cachePrefix = cacheRedisCaffeineProperties.getCachePrefix();
this.defaultExpiration = cacheRedisCaffeineProperties.getRedis().getDefaultExpiration();
this.topic = cacheRedisCaffeineProperties.getRedis().getTopic();
defaultExpires.putAll(cacheRedisCaffeineProperties.getRedis().getExpires());
}
@Override
public String getName() {
return this.name;
}
@Override
public Object getNativeCache() {
return this;
}
@Override
public <T> T get(Object key, Callable<T> valueLoader) {
Object value = lookup(key);
if (value != null) {
return (T) value;
}
//the key exists in neither Redis nor the local cache
ReentrantLock lock = keyLockMap.get(key.toString());
if (lock == null) {
logger.debug("create lock for key : {}", key);
keyLockMap.putIfAbsent(key.toString(), new ReentrantLock());
lock = keyLockMap.get(key.toString());
}
try {
lock.lock();
value = lookup(key);
if (value != null) {
return (T) value;
}
//invoke the original (annotated) method to load the value
value = valueLoader.call();
Object storeValue = toStoreValue(value);
put(key, storeValue);
return (T) value;
} catch (Exception e) {
throw new ValueRetrievalException(key, valueLoader, e.getCause());
} finally {
lock.unlock();
}
}
@Override
public void put(Object key, Object value) {
if (!super.isAllowNullValues() && value == null) {
this.evict(key);
return;
}
long expire = getExpire();
logger.debug("put:{},expire:{}", getKey(key), expire);
redisTemplate.opsForValue().set(getKey(key), toStoreValue(value), expire, TimeUnit.SECONDS);
//notify the other nodes to clear this key from their local caches
push(new CacheMessage(this.name, key));
//putting the value into the local cache here would be pointless: this node also receives the invalidation message it just published for this key
// caffeineCache.put(key, value);
}
@Override
public ValueWrapper putIfAbsent(Object key, Object value) {
Object cacheKey = getKey(key);
// setIfAbsent performs an atomic set-if-not-exists in Redis
long expire = getExpire();
Boolean setSuccess = redisTemplate.opsForValue().setIfAbsent(cacheKey, toStoreValue(value), Duration.ofSeconds(expire));
Object hasValue;
//result of the SETNX: only the winning writer keeps its own value
if (Boolean.TRUE.equals(setSuccess)) {
push(new CacheMessage(this.name, key));
hasValue = value;
}else {
hasValue = redisTemplate.opsForValue().get(cacheKey);
}
caffeineCache.put(key, toStoreValue(value));
return toValueWrapper(hasValue);
}
@Override
public void evict(Object key) {
// Delete the Redis entry first and the Caffeine entry afterwards; if Caffeine were cleared first, a concurrent request could reload the not-yet-deleted value from Redis back into Caffeine
redisTemplate.delete(getKey(key));
push(new CacheMessage(this.name, key));
caffeineCache.invalidate(key);
}
@Override
public void clear() {
// Delete the Redis entries first and the Caffeine entries afterwards; if Caffeine were cleared first, a concurrent request could reload the not-yet-deleted values from Redis back into Caffeine
Set<Object> keys = redisTemplate.keys(this.name.concat(":*"));
for (Object key : keys) {
redisTemplate.delete(key);
}
push(new CacheMessage(this.name, null));
caffeineCache.invalidateAll();
}
/**
 * Lookup logic: local cache first, then Redis
 * @param key
 * @return
 */
@Override
protected Object lookup(Object key) {
Object cacheKey = getKey(key);
Object value = caffeineCache.getIfPresent(key);
if (value != null) {
logger.debug("從本地快取中獲得key, the key is : {}", cacheKey);
return value;
}
value = redisTemplate.opsForValue().get(cacheKey);
if (value != null) {
logger.debug("從redis中獲得值,將值放到本地快取中, the key is : {}", cacheKey);
caffeineCache.put(key, value);
}
return value;
}
/**
 * @description Clear the local (Caffeine) cache
 */
public void clearLocal(Object key) {
logger.debug("clear local cache, the key is : {}", key);
if (key == null) {
caffeineCache.invalidateAll();
} else {
caffeineCache.invalidate(key);
}
}
//———————————————————————————— private methods ——————————————————————————
private Object getKey(Object key) {
String keyStr = this.name.concat(":").concat(key.toString());
return StringUtils.isEmpty(this.cachePrefix) ? keyStr : this.cachePrefix.concat(":").concat(keyStr);
}
private long getExpire() {
long expire = defaultExpiration;
Long cacheNameExpire = defaultExpires.get(this.name);
return cacheNameExpire == null ? expire : cacheNameExpire.longValue();
}
/**
 * @description Notify the other nodes to clear their local caches when an entry changes
 */
private void push(CacheMessage message) {
redisTemplate.convertAndSend(topic, message);
}
}
Production systems today run on multiple nodes, so when a cache entry is invalidated on one node, the invalidation has to be propagated to the other nodes through some middleware. To keep the dependencies small for a learning/sharing component, this starter simply sends the message through Redis pub/sub; in real production it is more robust to swap in a mature message middleware (Kafka, RocketMQ) for the notifications.
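As an illustration of that swap, with spring-kafka the push()/listener pair could look roughly like the sketch below. The topic name, the consumer-group strategy and the CacheMessage getters are assumptions, not part of the component; the key point is that every node must consume the topic with its own consumer group, otherwise only one node would receive each eviction message.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Hypothetical Kafka-based replacement for the Redis pub/sub notification.
// Assumes a JSON (de)serializer is configured for CacheMessage and that it exposes getCacheName()/getKey().
@Component
public class KafkaCacheNotifier {

    private static final String TOPIC = "cache-evict";   // assumed topic name

    private final KafkaTemplate<String, CacheMessage> kafkaTemplate;
    private final RedisCaffeineCacheManager cacheManager;

    public KafkaCacheNotifier(KafkaTemplate<String, CacheMessage> kafkaTemplate,
                              RedisCaffeineCacheManager cacheManager) {
        this.kafkaTemplate = kafkaTemplate;
        this.cacheManager = cacheManager;
    }

    // would take the place of RedisCaffeineCache.push(...)
    public void push(CacheMessage message) {
        kafkaTemplate.send(TOPIC, message);
    }

    // each node listens with a unique consumer group so that every node sees the eviction
    @KafkaListener(topics = TOPIC, groupId = "cache-evict-${random.uuid}")
    public void onMessage(CacheMessage message) {
        cacheManager.clearLocal(message.getCacheName(), message.getKey());
    }
}
```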