Some server endpoints may receive duplicate requests. For query operations this is usually harmless, but for write operations a duplicate can have serious consequences for the business logic: a trading endpoint that is called twice, for example, may place the same order twice.
Here we use a servlet filter to inspect every incoming request and block repeated calls from the same client to the same endpoint.
// Imports assume a Spring Boot 2.x (javax.servlet) environment; on Spring Boot 3 use the jakarta.* equivalents.
// Hutool's DigestUtil provides the MD5 hash.
import cn.hutool.crypto.digest.DigestUtil;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.annotation.Resource;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.Objects;
import java.util.UUID;

@Slf4j
@Component
public class IRequestFilter extends OncePerRequestFilter {

    @Resource
    private FastMap fastMap;

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain chain) throws ServletException, IOException {
        ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        String address = attributes != null ? attributes.getRequest().getRemoteAddr() : UUID.randomUUID().toString();
        if (Objects.equals(request.getMethod(), "GET")) {
            // Build a fingerprint of the request and hash it with MD5
            StringBuilder str = new StringBuilder();
            str.append(request.getRequestURI()).append("|")
               .append(request.getRemotePort()).append("|")
               .append(request.getLocalName()).append("|")
               .append(address);
            String hex = DigestUtil.md5Hex(str.toString());
            log.info("MD5 of the request: {}", hex);
            if (fastMap.containsKey(hex)) {
                throw new IllegalStateException("Duplicate request, please try again later!");
            }
            // Reject the same fingerprint for the next 10 seconds
            fastMap.put(hex, 10 * 1000L);
            fastMap.expired(hex, 10 * 1000L, (key, val) ->
                    System.out.println("map: " + fastMap + ", removed key: " + key + ", thread: " + Thread.currentThread().getName()));
        }
        log.info("Request address: {}", address);
        chain.doFilter(request, response);
    }
}
By extending Spring's OncePerRequestFilter, we make sure the filter runs only once per request instead of being executed repeatedly.
The filter takes data from the request, computes an MD5 hash of it, and stores the hash in FastMap, a purely in-memory map. The key is the MD5 value and the value indicates for how long an identical request is rejected; here it is configured to 10 seconds. Calling FastMap's expired() method sets the entry's expiration time and a callback that is invoked when the entry expires.
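The expired() method takes an ExpireCallback, whose definition is not shown here. Judging from the way FastMap calls callback.onExpire(key, val), a minimal sketch of that functional interface (an assumption, not the author's original code) could look like this:

@FunctionalInterface
public interface ExpireCallback<K, V> {
    /**
     * Invoked once when the key expires and is removed from the map.
     */
    void onExpire(K key, V value);
}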
import org.springframework.stereotype.Component;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

@Component
public class FastMap {

    // expiration time (ms, derived from nanoTime) -> keys that expire at that time
    private final TreeMap<Long, List<String>> expireKeysMap = new TreeMap<>();
    // key -> stored value / expiration time
    private final Map<String, Long> keyExpireMap = new ConcurrentHashMap<>();
    // key -> callback to invoke when the key expires
    private final HashMap<String, ExpireCallback<String, Long>> keyExpireCallbackMap = new HashMap<>();

    private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private final Lock dataWriteLock = readWriteLock.writeLock();
    private final Lock dataReadLock = readWriteLock.readLock();

    private final ReentrantReadWriteLock expireKeysReadWriteLock = new ReentrantReadWriteLock();
    private final Lock expireKeysWriteLock = expireKeysReadWriteLock.writeLock();
    private final Lock expireKeysReadLock = expireKeysReadWriteLock.readLock();

    private volatile ScheduledExecutorService scheduledExecutorService;

    // nanoseconds per millisecond, used to convert System.nanoTime() to milliseconds
    private static final int ONE_MILLION = 1_000_000;

    public FastMap() {
        this.init();
    }

    private void init() {
        // Double-checked locking to create a single ScheduledExecutorService instance
        if (scheduledExecutorService == null) {
            synchronized (FastMap.class) {
                if (scheduledExecutorService == null) {
                    // Scheduler that removes expired keys; its delays are based on nanoTime,
                    // which is more reliable than Timer.schedule() against wall-clock changes
                    scheduledExecutorService = new ScheduledThreadPoolExecutor(1, runnable -> {
                        Thread thread = new Thread(runnable, "expireTask-" + UUID.randomUUID());
                        thread.setDaemon(true);
                        return thread;
                    });
                }
            }
        }
    }

    public boolean containsKey(Object key) {
        dataReadLock.lock();
        try {
            return this.keyExpireMap.containsKey(key);
        } finally {
            dataReadLock.unlock();
        }
    }

    public Long put(String key, Long value) {
        dataWriteLock.lock();
        try {
            return this.keyExpireMap.put(key, value);
        } finally {
            dataWriteLock.unlock();
        }
    }

    public Long remove(Object key) {
        dataWriteLock.lock();
        try {
            return this.keyExpireMap.remove(key);
        } finally {
            dataWriteLock.unlock();
        }
    }

    public Long expired(String key, Long ms, ExpireCallback<String, Long> callback) {
        // Lock the expiration bookkeeping for writing
        expireKeysWriteLock.lock();
        try {
            // Use nanoTime so wall-clock adjustments have no effect; convert to milliseconds
            // to keep the number of time buckets small (millisecond precision is enough here)
            Long expireTime = (System.nanoTime() / ONE_MILLION + ms);
            this.keyExpireMap.put(key, expireTime);
            List<String> keys = this.expireKeysMap.get(expireTime);
            if (keys == null) {
                keys = new ArrayList<>();
                keys.add(key);
                this.expireKeysMap.put(expireTime, keys);
            } else {
                keys.add(key);
            }
            if (callback != null) {
                // Remember the expiration callback for this key
                this.keyExpireCallbackMap.put(key, callback);
            }
            // Schedule the cleanup task so the expiration callback fires promptly.
            // Repeated calls with the same key schedule several cleanup runs, i.e. the cleanup
            // function executes more than once, but the callback still fires only once,
            // because it is driven by the stored expiration time and callback entry.
            scheduledExecutorService.schedule(this::clearExpireData, ms, TimeUnit.MILLISECONDS);
            // Expiration time in wall-clock terms, assuming the system clock is not changed
            return System.currentTimeMillis() + ms;
        } finally {
            expireKeysWriteLock.unlock();
        }
    }

    private void clearExpireData() {
        // Find the expired keys
        Long curTimestamp = System.nanoTime() / ONE_MILLION;
        Map<Long, List<String>> expiredKeysMap = new LinkedHashMap<>();
        expireKeysReadLock.lock();
        try {
            // Every key whose expiration time lies between the beginning and now has expired.
            // headMap(curTimestamp, true) returns the entries up to and including curTimestamp.
            SortedMap<Long, List<String>> sortedMap = this.expireKeysMap.headMap(curTimestamp, true);
            expiredKeysMap.putAll(sortedMap);
        } finally {
            expireKeysReadLock.unlock();
        }
        for (Map.Entry<Long, List<String>> entry : expiredKeysMap.entrySet()) {
            for (String key : entry.getValue()) {
                // Remove the data entry
                Long val = this.remove(key);
                // Only the first removal sees a non-null value (stored values are never null)
                if (val != null) {
                    // If an expiration callback is registered, invoke it
                    ExpireCallback<String, Long> callback;
                    expireKeysReadLock.lock();
                    try {
                        callback = this.keyExpireCallbackMap.get(key);
                    } finally {
                        expireKeysReadLock.unlock();
                    }
                    if (callback != null) {
                        // Run the callback on a new thread so a slow callback cannot block the cleanup.
                        // A thread pool is not used here because ScheduledThreadPoolExecutor only
                        // supports core threads, and one core thread is enough for the cleanup itself;
                        // adding extra core threads just for callbacks would be wasteful.
                        new Thread(() -> callback.onExpire(key, val), "callback-thread-" + UUID.randomUUID()).start();
                    }
                }
                this.keyExpireCallbackMap.remove(key);
            }
            this.expireKeysMap.remove(entry.getKey());
        }
    }
}
FastMap relies on a ScheduledExecutorService to run scheduled cleanup tasks that automatically remove entries once their expiration time has been reached.
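To see FastMap in isolation, the following standalone sketch (not part of the original code; it assumes the ExpireCallback interface sketched earlier) registers a key with a 10-second expiration, then checks the map before and after the entry expires:

public class FastMapDemo {
    public static void main(String[] args) throws InterruptedException {
        FastMap fastMap = new FastMap();
        String key = "d41d8cd98f00b204e9800998ecf8427e"; // an example MD5 value
        // Store the key and let it expire after 10 seconds, printing a message on expiration
        fastMap.put(key, 10 * 1000L);
        fastMap.expired(key, 10 * 1000L,
                (k, v) -> System.out.println("expired key: " + k + ", value: " + v));
        System.out.println("containsKey right away: " + fastMap.containsKey(key));  // true
        Thread.sleep(11 * 1000L);
        System.out.println("containsKey after 11 s: " + fastMap.containsKey(key));  // false
    }
}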
This concludes the article on intercepting duplicate requests in Spring Boot with a filter and an in-memory map.