ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development, 2022, Vol. 59, Issue (1): 31-46. doi: 10.7544/issn1000-1239.20200503

• System Architecture •

A Cache Replacement Policy Based on Instruction Flow Access Pattern Prediction

Wang Yuqing1,2, Yang Qiusong1, Li Mingshu1   

1. 1(National Engineering Research Center for Fundamental Software, Institute of Software, Chinese Academy of Sciences, Beijing 100190); 2(University of Chinese Academy of Sciences, Beijing 100049) (yuqing@nfs.iscas.ac.cn)
  • Online: 2022-01-01
  • Supported by: 
    This work was supported by the National Science and Technology Major Projects of Hegaoji (2014ZX01029101-002) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA-Y01-01).

Abstract: Traditional cache replacement policies are mainly based on heuristics. In recent years, researchers have applied prediction techniques to infer memory access behavior and improve the accuracy of cache replacement, and such prediction has become a focus of current research on replacement policies. Because memory access behavior is inherently complex, predicting it directly inside the cache system is difficult and carries considerable uncertainty. To cope with this, existing work relies on increasingly complicated prediction algorithms that analyze the correlations among memory accesses. However, these methods do not truly reduce the uncertainty, and existing replacement policies can hardly avoid the interference of out-of-order execution and cache prefetching with the analysis of memory access behavior. To solve these problems, we propose IFAPP (instruction flow access pattern prediction), a new approach to predicting the cache access sequence. IFAPP speculates the program's instruction flow with branch prediction, locates the loads and stores in that flow, and then predicts the behavior of each memory instruction individually. From the predicted access sequence, IFAPP computes the reuse distance of every replacement candidate and evicts the candidate with the largest reuse distance. The approach avoids the interference of out-of-order execution and cache prefetching, and its prediction targets are independent memory instructions with simple behavior, which reduces the uncertainty faced during prediction. The evaluation shows that IFAPP reduces L1 data cache (L1D) misses by 3.2% on average compared with LRU, and by 12.3% and 14.4% compared with the classic prediction-based BRRIP and BIP policies, respectively.
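
To make the replacement rule in the abstract concrete, the following Python sketch shows how a victim could be chosen once a predicted access sequence is available; in IFAPP that sequence would come from following the branch-predicted instruction flow and predicting each load/store address. This is an illustrative approximation under those assumptions, not the authors' implementation; names such as reuse_distance, choose_victim, and predicted_accesses are hypothetical.

# Illustrative sketch only: a Belady-style eviction step driven by a predicted
# access sequence, as described in the abstract. `predicted_accesses` stands in
# for the sequence IFAPP would derive from the speculated instruction flow;
# these names are hypothetical, not the paper's API.
from typing import List, Sequence

def reuse_distance(block: int, predicted_accesses: Sequence[int]) -> float:
    """Steps until `block` is referenced again in the predicted sequence (inf if never)."""
    for distance, addr in enumerate(predicted_accesses):
        if addr == block:
            return distance
    return float("inf")

def choose_victim(candidates: List[int], predicted_accesses: Sequence[int]) -> int:
    """Evict the candidate whose predicted reuse lies farthest in the future."""
    return max(candidates, key=lambda b: reuse_distance(b, predicted_accesses))

# Example: blocks resident in one cache set, and the access sequence predicted
# from the branch-predicted instruction flow (tags abstracted as integers).
resident = [0xA, 0xB, 0xC, 0xD]
future = [0xB, 0xA, 0xB, 0xD]
assert choose_victim(resident, future) == 0xC  # 0xC is never reused in the window

In a real cache this comparison would be made per set over a finite prediction window, so a block that never reappears in the window simply receives the maximum distance.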

Key words: branch prediction, cache replacement policy, ahead prediction, memory access sequence prediction, memory access pattern

CLC number: