    Citation: Wang Yuqing, Yang Qiusong, Li Mingshu. A Cache Replacement Policy Based on Instruction Flow Access Pattern Prediction[J]. Journal of Computer Research and Development, 2022, 59(1): 31-46. DOI: 10.7544/issn1000-1239.20200503

    A Cache Replacement Policy Based on Instruction Flow Access Pattern Prediction

      Abstract: Traditional cache replacement policies are mainly heuristic. In recent years, researchers have applied prediction techniques to infer memory access behavior and improve the accuracy of cache replacement, and the use of prediction has become a focus of current replacement-policy research. Because memory access behavior is inherently complex, predicting it directly inside the cache system is difficult and carries considerable uncertainty. Existing work attacks this problem with increasingly complex prediction algorithms that analyze correlations between accesses, yet this does not truly reduce the uncertainty, and existing policies also struggle to shield the analysis from the interference of out-of-order execution and cache prefetching. To address these problems, we propose IFAPP (instruction flow access pattern prediction), a new method for predicting the cache access sequence: it infers the program's instruction flow from branch prediction, locates the load and store instructions in that flow, and predicts the behavior of each of them individually. From the predicted access sequence, IFAPP computes the reuse distance of every replacement candidate and evicts the candidate with the largest reuse distance. The approach avoids interference from out-of-order execution and cache prefetching, and its prediction targets are individual memory instructions with simple behavior, which reduces the uncertainty faced during prediction. Experimental results show that, on the L1 data cache, IFAPP reduces cache misses by 3.2% on average compared with LRU, and by 12.3% and 14.4% compared with the classic prediction-based BRRIP and BIP policies, respectively.
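
      The eviction step described in the abstract (compute each replacement candidate's reuse distance against the predicted access sequence and evict the candidate whose next reuse is farthest away) can be illustrated with a minimal C++ sketch. This is not the paper's implementation: the function name select_victim, the block addresses, and the flat vector representation of the predicted sequence are illustrative assumptions.

        // Minimal sketch, not the authors' implementation: given a future access
        // sequence predicted by IFAPP-style instruction-flow analysis, evict the
        // candidate whose next reuse lies farthest in the future.
        #include <cstddef>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        // Returns the index of the block to evict from one cache set.
        // 'candidates' holds the block addresses currently resident in the set;
        // 'predicted' is the block-address sequence derived from the predicted
        // instruction flow (branch prediction -> load/store identification ->
        // per-instruction behavior prediction).
        std::size_t select_victim(const std::vector<std::uint64_t>& candidates,
                                  const std::vector<std::uint64_t>& predicted) {
            std::size_t victim = 0;
            std::size_t max_distance = 0;
            for (std::size_t i = 0; i < candidates.size(); ++i) {
                // Reuse distance = position of the next predicted reference;
                // a block never referenced again gets an "infinite" distance.
                std::size_t distance = predicted.size() + 1;
                for (std::size_t d = 0; d < predicted.size(); ++d) {
                    if (predicted[d] == candidates[i]) {
                        distance = d;
                        break;
                    }
                }
                if (distance > max_distance) {
                    max_distance = distance;
                    victim = i;
                }
            }
            return victim;
        }

        int main() {
            // Hypothetical resident blocks and predicted access sequence.
            std::vector<std::uint64_t> set_blocks = {0xA0, 0xB0, 0xC0, 0xD0};
            std::vector<std::uint64_t> predicted  = {0xC0, 0xA0, 0xC0, 0xD0};
            // 0xB0 never reappears in the predicted sequence, so index 1 is chosen.
            std::cout << "evict index: " << select_victim(set_blocks, predicted) << "\n";
            return 0;
        }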

       
