ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2018, Vol. 55 ›› Issue (9): 2050-2065. doi: 10.7544/issn1000-1239.2018.20180269

Special Topic: 2018 Special Issue on Frontier Technologies of New Memory System Architectures

• System Architecture •

A Hierarchical DRAM/NVM Hybrid Memory System with Large-Page Support

Chen Ji, Liu Haikun, Wang Xiaoyuan, Zhang Yu, Liao Xiaofei, Jin Hai   

  1. (School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074) (Key Laboratory of Services Computing Technology and System (Huazhong University of Science and Technology), Ministry of Education, Wuhan 430074) (Hubei Key Laboratory of Cluster and Grid Computing (Huazhong University of Science and Technology), Wuhan 430074) (Hubei Laboratory of Big Data Technology and System Engineering (Huazhong University of Science and Technology), Wuhan 430074) (hkliu@hust.edu.cn)
  • Published: 2018-09-01
  • Supported by: 
    This work was supported by the National Key Research and Development Program of China (2017YFB1001603) and the National Natural Science Foundation of China (61672251, 61732010, 61628204).

Large-Page Supported Hierarchical DRAM/NVM Hybrid Memory Systems

Chen Ji, Liu Haikun, Wang Xiaoyuan, Zhang Yu, Liao Xiaofei, Jin Hai   

  1. (School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074) (Key Laboratory of Services Computing Technology and System (Huazhong University of Science and Technology), Ministry of Education, Wuhan 430074) (Cluster and Grid Computing Laboratory (Huazhong University of Science and Technology), Wuhan 430074) (Big Data Technology and System Laboratory (Huazhong University of Science and Technology), Wuhan 430074)
  • Online: 2018-09-01

Abstract: With the emergence of big-data applications, computer systems require much larger memory capacity to meet the demanding timeliness requirements of big-data processing. Hybrid memory systems that combine emerging non-volatile memory (NVM) with traditional dynamic random access memory (DRAM) offer large memory capacity and low power consumption, and have therefore attracted wide attention. Big-data applications also face a performance bottleneck caused by excessive translation lookaside buffer (TLB) miss rates. Large pages can effectively reduce the TLB miss rate; however, supporting large pages in hybrid memory suffers from the prohibitive cost of migrating large pages. We therefore design a hierarchical hybrid memory system that supports both large pages and a large-capacity cache: DRAM and NVM are managed with 4KB and 2MB pages, respectively, and the DRAM cache is direct-mapped to NVM. We design an access-frequency-based DRAM cache filtering mechanism to relieve bandwidth pressure, and propose a dynamic hotness-threshold adjustment policy based on runtime memory information to flexibly adapt to changes in applications' memory access patterns. Experiments show average performance improvements of 69.9% and 15.2% over an NVM-only system with large pages and the caching hot page (CHOP) scheme, respectively, while the average performance gap relative to a DRAM-only system with large pages is only 8.8%.
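The abstract's page-granularity scheme (2MB NVM large pages, 4KB DRAM cache blocks, direct mapping between them) can be sketched with a little address arithmetic. This is an illustrative model only; the function and parameter names are assumptions, not the paper's implementation:

```python
SMALL_PAGE = 4 * 1024          # DRAM cache block granularity (4 KB)
LARGE_PAGE = 2 * 1024 * 1024   # NVM management granularity (2 MB)

def cache_slot(nvm_addr, num_slots):
    """Map a physical NVM address to its direct-mapped DRAM cache slot.

    Each 4 KB-aligned NVM block maps to exactly one slot (block % num_slots);
    the remaining high bits serve as the tag that identifies which NVM block
    currently occupies the slot.
    """
    block = nvm_addr // SMALL_PAGE      # 4 KB block number within NVM
    return block % num_slots, block // num_slots

# A 2 MB large page spans 512 consecutive 4 KB cache blocks, so a large page
# can be cached piecewise without migrating the whole 2 MB at once.
assert LARGE_PAGE // SMALL_PAGE == 512
```

Direct mapping keeps the lookup a single modulo/shift with one tag compare per access, which is why it suits a hardware-managed DRAM cache better than an associative structure would.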

Key words: dynamic random access memory (DRAM), non-volatile memory (NVM), hybrid memory, large pages, cache filtering

Abstract: Hybrid memory systems composed of non-volatile memory (NVM) and DRAM can offer large memory capacity and DRAM-like performance. However, with increasing memory capacity and application footprints, address translation overhead becomes another system performance bottleneck due to lower translation lookaside buffer (TLB) coverage. Large pages can significantly improve TLB coverage; however, they impede fine-grained page migration in hybrid memory systems. In this paper, we propose a hierarchical hybrid memory system that supports both large pages and fine-grained page caching. We manage NVM and DRAM with large pages and small pages, respectively. DRAM is used as a cache to NVM through a direct mapping mechanism. We propose a cache filtering mechanism that fetches only frequently-accessed (hot) data into the DRAM cache; CPUs can still access cold data directly in NVM through a DRAM bypassing mechanism. We dynamically adjust the threshold of hot-data classification to adapt to the diverse and dynamic memory access patterns of applications. Experimental results show that our strategy improves application performance by 69.9% and 15.2% compared with an NVM-only system and the state-of-the-art CHOP scheme, respectively. The performance gap is only 8.8% compared with a DRAM-only memory system with large-page support.
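The filtering and bypassing behavior described above can be sketched as a small simulation: a block is served from NVM until its access count crosses a hotness threshold, at which point it is fetched into the DRAM cache, and the threshold itself is tuned from runtime statistics. All class, method, and policy details here are illustrative assumptions, not the paper's actual design:

```python
from collections import defaultdict

class HotFilter:
    """Sketch of access-frequency-based DRAM cache filtering.

    Cold blocks bypass the DRAM cache and are read directly from NVM;
    a block is promoted into DRAM only once its access count reaches
    the hotness threshold, which saves cache-fill bandwidth.
    """

    def __init__(self, threshold=4):
        self.threshold = threshold
        self.counts = defaultdict(int)   # per-block access counters
        self.cached = set()              # blocks currently in the DRAM cache

    def access(self, block):
        if block in self.cached:
            return "DRAM hit"
        self.counts[block] += 1
        if self.counts[block] >= self.threshold:
            self.cached.add(block)       # hot: fetch into the DRAM cache
            return "fetch to DRAM"
        return "NVM direct"              # cold: DRAM bypassing

    def adjust(self, hit_rate, low=0.3, high=0.8):
        # Illustrative dynamic-threshold policy: raise the threshold when
        # promoted blocks are rarely reused (low hit rate wastes bandwidth),
        # lower it when most promoted blocks are reused (high hit rate).
        if hit_rate < low:
            self.threshold += 1
        elif hit_rate > high and self.threshold > 1:
            self.threshold -= 1
```

A usage sketch: with `threshold=2`, the first access to a block returns `"NVM direct"`, the second promotes it (`"fetch to DRAM"`), and later accesses hit in DRAM, mirroring the hot/cold split the abstract describes.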

Key words: dynamic random access memory (DRAM), non-volatile memory (NVM), hybrid memory, large pages, cache filtering
