ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2015, Vol. 52 ›› Issue (7): 1522-1530. doi: 10.7544/issn1000-1239.2015.20148073

• Computer Architecture •




A Cache Approach for Large Scale Data-Intensive Computing

Zhou Enqiang1, Zhang Wei1, Lu Yutong1, Hou Hongjun2, Dong Yong1   

  1. State Key Laboratory of High Performance Computing (National University of Defense Technology), Changsha 410073; 2. Bureau of Geophysical Prospecting, China National Petroleum Corporation, Zhuozhou, Hebei 072751
  • Online: 2015-07-01


Abstract: With HPC systems widely used in modern scientific computing, more data-intensive applications are generating and analyzing datasets of increasing scale, which confronts HPC storage systems with new challenges. By comparing different storage architectures and their corresponding file-system approaches, a novel cache approach, named DDCache, is proposed to improve the efficiency of data-intensive computing. DDCache leverages the distributed storage architecture as a performance booster for the centralized storage architecture by fully exploiting the potential benefits of the node-local storage distributed across the system. To supply a much larger cache volume than a volatile memory cache, DDCache aggregates the node-local disks into a huge non-volatile cooperative cache. A high cache hit ratio is then achieved by keeping intermediate data in DDCache as long as possible during the overall processing of an application. To make the node-local storage efficient enough to act as a data cache, a locality-aware data layout is used to keep cached data close to compute tasks and evenly distributed. Furthermore, concurrency control is introduced to throttle I/O requests flowing into or out of DDCache and to regain the inherent performance advantage of node-local storage. Evaluations on typical HPC platforms verify the effectiveness of DDCache. Scalable I/O bandwidth is achieved in the well-known HPC scenario of checkpoint/restart, and the overall performance of a typical data-intensive application is improved by up to 6 times.
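The core mechanism described in the abstract — a node-local disk cache in front of centralized storage, with intermediate data kept local and a concurrency limit throttling I/O — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name `NodeLocalCache`, the directory parameters, and the `max_concurrent` semaphore bound are all hypothetical simplifications of the DDCache design.

```python
import os
import shutil
import threading

class NodeLocalCache:
    """Hypothetical sketch of the DDCache idea: cache_dir stands in for a
    compute node's local (non-volatile) storage, shared_dir for the
    centralized shared store, and a semaphore throttles concurrent I/O
    to avoid contention on the local disk."""

    def __init__(self, cache_dir, shared_dir, max_concurrent=4):
        self.cache_dir = cache_dir
        self.shared_dir = shared_dir
        os.makedirs(cache_dir, exist_ok=True)
        # Concurrency control: bound the number of in-flight I/O requests.
        self._io_slots = threading.Semaphore(max_concurrent)

    def _local(self, name):
        return os.path.join(self.cache_dir, name)

    def write(self, name, data):
        # Intermediate results stay on the local disk, so later reads
        # of the same data hit the cache instead of the shared store.
        with self._io_slots:
            with open(self._local(name), "wb") as f:
                f.write(data)

    def read(self, name):
        path = self._local(name)
        if not os.path.exists(path):
            # Cache miss: stage the file in from centralized storage.
            with self._io_slots:
                shutil.copy(os.path.join(self.shared_dir, name), path)
        # Cache hit (or freshly staged copy): serve from local disk.
        with open(path, "rb") as f:
            return f.read()

    def flush(self, name):
        # Write a final result back to the centralized store.
        with self._io_slots:
            shutil.copy(self._local(name), os.path.join(self.shared_dir, name))
```

In this sketch the semaphore plays the role of the paper's I/O concurrency constraint: no more than `max_concurrent` requests touch the local disk at once, so cache traffic does not degrade into random-access contention.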

Key words: data-intensive computing, cache, local storage, shared storage, seismic data processing