    Citation: Zhou Enqiang, Zhang Wei, Lu Yutong, Hou Hongjun, Dong Yong. A Cache Approach for Large Scale Data-Intensive Computing[J]. Journal of Computer Research and Development, 2015, 52(7): 1522-1530. DOI: 10.7544/issn1000-1239.2015.20148073


    A Cache Approach for Large Scale Data-Intensive Computing

    • Abstract: As high-performance computers are increasingly applied to large-scale data processing, the storage system is becoming the main bottleneck constraining data-processing efficiency. Based on an analysis of several key factors affecting the I/O performance of data-intensive computing, this paper proposes building a cooperative non-volatile cache from the local storage of compute nodes, thereby using a distributed storage architecture to accelerate the centralized storage architecture. The approach coordinates the distributed local storage resources at the application layer and uses non-volatile storage media to form a large cache space that holds the intermediate results of large-scale data analysis, which yields a high cache hit ratio; it further employs measures such as concurrency-constraint control to avoid I/O contention and fully exploits the specific performance advantages of local storage to guarantee the caching speedup, effectively improving the I/O efficiency of large-scale data processing. Test results on multiple platforms and under multiple I/O patterns confirm the effectiveness of the approach: the aggregate I/O bandwidth is highly scalable, and the overall performance of a typical data-intensive application is improved by up to 6 times.

       

      Abstract: With HPC systems widely used in modern scientific computing, more and more data-intensive applications are generating and analyzing datasets of ever-increasing scale, which presents new challenges to HPC storage systems. By comparing different storage architectures and the corresponding file system approaches, a novel cache approach, named DDCache, is proposed to improve the efficiency of data-intensive computing. DDCache leverages the distributed storage architecture as a performance booster for the centralized storage architecture by fully exploiting the potential benefits of the node-local storage distributed across the system. To supply a much larger cache volume than a volatile memory cache, DDCache aggregates the node-local disks into a huge non-volatile cooperative cache. A high cache hit ratio is then achieved by keeping intermediate data in DDCache as long as possible during the overall execution of applications. To make node-local storage efficient enough to act as a data cache, a locality-aware data layout is used to keep cached data close to compute tasks and evenly distributed. Furthermore, concurrency control is introduced to throttle I/O requests flowing into or out of DDCache and thus retain the specific performance advantage of node-local storage. Evaluations on typical HPC platforms verify the effectiveness of DDCache: scalable I/O bandwidth is achieved in the well-known HPC scenario of checkpoint/restart, and the overall performance of a typical data-intensive application is improved by up to 6 times.
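
      The abstract describes the two key mechanisms of DDCache only at a high level: a locality-aware layout that spreads cached blocks evenly over node-local storage, and a concurrency limit that throttles I/O into each node's local store. The Python sketch below is a minimal illustration of those two ideas under stated assumptions; it is not the authors' implementation, and all names (CooperativeCache, MAX_CONCURRENT_IO, local_dir, put/get) are illustrative, not the paper's API.

import hashlib
import os
import threading

MAX_CONCURRENT_IO = 4          # assumed per-node budget of simultaneous disk requests

class CooperativeCache:
    """Illustrative sketch: the node-local directories of all compute nodes
    form one shared cache namespace; intermediate data is kept here between
    the producing and consuming stages of a data-intensive job."""

    def __init__(self, node_id, all_nodes, local_dir):
        self.node_id = node_id
        self.all_nodes = sorted(all_nodes)
        self.local_dir = local_dir
        # Concurrency control: at most MAX_CONCURRENT_IO threads touch the
        # local disk at once, so it keeps its latency advantage under load.
        self._io_slots = threading.Semaphore(MAX_CONCURRENT_IO)
        os.makedirs(local_dir, exist_ok=True)

    def _home_node(self, key):
        """Locality-aware layout (simplified): hash the key over all nodes so
        cached data is evenly distributed; a fuller design would first prefer
        the node whose task produced the data."""
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.all_nodes[digest % len(self.all_nodes)]

    def put(self, key, data):
        if self._home_node(key) != self.node_id:
            return False            # another node's local store owns this block
        with self._io_slots:        # admission control on local-disk writes
            with open(os.path.join(self.local_dir, key), "wb") as f:
                f.write(data)
        return True

    def get(self, key):
        if self._home_node(key) != self.node_id:
            return None             # would be fetched from the owning node over the interconnect
        with self._io_slots:        # admission control on local-disk reads
            with open(os.path.join(self.local_dir, key), "rb") as f:
                return f.read()

      In this sketch the hash keeps placement evenly spread, and the semaphore plays the role the abstract calls concurrency control, bounding the number of requests that hit a node-local disk at the same time; a full implementation along the lines the abstract describes would also serve remote hits over the network and spill to the centralized parallel file system when the cooperative cache is full.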

       

