    Citation: Wu Linyang, Luo Rong, Guo Xueting, Guo Qi. Partitioning Acceleration Between CPU and DRAM: A Case Study on Accelerating Hash Joins in the Big Data Era[J]. Journal of Computer Research and Development, 2018, 55(2): 289-304. DOI: 10.7544/issn1000-1239.2018.20170842


    Partitioning Acceleration Between CPU and DRAM: A Case Study on Accelerating Hash Joins in the Big Data Era


      Abstract: Hardware acceleration has been very effective in improving the energy efficiency of existing computer systems. As traditional hardware accelerator designs (e.g., GPUs, FPGAs, and customized accelerators) remain decoupled from main memory systems, reducing the energy cost of data movement remains a challenging problem, especially in the big data era. The emergence of near-data processing enables accelerators to be integrated into 3D-stacked DRAM, greatly reducing the data movement cost. However, due to the stringent area, power, and thermal constraints of 3D-stacked DRAM, it is nearly impossible to integrate all the computation units required for a sufficiently complex functionality into the DRAM. Therefore, the memory-side accelerator should be designed with the partitioning of work between the CPU and the accelerator in mind. In this paper, we describe our experience with partitioning the acceleration of hash joins, a key operation in databases and big data systems, using a data-movement-driven approach on a hybrid system containing both memory-side customized accelerators and processor-side SIMD units. The memory-side accelerators are designed to accelerate execution phases that are bounded by data movement, while the processor-side SIMD units are employed to accelerate execution phases with negligible data movement cost. Experimental results show that the hybrid accelerated system improves energy efficiency by up to 47.52x and 19.81x compared with the Intel Haswell and Xeon Phi processors, respectively. Moreover, our data-movement-driven design approach can be easily extended to guide the design decisions of accelerating other emerging applications.
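
    To make the phase split described in the abstract concrete, below is a minimal, hypothetical C++ sketch of a radix-partitioned hash join. It is not code from the paper; the function names and parameters (e.g., kRadixBits) are illustrative assumptions. The partition (scatter) pass streams every tuple through memory once, which is the kind of data-movement-bound phase the abstract assigns to the memory-side accelerator, while the per-partition build/probe pass operates on cache-resident data and corresponds to the phase suited to processor-side SIMD units.

        #include <cstdint>
        #include <vector>

        // Minimal, hypothetical sketch of a radix-partitioned hash join,
        // used only to illustrate the phase split discussed in the abstract.

        struct Tuple { uint32_t key; uint32_t payload; };

        constexpr int kRadixBits = 10;                 // 2^10 partitions (illustrative choice)
        constexpr uint32_t kFanout = 1u << kRadixBits;

        inline uint32_t part_of(uint32_t key) { return key & (kFanout - 1); }

        // Phase 1: partition (scatter) a relation by the low-order key bits.
        // Every tuple is read and written once, so this phase is bounded by
        // data movement -- the kind of work mapped onto a memory-side accelerator.
        std::vector<std::vector<Tuple>> partition(const std::vector<Tuple>& rel) {
            std::vector<std::vector<Tuple>> parts(kFanout);
            for (const Tuple& t : rel) parts[part_of(t.key)].push_back(t);
            return parts;
        }

        // Phase 2: build a small hash table per partition and probe it.
        // Each partition fits in cache, so data movement is negligible and the
        // inner loop is amenable to SIMD -- the phase left on the processor side.
        uint64_t build_and_probe(const std::vector<Tuple>& r, const std::vector<Tuple>& s) {
            std::vector<std::vector<Tuple>> table(r.size() * 2 + 1);
            for (const Tuple& t : r) table[t.key % table.size()].push_back(t);
            uint64_t matches = 0;
            for (const Tuple& t : s)
                for (const Tuple& cand : table[t.key % table.size()])
                    if (cand.key == t.key) ++matches;
            return matches;
        }

        uint64_t hash_join(const std::vector<Tuple>& r, const std::vector<Tuple>& s) {
            auto rp = partition(r);
            auto sp = partition(s);
            uint64_t matches = 0;
            for (uint32_t p = 0; p < kFanout; ++p)
                matches += build_and_probe(rp[p], sp[p]);
            return matches;
        }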

       
