    Wu Linyang, Luo Rong, Guo Xueting, Guo Qi. Partitioning Acceleration Between CPU and DRAM: A Case Study on Accelerating Hash Joins in the Big Data Era[J]. Journal of Computer Research and Development, 2018, 55(2): 289-304. DOI: 10.7544/issn1000-1239.2018.20170842

    Partitioning Acceleration Between CPU and DRAM: A Case Study on Accelerating Hash Joins in the Big Data Era

    • Hardware acceleration has been very effective in improving the energy efficiency of existing computer systems. Because traditional hardware accelerator designs (e.g., GPUs, FPGAs, and customized accelerators) remain decoupled from the main memory system, reducing the energy cost of data movement remains a challenging problem, especially in the big data era. The emergence of near-data processing enables acceleration within 3D-stacked DRAM, greatly reducing the data movement cost. However, due to the stringent area, power, and thermal constraints of 3D-stacked DRAM, it is nearly impossible to integrate all the computation units required for a sufficiently complex functionality into the DRAM. Therefore, the memory-side accelerator must be designed with this partitioning between CPU and accelerator in mind. In this paper, we describe our experience with partitioning the acceleration of hash joins, a key functionality for databases and big data systems, using a data-movement-driven approach on a hybrid system that contains both memory-side customized accelerators and processor-side SIMD units. The memory-side accelerators are designed for accelerating execution phases that are bounded by data movement, while the processor-side SIMD units are employed for accelerating execution phases with negligible data movement cost. Experimental results show that the hybrid accelerated system improves energy efficiency by up to 47.52x and 19.81x compared with the Intel Haswell and Xeon Phi processors, respectively. Moreover, our data-movement-driven design approach can be easily extended to guide the design decisions of accelerating other emerging applications.
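    To make the phase structure behind the abstract concrete, here is a minimal, software-only C++ sketch of a partitioned hash join. It is illustrative only: the phase boundaries, the radix-partitioning scheme, and the suggested mapping of the partition phase to memory-side acceleration versus the build/probe phases to processor-side SIMD are assumptions for exposition, not the paper's actual design.

    // Minimal sketch of a partitioned hash join (illustrative assumptions,
    // not the paper's accelerator design).
    #include <cstdint>
    #include <vector>
    #include <unordered_map>

    struct Tuple { uint32_t key; uint32_t payload; };

    // Phase 1: radix partitioning. This phase streams the whole relation
    // through memory and is typically bounded by data movement -- the kind
    // of phase the abstract targets with memory-side (near-DRAM) acceleration.
    static std::vector<std::vector<Tuple>> partition(const std::vector<Tuple>& rel,
                                                     unsigned radix_bits) {
        const size_t fanout = size_t{1} << radix_bits;
        std::vector<std::vector<Tuple>> parts(fanout);
        for (const Tuple& t : rel)
            parts[t.key & (fanout - 1)].push_back(t);
        return parts;
    }

    // Phases 2 and 3: build and probe on small, cache-resident partitions.
    // These phases move little data per tuple and are the kind of
    // compute-heavy work the abstract assigns to processor-side SIMD units
    // (written here as plain scalar code for clarity).
    static size_t build_and_probe(const std::vector<Tuple>& r_part,
                                  const std::vector<Tuple>& s_part) {
        std::unordered_map<uint32_t, uint32_t> table;   // build on the smaller input
        table.reserve(r_part.size());
        for (const Tuple& t : r_part) table.emplace(t.key, t.payload);

        size_t matches = 0;
        for (const Tuple& t : s_part)                   // probe with the larger input
            matches += table.count(t.key);
        return matches;
    }

    size_t hash_join(const std::vector<Tuple>& R, const std::vector<Tuple>& S,
                     unsigned radix_bits = 10) {
        auto r_parts = partition(R, radix_bits);
        auto s_parts = partition(S, radix_bits);
        size_t matches = 0;
        for (size_t i = 0; i < r_parts.size(); ++i)
            matches += build_and_probe(r_parts[i], s_parts[i]);
        return matches;
    }

    The split above mirrors the data-movement-driven reasoning in the abstract: a phase whose cost is dominated by streaming tuples through memory is a candidate for near-data acceleration, while a phase that operates on cache-resident partitions is better served by processor-side SIMD.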
