Ding Wenlong, Wang Chengning, Tong Wei. Energy-Efficient Floating-Point Memristive In-Memory Processing System Based on Self-Selective Mantissa Compaction[J]. Journal of Computer Research and Development, 2022, 59(3): 533-552. DOI: 10.7544/issn1000-1239.20210580
1(School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074)
2(Wuhan National Laboratory for Optoelectronics (Huazhong University of Science and Technology), Wuhan 430074)
Funds: This work was supported by the National Natural Science Foundation of China (61832007, 61821003), the Fundamental Research Funds for the Central Universities (2019kfyXMBZ037), and the Zhejiang Lab Open Fund (2020AA3AB07).
Matrix-vector multiplication (MVM) is a key computing kernel for solving high-performance scientific systems. Recent work by Feinberg et al. has proposed a method of deploying high-precision operands on memristive crossbars, showing great potential for accelerating scientific MVM. Since different types of scientific computing applications have different precision requirements, providing computation methods matched to a specific application is an effective way to further reduce energy consumption. This paper proposes a system with mantissa compaction and alignment optimization strategies. While preserving the basic function of high-precision floating-point memristive MVM, the proposed system can also select the number of compacted mantissa bits according to application precision requirements. By skipping the activation of the low-order crossbars that hold less-significant mantissa bits, as well as the redundant alignment crossbars, during computation, the energy consumption of the computational crossbars and peripheral circuits is significantly reduced. The evaluation results show that when the crossbar-based in-memory solutions of sparse linear systems achieve an average solving residual within the 0~10^{-3} order of magnitude relative to the software baseline, the average energy consumption of the computational crossbars and of the peripheral analog-to-digital converters is reduced by 5%~65% and 30%~55%, respectively, compared with existing work without these optimizations.
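To make the idea of mantissa compaction concrete, the following is a minimal software sketch, not the authors' implementation: it keeps only the top keep_bits of an IEEE-754 double's 52-bit mantissa (zeroing the rest), which models the numerical effect of not activating the low-significance crossbar slices during MVM. The function names (compact_mantissa, mvm_compacted) and the example matrix are illustrative assumptions.

```python
import struct

def compact_mantissa(x: float, keep_bits: int) -> float:
    """Zero out the (52 - keep_bits) least-significant mantissa bits of x.

    Models deploying only the high-order mantissa slices on crossbars.
    """
    assert 0 <= keep_bits <= 52
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]          # raw 64-bit pattern
    mask = ~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF   # clear low mantissa bits
    return struct.unpack("<d", struct.pack("<Q", bits & mask))[0]

def mvm_compacted(A, v, keep_bits):
    """Software model of crossbar MVM with mantissa-compacted matrix operands."""
    return [sum(compact_mantissa(a, keep_bits) * x for a, x in zip(row, v))
            for row in A]

# Example: retaining 24 mantissa bits stays close to the full-precision result,
# while fewer retained bits would trade accuracy for fewer activated crossbars.
A = [[3.141592653589793, 2.718281828459045],
     [1.414213562373095, 1.732050807568877]]
v = [0.5, -0.25]
print(mvm_compacted(A, v, 24))   # compacted-mantissa result
print(mvm_compacted(A, v, 52))   # full double-precision reference
```

Comparing the two printed vectors gives a rough picture of how the application-selected number of retained mantissa bits governs the residual against the full-precision baseline, which is the trade-off the proposed system exploits to save crossbar and ADC energy.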