    Wang Peiqi, Gao Yuan, Liu Zhenyu, Wang Haixia, Wang Dongsheng. A Comparison Among Different Numeric Representations in Deep Convolution Neural Networks[J]. Journal of Computer Research and Development, 2017, 54(6): 1348-1356. DOI: 10.7544/issn1000-1239.2017.20170098

    A Comparison Among Different Numeric Representations in Deep Convolution Neural Networks

    Deep convolution neural networks are widely used in both industry and academia because of their outstanding performance. The trend toward deeper and more complex network structures leads to substantial demands on computation and memory resources. Customized hardware is an appropriate and feasible option, as it maintains high performance at lower energy consumption. Furthermore, customized hardware can be adopted in special situations where CPUs and GPUs cannot be deployed. During the hardware design process, problems such as choosing the type of numeric representation and its precision must be addressed. In this article, we focus on two typical numeric representations, fixed-point and floating-point, and propose corresponding error models. Using these models, we theoretically analyze the influence of each representation on the hardware overhead of neural networks; notably, floating-point has clear advantages over fixed-point under ordinary circumstances. We verify through experiments that floating-point numbers, when limited to a certain precision, are superior in both hardware area and power consumption. Moreover, by exploiting the features of the floating-point representation, our customized hardware implementation of convolution computation reduces power and area by 14.1× and 4.38×, respectively.
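    To make the fixed-point versus reduced-precision floating-point comparison concrete, the following is a minimal, illustrative Python/NumPy sketch, not the paper's error model or hardware design: the bit widths, rounding scheme, and data distribution below are assumptions chosen only to show how the numerical error of the two representations can be compared on a toy convolution.

    # Illustrative sketch (assumptions, not the paper's method): compare the error
    # introduced by a signed fixed-point quantizer versus a reduced-mantissa
    # floating-point quantizer on a small 2-D convolution.
    import numpy as np

    def to_fixed_point(x, total_bits=8, frac_bits=4):
        """Round to a signed fixed-point grid with `frac_bits` fractional bits."""
        scale = 2.0 ** frac_bits
        qmin = -(2 ** (total_bits - 1))
        qmax = 2 ** (total_bits - 1) - 1
        q = np.clip(np.round(x * scale), qmin, qmax)
        return q / scale

    def to_low_precision_float(x, mantissa_bits=4):
        """Keep only `mantissa_bits` of mantissa; sign and exponent are preserved."""
        sign = np.sign(x)
        m, e = np.frexp(np.abs(x))            # |x| = m * 2**e, m in [0.5, 1)
        m_rounded = np.round(m * 2 ** mantissa_bits) / 2 ** mantissa_bits
        return sign * np.ldexp(m_rounded, e)

    def conv2d_valid(image, kernel):
        """Naive 'valid' 2-D convolution (cross-correlation) used as the workload."""
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    rng = np.random.default_rng(0)
    image = rng.normal(0.0, 1.0, (32, 32))    # assumed toy data distribution
    kernel = rng.normal(0.0, 0.5, (3, 3))

    reference = conv2d_valid(image, kernel)
    fixed_out = conv2d_valid(to_fixed_point(image), to_fixed_point(kernel))
    float_out = conv2d_valid(to_low_precision_float(image), to_low_precision_float(kernel))

    print("fixed-point RMSE    :", np.sqrt(np.mean((fixed_out - reference) ** 2)))
    print("low-prec float RMSE :", np.sqrt(np.mean((float_out - reference) ** 2)))

    With these (arbitrary) settings, the floating-point quantizer keeps a relative error that scales with each value's magnitude, while the fixed-point quantizer's error is bounded by a fixed step size and its range is clipped, which is the kind of trade-off the error models in the paper analyze.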