ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development, 2017, Vol. 54, Issue (6): 1348-1356. doi: 10.7544/issn1000-1239.2017.20170098

Special Issue: 2017 Frontier Technologies in Computer Architecture (Part 1)


A Comparison Among Different Numeric Representations in Deep Convolution Neural Networks

Wang Peiqi (1,2), Gao Yuan (1,2), Liu Zhenyu (2), Wang Haixia (2), Wang Dongsheng (2)

  1. Department of Computer Science and Technology, Tsinghua University, Beijing 100084; 2. Tsinghua National Laboratory for Information Science and Technology, Beijing 100084
  • Online: 2017-06-01

Abstract: Deep convolution neural networks are widely used in both industry and academia because of their outstanding performance. The trend toward deeper and more complex network structures demands substantial computation and memory resources. Customized hardware is an appropriate and feasible option: it sustains high performance at lower energy consumption, and it can be deployed in special settings where CPUs and GPUs cannot be placed. Hardware design raises questions such as which numeric representation, and which precision, to choose. In this article, we focus on two typical numeric representations, fixed-point and floating-point, and propose corresponding error models. Using these models, we theoretically analyze how the choice of data representation affects the hardware overhead of neural networks; remarkably, floating-point holds clear advantages over fixed-point under ordinary circumstances. We verify through experiments that floating-point numbers limited to a suitable precision are superior in both hardware area and power consumption. Moreover, by exploiting the features of floating-point representation, our customized hardware implementation of convolution computation reduces power and area by 14.1× and 4.38×, respectively.
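The fixed-point versus floating-point trade-off the abstract describes can be illustrated numerically. The sketch below is our own illustration, not code from the paper: it uses NumPy to compare the rounding error of a 1-D convolution under an 8-bit fixed-point grid and under a reduced-mantissa floating-point model. The bit widths, the frexp/ldexp mantissa-rounding scheme, and the random test data are all assumptions chosen for demonstration, not the paper's error models.

    import numpy as np

    def quantize_fixed(x, total_bits=8, frac_bits=4):
        # Illustrative fixed-point model: round x to a signed grid with
        # `frac_bits` fractional bits, saturating to `total_bits` bits.
        scale = 2.0 ** frac_bits
        qmax = 2 ** (total_bits - 1) - 1
        qmin = -2 ** (total_bits - 1)
        q = np.clip(np.round(np.asarray(x, dtype=np.float64) * scale), qmin, qmax)
        return q / scale

    def quantize_float(x, mantissa_bits=4):
        # Illustrative low-precision floating-point model: round the
        # mantissa to `mantissa_bits` bits, keeping the exponent exact.
        m, e = np.frexp(np.asarray(x, dtype=np.float64))  # x = m * 2**e
        scale = 2.0 ** mantissa_bits
        return np.ldexp(np.round(m * scale) / scale, e)

    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 0.5, size=256)   # toy activations (assumed data)
    kernel = rng.normal(0.0, 0.5, size=9)     # toy convolution weights

    exact = np.convolve(signal, kernel, mode="valid")
    fx = np.convolve(quantize_fixed(signal), quantize_fixed(kernel), mode="valid")
    fl = np.convolve(quantize_float(signal), quantize_float(kernel), mode="valid")

    print("fixed-point RMS error  :", np.sqrt(np.mean((exact - fx) ** 2)))
    print("floating-point RMS err :", np.sqrt(np.mean((exact - fl) ** 2)))

Under this toy model, fixed-point rounding error is roughly constant across magnitudes, while floating-point error scales with the magnitude of each value, which is one intuition for why low-precision floating-point can fare better on data concentrated near zero, as convolution activations often are.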

Key words: deep convolution neural network, numeric representation, floating-point computation, fixed-point computation, convolution optimization
