ISSN 1000-1239 CN 11-1777/TP

• Systems Architecture •

### Analysis and Practice of Data Representation Methods in Deep Convolutional Neural Networks

1 (Department of Computer Science and Technology, Tsinghua University, Beijing 100084); 2 (Tsinghua National Laboratory for Information Science and Technology, Beijing 100084) (wpq14@mails.tsinghua.edu.cn)
• Published: 2017-06-01
• Supported by:
  the National Natural Science Foundation of China (61373025) and the National Key Research and Development Program of China (2016YFB1000303)

### A Comparison Among Different Numeric Representations in Deep Convolution Neural Networks

Wang Peiqi 1,2, Gao Yuan 1,2, Liu Zhenyu 2, Wang Haixia 2, Wang Dongsheng 2

1 (Department of Computer Science and Technology, Tsinghua University, Beijing 100084); 2 (Tsinghua National Laboratory for Information Science and Technology, Beijing 100084)
• Online: 2017-06-01

Abstract: Deep convolutional neural networks have been widely adopted in both industry and academia because of their outstanding performance. The trend toward deeper and more complex network structures leads to substantial demands on computation and memory resources. Customized hardware is an appropriate and feasible option for maintaining high performance at lower energy consumption, and it can also be deployed in situations where CPUs and GPUs cannot be placed. During hardware design, we need to address questions such as which type of numeric representation to choose and at what precision. In this article, we focus on two typical numeric representations, fixed-point and floating-point, and propose corresponding error models. Using these models, we theoretically analyze the influence of each type of data representation on the hardware overhead of neural networks. Remarkably, floating-point has clear advantages over fixed-point under ordinary circumstances. We verify through experiments that floating-point numbers limited to a certain precision are superior in both hardware area and power consumption. Moreover, by exploiting the features of floating-point representation, our customized hardware implementation of convolution computation reduces power and area by 14.1× and 4.38×, respectively.
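To illustrate why a scale-adaptive representation can win at an equal bit budget, here is a minimal Python sketch (not the paper's actual error models) that quantizes a batch of hypothetical zero-centered Gaussian weights, a common stand-in for trained CNN weights, onto an 8-bit fixed-point grid and into a floating-point format with an 8-bit significand, then compares the mean absolute quantization error:

```python
import math
import random

def quantize_fixed(x, frac_bits):
    """Round x to the nearest multiple of 2**-frac_bits (fixed-point grid)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def quantize_float(x, mant_bits):
    """Keep mant_bits bits of significand, so error scales with |x|."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)  # x = m * 2**e with 0.5 <= |m| < 1
    scale = 1 << mant_bits
    return round(m * scale) / scale * 2.0 ** e

random.seed(0)
# Hypothetical weights: small, zero-centered values (assumption, not the
# paper's data).
weights = [random.gauss(0.0, 0.1) for _ in range(10000)]

def mean_abs_err(quantizer):
    return sum(abs(w - quantizer(w)) for w in weights) / len(weights)

err_fixed = mean_abs_err(lambda w: quantize_fixed(w, 8))  # 8 fractional bits
err_float = mean_abs_err(lambda w: quantize_float(w, 8))  # 8 significand bits
print(f"fixed-point mean |error|:    {err_fixed:.2e}")
print(f"floating-point mean |error|: {err_float:.2e}")
```

Because the weights cluster near zero, the floating-point error, which is relative to each value's magnitude, typically comes out well below the fixed-point error, which is uniform across the whole range; this mirrors the advantage the abstract reports for precision-limited floating-point.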