ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development, 2017, Vol. 54, Issue (6): 1348-1356. doi: 10.7544/issn1000-1239.2017.20170098

Special Topic: Frontier Technologies in Computer Architecture 2017 (Part 1)

• System Architecture •

Analysis and Practice of Data Representation Methods for Deep Convolutional Neural Networks

Wang Peiqi1,2, Gao Yuan1,2, Liu Zhenyu2, Wang Haixia2, Wang Dongsheng2

  1 (Department of Computer Science and Technology, Tsinghua University, Beijing 100084); 2 (Tsinghua National Laboratory for Information Science and Technology, Beijing 100084) (wpq14@mails.tsinghua.edu.cn)
  • Publication Date: 2017-06-01
  • Supported by:
    National Natural Science Foundation of China (61373025); National Key Research and Development Program of China (2016YFB1000303)

A Comparison Among Different Numeric Representations in Deep Convolution Neural Networks

Wang Peiqi1,2, Gao Yuan1,2, Liu Zhenyu2, Wang Haixia2, Wang Dongsheng2   

  1 (Department of Computer Science and Technology, Tsinghua University, Beijing 100084); 2 (Tsinghua National Laboratory for Information Science and Technology, Beijing 100084)
  • Online: 2017-06-01

Abstract: Deep convolutional neural networks have demonstrated remarkable performance in many fields and are widely deployed. As networks grow deeper and their structures become more complex, the demand for computation and memory resources keeps rising. Customized hardware can satisfy both demands, delivering high computational performance at low power, and can therefore be used in scenarios where general-purpose CPUs and GPUs are impractical. However, several problems remain to be solved in the design of such hardware, for example which numeric representation to choose and how to balance representation precision against hardware cost. To address these problems, we build error-analysis models for fixed-point and floating-point representations, analyze from a theoretical perspective how to choose the representation precision and how that choice affects network accuracy, and explore through experiments how different representations affect hardware cost. The theoretical analysis and experiments show that, in general, floating-point representation has an advantage in hardware cost when the same precision requirement is met. In addition, we implement the convolution operation in hardware based on the characteristics of floating-point representation; compared with the fixed-point version, power consumption and area are reduced by 92.9% and 77.2%, respectively.
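
The error models above contrast fixed-point representation, whose rounding error is bounded in absolute terms, with floating-point representation, whose rounding error is bounded relative to the magnitude of each value. A minimal sketch of that contrast is given below; it is not the paper's code, and the bit widths, formats, and synthetic weight distribution are illustrative assumptions only.

import numpy as np

def quantize_fixed(x, frac_bits, int_bits=2):
    # Round to a signed fixed-point grid: int_bits integer bits, frac_bits fraction bits (illustrative format).
    scale = 2.0 ** frac_bits
    lo, hi = -(2.0 ** int_bits), 2.0 ** int_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

def quantize_float(x, mant_bits):
    # Round the mantissa to mant_bits bits; the exponent range is assumed wide enough for the data.
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nz = x != 0
    e = np.floor(np.log2(np.abs(x[nz])))   # exponent (binade) of each value
    step = 2.0 ** (e - mant_bits)          # grid spacing inside that binade
    out[nz] = np.round(x[nz] / step) * step
    return out

# CNN weights and activations are typically zero-centered with small magnitude,
# which is exactly where a relative (floating-point) error bound pays off.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=100_000)

for name, q in [("fixed-point, 6 fraction bits", quantize_fixed(w, 6)),
                ("floating-point, 6 mantissa bits", quantize_float(w, 6))]:
    err = np.abs(q - w)
    print(f"{name}: mean |error| = {err.mean():.2e}, max |error| = {err.max():.2e}")

For zero-centered, small-magnitude weights such as these, the floating-point grid follows the data scale, so its rounding error stays proportional to each value while the fixed-point error stays at a constant absolute level, which is the intuition behind the comparison in the abstract.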

Keywords: deep convolutional neural network, data representation, floating-point data representation, fixed-point data representation, convolution operation optimization

Abstract: Deep convolutional neural networks have been widely used in industry as well as academia because of their outstanding performance. The trend toward deeper and more complex network structures leads to substantial demands on computation and memory resources. Customized hardware is an appropriate and feasible option: it maintains high performance at lower energy consumption and can be adopted in situations where CPUs and GPUs cannot be deployed. During hardware design, several problems must be addressed, such as which type of numeric representation to choose and at what precision. In this article, we focus on two typical numeric representations, fixed-point and floating-point, and propose corresponding error models. Using these models, we theoretically analyze how the choice of representation precision affects network accuracy, and we explore through experiments how different representations affect the hardware overhead. Remarkably, floating-point has clear advantages over fixed-point under ordinary circumstances: we verify through experiments that, when restricted to a given precision, floating-point numbers are superior in both hardware area and power consumption. Moreover, exploiting the features of floating-point representation, our customized hardware implementation of the convolution computation reduces power and area by 14.1× and 4.38×, respectively.
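
The hardware unit itself cannot be reproduced here, but the accuracy side of the precision trade-off can be sketched in software. The toy example below (not the authors' implementation; the half-precision storage format, tensor sizes, and data distributions are assumptions for illustration) stores the operands of a CNN-style convolution in a reduced-precision floating-point format and measures the relative error of the result against a double-precision reference.

import numpy as np

def conv2d_valid(x, k):
    # Direct "valid" 2-D convolution as used in CNNs (no kernel flip); readable reference only.
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(32, 32))    # toy input feature map
k = rng.normal(0.0, 0.05, size=(3, 3))     # toy convolution kernel

ref = conv2d_valid(x, k)                   # double-precision reference result
lowp = conv2d_valid(x.astype(np.float16).astype(np.float64),
                    k.astype(np.float16).astype(np.float64))

print("relative output error:",
      np.linalg.norm(lowp - ref) / np.linalg.norm(ref))

Raising or lowering the storage precision in this sketch changes the relative output error accordingly, which is the accuracy-versus-cost balance that the article analyzes for customized hardware.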

Key words: deep convolution neural network, numeric representation, floating-point computation, fixed-point computation, convolution optimization
