    Wang Peiqi, Gao Yuan, Liu Zhenyu, Wang Haixia, Wang Dongsheng. A Comparison Among Different Numeric Representations in Deep Convolution Neural Networks[J]. Journal of Computer Research and Development, 2017, 54(6): 1348-1356. DOI: 10.7544/issn1000-1239.2017.20170098

    Analysis and Practice of Data Representation Methods in Deep Convolutional Neural Networks

    A Comparison Among Different Numeric Representations in Deep Convolution Neural Networks

    • Abstract: Deep convolutional neural networks have demonstrated remarkable performance in many fields and are widely deployed. As networks grow deeper and their structures become more complex, the demand for computation and storage resources keeps rising. Customized hardware can satisfy this dual demand for computation and memory, delivering high computational performance at low power, and can therefore be used in scenarios where general-purpose CPUs and GPUs are not feasible. Several problems remain to be solved in the design of such hardware, for example which numeric representation to choose and how to balance representation precision against hardware cost. To address these problems, this paper builds error analysis models for fixed-point and floating-point numbers, analyzes from a theoretical perspective how to choose the representation precision and how that choice affects network accuracy, and explores experimentally how different numeric representations affect hardware cost. Theoretical analysis and experiments show that, in general, floating-point representation holds an advantage in hardware cost when the same precision requirement is met. In addition, based on the characteristics of floating-point representation, the convolution operation in neural networks is implemented in hardware; compared with fixed-point, power consumption and area are reduced by 92.9% and 77.2%, respectively.

       

      Abstract: Deep convolution neural networks have been widely used in industry as well as academia because of their outstanding performance. Network structures are becoming deeper and more complex, which leads to a demand for substantial computation and memory resources. Customized hardware is an appropriate and feasible option: it maintains high performance at lower energy consumption, and it can be adopted in situations where general-purpose CPUs and GPUs cannot be deployed. During the hardware design process we need to address problems such as which type of numeric representation to choose and at what precision. In this article, we focus on two typical numeric representations, fixed-point and floating-point, and propose corresponding error models. Using these models, we theoretically analyze the influence of different types of data representation on the hardware overhead of neural networks. It is remarkable that floating-point has clear advantages over fixed-point under ordinary circumstances. We verify through experiments that floating-point numbers limited to a certain precision are superior in both hardware area and power consumption. Moreover, by exploiting the features of floating-point representation, our customized hardware implementation of the convolution computation reduces power and area by 14.1x and 4.38x, respectively.
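      As a rough illustration of the trade-off the abstract describes, the following Python sketch (not the authors' implementation; the bit widths, an 8-bit fixed-point format with 6 fractional bits and a floating-point format with a 4-bit mantissa, are assumptions chosen only for this example) quantizes weights and activations under both formats and compares the error of a convolution-style multiply-accumulate.

      # Minimal sketch: fixed-point vs. reduced-precision floating-point quantization
      # error on a dot product (the core of a convolution). Bit widths are illustrative
      # assumptions, not values taken from the paper.
      import numpy as np

      def to_fixed(x, frac_bits=6, total_bits=8):
          """Round x onto a signed fixed-point grid with `frac_bits` fractional bits."""
          scale = 2 ** frac_bits
          lo = -(2 ** (total_bits - 1))
          hi = 2 ** (total_bits - 1) - 1
          q = np.clip(np.round(x * scale), lo, hi)   # saturate to the representable range
          return q / scale

      def to_float(x, mant_bits=4):
          """Keep roughly `mant_bits` mantissa bits; the exponent adapts per value."""
          x = np.asarray(x, dtype=np.float64)
          out = np.zeros_like(x)
          nz = x != 0
          exp = np.floor(np.log2(np.abs(x[nz])))     # per-element exponent
          step = 2.0 ** (exp - mant_bits)            # quantization step near each value
          out[nz] = np.round(x[nz] / step) * step
          return out

      rng = np.random.default_rng(0)
      w = rng.normal(0, 0.05, 1024)   # small weights, typical after training
      a = rng.normal(0, 1.0, 1024)    # activations with a wider dynamic range

      exact = np.dot(w, a)
      fx = np.dot(to_fixed(w), to_fixed(a))
      fl = np.dot(to_float(w), to_float(a))
      print(f"exact={exact:.6f}  fixed-point err={abs(fx - exact):.6f}  float err={abs(fl - exact):.6f}")

      The intuition the sketch exposes is that a fixed-point grid has a fixed dynamic range, so large activations are clipped and small weights lose relative precision, whereas the per-value exponent of a floating-point format keeps the relative error roughly constant across magnitudes.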

       

