
Accelerating Fully Connected Layers of Sparse Neural Networks with Fine-Grained Dataflow Architectures

Xiang Taoran, Ye Xiaochun, Li Wenming, Feng Yujing, Tan Xu, Zhang Hao, Fan Dongrui

向陶然, 叶笑春, 李文明, 冯煜晶, 谭旭, 张浩, 范东睿. 基于细粒度数据流架构的稀疏神经网络全连接层加速[J]. 计算机研究与发展, 2019, 56(6): 1192-1204. DOI: 10.7544/issn1000-1239.2019.20190117. CSTR: 32373.14.issn1000-1239.2019.20190117
Citation: Xiang Taoran, Ye Xiaochun, Li Wenming, Feng Yujing, Tan Xu, Zhang Hao, Fan Dongrui. Accelerating Fully Connected Layers of Sparse Neural Networks with Fine-Grained Dataflow Architectures[J]. Journal of Computer Research and Development, 2019, 56(6): 1192-1204. DOI: 10.7544/issn1000-1239.2019.20190117. CSTR: 32373.14.issn1000-1239.2019.20190117


Details
  • CLC number: TP387


Funds: This work was supported by the National Key Research and Development Plan of China (2018YFB1003501), the National Natural Science Foundation of China (61732018, 61872335, 61802367), the International Partnership Program of the Chinese Academy of Sciences (171111KYSB20170032), and the Innovation Project of the State Key Laboratory of Computer Architecture (CARCH3303, CARCH3407, CARCH3502, CARCH3505).
  • Abstract: Deep neural networks (DNNs) are state-of-the-art algorithms widely used in applications such as face recognition, intelligent monitoring, image recognition, and text recognition. Because of their high computational complexity, many hardware accelerators have been proposed to exploit a high degree of parallelism for DNNs. However, the fully connected layers of a DNN contain a large number of weight parameters, which places heavy demands on the accelerator's memory bandwidth. To relieve this bandwidth pressure, several DNN compression algorithms have been proposed. Yet accelerators implemented on FPGAs and ASICs usually sacrifice flexibility for higher performance and lower power consumption, which makes it difficult for them to run sparse neural networks, while more general accelerators such as GPUs pay for their flexibility with higher power consumption. Fine-grained dataflow architectures, which break the constraints of conventional control-flow architectures, show natural advantages for DNN-like algorithms: they offer high computational efficiency and low power consumption while retaining flexibility. In this paper, we propose a scheme for accelerating sparse DNN fully connected layers on a hardware accelerator based on a fine-grained dataflow architecture. Compared with the original dense fully connected layers, the scheme reduces the peak bandwidth requirement by 2.44×–6.17×. In addition, the utilization of the computational resources of the fine-grained dataflow accelerator when running sparse fully connected layers far exceeds that of implementations on other hardware platforms, being on average 43.15%, 34.57%, and 44.24% higher than on a CPU, GPU, and mGPU, respectively.
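The abstract argues that compressing the weight matrix of a fully connected layer is what relieves the accelerator's bandwidth pressure. As a rough, self-contained illustration of that point (a minimal sketch, not the paper's accelerator, pruning method, or storage format), the Python snippet below computes y = Wx once from the dense matrix and once from a CSR-encoded copy of a pruned matrix, then compares how many bytes of weight data each form has to fetch. The layer shape and the 20% density are arbitrary assumptions for the example.

```python
# Illustrative sketch only: dense vs. CSR-encoded fully connected layer y = W @ x.
# The byte counts tally weight traffic (values plus CSR index arrays) to show how
# pruning reduces the data an accelerator must stream in for each layer.
import numpy as np

def dense_fc(W, x):
    """Dense fully connected layer: every stored weight is fetched."""
    return W @ x

def csr_fc(values, col_idx, row_ptr, x, n_rows):
    """Fully connected layer over a CSR-encoded (pruned) weight matrix."""
    y = np.zeros(n_rows, dtype=values.dtype)
    for r in range(n_rows):
        lo, hi = row_ptr[r], row_ptr[r + 1]
        y[r] = np.dot(values[lo:hi], x[col_idx[lo:hi]])
    return y

rng = np.random.default_rng(0)
n_out, n_in, density = 512, 1024, 0.2            # hypothetical layer size and sparsity
W = rng.standard_normal((n_out, n_in)).astype(np.float32)
W[rng.random(W.shape) > density] = 0.0           # prune roughly 80% of the weights
x = rng.standard_normal(n_in).astype(np.float32)

# Build the CSR arrays (nonzero values, their column indices, per-row offsets).
values, col_idx, row_ptr = [], [], [0]
for row in W:
    nz = np.nonzero(row)[0]
    values.extend(row[nz]); col_idx.extend(nz); row_ptr.append(len(values))
values = np.asarray(values, dtype=np.float32)
col_idx = np.asarray(col_idx, dtype=np.int32)
row_ptr = np.asarray(row_ptr, dtype=np.int32)

assert np.allclose(dense_fc(W, x), csr_fc(values, col_idx, row_ptr, x, n_out),
                   rtol=1e-3, atol=1e-3)

dense_bytes = W.size * 4                                        # every fp32 weight read
sparse_bytes = values.nbytes + col_idx.nbytes + row_ptr.nbytes  # values + index metadata
print(f"weight traffic: dense {dense_bytes} B, CSR {sparse_bytes} B, "
      f"ratio {dense_bytes / sparse_bytes:.2f}x")
```

At 20% density the printed ratio lands near 2.5× rather than 5×, because the column indices and row pointers add their own traffic; in general the achievable saving depends on each layer's sparsity and on the metadata overhead of the chosen compressed format.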
  • Cited by journal articles (1)

    1. Chen Yubiao, Li Jianzhong, Li Yingshu. SBS: An Efficient R-Tree Query Algorithm Based on the Internal Parallelism of Solid-State Drives. Journal of Computer Research and Development. 2020(11): 2404-2418.

    Other citation types (6)

Metrics
  • Article views: 1640
  • Full-text HTML views: 10
  • PDF downloads: 792
  • Citations: 7
Publication history
  • Publication date: 2019-05-31
