
Acceleration of Sparse Convolutional Neural Network Based on Coarse-Grained Dataflow Architecture

Wu Xinxin, Ou Yan, Li Wenming, Wang Da, Zhang Hao, Fan Dongrui

Citation: Wu Xinxin, Ou Yan, Li Wenming, Wang Da, Zhang Hao, Fan Dongrui. Acceleration of Sparse Convolutional Neural Network Based on Coarse-Grained Dataflow Architecture[J]. Journal of Computer Research and Development, 2021, 58(7): 1504-1517. DOI: 10.7544/issn1000-1239.2021.20200112. CSTR: 32373.14.issn1000-1239.2021.20200112

Funds: This work was supported by the National Natural Science Foundation of China (61732018, 61872335, 61802367, 61672499), the Strategic Priority Research Program of Chinese Academy of Sciences (XDC05000000), the International Partnership Program of Chinese Academy of Sciences (171111KYSB20170032), and the Innovation Project of the State Key Laboratory of Computer Architecture (CARCH4408, CARCH4412).
Details
  • CLC number: TP387

  • Abstract: Convolutional neural networks (CNNs) achieve good performance in image processing, speech recognition, natural language processing, and other fields. Large-scale neural network models often run up against computing and storage constraints, and sparse neural networks effectively relieve the demand for both. Although existing domain-specific accelerators can handle sparse networks effectively, they achieve high energy efficiency by tightly coupling the algorithm and the architecture, and thus lose architectural flexibility. A coarse-grained dataflow architecture can implement different neural network applications through flexible instruction scheduling. On this architecture, the regular computation pattern of dense convolution lets different channels share the same set of instructions. In sparse networks, however, the weights are sparse, so the shared instruction stream contains invalid instructions that depend on zero values, and the existing instruction execution mechanism cannot automatically skip them, which results in invalid computation. Moreover, when an irregular sparse network is executed, the existing instruction mapping method leaves the computing array with an unbalanced load. These problems hinder the performance of sparse networks. Keeping the premise that different channels share one set of instructions, we add an instruction control unit that, based on the data and instruction characteristics of sparse networks, detects and skips the instructions associated with zero-valued weights, and we use a load-balanced instruction mapping algorithm to resolve the uneven instruction execution across the computing array. Experiments show that, compared with the dense networks, the sparse networks achieve an average speedup of 1.55X and an energy reduction of 63.77%. They also run 2.39X (AlexNet) and 2.28X (VGG16) faster than the sparse networks on a GPU (cuSPARSE), and 1.14X (AlexNet) and 1.23X (VGG16) faster than on Cambricon-X.
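The skipping mechanism the abstract describes can be illustrated in code. Below is a minimal, hypothetical sketch, not the paper's implementation: the names (Instr, dispatch) and the data are invented for illustration. The idea is that every channel shares one instruction stream, and at dispatch time any multiply-accumulate whose weight operand is zero is detected and skipped rather than executed as a useless computation.

```python
# Hypothetical sketch of the instruction control unit's zero-skipping idea.
# Channels share one instruction stream; instructions whose weight operand
# is zero for a given channel are skipped at dispatch time.
from dataclasses import dataclass

@dataclass
class Instr:
    op: str          # "MAC" = multiply-accumulate
    weight_idx: int  # index of the weight this instruction reads

def dispatch(shared_instrs, weights, activations):
    """Run the shared instruction stream against one channel's weights,
    skipping every MAC whose weight operand is zero."""
    acc = 0.0
    executed = skipped = 0
    for ins in shared_instrs:
        w = weights[ins.weight_idx]
        if ins.op == "MAC" and w == 0.0:  # zero-value-related instruction
            skipped += 1                  # skip: no multiply, no accumulate
            continue
        acc += w * activations[ins.weight_idx]
        executed += 1
    return acc, executed, skipped

# One shared stream, two channels with different sparsity patterns.
stream = [Instr("MAC", i) for i in range(4)]
acts = [1.0, 2.0, 3.0, 4.0]
for ch_weights in ([0.5, 0.0, 0.0, 2.0], [0.0, 1.0, 0.0, 0.0]):
    print(dispatch(stream, ch_weights, acts))
```

Because the skip decision depends only on each channel's own weight values, the shared instruction stream never has to be rewritten per channel, which preserves the premise that all channels execute one set of instructions.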
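The load-imbalance problem can likewise be sketched as a scheduling exercise. The following shows one plausible greedy strategy (longest-processing-time-first); the paper's actual mapping algorithm is not reproduced here, and the nonzero counts and PE-array size are made-up illustrative values.

```python
# Hypothetical sketch of load-balanced instruction mapping: assign per-filter
# instruction blocks to processing elements (PEs) so each PE receives roughly
# the same number of effective (nonzero-weight) instructions.
import heapq

def map_to_pes(block_costs, num_pes):
    """block_costs[i] = effective instruction count of block i.
    Returns one list of block indices per PE."""
    heap = [(0, pe) for pe in range(num_pes)]  # (current load, PE id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_pes)]
    # Place the heaviest blocks first, always on the least-loaded PE.
    for blk in sorted(range(len(block_costs)), key=lambda i: -block_costs[i]):
        load, pe = heapq.heappop(heap)
        assignment[pe].append(blk)
        heapq.heappush(heap, (load + block_costs[blk], pe))
    return assignment

nnz = [90, 10, 55, 40, 70, 15, 60, 20]  # nonzeros per filter (illustrative)
for pe, blks in enumerate(map_to_pes(nnz, 4)):
    print(f"PE{pe}: blocks {blks}, load {sum(nnz[b] for b in blks)}")
```

Placing the heaviest blocks first and always giving the next block to the least-loaded PE keeps the maximum PE load close to the average, which is exactly the imbalance that an irregular sparsity pattern would otherwise create.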

Publication history
  • Published: 2021-06-30
