ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2019, Vol. 56 ›› Issue (6): 1170-1181. doi: 10.7544/issn1000-1239.2019.20190111

Special Topic: 2019 Special Issue on Computer Architecture for Artificial Intelligence

• Systems Architecture •

Modeling Computational Feature of Multi-Layer Neural Network

Fang Rongqiang1, Wang Jing1,4, Yao Zhicheng2, Liu Chang1, Zhang Weigong3,4

  1(College of Information Engineering, Capital Normal University, Beijing 100048); 2(State Key Laboratory of Computer Architecture (Institute of Computing Technology, Chinese Academy of Sciences), Beijing 100190); 3(Beijing Engineering Research Center of High Reliable Embedded System (Capital Normal University), Beijing 100048); 4(Beijing Advanced Innovation Center for Imaging Theory and Technology (Capital Normal University), Beijing 100048) (zwg771@cnu.edu.cn)
  • Online: 2019-06-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61772350), the Common Information System Equipment Pre-research Funds (Open Project) (JZX2017-0988/Y300), the Beijing Nova Program (Z181100006218093), the Open Project of the State Key Laboratory of Computer Architecture (CARCH201607), the Research Fund of the Beijing Innovation Center for Future Chips (KYJJ2018008), the Construction Plan for Beijing High-Level Teacher Teams (CIT&TCD201704082), and the Capacity Building for Sci-Tech Innovation Service: Fundamental Scientific Research Funds (Research) (19530050173, 02518530500).


Abstract: With the successful application of deep learning algorithms in domains such as speech and image recognition, neural networks, which can effectively extract target features and make optimal decisions, have again attracted wide attention. However, as data volumes grow and accuracy requirements rise, the complexity of neural network models keeps increasing, so domain-specific hardware accelerators have become an effective way to run neural networks efficiently. How to design energy-efficient accelerators matched to the network scale, and how to improve network performance and maximize resource utilization under limited hardware resources, are therefore important research questions in computer architecture today. To this end, we propose a neural network analysis and optimization method based on computational features: typical neural network models are parsed at the granularity of a layer to extract a general expression of the model, and features such as the model's operation count and memory requirements are derived from the general expression and the attributes of its basic operations. We further propose a max-value replacement scheduling algorithm, which uses the extracted feature analysis to optimize the execution schedule of a neural network on given hardware resources. Experimental results show that the proposed method can effectively analyze and compare network features and guide the proposed scheduling algorithm to improve performance and system resource utilization.
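The layer-granularity feature extraction described in the abstract can be illustrated with a small sketch. The layer types, shape parameters, and the MAC/memory formulas below are our own illustrative assumptions, not the paper's exact general expressions:

```python
# Illustrative sketch: reduce each layer to a generic expression from which
# operation counts (MACs) and memory needs (in elements) are derived.
# The formulas and the toy network are assumptions for illustration only.

def conv_features(h, w, cin, cout, k, stride=1):
    """Features of a convolutional layer on an h x w x cin input."""
    oh, ow = (h - k) // stride + 1, (w - k) // stride + 1
    return {
        "macs": oh * ow * cout * cin * k * k,   # one MAC per kernel element per output
        "weights": cout * cin * k * k + cout,   # kernel parameters + biases
        "activations": oh * ow * cout,          # output feature map size
    }

def fc_features(n_in, n_out):
    """Features of a fully connected layer."""
    return {"macs": n_in * n_out,
            "weights": n_in * n_out + n_out,
            "activations": n_out}

# A toy LeNet-like network expressed layer by layer (pooling omitted).
net = [
    conv_features(28, 28, 1, 6, 5),   # conv1: 28x28x1 -> 24x24x6
    fc_features(6 * 24 * 24, 10),     # fc1: flattened map -> 10 classes
]
total_macs = sum(layer["macs"] for layer in net)
```

Summing such per-layer features gives the whole-model computation and memory demand that the scheduling step can then consume.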

Keywords: neural network, feature extraction, hardware accelerator, computer architecture, resource scheduling

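The max-value replacement scheduling strategy is only described at a high level; one plausible reading is a greedy heuristic that repeatedly takes the layer with the largest extracted computation demand and places it on the currently least-loaded compute unit. The sketch below is speculative (the function name, demand values, and the LPT-style greedy are our assumptions, not necessarily the paper's algorithm):

```python
import heapq

# Speculative sketch of a "max-value replacement" style schedule: layers are
# taken in decreasing order of computation demand, and each is assigned to
# the least-loaded compute unit (a classic longest-processing-time greedy).

def schedule(layer_macs, n_units):
    """Greedily assign heaviest layers first; return (assignment, makespan)."""
    loads = [0] * n_units
    heap = [(0, u) for u in range(n_units)]     # min-heap of (load, unit_id)
    heapq.heapify(heap)
    assignment = {}
    for layer, macs in sorted(layer_macs.items(), key=lambda kv: -kv[1]):
        load, unit = heapq.heappop(heap)        # current least-loaded unit
        assignment[layer] = unit
        loads[unit] = load + macs
        heapq.heappush(heap, (loads[unit], unit))
    return assignment, max(loads)

# Hypothetical per-layer demands (MACs) fed in from the feature-extraction step.
demands = {"conv1": 86400, "conv2": 240000, "fc1": 34560, "fc2": 10000}
plan, makespan = schedule(demands, 2)
```

Always replacing the current maximum-demand layer first keeps unit loads balanced, which matches the abstract's goal of maximizing utilization of limited hardware resources.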
