ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2019, Vol. 56 ›› Issue (6): 1170-1181.doi: 10.7544/issn1000-1239.2019.20190111

Special Issue: 2019 Special Issue on Computer Architecture for Artificial Intelligence


Modeling Computational Feature of Multi-Layer Neural Network

Fang Rongqiang1, Wang Jing1,4, Yao Zhicheng2, Liu Chang1, Zhang Weigong3,4   

  1. College of Information Engineering, Capital Normal University, Beijing 100048;
  2. State Key Laboratory of Computer Architecture (Institute of Computing Technology, Chinese Academy of Sciences), Beijing 100190;
  3. Beijing Engineering Research Center of High Reliable Embedded System (Capital Normal University), Beijing 100048;
  4. Beijing Advanced Innovation Center for Imaging Theory and Technology (Capital Normal University), Beijing 100048
  • Online:2019-06-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China(61772350), the Common Information System Equipment Pre-research Funds (Open Project) (JZX2017-0988/Y300), Beijing Nova Program (Z181100006218093), the Open Project of State Key Laboratory of Computer Architecture (CARCH201607), the Research Fund from Beijing Innovation Center for Future Chips (KYJJ2018008), the Construction Plan of Beijing High-level Teacher Team (CIT&TCD201704082), and the Capacity Building for Sci-Tech Innovation Fundamental Scientific Research Funds (19530050173, 025185305000).

Abstract: Deep neural networks (DNNs) have become an increasingly popular machine learning technique in applications, due to their ability to achieve high accuracy on tasks such as speech and image recognition. However, with the rapid growth in the scale of data and the precision of recognition, the topology of neural networks is becoming more and more complicated. Thus, designing energy-efficient and programmable neural network or deep learning accelerators plays an essential role in next-generation computers. In this paper, we propose a layer-granularity analysis method, which extracts computation-operation and memory-requirement features through general expressions and basic operation attributes. We also propose a max-value replacement scheduling strategy, which schedules computation hardware resources based on the network features we extract. Evaluation results show that our method can increase computational efficiency and lead to higher resource utilization.
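The abstract does not spell out the per-layer feature expressions, but the idea of extracting computation and memory requirements at layer granularity can be sketched with the standard multiply-accumulate (MAC) and parameter-count formulas for convolutional and fully connected layers. The `ConvLayer`/`FCLayer` names and the `profile` helper below are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ConvLayer:
    """Convolutional layer described by its shape parameters."""
    in_ch: int   # input channels
    out_ch: int  # output channels (number of filters)
    k: int       # square kernel size
    out_h: int   # output feature-map height
    out_w: int   # output feature-map width

    def ops(self) -> int:
        # One MAC per (output position, output channel, kernel element, input channel).
        return self.out_ch * self.out_h * self.out_w * self.in_ch * self.k * self.k

    def weights(self) -> int:
        # Parameter count (biases omitted for simplicity).
        return self.out_ch * self.in_ch * self.k * self.k

@dataclass
class FCLayer:
    """Fully connected layer: a dense in_dim x out_dim matrix."""
    in_dim: int
    out_dim: int

    def ops(self) -> int:
        return self.in_dim * self.out_dim  # one MAC per weight

    def weights(self) -> int:
        return self.in_dim * self.out_dim

def profile(layers):
    """Per-layer (type, MAC count, weight count) feature table."""
    return [(type(l).__name__, l.ops(), l.weights()) for l in layers]

if __name__ == "__main__":
    net = [ConvLayer(3, 16, 3, 32, 32), FCLayer(16 * 32 * 32, 10)]
    for name, ops, wts in profile(net):
        print(f"{name}: {ops} MACs, {wts} weights")
```

A scheduler in the spirit of the paper's strategy could then rank layers by these extracted features, e.g. assigning compute resources to the layer with the maximum MAC count first; the exact replacement policy is specific to the paper and not reproduced here.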

Key words: neural network, features extraction, hardware accelerator, computer architecture, resource scheduling
