    Ji Rongrong, Lin Shaohui, Chao Fei, Wu Yongjian, Huang Feiyue. Deep Neural Network Compression and Acceleration: A Review[J]. Journal of Computer Research and Development, 2018, 55(9): 1871-1888. DOI: 10.7544/issn1000-1239.2018.20180129

    Deep Neural Network Compression and Acceleration: A Review

    In recent years, deep neural networks (DNNs) have achieved remarkable success in many artificial intelligence (AI) applications, including computer vision, speech recognition, and natural language processing. However, this success has been accompanied by a significant increase in computational and storage costs, which prohibits the deployment of DNNs in resource-limited environments such as mobile or embedded devices. To this end, research on DNN compression and acceleration has recently attracted growing attention. In this paper, we provide a review of representative DNN compression and acceleration methods, including parameter pruning, parameter sharing, low-rank decomposition, compact filter design, and knowledge distillation. Specifically, this paper provides an overview of DNNs, describes the details of the different compression and acceleration methods, and highlights their respective properties, advantages, and drawbacks. Furthermore, we summarize the evaluation criteria and datasets widely used in DNN compression and acceleration, and discuss the performance of the representative methods. In the end, we discuss how to choose among compression and acceleration methods to meet the needs of different tasks, and envision future directions on this topic.
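
    The review itself presents no code, but two of the surveyed families are easy to illustrate. Below is a minimal NumPy sketch of magnitude-based parameter pruning (zeroing the smallest-magnitude weights) and of low-rank decomposition via truncated SVD; the weight shapes, sparsity level, and rank are illustrative assumptions, not values from the paper.

        import numpy as np

        def magnitude_prune(weights, sparsity):
            # Zero out the fraction `sparsity` of entries with the smallest
            # absolute values; returns pruned weights and the keep-mask.
            k = int(weights.size * sparsity)
            if k == 0:
                return weights.copy(), np.ones(weights.shape, dtype=bool)
            threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
            mask = np.abs(weights) > threshold
            return weights * mask, mask

        def low_rank_factors(W, rank):
            # Truncated SVD: approximate W (m x n) by A @ B with A (m x rank)
            # and B (rank x n), cutting parameters from m*n to rank*(m + n).
            U, s, Vt = np.linalg.svd(W, full_matrices=False)
            return U[:, :rank] * s[:rank], Vt[:rank]

        rng = np.random.default_rng(0)
        W = rng.standard_normal((256, 256))       # stand-in for one layer's weights
        W_pruned, mask = magnitude_prune(W, 0.9)  # keep roughly 10% of the weights
        A, B = low_rank_factors(W, rank=32)
        err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
        print(f"kept fraction: {mask.mean():.3f}, low-rank error: {err:.3f}")

    Both sketches only shrink a single weight matrix; in practice the surveyed methods combine such per-layer transforms with fine-tuning to recover accuracy.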