    Ji Rongrong, Lin Shaohui, Chao Fei, Wu Yongjian, Huang Feiyue. Deep Neural Network Compression and Acceleration: A Review[J]. Journal of Computer Research and Development, 2018, 55(9): 1871-1888. DOI: 10.7544/issn1000-1239.2018.20180129


    Deep Neural Network Compression and Acceleration: A Review


       

      Abstract: In recent years, deep neural networks (DNNs) have achieved remarkable success in many artificial intelligence (AI) applications, including computer vision, speech recognition, and natural language processing. However, these successes come with substantial computational and memory costs, which prohibits the deployment of DNNs in resource-limited environments such as mobile or embedded devices. To this end, research on DNN compression and acceleration has grown rapidly in recent years. In this paper, we review the existing representative DNN compression and acceleration methods, including parameter pruning, parameter sharing, low-rank decomposition, compact filter design, and knowledge distillation. Specifically, we first give an overview of classic DNN models, then describe each compression and acceleration method in detail and highlight its properties, advantages, and drawbacks. Furthermore, we summarize the evaluation criteria and datasets widely used in DNN compression and acceleration, and discuss the performance of representative methods. Finally, we discuss how to choose among the compression and acceleration methods according to the needs of different tasks, and envision future directions in this area.
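      To make the low-rank decomposition family mentioned above concrete, the following is a minimal illustrative sketch (not taken from the paper) of compressing a fully connected layer's weight matrix with a truncated SVD; the layer sizes and rank are hypothetical, chosen only to show the parameter-count reduction.

      # Illustrative sketch: low-rank decomposition of a fully connected
      # layer's weight matrix W via truncated SVD, so that W ≈ A @ B.
      # The matrix shape and rank below are hypothetical.
      import numpy as np

      def low_rank_factorize(W: np.ndarray, rank: int):
          """Factor W (m x n) into A (m x rank) and B (rank x n)."""
          U, S, Vt = np.linalg.svd(W, full_matrices=False)
          A = U[:, :rank] * S[:rank]   # absorb singular values into A
          B = Vt[:rank, :]
          return A, B

      # Example: a 1024 x 4096 layer compressed with rank 64.
      W = np.random.randn(1024, 4096).astype(np.float32)
      A, B = low_rank_factorize(W, rank=64)

      original_params = W.size             # 1024 * 4096 = 4,194,304
      compressed_params = A.size + B.size  # 1024*64 + 64*4096 = 327,680
      print(f"compression ratio: {original_params / compressed_params:.1f}x")
      print(f"relative error: {np.linalg.norm(W - A @ B) / np.linalg.norm(W):.3f}")

      In practice the rank trades off compression ratio against approximation error, and the factored layer is usually fine-tuned afterwards to recover accuracy.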

       
