Shi Ruiwen, Li Guanghui, Dai Chenglong, Zhang Feifei. Feature-Oriented and Decoupled Network Structure Based Filter Pruning Method[J]. Journal of Computer Research and Development, 2024, 61(7): 1836-1849. DOI: 10.7544/issn1000-1239.202330085

Feature-Oriented and Decoupled Network Structure Based Filter Pruning Method

Funds: This work was supported by the National Natural Science Foundation of China (62072216) and the Science and Technology Program of Suzhou (SGC2021070).
  • Author Bio:

    Shi Ruiwen: born in 1999. Master. His main research interests include model compression and deep learning

    Li Guanghui: born in 1970. PhD, professor and PhD supervisor. Senior member of CCF. His main research interests include wireless sensor networks, model compression, and intelligent nondestructive detection technology

    Dai Chenglong: born in 1992. Lecturer. His main research interests include electroencephalogram processing and analysis, and model compression

    Zhang Feifei: born in 1982. Master. His main research interests include hardware-accelerated implementation of image processing algorithms and SoC chip design

  • Received Date: February 15, 2023
  • Revised Date: October 11, 2023
  • Available Online: April 09, 2024
  • Abstract: Many existing pruning methods for deep neural network models require modifying the loss function or embedding additional variables in the network, so they cannot directly benefit from a pre-trained network and they complicate both forward inference and training. Moreover, most feature-oriented pruning work to date uses only intra-channel information to analyze filter importance, leaving the potential connections among channels unexploited during pruning. To address these issues, we consider the feature-oriented filter pruning task from an inter-channel perspective. The proposed method uses geometric distance to measure the potential correlation among channels, formulates filter pruning as an optimization problem, and applies a greedy strategy to approximate the optimal solution. The method decouples pruning from the network structure and from training, thereby simplifying the pruning task. Extensive experiments demonstrate that the proposed method achieves high performance across various network structures; for example, on the CIFAR-10 dataset it reduces the number of parameters and floating-point operations of VGG-16 by 87.1% and 63.7%, respectively, while the pruned network still attains an accuracy of 93.81%. We also evaluate the proposed method with MobileFaceNets, a lightweight network, on the large CASIA-WebFace dataset: with the number of parameters and floating-point operations reduced by 58.0% and 63.6%, respectively, the pruned MobileFaceNets achieves an accuracy of 99.02% on the LFW dataset without loss of inference accuracy. (The code is available at https://github.com/SSriven/FOAD.)
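
    The abstract describes the method at a high level: channel correlation is measured with a geometric distance, pruning is cast as an optimization problem, and a greedy strategy approximates the optimum. The Python sketch below is a minimal illustration of one plausible variant of that idea, assuming Euclidean distance between flattened channel feature maps and a farthest-point greedy selection; the function names, calibration setup, and exact selection rule are illustrative assumptions, not the authors' released FOAD implementation (linked above).

```python
import numpy as np

def channel_distances(feature_maps: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distances between flattened channel maps.

    feature_maps: (C, H, W) activations of one convolutional layer,
    e.g. averaged over a small calibration batch (an assumption here).
    """
    flat = feature_maps.reshape(feature_maps.shape[0], -1)  # (C, D)
    diff = flat[:, None, :] - flat[None, :, :]              # (C, C, D)
    return np.linalg.norm(diff, axis=-1)                    # (C, C)

def greedy_keep_set(feature_maps: np.ndarray, keep: int) -> list:
    """Greedily pick `keep` mutually distant channels; the filters
    producing the remaining channels become pruning candidates."""
    dist = channel_distances(feature_maps)
    # Seed with the channel farthest from all others on average.
    selected = [int(dist.sum(axis=1).argmax())]
    while len(selected) < keep:
        # Score each candidate by its distance to the nearest kept
        # channel: the least redundant candidate is added next.
        score = dist[:, selected].min(axis=1)
        score[selected] = -np.inf  # never re-select a kept channel
        selected.append(int(score.argmax()))
    return sorted(selected)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = rng.standard_normal((64, 8, 8))  # toy layer with 64 channels
    kept = greedy_keep_set(maps, keep=16)
    print(f"keep filters {kept}; prune the remaining {64 - len(kept)}")
```

    Under this toy criterion, channels whose feature maps lie close to an already-kept channel are treated as redundant. Because the selection operates on recorded feature maps alone, it requires no loss-function changes or embedded variables, which mirrors the decoupling of pruning from the network definition and from training described in the abstract.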

  • [1]
    Pohlen T, Hermans A, Mathias M, et al. Full-Resolution residual networks for semantic segmentation in street scenes [C] // Proc of the 2017 IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 3309−3318
    [2]
    Dettmers T. 8-bit approximations for parallelism in deep learning[J]. arXiv preprint, arXiv: 1511.04561, 2016
    [3]
    Hwang K, Sung W. Fixed-point feedforward deep neural network design using weights +1, 0, and −1[C] // Proc of the 2014 IEEE Workshop on Signal Processing Systems. Piscataway, NJ: IEEE, 2014: 174−179
    [4]
    Courbariaux M, Bengio Y, David J. BinaryConnect: Training deep neural networks with binary weights during propagations [C] // Proc of the Annual Conf on Neural Information Processing Systems 2015. Piscataway, NJ: IEEE, 2015: 3123−3131
    [5]
    Courbariaux M, Bengio Y. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1 [J]. arXiv preprint, arXiv: 1602.02830, 2016
    [6]
    Rastegari M, Ordonez V, Redmon J, et al. XNOR-Net: ImageNet classification using binary convolutional neural networks [C] // Proc of the 14th European Conf on Computer Vision. Berlin: Springer, 2016: 525−542
    [7]
    龚成,卢冶,代素蓉,等. 一种超低损失的深度神经网络量化压缩方法[J]. 软件学报,2021,32(8):2391−2407

    Gong Cheng, Lu Ye, Dai Surong, et al. Ultra-low loss quantization method for deep neural network compression[J]. Journal of Software, 2021, 32(8): 2391−2407 (in Chinese)
    [8]
    Romero A, Ballas N, Kahou S, et al. FitNets: Hints for thin deep nets[J]. arXiv preprint, arXiv: 1412.6550, 2015
    [9]
    Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network[J]. arXiv preprint, arXiv: 1503.02531, 2015
    [10]
    张晶,王子铭,任永功. A3C深度强化学习模型压缩及知识抽取[J]. 计算机研究与发展,2023,60(6):1373−1384

    Zhang Jing, Wang Ziming, Ren Yonggong. A3C deep reinforcement learning model compression and knowledge extraction[J]. Journal of Computer Research and Development, 2023, 60(6): 1373−1384 (in Chinese)
    [11]
    林振元,林绍辉,姚益武,等. 多教师对比知识反演的无数据模型压缩方法[J/OL]. 计算机科学与探索,2022[2023-09-13]. http://fcst.ceaj.org/CN/10.3778/j.issn.1673-9418.2204107

    Lin Zhenyuan, Lin Shaohui, Yao Yiwu, et al. Multi-teacher contrastive knowledge inversion for data-free distillation[J/OL]. Journal of Frontiers of Computer Science and Technology, 2022[2023-09-13]. http://fcst.ceaj.org/CN/10.3778/j.issn.1673-9418.2204107 (in Chinese)
    [12]
    Han Song, Mao Huizi, Dally W J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding[J]. arXiv preprint, arXiv: 1510.00149, 2016
    [13]
    Li Hao, Kadav A, Durdanovic I, et al. Pruning filters for efficient ConvNets [C/OL] // Proc of the 5th Int Conf on Learning Representations. Berlin: Springer, 2017[2023-09-13]. https://openreview.net/forum?id=rJqFGTslg
    [14]
    Lin Tao, Stich S U, Barba L, et al. Dynamic model pruning with feedback [C/OL] // Proc of the 8th Int Conf on Learning Representations. Berlin: Springer, 2020[2023-09-13]. https://openreview.net/forum?id=SJem8lSFwB
    [15]
    Zhu M, Gupta S. To prune, or not to prune: Exploring the efficacy of pruning for model compression [C/OL] // Proc of the 6th Int Conf on Learning Representations. Berlin: Springer, 2018[2023-09-13]. https://openreview.net/forum?id=Sy1iIDkPM
    [16]
    Frankle J, Carbin M. The lottery ticket hypothesis: Finding sparse, trainable neural networks [C/OL]// Proc of the 7th Int Conf on Learning Representations. Berlin: Springer, 2019[2023-09-13]. https://openreview.net/forum?id=rJl-b3RcF7
    [17]
    Guo Yiwen, Yao Anbang, Chen Yurong. Dynamic network surgery for efficient DNNs [C]// Proc of the Annual Conf on Neural Information Processing Systems 2016. Piscataway, NJ: IEEE, 2016: 1379−1387
    [18]
    Han Song, Liu Xingyu, Mao Huizi, et al. EIE: Efficient inference engine on compressed deep neural network [C] // Proc of the 43rd ACM/IEEE Annual Int Symp on Computer Architecture. Piscataway, NJ: IEEE, 2016: 243−254
    [19]
    He Yang, Kang Guoliang, Dong Xuanyi, et al. Soft filter pruning for accelerating deep convolutional neural networks [C] // Proc of the 27th Int Joint Conf on Artificial Intelligence. Berlin: Springer, 2018: 2234−2240
    [20]
    Liu Zhuang, Li Jianguo, Shen Zhiqiang, et al. Learning efficient convolutional networks through network slimming [C]// Proc of the IEEE Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 2755−2763
    [21]
    Meng Fanxu, Cheng Hao, Li Ke, et al. Pruning filter in filter [C/OL] // Proc of the Annual Conf on Neural Information Processing Systems 2020. Piscataway, NJ: IEEE, 2020[2023-09-13]. https://proceedings.neurips.cc/paper/2020/hash/ccb1d45fb76f7c5a0bf619f979c6cf36-Abstract.html
    [22]
    Lin Mingbao, Ji Rongrong, Wang Yan, et al. HRank: Filter pruning using high-rank feature map [C] // Proc of the 2020 IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 1526−1535
    [23]
    Tang Yehui, Wang Yunhe, Xu Yixing, et al. SCOP: Scientific control for reliable neural network pruning [C/OL] // Proc of the Annual Conf on Neural Information Processing Systems 2020. Piscataway, NJ: IEEE, 2020[2023-09-13]. https://proceedings.neurips.cc/paper/2020/hash/7bcdf75ad237b8e02e301f4091fb6bc8-Abstract.html
    [24]
    Suau X, Zappella L, Apostoloff N, et al. Network compression using correlation analysis of layer responses [C/OL] // Proc of the 2018 IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018[2023-09-13]. https://readpaper.com/pdf-annotate/note?noteId=1959569544894668800
    [25]
    Sui Yang, Yin Miao, Xie Yi, et al. CHIP: Channel independence-based pruning for compact neural networks [C] // Proc of the Annual Conf on Neural Information Processing Systems 2021. Piscataway, NJ: IEEE, 2021: 24604−24616
    [26]
    Jiang Di, Cao Yuan, Yang Qiang. On the channel pruning using graph convolution network for convolutional neural network acceleration [C] // Proc of the 31st Int Joint Conf on Artificial Intelligence. Berlin: Springer, 2022: 3107−3113
    [27]
    Zhuang Tao, Zhang Zhixuan, Huang Yuheng, et al. Neuron-level structured pruning using polarization regularizer [C/OL] // Proc of the Annual Conf on Neural Information Processing Systems 2020. Piscataway, NJ: IEEE, 2020[2023-09-13]. https://proceedings.neurips.cc/paper/2020/hash/703957b6dd9e3a7980e040bee50ded65-Abstract.html
    [28]
    You Zhonghui, Yan Kun, Ye Jinmian, et al. Gate Decorator: Global filter pruning method for accelerating deep convolutional neural networks [C] // Proc of the Annual Conf on Neural Information Processing Systems 2019. Piscataway, NJ: IEEE, 2019: 2130−2141
    [29]
    He Yang, Liu Ping, Wang Ziwei, et al. Filter pruning via geometric median for deep convolutional neural networks acceleration [C] // Proc of the IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 4340−4349
    [30]
    Dubey A, Chatterjee M, Ahuja N. Coreset-based neural network compression [C] // Proc of the 15th European Conf on Computer Vision. Berlin: Springer, 2018: 469−486
    [31]
    Wang Wenxiao, Fu Cong, Guo Jishun, et al. COP: Customized deep model compression via regularized correlation-based filter-level pruning [C] // Proc of the 28th Int Joint Conf on Artificial Intelligence. Berlin: Springer, 2019: 3785−3791
    [32]
    Luo Jianhao, Wu Jianxin, Lin Weiyao, et al. ThiNet: A filter level pruning method for deep neural network compression [C] // Proc of the IEEE Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 5068−5076
    [33]
    He Yihui, Zhang Xiangyu, Sun Jian. Channel pruning for accelerating very deep neural networks [C] // Proc of the IEEE Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 1398−1406
    [34]
    Zhuang Zhuangwei, Tan Mingkui, Zhuang Bohan, et al. Discrimination-aware channel pruning for deep neural networks [C] // Proc of the Annual Conf on Neural Information Processing Systems 2018. Piscataway, NJ: IEEE, 2018: 883−894
    [35]
    Li Yawei, Gu Shuhang, Mayer C, et al. Group sparsity: The hinge between filter pruning and decomposition for network compression [C] // Proc of the 2020 IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 8015−8024
    [36]
    Tiwari R, Bamba U, Chavan A, et al. ChipNet: Budget-aware pruning with heaviside continuous approximations [C/OL] // Proc of the 9th Int Conf on Learning Representations. Berlin: Springer, 2021[2023-09-13]. https://openreview.net/forum?id=xCxXwTzx4L1
    [37]
    Chen Tianyi, Ji Bo, Ding Tianyu, et al. Only train once: A one-shot neural network training and pruning framework [C] // Proc of the Annual Conf on Neural Information Processing Systems 2021. Piscataway, NJ: IEEE, 2021: 19637−19651
    [38]
    Lin Mingbao, Ji Rongrong, Zhang Yuxin, et al. Channel pruning via automatic structure search [C] // Proc of the 29th Int Joint Conf on Artificial Intelligence. Berlin: Springer, 2020: 673−679
    [39]
    Dong Xuanyi, Yang Yi. Network pruning via transformable architecture search [C] // Proc of the Annual Conf on Neural Information Processing Systems 2019. Piscataway, NJ: IEEE, 2019: 759−770
    [40]
    Yvinec E, Dapogny A, Cord M, et al. RED: Looking for redundancies for data-free structured compression of deep neural networks[J]. arXiv preprint, arXiv: 2105.14797, 2021
    [41]
    Liu Shiwei, Chen Tianlong, Chen Xiaohan, et al. Sparse training via boosting pruning plasticity with neuroregeneration [C] // Proc of the Annual Conf on Neural Information Processing Systems 2021. Piscataway, NJ: IEEE, 2021: 9908−9922
    [42]
    Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images [R/OL]. Toronto: University of Toronto, 2009[2023-09-13]. https://xueshu.baidu.com/usercenter/paper/show?paperid=c55665fb879e98e130fce77052d4c8e8&site=xueshu_se
    [43]
    Chen Sheng, Liu Yang, Gao Xiang, et al. MobileFaceNets: Efficient CNNs for accurate real-time face verification on mobile devices [C] // Proc of the 13th Chinese Conf on Biometric Recognition. Berlin: Springer, 2018: 428−438
    [44]
    Huang G, Mattar M, Berg T, et al. Labeled faces in the wild: A database for studying face recognition in unconstrained environments [C/OL] // Proc of the Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition. 2008[2023-09-13]. https://cs.brown.edu/courses/csci1430/2011/proj4/papers/lfw.pdf
    [45]
    Huang Zehao, Wang Naiyan. Data-driven sparse structure selection for deep neural networks [C] // Proc of the 15th European Conf on Computer Vision. Berlin: Springer, 2018: 317−334
    [46]
    Lin Shaohui, Ji Rongrong, Yan Chenqian, et al. Towards optimal structured CNN pruning via generative adversarial learning [C] // Proc of the IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 2790−2799
    [47]
    Yu Ruichi, Li Ang, Chen Chunfu, et al. NISP: Pruning networks using neuron importance score propagation [C] // Proc of the 2018 IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 9194−9203
    [48]
    Lin Mingbao, Cao Liujuan, Li Shaojie, et al. Filter sketch for network pruning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 7091−7100. doi: 10.1109/TNNLS.2021.3084206
    [49]
    Fernandes F, Yen G. Pruning deep convolutional neural networks architectures with evolution strategy [J/OL]. Information Sciences, 2021[2023-09-13]. https://doi.org/10.1016/j.ins.2020.11.009
    [50]
    Cai Linhang, An Zhulin, Yang Chuanguang, et al. Prior gradient mask guided pruning-aware fine-tuning [C/OL] // Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2022[2023-09-13]. http://dx.doi.org/10.1609/aaai.v36i1.19888