    Shi Ruiwen, Li Guanghui, Dai Chenglong, Zhang Feifei. Feature-Oriented and Decoupled Network Structure Based Filter Pruning Method[J]. Journal of Computer Research and Development, 2024, 61(7): 1836-1849. DOI: 10.7544/issn1000-1239.202330085

    Feature-Oriented and Decoupled Network Structure Based Filter Pruning Method

    Many existing pruning methods for deep neural networks require modifying the loss function or embedding additional variables into the network; they therefore cannot directly benefit from a pre-trained network and complicate both forward inference and training. Moreover, most feature-oriented pruning work analyzes filter importance using only intra-channel information, leaving the potential connections among channels unexploited during pruning. To address these issues, we approach feature-oriented filter pruning from an inter-channel perspective. The proposed method uses geometric distance to measure the potential correlation among channels, formulates filter pruning as an optimization problem, and applies a greedy strategy to find an approximation to the optimal solution. The method decouples pruning from the network structure and from the training process, thereby simplifying the pruning task. Extensive experiments demonstrate that the proposed method performs well across various network structures: on the CIFAR-10 dataset, the parameters and floating-point operations of VGG-16 are reduced by 87.1% and 63.7%, respectively, while the network still attains an accuracy of 93.81%. We also evaluate the proposed method with MobileFaceNets, a lightweight network, on the large CASIA-WebFace dataset; with parameters and floating-point operations reduced by 58.0% and 63.6%, respectively, MobileFaceNets achieves 99.02% accuracy on the LFW dataset with no loss of inference accuracy. (The code is available at https://github.com/SSriven/FOAD.)
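    To illustrate the idea of inter-channel, geometric-distance-based selection described above, the sketch below greedily keeps the channels whose feature maps are most mutually distant (i.e. least redundant). This is a minimal illustration, not the authors' implementation: the function name `greedy_prune`, the farthest-point-style greedy criterion, and the use of flattened per-channel feature maps are assumptions made for the example.

    ```python
    import numpy as np

    def greedy_prune(features, keep):
        """Greedily select `keep` channels whose feature maps are most
        mutually distant under Euclidean (geometric) distance.

        features: array of shape (C, N), one flattened feature map per channel.
        Returns the sorted indices of the kept channels; the rest would be pruned.
        """
        C = features.shape[0]
        # Pairwise Euclidean distances among channel feature maps, shape (C, C).
        dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
        # Seed with the channel farthest from all others on average.
        kept = [int(dist.sum(axis=1).argmax())]
        while len(kept) < keep:
            remaining = [c for c in range(C) if c not in kept]
            # Greedy step: add the channel with the largest minimum
            # distance to the already-kept set (most "novel" channel).
            nxt = max(remaining, key=lambda c: dist[c, kept].min())
            kept.append(nxt)
        return sorted(kept)

    # Usage example with random feature maps: keep 4 of 8 channels.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(8, 16))
    print(greedy_prune(feats, keep=4))
    ```

    The greedy criterion here approximates the underlying subset-selection problem: choosing the maximally diverse subset of channels exactly is combinatorial, so a farthest-point-style heuristic is a common practical substitute.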