    Shi Ruiwen, Li Guanghui, Dai Chenglong, Zhang Feifei. Feature-Oriented and Decoupled Network Structure Based Filter Pruning[J]. Journal of Computer Research and Development.

    Feature-Oriented and Decoupled Network Structure Based Filter Pruning


      Abstract: Many existing pruning methods for deep neural network models require modifying the loss function or embedding additional variables in the network, so they cannot directly benefit from a pre-trained network and they complicate the forward inference and training process. Moreover, most feature-oriented pruning works so far use only intra-channel information to analyze filter importance, so the pruning process cannot exploit the potential connections among channels. To address these issues, this paper considers the feature-oriented filter pruning task from an inter-channel perspective. The proposed method uses geometric distance to measure the potential correlation among channels, formulates filter pruning as an optimization problem, and applies a greedy strategy to find an approximation of the optimal solution. The method decouples pruning from the network structure and from training, thus simplifying the pruning task. Extensive experiments demonstrate that the proposed method performs well on various network structures: for example, on the CIFAR-10 dataset it reduces the parameters and floating-point operations of VGG-16 by 87.1% and 63.7%, respectively, while still reaching an accuracy of 93.81%. The method is also evaluated with MobileFaceNet, a lightweight network, trained on the large CASIA-WebFace dataset; with parameters and floating-point operations reduced by 58.0% and 63.6%, respectively, the pruned MobileFaceNet still achieves 99.02% accuracy on the LFW dataset, with almost no loss of inference accuracy. (The code is available at: https://github.com/SSriven/FOAD)
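
      To make the inter-channel idea above more concrete, the following is a minimal, hypothetical sketch in PyTorch of geometric-distance-based greedy channel selection for one convolutional layer. It is not the authors' FOAD implementation (see the repository linked above for that): the function names, the batch-averaged flattening of feature maps, and the redundancy score used here are assumptions made purely for illustration.

      # Illustrative sketch only -- not the authors' exact FOAD algorithm.
      # Score the output channels of one convolutional layer by the pairwise
      # geometric (Euclidean) distance of their feature maps, then greedily
      # mark the most redundant channels for pruning.
      import torch

      def channel_distance_matrix(features: torch.Tensor) -> torch.Tensor:
          # features: (N, C, H, W) activations collected from a pre-trained model
          # (e.g., via a forward hook). Returns a (C, C) distance matrix.
          n, c, h, w = features.shape
          flat = features.mean(dim=0).reshape(c, h * w)  # batch-average, then flatten each channel
          return torch.cdist(flat, flat, p=2)            # pairwise Euclidean distances

      def greedy_prune_indices(features: torch.Tensor, prune_ratio: float) -> list:
          # Greedy approximation: repeatedly drop the channel whose feature map is
          # geometrically closest to the channels still kept (i.e., most redundant).
          dist = channel_distance_matrix(features)
          keep = set(range(dist.shape[0]))
          pruned = []
          for _ in range(int(dist.shape[0] * prune_ratio)):
              scores = {i: sum(dist[i, j].item() for j in keep if j != i) for i in keep}
              victim = min(scores, key=scores.get)  # smallest total distance = most redundant
              keep.remove(victim)
              pruned.append(victim)
          return pruned

      # Example usage (hypothetical): collect one layer's activations with a forward
      # hook, then select 50% of its filters for removal:
      #   indices = greedy_prune_indices(collected_activations, prune_ratio=0.5)

      Because the selection relies only on activations of the already-trained network, the sketch reflects the decoupling claimed in the abstract: no loss-function changes or extra trainable variables are needed during pruning.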

       
