
    Three-Stage Few-Shot Object Detection via Cross-Module Knowledge Distillation

    • Abstract: Fine-tuning is the mainstream approach for few-shot object detection, but it suffers from two problems: 1) the extreme scarcity of novel-class samples biases the estimated novel-class feature distribution; 2) the robustness assumptions made during fine-tuning do not necessarily hold for novel-class samples, so the feature extraction network cannot extract undistorted novel-class features. To address these two problems, a three-stage few-shot object detection method based on cross-module knowledge distillation is proposed. First, a feature distribution calibration strategy calibrates the novel-class feature distribution during the two-step fine-tuning process. Second, the proposed first-term bias reduction strategy effectively alleviates the biased estimation of weight parameters during linear probing (the first stage of fine-tuning). Third, the proposed overall fine-tuning strategy based on inverse first-term bias reduction effectively alleviates overfitting of the feature extraction network during overall fine-tuning (the second stage of fine-tuning). Finally, the proposed cross-module knowledge distillation strategy guides the shallow modules of the model to learn from deep features, so as to capture more discriminative novel-class features. Extensive experimental results show that the proposed three-stage fine-tuning method effectively improves the accuracy and robustness of few-shot object detection.
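As a rough illustration of the calibration stage, the sketch below shows one common way to calibrate a novel class's feature distribution by borrowing statistics from similar base classes and then sampling virtual features from the calibrated distribution. This is a minimal sketch under assumed conventions: the function name, the choice of `k` nearest base classes, the blending rule, and the diagonal slack `alpha` are all illustrative, not the paper's exact formulation.

```python
import numpy as np

def calibrate_novel_distribution(novel_feats, base_means, base_covs, k=2, alpha=0.1):
    """Hypothetical sketch: estimate a calibrated mean/covariance for a
    novel class using the k base classes whose means are closest to the
    (biased) novel-class mean."""
    mu_novel = novel_feats.mean(axis=0)
    # distance from the novel-class mean to each base-class mean
    dists = np.linalg.norm(base_means - mu_novel, axis=1)
    nearest = np.argsort(dists)[:k]  # k most similar base classes
    # calibrated mean: blend the novel mean with the selected base means
    mu_cal = (base_means[nearest].sum(axis=0) + mu_novel) / (k + 1)
    # calibrated covariance: average base covariance, plus a small
    # diagonal term to widen the distribution
    cov_cal = base_covs[nearest].mean(axis=0) + alpha * np.eye(mu_novel.size)
    return mu_cal, cov_cal

# toy setup: 5 base classes with 8-dim features, only 3 novel-class shots
rng = np.random.default_rng(0)
base_means = rng.normal(size=(5, 8))
base_covs = np.stack([np.eye(8) for _ in range(5)])
novel_feats = rng.normal(size=(3, 8))

mu, cov = calibrate_novel_distribution(novel_feats, base_means, base_covs)
# sample virtual novel-class features from the calibrated distribution
virtual = rng.multivariate_normal(mu, cov, size=20)
```

Sampling virtual features from the calibrated distribution is one way such a calibration can offset the bias introduced by having only a handful of novel-class shots.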

       
