    Citation: Cross-Label Assignment Knowledge Distillation for End-to-End Object Detection[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202550737

    Cross-Label Assignment Knowledge Distillation for End-to-End Object Detection

    Recently, the one-to-one label assignment rule has played a major role in removing non-maximum suppression (NMS) from the post-processing step and building an NMS-free, end-to-end detection paradigm. However, this strict sample matching rule sharply reduces the number of positive samples during training, making it difficult for the model to fully mine the latent semantic information in the data during feature representation learning and resulting in low learning efficiency. To address this, this paper proposes a cross-label assignment knowledge distillation (CAKD) method for end-to-end object detection. The method combines the transfer learning mechanism of knowledge distillation with end-to-end object detection to compensate for the shortcomings of the existing one-to-one label assignment rule, and it establishes an effective knowledge transfer path within the distillation framework for training end-to-end detection models. Specifically, the multi-scale features of the student model are first forwarded across models into the detection head of the teacher model. The resulting student predictions under the cross-label assignment are then compared with the teacher's own predictions to compute the distillation loss. This avoids directly forcing the student to imitate the teacher's features, which would cause feature confusion and semantic misalignment. In addition, we design an effective task-aware matching metric that jointly considers classification and regression quality, avoiding the decoupling of the classification and localization tasks. Extensive experiments on the COCO dataset demonstrate the effectiveness of the method. Applying the proposed cross-label assignment knowledge distillation to the non-end-to-end FCOS baseline yields an NMS-free detection performance of 38.8% mean average precision (mAP), a 2.1% accuracy gain over the original detection performance with NMS.
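    The abstract only sketches the distillation path at a high level, so the snippet below is a minimal, self-contained illustration of the two ingredients it names: a task-aware matching metric that couples classification and regression quality, and a distillation loss computed on student outputs that have been forwarded through the (frozen) teacher detection head. The function names (task_aware_metric, cakd_loss), the product form of the metric, the top-k selection of teacher locations, and the specific KL/L1 distillation terms are all illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def task_aware_metric(cls_prob: torch.Tensor, iou: torch.Tensor,
                      alpha: float = 1.0, beta: float = 6.0) -> torch.Tensor:
    """Assumed task-aware quality: couples classification confidence with
    localization (IoU) quality so neither task dominates the matching."""
    return cls_prob.pow(alpha) * iou.pow(beta)


def cakd_loss(student_cls_via_teacher_head: torch.Tensor,  # (N, C) logits
              student_iou_via_teacher_head: torch.Tensor,  # (N,) predicted IoU
              teacher_cls: torch.Tensor,                   # (N, C) logits
              teacher_iou: torch.Tensor,                   # (N,)
              topk: int = 100,
              tau: float = 2.0) -> torch.Tensor:
    """Distillation loss on student features that were passed through the
    frozen teacher head (so the student is never forced to imitate the
    teacher's feature maps directly).

    Only the locations the task-aware metric rates highest on the teacher
    side are distilled, giving a richer positive set than the strict
    one-to-one assignment used for the detection task loss.
    """
    with torch.no_grad():
        quality = task_aware_metric(teacher_cls.sigmoid().amax(dim=1),
                                    teacher_iou.clamp(min=0.0, max=1.0))
        keep = quality.topk(min(topk, quality.numel())).indices

    # Soft-label classification distillation on the selected locations.
    cls_kd = F.kl_div(
        F.log_softmax(student_cls_via_teacher_head[keep] / tau, dim=1),
        F.softmax(teacher_cls[keep] / tau, dim=1),
        reduction="batchmean") * tau ** 2

    # Regression-quality distillation (here a simple L1 on the IoU branch).
    reg_kd = F.l1_loss(student_iou_via_teacher_head[keep], teacher_iou[keep])
    return cls_kd + reg_kd


# Example usage with random tensors standing in for detection-head outputs.
N, C = 1000, 80
loss = cakd_loss(torch.randn(N, C), torch.rand(N),
                 torch.randn(N, C), torch.rand(N))
```

    The key design choice reflected here is the indirection through the teacher head: the distillation targets and the student outputs live in the same prediction space, so no feature-imitation term is needed.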
