    Yuan Xiaoxin, Hu Jun, Huang Yonghong. False Positive Adversarial Example Against Object Detectors[J]. Journal of Computer Research and Development, 2022, 59(11): 2534-2548. DOI: 10.7544/issn1000-1239.20210658

    False Positive Adversarial Example Against Object Detectors

    • Object detectors are widely deployed in intelligent systems, where they classify and locate objects in images. However, recent studies show that object detectors are as susceptible to digital and physical adversarial examples as DNN classifiers. YOLOv3 is a mainstream object detector used in real-time detection tasks. Most existing physical adversarial examples against YOLOv3 are constructed by printing large adversarial perturbations and pasting them on the surface of a specific class of object. The false positive adversarial example (FPAE), which appeared in recent research, can be generated directly from the target model: it is unrecognizable to humans yet causes object detectors to report the attacker-specified target class with high confidence. The only existing method for generating FPAEs with YOLOv3 as the target model is the AA (appearing attack) method. To improve the robustness of the FPAE, the AA method applies EOT (expectation over transformation) image transformations during iterative optimization to simulate various physical conditions, but it does not consider the motion blur that may occur during shooting, which in turn degrades the attack effect of the adversarial examples. In addition, the FPAEs it generates achieve a low success rate in black-box attacks on object detectors other than YOLOv3. To generate better-performing FPAEs that reveal the weaknesses of existing object detectors and test their security, we take the YOLOv3 object detector as the target model and propose the RTFP (robust and transferable false positive) adversarial attack method. In its iterative optimization, this method adds a motion-blur transformation alongside the typical image transformations. In the design of the loss function, the method draws on the loss function of the C&W attack and uses the IoU (intersection over union) between the bounding boxes predicted by the target model in the grid cell containing the FPAE's center and the ground-truth bounding box of the FPAE as the weight of the classification loss of the predicted bounding boxes. In real-world experiments covering multiple distance-angle combination shooting tests and driving shooting tests, the FPAE generated by the RTFP method maintains good robustness, and its transferability is better than that of FPAEs generated by existing methods.
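    The motion-blur augmentation described above can be folded into an EOT pipeline as a convolution with a line-shaped kernel. The sketch below, in Python/PyTorch, is a minimal illustration: the abstract does not give the paper's kernel sizes, angles, or full transformation set, so the function names (motion_blur_kernel, apply_motion_blur, eot_transform) and all sampling ranges are assumptions rather than the authors' implementation.

        import math

        import torch
        import torch.nn.functional as F

        def motion_blur_kernel(size: int, angle_deg: float) -> torch.Tensor:
            # Rasterize a line through the kernel center along the blur direction.
            kernel = torch.zeros(size, size)
            center = (size - 1) / 2.0
            dx = math.cos(math.radians(angle_deg))
            dy = math.sin(math.radians(angle_deg))
            for t in torch.linspace(-center, center, steps=2 * size):
                x = int(round(center + float(t) * dx))
                y = int(round(center + float(t) * dy))
                if 0 <= x < size and 0 <= y < size:
                    kernel[y, x] = 1.0
            return kernel / kernel.sum()  # normalize to preserve brightness

        def apply_motion_blur(img: torch.Tensor, size: int = 7, angle_deg: float = 0.0) -> torch.Tensor:
            # img: (B, C, H, W) in [0, 1]; depthwise convolution blurs each channel.
            c = img.shape[1]
            weight = motion_blur_kernel(size, angle_deg).to(img).expand(c, 1, size, size)
            return F.conv2d(img, weight, padding=size // 2, groups=c)

        def eot_transform(img: torch.Tensor) -> torch.Tensor:
            # One random EOT draw: random brightness, then motion blur on half the draws.
            img = (img * (0.8 + 0.4 * torch.rand(1).item())).clamp(0, 1)
            if torch.rand(1).item() < 0.5:
                size = int(torch.randint(3, 9, (1,)).item()) | 1  # odd kernel size
                angle = 360.0 * torch.rand(1).item()
                img = apply_motion_blur(img, size, angle)
            return img

    Averaging the attack loss over many such random draws is what makes the resulting FPAE robust to shooting conditions, including the camera motion the abstract highlights.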
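    The abstract also states that the loss borrows the design idea of the C&W attack and weights each predicted box's classification loss by its IoU with the FPAE's ground-truth box. The following sketch is one plausible reading of that description: the function name iou_weighted_cw_loss, the margin parameter kappa, and the choice to treat the IoU weight as a constant are illustrative assumptions, not the paper's exact formulation.

        import torch

        def iou(boxes_a: torch.Tensor, box_b: torch.Tensor) -> torch.Tensor:
            # IoU between (N, 4) boxes and one (4,) box, all as (x1, y1, x2, y2).
            x1 = torch.maximum(boxes_a[:, 0], box_b[0])
            y1 = torch.maximum(boxes_a[:, 1], box_b[1])
            x2 = torch.minimum(boxes_a[:, 2], box_b[2])
            y2 = torch.minimum(boxes_a[:, 3], box_b[3])
            inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
            area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
            area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
            return inter / (area_a + area_b - inter + 1e-9)

        def iou_weighted_cw_loss(pred_boxes, cls_logits, patch_box, target_cls, kappa=0.0):
            # pred_boxes: (N, 4) boxes predicted in the grid cell covering the FPAE center
            # cls_logits: (N, K) class logits for those boxes
            # patch_box:  (4,)   ground-truth box where the FPAE patch sits
            # C&W-style margin: drive the target-class logit above the best other logit.
            target_logit = cls_logits[:, target_cls]
            others = cls_logits.clone()
            others[:, target_cls] = float("-inf")
            margin = torch.clamp(others.max(dim=1).values - target_logit + kappa, min=0)
            # The IoU with the patch's true location weights each box's loss, so
            # well-localized predictions dominate the optimization.
            weights = iou(pred_boxes, patch_box).detach()
            return (weights * margin).sum()

    Minimizing this loss pushes the target-class logit of well-localized boxes above every other class, which produces the "appearing" detection behavior the abstract describes.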
