Zheng Junhao, Lin Chenhao, Zhao Zhengyu, Jia Ziyi, Wu Libing, Shen Chao. Towards Transferable and Stealthy Attacks Against Object Detection in Autonomous Driving Systems[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440097

Towards Transferable and Stealthy Attacks Against Object Detection in Autonomous Driving Systems

Funds: This work was supported by the National Key Research and Development Program of China (2021YFB3100700), the National Natural Science Foundation of China (T2341003, 62006181, 62161160337, 62132011, U21B2018, U20A20177, 62206217), and the Key Research and Development Program of Shaanxi Province (2023-ZDLGY-38, 2021ZDLGY01-02).
More Information
  • Author Bio:

    Zheng Junhao: born in 2002. PhD candidate. His main research interests include artificial intelligence security and autonomous driving testing

    Lin Chenhao: born in 1989. PhD, research fellow, PhD supervisor. His main research interests include artificial intelligence security, adversarial machine learning, deepfakes, and identity authentication

    Zhao Zhengyu: born in 1992. PhD, research fellow, PhD supervisor. His main research interests include AI security and privacy, and adversarial machine learning

    Jia Ziyi: born in 2003. Master candidate. His main research interests include artificial intelligence security and autonomous driving testing

    Wu Libing: born in 1972. PhD, professor, PhD supervisor. Member of CCF. His main research interests include wireless sensor networks, network management, and distributed computing

    Shen Chao: born in 1985. PhD, professor, PhD supervisor. Member of CCF. His main research interests include trusted artificial intelligence, artificial intelligence security, and network security

  • Received Date: February 20, 2024
  • Revised Date: October 21, 2024
  • Accepted Date: November 27, 2024
  • Available Online: December 11, 2024
  • Deep learning-based object detection algorithms have been widely deployed, yet recent research shows that they are vulnerable to adversarial attacks that cause detectors to misidentify or miss targets. Nonetheless, research on the transferability of adversarial attacks in autonomous driving is limited, and few studies address the stealthiness of such attacks in this scenario. To address these limitations, an algorithmic module that enhances attack transferability is designed by drawing an analogy between optimizing adversarial examples and training machine learning models. In addition, by employing style transfer and neural rendering, a transferable and stealthy attack method (TSA) is proposed and implemented. Specifically, the adversarial examples are first repeatedly stitched together and combined with masks to generate the final texture, which is then applied to the entire vehicle surface. To simulate real-world conditions, a physical transformation function embeds the rendered camouflaged vehicle into realistic scenes. Finally, the adversarial examples are optimized with a designed loss function. Simulation experiments demonstrate that TSA surpasses existing methods in attack transferability and exhibits a certain level of stealthiness in appearance. Physical-domain experiments further validate that TSA maintains effective attack performance in real-world scenarios.
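    The repeat-stitch-and-mask texture construction described above can be sketched in a few lines. This is a minimal NumPy illustration of that single step only; the function names, array shapes, and blending convention are illustrative assumptions, not the authors' implementation, and the neural rendering and loss-based optimization stages are omitted.

    ```python
    import numpy as np

    def tile_texture(patch: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
        """Repeatedly stitch a small adversarial patch into an (out_h, out_w) texture.

        The patch (H, W, 3) is tiled enough times to cover the target size,
        then cropped, so texture[i, j] == patch[i % H, j % W].
        """
        reps_h = -(-out_h // patch.shape[0])  # ceiling division
        reps_w = -(-out_w // patch.shape[1])
        tiled = np.tile(patch, (reps_h, reps_w, 1))
        return tiled[:out_h, :out_w, :]

    def apply_mask(adv_texture: np.ndarray,
                   base_texture: np.ndarray,
                   mask: np.ndarray) -> np.ndarray:
        """Combine the tiled adversarial texture with the vehicle's original paint.

        mask is (H, W) with 1 where the adversarial texture is painted and
        0 where the original surface shows through.
        """
        m = mask[..., None].astype(adv_texture.dtype)
        return m * adv_texture + (1.0 - m) * base_texture

    if __name__ == "__main__":
        patch = np.random.rand(4, 4, 3)          # small optimizable patch
        texture = tile_texture(patch, 10, 10)    # full-surface texture
        base = np.zeros((10, 10, 3))             # original vehicle paint (placeholder)
        mask = np.zeros((10, 10)); mask[:5, :] = 1.0
        final = apply_mask(texture, base, mask)
        print(final.shape)
    ```

    In the full pipeline, `final` would be wrapped onto the vehicle mesh by a differentiable renderer and the patch updated by gradient descent on the detection loss; tiling a small patch rather than optimizing the whole surface keeps the number of free variables small.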

