Towards Transferable and Stealthy Attacks Against Object Detection in Autonomous Driving Systems
Abstract
Deep learning-based object detection algorithms have been widely deployed, yet recent research indicates that these algorithms are vulnerable to adversarial attacks, which cause detectors to misidentify or miss their targets. Nonetheless, research on the transferability of adversarial attacks in autonomous driving remains limited, and few studies address the stealthiness of such attacks in this scenario. To address these limitations, an algorithmic module that enhances attack transferability is designed by drawing an analogy between the optimization of adversarial examples and the training process of machine learning models. Additionally, by employing style transfer techniques and neural rendering, a transferable and stealthy attack method (TSA) is proposed and implemented. Specifically, the adversarial examples are first repeatedly stitched together and combined with masks to generate the final texture, which is then applied to the entire vehicle surface. To simulate real-world conditions, a physical transformation function is used to embed the rendered camouflaged vehicle into realistic scenes. Finally, the adversarial examples are optimized using a designed loss function. Simulation experiments demonstrate that the TSA method surpasses existing methods in attack transferability while exhibiting a degree of visual stealthiness. Furthermore, physical-domain experiments validate that the TSA method maintains effective attack performance in real-world scenarios.
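To make the pipeline summarized above concrete, the following is a minimal sketch of the texture-optimization loop in a PyTorch setting. It is an illustrative approximation, not the paper's actual implementation: the callables render_vehicle, physical_transform, detector, and adv_loss, as well as the tiling factors and mask handling, are hypothetical placeholders supplied by the user.

import torch

def tile_texture(patch: torch.Tensor, reps: tuple) -> torch.Tensor:
    # Repeatedly stitch the adversarial patch into a full-surface texture
    # (e.g., a (C, H, W) patch tiled to (C, 4H, 4W) with reps=(1, 4, 4)).
    return patch.tile(reps)

def optimize_patch(patch, mask, render_vehicle, physical_transform,
                   detector, adv_loss, steps=1000, lr=0.01):
    # All function arguments are hypothetical stand-ins for the paper's
    # neural renderer, physical transformation function, target detector,
    # and designed adversarial loss.
    patch = patch.clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        # Stitch the patch into a texture and restrict it with the mask.
        texture = tile_texture(patch, (1, 4, 4)) * mask
        # Render the camouflaged vehicle and embed it into a realistic scene.
        scene = physical_transform(render_vehicle(texture))
        # Optimize the patch so the detector misidentifies or misses the vehicle.
        loss = adv_loss(detector(scene))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)  # keep the texture a valid image
    return patch.detach()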