Abstract:
Deep neural networks (DNNs) have achieved remarkable success in numerous application areas, including computer vision, natural language processing, and speech recognition. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, particularly targeted attacks that can precisely steer the outputs of unknown models, posing serious risks to data privacy, model trustworthiness, and system security. Generative attack methods have become a critical tool for advancing targeted attacks because they automate the generation of adversarial examples and reduce manual effort. Despite this potential, most existing generative approaches craft adversarial examples for only a single target class, which limits their efficiency, flexibility, and scalability in multi-target settings and makes them difficult to apply when multiple targets must be attacked simultaneously. To address these challenges, this paper proposes a Multi-Target Generative Attack based on Dual Information (MTGA-DI). MTGA-DI employs a conditional generative model that integrates both semantic and visual information from multiple target classes, enabling efficient and adaptable multi-target attacks while substantially improving the transferability and stability of the generated adversarial examples across models and defense settings. Experimental results show that MTGA-DI surpasses previous methods on standard models as well as on models protected by robust training and input-preprocessing defenses, achieving higher attack success rates and better generalization.
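To make the dual-information conditioning idea concrete, the following is a minimal toy sketch, not the paper's actual architecture: all names, dimensions, and the linear "generator" are illustrative assumptions. It shows how a single shared generator can take a per-class semantic embedding together with a per-class visual embedding and emit a bounded perturbation for any requested target class, instead of training one generator per class.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 8          # dimension of each conditioning embedding (assumed)
IMG_PIXELS = 16      # toy flattened "image" size (assumed)
EPSILON = 8 / 255    # L-infinity perturbation budget, a common setting

# Hypothetical dual information for 3 target classes: a semantic embedding
# (e.g., derived from the class label) and a visual embedding (e.g., a mean
# feature of target-class images).
semantic = {c: rng.normal(size=EMB_DIM) for c in range(3)}
visual = {c: rng.normal(size=EMB_DIM) for c in range(3)}

# One linear "generator" shared across all targets: it maps the concatenated
# dual-information vector to a perturbation pattern. A real method would use
# a deep conditional generative network here.
W = rng.normal(scale=0.1, size=(IMG_PIXELS, 2 * EMB_DIM))

def generate_perturbation(target_class: int) -> np.ndarray:
    """Produce an epsilon-bounded perturbation conditioned on one target class."""
    cond = np.concatenate([semantic[target_class], visual[target_class]])
    raw = W @ cond
    # tanh plus scaling keeps the output inside the epsilon ball, mirroring
    # the usual L-infinity constraint on adversarial noise.
    return EPSILON * np.tanh(raw)

x = rng.uniform(size=IMG_PIXELS)                     # clean toy "image"
x_adv = np.clip(x + generate_perturbation(1), 0.0, 1.0)
```

The key point of the sketch is that switching the target class only changes the conditioning vector, so multi-target attacks require no retraining, which is the efficiency and scalability argument the abstract makes.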