Survey on Detecting and Defending Adversarial Examples for Image Data
Abstract
Adversarial examples, formed by adding small perturbations to clean examples, pose a serious security threat to deep neural networks and have become a research hotspot. Current research on adversarial examples focuses on two directions: generating adversarial examples to attack deep neural networks, and detecting and defending against them. While the generation of adversarial examples for image data has been studied extensively, detection and defense have not yet been surveyed as thoroughly. Building on an overview of adversarial example generation techniques, we summarize and analyze, for the first time, the technologies for detecting and defending against adversarial examples. We classify existing detection and defense methods into six categories: feature learning, distribution statistics, input dissociation, adversarial training, knowledge transfer, and noise reduction. For each category, we explain its principle and analyze its application scenarios. In addition, this survey examines the relationships among different methods to trace the evolution of detection and defense technologies, analyzes the characteristics and performance of each technique, and lists the advantages and disadvantages of the various approaches. A comprehensive evaluation of detection and defense methods is also provided. Finally, we summarize the current state of research on detecting and defending against adversarial examples and discuss future directions.
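To make the notion of "adding a small perturbation to a clean example" concrete, the sketch below illustrates one widely used generation scheme, the fast gradient sign method (FGSM), which perturbs the input in the direction that increases the model's loss. This is a minimal illustrative example, not a method proposed in this survey; the toy model, input shape, and epsilon value are assumptions chosen only for demonstration.

```python
# Minimal FGSM sketch: x_adv = x + eps * sign(grad_x L(f(x), y)).
# The classifier, data, and eps below are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example from the clean input x."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Add a small, bounded perturbation in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy classifier and data, purely for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # a "clean example"
    y = torch.tensor([3])          # its label
    x_adv = fgsm_example(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

The perturbation is bounded by eps in the L-infinity norm, so the adversarial image remains visually close to the clean one; the detection and defense methods surveyed here aim to recognize or neutralize such inputs.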