ISSN 1000-1239 CN 11-1777/TP

### Visual Feature Attribution Based on Adversarial Feature Pairs

Zhang Xian1, Shi Canghong2, Li Xiaojie1

1(College of Computer Science, Chengdu University of Information Technology, Chengdu 610103); 2(School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611765)
• Online:2020-03-01
• Supported by:
This work was supported by the National Natural Science Foundation of China (61602066, 61702058), the Outstanding Young Talents Project of the Sichuan Provincial Department of Science and Technology (19JCQN0003), the Key Project of Natural Science of the Sichuan Provincial Education Department (17ZA0063), and the Natural Science Foundation for Young Scientists of Chengdu University of Information Technology (J201704).

Abstract: Visualizing the key features of images is an important problem in computer vision that requires in-depth study. Its applications range from weak supervision in object localization tasks to understanding the hidden features of data. On medical and natural image datasets, convolutional neural network-based models have become the state of the art for visualizing the regions of the input that are important to a model's predictions, i.e., for visual explanations. However, their feature localization is not accurate. In view of the limitations of traditional neural network classifiers in localizing the key visual features of an image, we propose an effective method for visual feature attribution based on adversarial feature pairs. In the proposed method, we first construct adversarial pairs of key feature regions as the input of a generative adversarial network (GAN). This drives the generator to produce highly relevant key features, effectively filtering out redundant information and achieving accurate localization. However, it is difficult for a traditional GAN to produce images similar to real images. We therefore employ the Wasserstein distance and a gradient penalty to address this problem and accelerate convergence. Experimental results on synthetic, lung, and heart datasets show that the proposed method produces convincing results in both qualitative and quantitative evaluations.
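The abstract's combination of a Wasserstein objective with a gradient penalty follows the WGAN-GP formulation: the critic's loss is the negated Wasserstein estimate plus a term that pushes the critic's input-gradient norm toward 1. The sketch below illustrates that loss in closed form for a deliberately simple *linear* critic f(x) = w·x (an assumption made here so no automatic differentiation is needed; for a linear critic the input gradient at any interpolated point is just w). The function name, the toy data, and the weight λ = 10 are illustrative choices, not details from the paper.

```python
import numpy as np

def wgan_gp_critic_loss(w, real, fake, lam=10.0):
    """WGAN-GP critic loss for a linear critic f(x) = w . x (toy sketch).

    The critic maximizes E[f(real)] - E[f(fake)], so its loss negates that
    term. For a deep critic the gradient penalty is computed by autodiff at
    points interpolated between real and fake samples; for this linear
    critic the input gradient is w everywhere, giving a closed form.
    """
    wasserstein = np.mean(real @ w) - np.mean(fake @ w)
    grad_norm = np.linalg.norm(w)           # ||∇_x f(x)|| for a linear f
    penalty = lam * (grad_norm - 1.0) ** 2  # penalize deviation from 1
    return -wasserstein + penalty

rng = np.random.default_rng(0)
real = rng.normal(2.0, 1.0, size=(256, 4))  # stand-in "real" feature samples
fake = rng.normal(0.0, 1.0, size=(256, 4))  # stand-in generator output
w = np.array([1.0, 0.0, 0.0, 0.0])          # unit-norm critic: zero penalty
loss = wgan_gp_critic_loss(w, real, fake)
```

Because `w` has unit norm, the penalty vanishes and the loss reduces to the negated Wasserstein estimate; a critic whose gradient norm drifts from 1 pays a quadratic price, which is what keeps WGAN-GP training stable without the weight clipping of the original WGAN.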
