ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2021, Vol. 58 ›› Issue (5): 1075-1091.doi: 10.7544/issn1000-1239.2021.20200935

Special Issue: 2021 Special Issue on Artificial Intelligence Security and Privacy Protection Technologies


GRD-GNN: Graph Reconstruction Defense for Graph Neural Network

Chen Jinyin1,2, Huang Guohan2, Zhang Dunjie2, Zhang Xuhong3, Ji Shouling4   

  1. 1(Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou 310023); 2(College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023); 3(College of Control Science and Engineering, Zhejiang University, Hangzhou 310007); 4(College of Computer Science and Technology, Zhejiang University, Hangzhou 310007)
  • Online: 2021-05-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (62072406), the Natural Science Foundation of Zhejiang Province of China (LY19F020025), and the Key Laboratory of the Public Security Ministry Open Project in 2020 (2020DSJSYS001).

Abstract: In recent years, graph neural networks (GNNs) have been widely applied in daily life, such as in e-commerce, social media, and biology, owing to their satisfying performance in graph representation learning. However, research has shown that GNNs are vulnerable to carefully crafted adversarial attacks, which cause the GNN model to fail. It is therefore essential to improve the robustness of graph neural networks. Several defense methods have been proposed for this purpose, but reducing the attack success rate of adversarial attacks while preserving the performance of the GNN's main task remains a challenge. Through observation of various adversarial samples, we conclude that, compared with clean edges, the node pairs connected by adversarial edges exhibit both low structural similarity and low node feature similarity. Based on this observation, we propose a graph reconstruction defense for graph neural networks named GRD-GNN. Considering both graph structure and node features, the number of common neighbors and the similarity of nodes are used to guide the graph reconstruction. GRD-GNN not only removes adversarial edges but also adds edges that benefit GNN performance, thereby enhancing the graph structure. Finally, comprehensive experiments on three real-world datasets verify the state-of-the-art defense performance of the proposed GRD-GNN compared with baselines. We also present explanations of the experimental results and an analysis of the method's effectiveness.
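The abstract's core intuition (adversarial edges connect node pairs with few common neighbors and dissimilar features) can be sketched as a simple preprocessing step. This is an illustrative sketch only, not the authors' implementation: the Jaccard/cosine scoring, the single `sim_threshold` parameter, and the pruning-only behavior (the paper also adds beneficial edges) are assumptions for illustration.

```python
import numpy as np

def reconstruct_graph(adj, features, sim_threshold=0.1):
    """Prune edges whose endpoints have both low structural similarity
    (Jaccard over neighborhoods) and low feature similarity (cosine).

    adj: (n, n) symmetric 0/1 adjacency matrix
    features: (n, d) node feature matrix
    """
    adj = adj.copy()
    rows, cols = np.nonzero(np.triu(adj, k=1))  # each undirected edge once
    for i, j in zip(rows, cols):
        ni, nj = adj[i] > 0, adj[j] > 0
        common = np.logical_and(ni, nj).sum()
        union = np.logical_or(ni, nj).sum()
        struct_sim = common / union if union else 0.0
        denom = np.linalg.norm(features[i]) * np.linalg.norm(features[j])
        feat_sim = features[i] @ features[j] / denom if denom else 0.0
        # An edge that scores low on BOTH criteria looks adversarial
        if struct_sim < sim_threshold and feat_sim < sim_threshold:
            adj[i, j] = adj[j, i] = 0
    return adj
```

In this sketch, a node pair sharing no neighbors and having orthogonal features loses its edge, while edges inside a feature-homogeneous clique are kept; the reconstructed adjacency would then be fed to a standard GNN for node classification.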

Key words: graph reconstruction, adversarial attack, graph neural network, graph representation learning, node classification
