Abstract:
In recent years, graph neural networks (GNNs) have been widely applied in domains such as e-commerce, social media, and biology, owing to their strong performance in graph representation learning. However, research has shown that GNNs are vulnerable to carefully crafted adversarial attacks that can cause the model to fail, so improving the robustness of GNNs is essential. Several defense methods have been proposed, yet reducing the success rate of adversarial attacks while preserving the performance of the GNN on its main task remains a challenge. By observing a variety of adversarial examples, we find that node pairs connected by adversarial edges exhibit lower structural similarity and lower node feature similarity than those connected by clean edges. Based on this observation, we propose a graph reconstruction defense for graph neural networks, named GRD-GNN. Considering both graph structure and node features, GRD-GNN uses the number of common neighbors and the similarity of node features to guide the graph reconstruction. It not only removes adversarial edges but also adds edges that benefit GNN performance, thereby strengthening the graph structure. Finally, comprehensive experiments on three real-world datasets verify that the proposed GRD-GNN achieves state-of-the-art defense performance compared with baselines. We also provide explanations of the experimental results and an analysis of the method's effectiveness.