ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2021, Vol. 58 ›› Issue (5): 1075-1091. doi: 10.7544/issn1000-1239.2021.20200935

Special Topic: 2021 Special Issue on Artificial Intelligence Security and Privacy Protection Technology

• Information Security •


GRD-GNN: Graph Reconstruction Defense for Graph Neural Network

Chen Jinyin1,2, Huang Guohan2, Zhang Dunjie2, Zhang Xuhong3, Ji Shouling4   

  1 (Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou 310023); 2 (College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023); 3 (College of Control Science and Engineering, Zhejiang University, Hangzhou 310007); 4 (College of Computer Science and Technology, Zhejiang University, Hangzhou 310007) (chenjinyin@zjut.edu.cn)
  • Online: 2021-05-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (62072406), the Natural Science Foundation of Zhejiang Province of China (LY19F020025), and the 2020 Open Project of the Key Laboratory of the Ministry of Public Security (2020DSJSYS001).


Abstract: In recent years, graph neural networks (GNNs) have achieved strong performance in graph representation learning and have been widely applied in daily life, e.g., in e-commerce, social media, and biology. However, research has shown that GNNs are vulnerable to carefully crafted adversarial attacks that cause the model to fail, so it is essential to improve the robustness of GNNs. Several defense methods have been proposed, but reducing the attack success rate of adversarial attacks while preserving the performance of the GNN's main task remains a challenge. By observing adversarial examples produced by different attacks, we find that the node pairs connected by adversarial edges typically exhibit low structural similarity and low node-feature similarity compared with clean edges. Based on this observation, we propose a graph reconstruction defense for graph neural networks named GRD-GNN. Considering both graph structure and node features, it uses two similarity metrics, the number of common neighbors and node similarity, to detect adversarial edges and guide graph reconstruction: the reconstructed graph removes adversarial edges and adds edges that strengthen key structural features, thereby achieving an effective defense. Comprehensive experiments on three real-world datasets verify that GRD-GNN achieves the best defense performance compared with baseline methods without degrading classification on clean graph data. In addition, visualization is used to interpret the defense results and analyze the effectiveness of the method.
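The core idea in the abstract (scoring each edge by its endpoints' common-neighbor count and feature similarity, then pruning suspicious edges and adding edges between highly similar nodes) can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, thresholds (`cn_min`, `sim_min`, `sim_add`), and the dense-matrix formulation are illustrative assumptions.

```python
import numpy as np

def grd_reconstruct(adj, feats, cn_min=1, sim_min=0.1, sim_add=0.9):
    """Illustrative similarity-guided graph reconstruction (not the paper's code).

    adj:   (n, n) symmetric 0/1 adjacency matrix
    feats: (n, d) node feature matrix
    An existing edge is treated as adversarial and removed when its endpoints
    share fewer than cn_min common neighbors AND have feature cosine similarity
    below sim_min; non-adjacent pairs with similarity above sim_add are connected.
    """
    # Common-neighbor counts: (A @ A)[i, j] is the number of shared neighbors.
    common = adj @ adj
    # Cosine similarity between node feature vectors.
    unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim = unit @ unit.T

    new_adj = adj.copy()
    # Remove edges with both low structural and low feature similarity.
    suspicious = (adj == 1) & (common < cn_min) & (sim < sim_min)
    new_adj[suspicious] = 0
    # Add edges between highly similar, currently unconnected node pairs.
    candidate = (adj == 0) & (sim > sim_add)
    np.fill_diagonal(candidate, False)
    new_adj[candidate] = 1
    return new_adj

# Toy example: a feature-homogeneous triangle (nodes 0-2) plus one edge to an
# outlier node 3 with orthogonal features and no common neighbors.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
feats = np.array([[1., 0.], [1., 0.], [1., 0.], [0., 1.]])
clean = grd_reconstruct(adj, feats)
# The suspicious edge (0, 3) is removed; the triangle edges survive.
```

Because `common` and `sim` are both symmetric, the pruning and addition masks are symmetric as well, so the reconstructed adjacency stays a valid undirected graph.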

Key words: graph reconstruction, adversarial attack, graph neural network, graph representation learning, node classification
