Citation: Li Ping, Song Shuhan, Zhang Yuan, Cao Huawei, Ye Xiaochun, Tang Zhimin. HSEGRL: A Hierarchical Self-Explainable Graph Representation Learning Model[J]. Journal of Computer Research and Development, 2024, 61(8): 1993-2007. DOI: 10.7544/issn1000-1239.202440142
In recent years, with the wide application of graph neural networks (GNNs) in fields such as social networks, information science, chemistry, and biology, the interpretability of GNNs has attracted broad attention. However, prevailing explanation methods fail to capture hierarchical explanation information, and such hierarchical information has not been exploited to improve the classification accuracy of graph tasks. To address this issue, we propose a hierarchical self-explainable graph representation learning model called HSEGRL. By discovering hierarchical information in the graph structure, the model predicts graph labels while outputting hierarchical self-explanations. Specifically, we design interpreters, the basic units for extracting hierarchical information. Each interpreter consists of an encoder that extracts node features, a pooling layer that perceives explanation-aware subgraphs at each level, and a decoder that refines higher-order explanation information. We refine the pooling mechanism with an explanation-aware strategy that selects subgraphs hierarchically by evaluating both topological importance and feature importance, thereby coupling hierarchical self-explanation with graph classification. HSEGRL is a functionally comprehensive and transferable self-explainable graph representation learning framework that hierarchically accounts for both the model's topological information and node feature information. Extensive experiments on molecular, protein, and social network datasets demonstrate that HSEGRL surpasses existing self-explainable graph neural network models and general graph neural network models in graph classification performance. Furthermore, visualizations of the layered explanation results substantiate the credibility of the proposed explanation method.
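The explanation-aware pooling step described above can be illustrated with a minimal sketch: each node receives an importance score combining a topology term (here, node degree) and a feature term (here, the L1 norm of its feature vector), and the top-scoring nodes are kept as the explanation subgraph for the next level. The function name, the degree-based topology score, and the `alpha` weighting are illustrative assumptions, not the authors' implementation.

```python
def explanation_aware_pool(features, edges, ratio=0.5, alpha=0.5):
    """Hypothetical sketch of explanation-aware top-k pooling.

    features: list of per-node feature vectors; edges: list of (u, v) pairs.
    Returns the indices of the retained nodes and the induced edge list.
    """
    n = len(features)
    # Topology importance: node degree (a simple stand-in).
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Feature importance: L1 norm of each node's feature vector.
    feat_score = [sum(abs(x) for x in f) for f in features]
    # Combine both terms; alpha trades topology against features.
    score = [alpha * degree[i] + (1 - alpha) * feat_score[i] for i in range(n)]
    # Keep the top ratio * n nodes as the explanation subgraph.
    k = max(1, int(n * ratio))
    keep = sorted(range(n), key=lambda i: score[i], reverse=True)[:k]
    keep_set = set(keep)
    sub_edges = [(u, v) for u, v in edges if u in keep_set and v in keep_set]
    return keep, sub_edges

# Toy graph: 4 nodes with 2-dim features, 4 edges.
nodes = [[1.0, 0.2], [0.1, 0.1], [2.0, 1.5], [0.3, 0.4]]
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
kept, sub = explanation_aware_pool(nodes, edges, ratio=0.5)
```

In a hierarchical stack, the retained subgraph would be re-encoded and pooled again at the next level, yielding one explanation subgraph per level.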
[1] Yang Donghua, He Wang, Wang Jinbao. Survey on knowledge graph embedding learning[J]. Journal of Software, 2022, 33(9): 3370−3390 (in Chinese)
[2] Zhang Xiongtao, Zhu Na, Guo Yuhui. A survey on session-based recommendation methods with graph neural network[J]. Data Analysis and Knowledge Discovery, 2024, 8(2): 1−20 (in Chinese)
[3] Zhang Wei, Li Zhang, Wang Jianyong. A user trajectory identification model with fusion of spatio-temporal behavior and social relation[J]. Chinese Journal of Computers, 2021, 44(11): 2173−2188 (in Chinese). doi: 10.11897/SP.J.1016.2021.02173
[4] Chen Fenxiao, Wang Y, Wang Bin, et al. Graph representation learning: A survey[J]. APSIPA Transactions on Signal and Information Processing, 2020, 9: e15, 1−21
[5] Xu Bingbing, Cen Keyan, Huang Junjie, et al. A survey on graph neural network[J]. Chinese Journal of Computers, 2020, 43(5): 755−780 (in Chinese)
[6] Ma Shuai, Liu Jianwei, Zuo Xin. Survey on graph neural network[J]. Journal of Computer Research and Development, 2022, 59(1): 47−80 (in Chinese). doi: 10.7544/issn1000-1239.20201055
[7] Kakkad J, Jannu J, Sharma K, et al. A survey on explainability of graph neural networks[J]. arXiv preprint, arXiv: 2306.01958, 2023
[8] Ying Zhitao, Bourgeois D, You Jiaxuan, et al. GNNExplainer: Generating explanations for graph neural networks[C]//Proc of the 33rd Int Conf on Neural Information Processing Systems. New York: Curran Associates Inc, 2019: 9244−9255
[9] Luo Dongsheng, Cheng Wei, Xu Dongkuan, et al. Parameterized explainer for graph neural network[J]. Advances in Neural Information Processing Systems, 2020, 33: 19620−19631
[10] Dai Enyan, Wang Suhang. Towards self-explainable graph neural network[C]//Proc of the 30th ACM Int Conf on Information & Knowledge Management. New York: ACM, 2021: 302−311
[11] Sui Yongduo, Wang Xiang, Wu Jiancan, et al. Causal attention for interpretable and generalizable graph classification[C]//Proc of the 28th ACM SIGKDD Conf on Knowledge Discovery and Data Mining. New York: ACM, 2022: 1696−1705
[12] Lin Wanyu, Lan Hao, Wang Hao, et al. OrphicX: A causality-inspired latent variable model for interpreting graph neural networks[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 13729−13738
[13] Lin Wanyu, Lan Hao, Li Baochun. Generative causal explanations for graph neural networks[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2021: 6666−6679
[14] Pope P E, Kolouri S, Rostami M, et al. Explainability methods for graph convolutional neural networks[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 10772−10781
[15] Huang Qiang, Yamada M, Tian Yuan, et al. GraphLime: Local interpretable model explanations for graph neural networks[J]. IEEE Transactions on Knowledge and Data Engineering, 2022, 35(7): 6968−6972
[16] Feng Aosong, You Chenyu, Wang Shiqiang, et al. KerGNNs: Interpretable graph neural networks with graph kernels[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2022: 6614−6622
[17] Han Xuanyuan, Barbiero P, Georgiev D, et al. Global concept-based interpretability for graph neural networks via neuron analysis[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2023: 10675−10683
[18] Khoshraftar S, An Aijun. A survey on graph representation learning methods[J]. ACM Transactions on Intelligent Systems and Technology, 2024, 15(1): 1−55
[19] Gao Hongyang, Ji Shuiwang. Graph U-nets[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2019: 2083−2092
[20] Veličković P, Cucurull G, Casanova A, et al. Graph attention networks[C]//Proc of Int Conf on Learning Representations, 2018
[21] Cangea C, Veličković P, Jovanović N, et al. Towards sparse hierarchical graph classifiers[J]. arXiv preprint, arXiv: 1811.01287, 2018
[22] Grattarola D, Zambon D, Bianchi F, et al. Understanding pooling in graph neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 35(2): 2708−2718
[23] Ying Zhitao, You Jiaxuan, Morris C, et al. Hierarchical graph representation learning with differentiable pooling[C]//Proc of the 32nd Int Conf on Neural Information Processing Systems. New York: Curran Associates Inc, 2018: 1−10
[24] Lee J, Lee I, Kang J. Self-attention graph pooling[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2019: 3734−3743
[25] Tang Haoteng, Ma Guixiang, He Lifang, et al. CommPool: An interpretable graph pooling framework for hierarchical graph representation learning[J]. Neural Networks, 2021, 143: 669−677
[26] Gilmer J, Schoenholz S S, Riley P F, et al. Neural message passing for quantum chemistry[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2017: 1263−1272
[27] Kipf T N, Welling M. Semi-supervised classification with graph convolutional networks[C]//Proc of Int Conf on Learning Representations. 2016 [2024−05−18]. https://openreview.net/forum?id=SJU4ayYgl
[28] Xu Keyulu, Hu Weihua, Leskovec J, et al. How powerful are graph neural networks?[C]//Proc of Int Conf on Learning Representations. 2019 [2024−05−18]. https://openreview.net/forum?id=ryGs6iA5Km
[29] Duval A, Malliaros F. Higher-order clustering and pooling for graph neural networks[C]//Proc of the 31st ACM Int Conf on Information & Knowledge Management. New York: ACM, 2022: 426−435
[30] Debnath A K, Lopez de Compadre R L, Debnath G, et al. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity[J]. Journal of Medicinal Chemistry, 1991, 34(2): 786−797. doi: 10.1021/jm00106a046
[31] Wale N, Watson I A, Karypis G. Comparison of descriptor spaces for chemical compound retrieval and classification[J]. Knowledge and Information Systems, 2008, 14(1): 347−375
[32] Dobson P D, Doig A J. Distinguishing enzyme structures from non-enzymes without alignments[J]. Journal of Molecular Biology, 2003, 330(4): 771−783. doi: 10.1016/S0022-2836(03)00628-4
[33] Borgwardt K M, Ong C S, Schönauer S, et al. Protein function prediction via graph kernels[J]. Bioinformatics, 2005, 21(Suppl 1): i47−i56
[34] Yanardag P, Vishwanathan S V N. Deep graph kernels[C]//Proc of the 21st ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. New York: ACM, 2015: 1365−1374
[35] Zhang Muhan, Cui Zhicheng, Neumann M, et al. An end-to-end deep learning architecture for graph classification[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2018: 32−40
[36] Bouritsas G, Frasca F, Zafeiriou S, et al. Improving graph neural network expressivity via subgraph isomorphism counting[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(1): 657−668
[37] Zhao Lingxiao, Jin Wei, Akoglu L, et al. From stars to subgraphs: Uplifting any GNN with local structure awareness[J]. arXiv preprint, arXiv: 2110.03753, 2021
[38] Zhang Zaixi, Liu Qi, Wang Hao, et al. ProtGNN: Towards self-explaining graph neural networks[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2022: 9127−9135
[39] Yu Junchi, Cao Jie, He Ran. Improving subgraph recognition with variational graph information bottleneck[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 19396−19405
[40] Yu Junchi, Xu Tingyang, Rong Yu, et al. Graph information bottleneck for subgraph recognition[J]. arXiv preprint, arXiv: 2010.05563, 2020
[41] Yuan Hao, Yu Haiyang, Gui Shurui, et al. Explainability in graph neural networks: A taxonomic survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(5): 5782−5799
[42] Yan Xifeng, Cheng Hong, Han Jiawei, et al. Mining significant graph patterns by leap search[C]//Proc of the 2008 ACM SIGMOD Int Conf on Management of Data. New York: ACM, 2008: 433−444