Abstract:
In recent years, with the wide application of graph neural networks (GNNs) in fields such as social networks, information science, chemistry, and biology, the interpretability of GNNs has attracted widespread attention. However, prevailing explanation methods fail to capture hierarchical explanation information, and this hierarchical information has not been exploited to improve classification accuracy on graph tasks. To address this issue, we propose HSEGRL (Hierarchical Self-Explanation Graph Representation Learning), a model that discovers hierarchical information in the graph structure and predicts graph labels while outputting hierarchical self-explanations. Specifically, we design interpreters, the basic units for extracting hierarchical information. Each interpreter consists of an encoder that extracts node features, a pooling layer that selects explanation-aware subgraphs at its level of the hierarchy, and a decoder that refines higher-order explanation information. We refine the pooling mechanism with an explanation-aware strategy that selects subgraphs hierarchically according to the importance of both topology and node features, thereby coupling hierarchical self-explanation with graph classification. HSEGRL is a functionally comprehensive and transferable self-explaining graph representation learning framework that accounts for topological and node-feature information at every level of the hierarchy. Extensive experiments on molecular, protein, and social-network datasets demonstrate that HSEGRL outperforms both state-of-the-art self-explaining GNNs and standard GNNs in graph classification. Furthermore, visualizations of the hierarchical explanations substantiate the credibility of the proposed explanation method.
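To make the interpreter design concrete, the sketch below shows one encoder-pooling-decoder unit. This is a minimal conceptual illustration assuming PyTorch Geometric; the class name `Interpreter`, the choice of `GCNConv` for the encoder and decoder, and `TopKPooling` as a stand-in for the paper's explanation-aware pooling are all assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch of one "interpreter" unit: encoder -> explanation-aware
# pooling -> decoder. GCNConv and TopKPooling are illustrative stand-ins
# (assumptions), not the components defined in the paper.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, TopKPooling

class Interpreter(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, ratio: float = 0.5):
        super().__init__()
        self.encoder = GCNConv(in_dim, hid_dim)        # extracts node features
        self.pool = TopKPooling(hid_dim, ratio=ratio)  # keeps high-scoring nodes
        self.decoder = GCNConv(hid_dim, hid_dim)       # refines higher-order info

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.encoder(x, edge_index))
        # The retained subgraph (indexed by `perm`, scored by `score`) can be
        # read off as this layer's explanation.
        x, edge_index, _, batch, perm, score = self.pool(x, edge_index, batch=batch)
        x = torch.relu(self.decoder(x, edge_index))
        return x, edge_index, batch, perm, score
```

Stacking several such units would yield progressively coarser subgraphs, each interpretable as an explanation at its own level of granularity.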