Cui Yuanning, Sun Zequn, Hu Wei. A Pre-trained Universal Knowledge Graph Reasoning Model Based on Rule Prompts[J]. Journal of Computer Research and Development, 2024, 61(8): 2030-2044. DOI: 10.7544/issn1000-1239.202440133
A Pre-trained Universal Knowledge Graph Reasoning Model Based on Rule Prompts

Funds: This work was supported by the National Natural Science Foundation of China (62272219).
  • Author Bio:

    Cui Yuanning: born in 1996. PhD candidate. His main research interests include knowledge graph, representation learning, and graph machine learning

    Sun Zequn: born in 1992. PhD, Yuxiu Young Scholar. Member of CCF. His main research interests include knowledge graph, representation learning, and entity alignment

    Hu Wei: born in 1982. PhD, professor, PhD supervisor. Senior member of CCF. His main research interests include knowledge graph, database, and intelligent software

  • Received Date: March 14, 2024
  • Revised Date: April 25, 2024
  • Available Online: May 16, 2024
  • A knowledge graph (KG) is a structured knowledge base that stores a massive amount of real-world knowledge, providing data support for numerous knowledge-driven downstream tasks. KGs often suffer from incompleteness, with many missing facts. The KG reasoning task therefore aims to infer new conclusions from known facts to complete the KG. With the research and development of knowledge engineering and its commercial applications, numerous general-purpose and domain-specific KGs have been constructed. Existing KG reasoning models mostly focus on completing a single KG and lack general reasoning capabilities. Inspired by the general capabilities of pre-trained large language models in recent years, several pre-trained universal KG reasoning models have been proposed. To address the issue that existing pre-trained models are unable to identify high-quality reasoning patterns, we introduce RulePreM, a rule-based pre-trained universal KG reasoning model that discovers and filters high-quality reasoning rules to enhance reasoning ability. The proposed model first constructs a relational IO graph based on reasoning rules and uses an encoder, RuleGNN, to encode the relations. The encoded relations are then used as prompts to encode entities in the KG. Finally, candidate entities are scored for prediction. Additionally, an attention mechanism that incorporates rule confidence is introduced to further reduce the impact of low-quality reasoning patterns. Experimental results demonstrate that the proposed model exhibits universal reasoning ability on 43 different KGs, with average performance surpassing existing supervised and pre-trained models.
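The abstract describes weighting reasoning patterns by rule confidence so that low-quality rules contribute less to prediction. The sketch below illustrates the underlying idea only, not the paper's actual RuleGNN or attention implementation: it computes an AMIE-style standard confidence for a single-atom rule body(x,y) => head(x,y) on a toy KG, then uses that confidence to weight candidate-tail scores. All names here (`rule_confidence`, `score_candidates`, the toy triples) are hypothetical illustrations.

```python
from collections import defaultdict

# Toy KG: (head entity, relation, tail entity) triples.
triples = [
    ("a", "born_in", "paris"),
    ("a", "lives_in", "paris"),
    ("b", "born_in", "lyon"),
    ("b", "lives_in", "lyon"),
    ("c", "born_in", "nice"),
    ("c", "lives_in", "paris"),
]

def rule_confidence(body_rel, head_rel, triples):
    """AMIE-style standard confidence of body_rel(x,y) => head_rel(x,y):
    the fraction of body facts whose (x,y) pair also holds for head_rel."""
    facts = set(triples)
    body_pairs = [(h, t) for h, r, t in triples if r == body_rel]
    if not body_pairs:
        return 0.0
    support = sum((h, head_rel, t) in facts for h, t in body_pairs)
    return support / len(body_pairs)

def score_candidates(query_head, rules, triples):
    """Score candidate tails by summing the confidences of the rules
    whose body fact links the query head to that tail. This plays the
    role of confidence-weighted aggregation over reasoning patterns."""
    scores = defaultdict(float)
    for body_rel, conf in rules.items():
        for h, r, t in triples:
            if h == query_head and r == body_rel:
                scores[t] += conf
    return dict(scores)

# Mine confidence for the rule born_in(x,y) => lives_in(x,y):
# 2 of the 3 born_in pairs also hold for lives_in, so confidence = 2/3.
conf = rule_confidence("born_in", "lives_in", triples)
rules = {"born_in": conf}

# Predict the tail of the query (c, lives_in, ?): the rule fires on
# (c, born_in, nice), so "nice" is scored with the rule's confidence.
print(score_candidates("c", rules, triples))
```

A higher-confidence rule pushes its candidates up the ranking, while a rule that rarely holds contributes near-zero weight, which is the same effect the confidence-aware attention in the paper is meant to achieve at the representation level.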

  • [1]
    刘知远,孙茂松,林衍凯,等. 知识表示学习研究进展[J]. 计算机研究与发展,2016,53(2):247−261 doi: 10.7544/issn1000-1239.2016.20160020

    Liu Zhiyuan, Sun Maosong, Lin Yankai, et al. Knowledge representation learning: A review[J]. Journal of Computer Research and Development, 2016, 53(2): 247−261 (in Chinese) doi: 10.7544/issn1000-1239.2016.20160020
    [2]
    刘峤,李杨,段宏,等. 知识图谱构建技术综述[J]. 计算机研究与发展,2016,53(3):582−600 doi: 10.7544/issn1000-1239.2016.20148228

    Liu Qiao, Li Yang, Duan Hong, et al. Knowledge graph construction techniques[J]. Journal of Computer Research and Development, 2016, 53(3): 582−600 (in Chinese) doi: 10.7544/issn1000-1239.2016.20148228
    [3]
    马昂,于艳华,杨胜利,等. 基于强化学习的知识图谱综述[J]. 计算机研究与发展,2022,59(8):1694−1722 doi: 10.7544/issn1000-1239.20211264

    Ma Ang, Yu Yanhua, Yang Shengli, et al. Survey of knowledge graph based on reinforcement learning[J]. Journal of Computer Research and Development, 2022, 59(8): 1694−1722 (in Chinese) doi: 10.7544/issn1000-1239.20211264
    [4]
    王萌,王昊奋,李博涵,等. 新一代知识图谱关键技术综述[J]. 计算机研究与发展,2022,59(9):1947−1965 doi: 10.7544/issn1000-1239.20210829

    Wang Meng, Wang Haofen, Li Bohan, et al. Survey on key technologies of new generation knowledge graph[J]. Journal of Computer Research and Development, 2022, 59(9): 1947−1965 (in Chinese) doi: 10.7544/issn1000-1239.20210829
    [5]
    Lehmann J, Isele R, Jakob M, et al. DBpedia-A large-scale, multilingual knowledge base extracted from Wikipedia[J]. Semantic Web, 2015, 6(2): 167−195 doi: 10.3233/SW-140134
    [6]
    Vrandecic D, Krötzsch M. Wikidata: A free collaborative knowledgebase[J]. Communications of the ACM, 2014, 57(10): 78−85 doi: 10.1145/2629489
    [7]
    Ji Shaoxiong, Pan Shirui, Cambria E, et al. A survey on knowledge graphs: Representation, acquisition, and applications[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(2): 494−514 doi: 10.1109/TNNLS.2021.3070843
    [8]
    王萌,王靖婷,江胤霖,等. 人机混合的知识图谱主动搜索[J]. 计算机研究与发展,2020,57(12):2501−2513 doi: 10.7544/issn1000-1239.2020.20200750

    Wang Meng, Wang Jingting, Jiang Yinlin, et al. Hybrid human-machine active search over knowledge graph[J]. Journal of Computer Research and Development, 2020, 57(12): 2501−2513 (in Chinese) doi: 10.7544/issn1000-1239.2020.20200750
    [9]
    Wang Quan, Mao Zhendong, Wang Bin, et al. Knowledge graph embedding: A survey of approaches and applications[J]. IEEE Transactions on Knowledge and Data Engineering, 2017, 29(12): 2724−2743 doi: 10.1109/TKDE.2017.2754499
    [10]
    Guo Qingyu, Zhang Fuzhen, Qin Chuan, et al. A survey on knowlege graph-based recommender sysrem[C]// Proc of the 39th IEEE Int conf on Data Engineering. Piscataway, NJ: IEEE,2023:3803-3804
    [11]
    Rossi A, Barbosa D, Firmani D, et al. Knowledge graph embedding for link prediction: A comparative analysis[J]. ACM Transactions on Knowledge Discovery from Data, 2021, 15(2): 1−49
    [12]
    Brown T B, Mann B, Ryder N, et al. Language models are few-shot learners[C]//Proc of Annual Conf on Neural Information Processing Systems 2020. New York: Curran Associates, 2020: 1877−1901
    [13]
    Sun Zequn, Huang Jiacheng, Lin Jinghao, et al. Joint pre-training and local re-training: Transferable representation learning on multi-source knowledge graphs[C]//Proc of the 29th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. New York: ACM, 2023: 2132−2144
    [14]
    Galkin M, Yuan Xinyu, Mostafa H, et al. Towards foundation models for knowledge graph reasoning[C]//Proc of 11th Int Conf on Learning Representations. Washington DC: OpenReview. net, 2023: 1−14
    [15]
    Chen Mingyang, Zhang Wen, Geng Yuxia, et al. Generalizing to unseen elements: A survey on knowledge extrapolation for knowledge graphs[C]//Proc of the 32nd Int Joint Conf on Artificial Intelligence. Freiburg: IJCAI, 2023: 6574−6582
    [16]
    Hamaguchi T, Oiwa H, Shimbo M, et al. Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach[C]//Proc of the 26th Int Joint Conf on Artificial Intelligence. Freiburg: IJCAI, 2017: 1802−1808
    [17]
    Wang Peifeng, Han Jialong, Li Chenliang, et al. Logic attention based neighborhood aggregation for inductive knowledge graph embedding[C]//Proc of the 33rd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 7152−7159
    [18]
    Galkin M, Denis E G, Wu Jiapeng, et al. NodePiece: Compositional and parameter-efficient representations of large knowledge graphs[C]//Proc of the 10th Int Conf on Learning Representations. Washington DC: OpenReview. net, 2022: 1−14
    [19]
    Teru K, Denis E, Hamilton W. Inductive relation prediction by subgraph reasoning[C]//Proc of the 37th Int Conf on Machine Learning. New York: PMLR, 2020: 9448−9457
    [20]
    Chen Jiajun, He Huarui, Wu Feng, et al. Topology-aware correlations between relations for inductive link prediction in knowledge graphs[C]//Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 6271−6278
    [21]
    Mai Sijie, Zheng Shuangjia, Yang Yuedong, et al. Communicative message passing for inductive relation reasoning[C]//Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 4294−4302
    [22]
    Xu Xiaohan, Zhang Peng, He Yongquan, et al. Subgraph neighboring relations infomax for inductive link prediction on knowledge graphs[C]//Proc of the 31st Int Joint Conf on Artificial Intelligence. Freiburg: IJCAI, 2022: 2341−2347
    [23]
    Zhu Zhaocheng, Zhang Zuobai, Xhonneux L P, et al. Neural Bellman-Ford networks: A general graph neural network framework for link prediction[C]//Proc of Annual Conf on Neural Information Processing Systems 2021. New York: Curran Associates, 2021: 29476−29490
    [24]
    Zhang Yongqi, Yao Quanming. Knowledge graph reasoning with relational digraph[C]//Proc of the 31st Int Conf on World Wide Web. New York: ACM, 2022: 912−924
    [25]
    Zhu Zhaocheng, Yuan Xinyu, Galkin M, et al. A*Net: A scalable path-based reasoning approach for knowledge graphs[J]. arXiv preprint, arXiv: 2206.04798, 2022
    [26]
    Zhang Yongqi, Zhou Zhanke, Yao Quanming, et al. AdaProp: Learning adaptive propagation for graph neural network based knowledge graph reasoning[C]//Proc of the 29th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. New York: ACM, 2023: 3446−3457
    [27]
    Lee J, Chung C, Whang J J. InGram: Inductive knowledge graph embedding via relation graphs[C]//Proc of the 40th Int Conf on Machine Learning. New York: PMLR, 2023: 18796−18809
    [28]
    Geng Yuxia, Chen Jiaoyan, Pan J Z, et al. Relational message passing for fully inductive knowledge graph completion[C]//Proc of the 39th Int Conf on Data Engineering. Piscataway, NJ: IEEE, 2023: 1221−1233
    [29]
    Gao Jianfei, Zhou Yangze, Ribeiro B. Double permutation equivariance for knowledge graph completion[J]. arXiv preprint, arXiv: 2302.01313, 2023.
    [30]
    Zhou Jincheng, Bevilacqua B, Ribeiro B. An OOD multi-task perspective for link prediction with new relation types and nodes[J]. arXiv preprint, arXiv: 2307.06046, 2023
    [31]
    方阳,赵翔,谭真,等. 一种改进的基于翻译的知识图谱表示方法[J]. 计算机研究与发展,2018,55(1):139−150 doi: 10.7544/issn1000-1239.2018.20160723

    Fang Yang, Zhao Xiang, Tan Zhen, et al. A Revised translation-based method for knowledge graph representation[J]. Journal of Computer Research and Development, 2018, 55(1): 139−150 (in Chinese) doi: 10.7544/issn1000-1239.2018.20160723
    [32]
    杨晓慧,万睿,张海滨,等. 基于符号语义映射的知识图谱表示学习算法[J]. 计算机研究与发展,2018,55(8):1773−1784 doi: 10.7544/issn1000-1239.2018.20180248

    Yang Xiaohui, Wan Rui, Zhang Haibin, et al. Semantical symbol mapping embedding learning algorithm for knowledge graph[J]. Journal of Computer Research and Development, 2018, 55(8): 1773−1784 (in Chinese) doi: 10.7544/issn1000-1239.2018.20180248
    [33]
    Dettmers T, Minervini P, Stenetorp P, et al. Convolutional 2D knowledge graph embeddings[C]//Proc of the 32nd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2018: 1811−1818
    [34]
    Bordes A, Usunier N, García-Durán A, et al. Translating embeddings for modeling multi-relational data[C]//Proc of Annual Conf on Neural Information Processing Systems 2013. New York: Curran Associates, 2013: 2787−2795
    [35]
    Sun Zhiqing, Deng Zhihong, Nie Jianyun, et al. RotatE: Knowledge graph embedding by relational rotation in complex space[C]//Proc of the 7th Int Conf on Learning Representations. Washington DC: OpenReview. net, 2019: 1−18
    [36]
    Vashishth S, Sanyal S, Nitin V, et al. Composition-based multi-relational graph convolutional networks[C]//Proc of the 8th Int Conf on Learning Representations. Washington DC: OpenReview. net, 2020: 1−16
    [37]
    Galárraga L A, Teflioudi C, Hose K, et al. AMIE: Association rule mining under incomplete evidence in ontological knowledge bases[C]//Proc of the 22nd Int Conf on World Wide Web. New York: ACM, 2013: 413−422
    [38]
    Yang Fan, Yang Zhilin, Cohen W W. Differentiable learning of logical rules for knowledge base reasoning[C]//Proc of Annual Conf on Neural Information Processing Systems 2017. New York: Curran Associates, 2017: 2319−2328
    [39]
    Sadeghian A, Armandpour M, Ding P, et al. DRUM: End-to-End differentiable rule mining on knowledge graphs[C]//Proc of Annual Conf on Neural Information Processing Systems 2019. New York: Curran Associates, 2019: 15321−15331
    [40]
    Meilicke C, Chekol M W, Ruffinelli D, et al. Anytime bottom-up rule learning for knowledge graph completion[C]//Proc of the 28th Int Joint Conf on Artificial Intelligence. Freiburg: IJCAI, 2019: 3137−3143
    [41]
    Devlin J, Chang Mingwei, Lee K, et al. BERT: Pre-Training of deep bidirectional transformers for language understanding[C]//Proc of the 2019 Conf the North American Chapter of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2019: 4171−4186
    [42]
    Hou Zhenyu, Liu Xiao, Cen Yukuo, et al. GraphMAE: Self-supervised masked graph autoencoders[J]. arXiv preprint, arXiv: 2205.10803, 2022
    [43]
    Hu Weihua, Liu Bowen, Gomes J, et al. Strategies for pre-training graph neural networks[C]//Proc of the 8th Int Conf on Learning Representations. Washington DC: OpenReview. net, 2020: 1−15
    [44]
    Rong Yu, Bian Yatao, Xu Tingyang, et al. Self-Supervised graph transformer on large-scale molecular data[C]//Proc of Annual Conf on Neural Information Processing Systems 2020. New York: Curran Associates, 2020: 1−13
    [45]
    Sun Fanyun, Hoffmann J, Verma V, et al. InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization[C]//Proc of the 8th Int Conf on Learning Representations. Washington DC: OpenReview. net, 2020: 1−13
    [46]
    Velickovic P, Fedus W, Hamilton W L, et al. Deep graph infomax[C]//Proc of the 7th Int Conf on Learning Representations. Washington DC: OpenReview. net, 2019: 1−13
    [47]
    Fang Taoran, Zhang Yunchao, Yang Yang, et al. Prompt tuning for graph neural networks[J]. arXiv preprint, arXiv: 2209.15240, 2022
    [48]
    Liu Zemin, Yu Xingtong, Fang Yuan, et al. GraphPrompt: Unifying pre-training and downstream tasks for graph neural networks[C]//Proc of the 32nd Int Conf on World Wide Web. New York: ACM, 2023: 417−428
    [49]
    Gong Chenghua, Li Xiang, Yu Jianxiang, et al. Prompt tuning for multi-view graph contrastive learning[J]. arXiv preprint, arXiv: 2310.10362, 2023
    [50]
    Zhu Yun, Guo Jianhao, Tang Siliang. SGL-PT: A strong graph learner with graph prompt tuning[J]. arXiv preprint, arXiv: 2302.12449, 2023
    [51]
    Shirkavand R, Huang Heng. Deep prompt tuning for graph transformers[J]. arXiv preprint, arXiv: 2309.10131, 2023
    [52]
    Ma Yihong, Yan Ning, Li Jiayu, et al. HetGPT: Harnessing the power of prompt tuning in pre-trained heterogeneous graph neural networks[J]. arXiv preprint, arXiv: 2310.15318, 2023
    [53]
    Ge Qingqing, Zhao Zeyuan, Liu Yiding, et al. Enhancing graph neural networks with structure-based prompt[J]. arXiv preprint, arXiv: 2310.17394, 2023
    [54]
    Sun Xiangguo, Cheng Hong, Li Jia, et al. All in one: Multi-task prompting for graph neural networks[C]//Proc of the 29th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. New York: ACM, 2023: 2120−2131
    [55]
    Chen Mouxiang, Liu Zemin, Liu Chenghao, et al. ULTRA-DP: ULTRA-unifying graph pre-training with multi-task graph dual prompt[J]. arXiv preprint, arXiv: 2310.14845, 2023
    [56]
    Sun Xiangguo, Zhang Jiawen, Wu Xixi, et al. Graph prompt learning: A comprehensive survey and beyond[J]. arXiv preprint, arXiv: 2311.16534, 2023
    [57]
    Chen Xuelu, Chen Muhao, Fan Changjun, et al. Multilingual knowledge graph completion via ensemble knowledge transfer[C]//Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 3227−3238
    [58]
    Singh H, Chakrabarti S, Jain P, et al. Multilingual knowledge graph completion with joint relation and entity alignment[C]//Proc of the 3rd Conf on Automated Knowledge Base Construction. Virtual: Online, 2021: 1−7
    [59]
    Huang Zijie, Li Zheng, Jiang Haoming, et al. Multilingual knowledge graph completion with self-supervised adaptive graph alignment[C]//Proc of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2022: 474−485
    [60]
    Chakrabarti S, Singh H, Lohiya S, et al. Joint completion and alignment of multilingual knowledge graphs[C]//Proc of the 2022 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2022: 11922−11938
    [61]
    Sun Zequn, Zhang Qingheng, Hu Wei, et al. A benchmarking study of embedding-based entity alignment for knowledge graphs[J]. Proceedings of the VLDB Endowment, 2020, 13(11): 2326−2340
    [62]
    Chen Muhao, Tian Yingtao, Yang Mohan, et al. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment[C]//Proc of the 26th Int Joint Conf on Artificial Intelligence. Freiburg: IJCAI, 2017: 1511−1517
    [63]
    Wu Yuting, Liu Xiao, Feng Yansong, et al. Relation-aware entity alignment for heterogeneous knowledge graphs[C]//Proc of the 28th Int Joint Conf on Artificial Intelligence. Freiburg: IJCAI, 2019: 5278−5284
    [64]
    Mao Xin, Wang Wenting, Wu Yuanbin, et al. Boosting the speed of entity alignment 10×: Dual attention matching network with normalized hard sample mining[C]//Proc of the 30th Int Conf on World Wide Web. New York: ACM, 2021: 821−832
    [65]
    Sun Zequn, Wang Chengming, Hu Wei, et al. Knowledge graph alignment network with gated multi-hop neighborhood aggregation[C]//Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 222−229
    [66]
    Sun Zequn, Hu Wei, Zhang Qingheng, et al. Bootstrapping entity alignment with knowledge graph embedding[C]//Proc of the 27th Int Joint Conf on Artificial Intelligence. Freiburg: IJCAI, 2018: 4396−4402
    [67]
    Wang Zhichun, Lv Qingsong, Lan Xiaohan, et al. Cross-lingual knowledge graph alignment via graph convolutional networks[C]//Proc of the 2018 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2018: 349−357
    [68]
    Zhang Wen, Zhu Yushan, Chen Mingyang, et al. Structure pretraining and prompt tuning for knowledge graph transfer[C]//Proc of the 32nd Int Conf on World Wide Web. New York: ACM, 2023: 2581−2590
    [69]
    Galkin M, Berrendorf M, Hoyt C T. An open challenge for inductive link prediction on knowledge graphs[J]. arXiv preprint, arXiv: 2203.01520, 2022
    [70]
    Toutanova K, Chen D. Observed versus latent features for knowledge base and text inference[C]//Proc of the 3rd Workshop on Continuous Vector Space Models and Their Compositionality. Stroudsburg, PA: ACL, 2015: 57−66
    [71]
    Xiong Wenhan, Hoang Thien, Wang W Y. Deeppath: A reinforcement learning method for knowledge graph reasoning[C]//Proc of the 2017 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2017: 564−573
    [72]
    Mahdisoltani F, Biega J, Suchanek F M. YAGO3: A knowledge base from multilingual Wikipedias[C]//Proc of the 7th Biennial Conf on Innovative Data Systems Research (CIDR). https://www.cidrdb.org/cidr2015/papers/CIDR15_paperl.pdf
    [73]
    Safavi T, Koutra D. CoDEx: A comprehensive knowledge graph completion benchmark[C]//Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 8328−8350
    [74]
    Lv Xin, Han Xu, Hou Lei, et al. Dynamic anticipation and completion for multi-hop reasoning over sparse knowledge graph[C]//Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 5694−5703
    [75]
    Chen Yihong, Minervini P, Riedel S, et al. Relation prediction as an auxiliary training objective for improving multi-relational graph representations[C]//Proc of the 3rd Conf on Automated Knowledge Base Construction. Virtual: Proceeding, 2021: 1−21
    [76]
    Ding Boyang, Wang Quan, Wang Bin, et al. Improving knowledge graph embedding using simple constraints[C]//Proc of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2018: 110−121
    [77]
    Malaviya C, Bhagavatula C, Bosselut A, et al. Commonsense knowledge base completion with structural and semantic context[C]//Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 2925−2933
    [78]
    Himmelstein D S, Lizee A, Hessler C, et al. Systematic integration of biomedical knowledge prioritizes drugs for repurposing[J]. eLife, 2017, 6: e26726 doi: 10.7554/eLife.26726
    [79]
    Chen Yihong, Minervini P, Riedel S, et al. Relation prediction as an auxiliary training objective for improving multi-relational graph representations[C]//Proc of the 3rd Conf on Automated Knowledge Base Construction. Virtual: Proceeding, 2021: 1−13
    [80]
    He Tao, Liu Ming, Cao Yixin, et al. Exploring & exploiting high-order graph structure for sparse knowledge graph completion[J]. arXiv preprint, arXiv: 2306.17034, 2023
    [81]
    Guo Jia, Kok S. BiQUE: Biquaternionic embeddings of knowledge graphs[C]//Proc of the 2021 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2021: 8338−8351