    A Pre-trained Universal Knowledge Graph Reasoning Model Based on Rule Prompts

    • Abstract: A knowledge graph (KG) is a graph database that stores massive amounts of real-world knowledge and provides data support for numerous knowledge-driven downstream tasks. KGs are often incomplete and miss many facts, so the KG reasoning task infers new conclusions from known facts to complete the KG. With the research and development of knowledge engineering and its commercial applications, numerous general and domain-specific KGs have been constructed. However, most existing KG reasoning methods target the completion of a single KG and lack universal reasoning capability. Inspired by the general capabilities of pre-trained large language models, several pre-trained universal KG reasoning models have recently been proposed. To address the inability of existing pre-trained models to identify high-quality reasoning patterns, we propose RulePreM, a pre-trained universal KG reasoning model based on rule prompts, which selects and exploits high-quality reasoning rules to improve reasoning on KGs. The model first constructs a relation IO graph from the reasoning rules and encodes relations with an encoder, RuleGNN; the relation encodings are then used as prompts to encode the entities in the KG, and candidate entities are finally scored for prediction. An attention mechanism that incorporates rule confidence is further introduced to reduce the impact of low-quality reasoning patterns. Experimental results show that the proposed model exhibits strong universal reasoning ability on 43 KGs under different settings, with average performance surpassing both existing supervised models and pre-trained models.
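      The abstract does not give the exact formulation of the rule-confidence attention, so the following is only a minimal PyTorch sketch of the general idea: attention logits over rule-connected relations are shifted by the log-confidence of the connecting rule, so low-quality reasoning patterns receive smaller weights. All names here (RuleConfidenceAttention, h_neighbors, rule_conf) are hypothetical illustrations, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RuleConfidenceAttention(nn.Module):
    """Sketch of attention over rule-connected relations, biased by rule confidence."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, h_rel, h_neighbors, rule_conf):
        # h_rel:       (dim,)    embedding of the query relation
        # h_neighbors: (n, dim)  embeddings of relations linked to it by rules
        # rule_conf:   (n,)      confidence in (0, 1] of each linking rule
        q = self.query(h_rel)                                   # (dim,)
        k = self.key(h_neighbors)                               # (n, dim)
        logits = k @ q / (q.shape[-1] ** 0.5)                   # scaled dot-product scores, (n,)
        logits = logits + torch.log(rule_conf.clamp_min(1e-6))  # down-weight low-confidence rules
        attn = F.softmax(logits, dim=-1)                        # (n,)
        return attn @ h_neighbors                               # aggregated message, (dim,)


# Usage sketch with random tensors.
dim = 64
att = RuleConfidenceAttention(dim)
msg = att(torch.randn(dim), torch.randn(5, dim), torch.rand(5))
```

      Adding log-confidence before the softmax is equivalent to multiplying each attention weight by the rule's confidence and renormalizing, which is one common way to fold an external reliability score into attention.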

       
