Abstract:
Relation reasoning, an important task in natural language processing, aims to infer possible semantic relations between two or more entities. The reasoning process typically derives new relations from known relations between entities, and the results can be widely applied to downstream tasks such as knowledge graph completion, relation extraction, and commonsense question answering. Previous studies face two main limitations. First, they are largely based on the closed-world assumption, meaning the relation types are predefined and difficult to expand. Second, even the methods that target open domains typically handle only 1-hop reasoning, which is insufficient for complex multi-hop reasoning scenarios. To address these issues, we propose and define an open-domain 2-hop relation reasoning task and construct a dataset for evaluating it. Furthermore, we introduce an open-domain 2-hop relation reasoning framework named ORANGE (open domain relation reasoning method on generative model), which comprises three key modules: entity generation, relation generation, and result aggregation. First, the entity generation module generates unknown intermediate entities. Second, the relation generation module proposes potential new relations. Finally, the result aggregation module integrates the outputs of the preceding modules to determine the final result. Experimental results demonstrate that our approach improves the average score by 10.36% over the best existing methods. Moreover, when ORANGE's three-module relation reasoning framework is employed with large language models, it surpasses the conventional in-context learning prompt strategy, achieving a 9.58% improvement in the average score.