Abstract:
In recent years, numerous research efforts have been devoted to knowledge graph embedding learning, which aims to encode the entities and relations of a knowledge graph in continuous low-dimensional vector spaces. The learned embeddings have been successfully used to alleviate the computational inefficiency of large-scale knowledge graphs. However, most existing embedding models consider only the structural information of the knowledge graph, even though knowledge graphs also contain abundant contextual and literal information that could be exploited to learn better representations. In this paper, we address this problem and propose a rule-guided joint embedding learning model that integrates contextual and literal information into the embedding representations of entities and relations based on graph convolutional networks. In particular, for the convolutional encoding of contextual information, we measure the importance of each piece of contextual information by computing its confidence and relatedness metrics. For the confidence metric, we define a simple yet effective rule and propose a rule-guided computation method; for the relatedness metric, we propose a computation method based on the representations of the literal information. Extensive experiments on two benchmark datasets demonstrate the effectiveness of the proposed model.