Abstract:
As an extension of the knowledge graph, the knowledge hypergraph can naturally represent n-ary relational facts. Using knowledge hypergraphs to model known real-world facts and to discover unknown facts through link prediction has become a research hotspot. In existing knowledge hypergraph (and knowledge graph) link prediction methods, constructing the loss function from the true and predicted labels of samples is a key step, and the negative samples used in this step strongly influence the training of the link prediction model. However, directly applying negative sampling methods designed for knowledge graph link prediction (e.g., uniform random sampling) to knowledge hypergraphs can lead to low-quality negative samples and high model complexity. Therefore, we design a generative adversarial negative sampling method, named HyperGAN, for knowledge hypergraph link prediction, which generates high-quality negative samples through adversarial training to alleviate the zero-loss problem and thereby improve the accuracy of the link prediction model. Moreover, HyperGAN does not require pre-training, which makes it more efficient than previous negative sampling methods in assisting the training of link prediction models. Comparative experiments on multiple real-world datasets show that HyperGAN outperforms the baselines in both accuracy and efficiency. In addition, a case study and a quantitative analysis further validate the ability of our method to improve the quality of negative samples.
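To make the adversarial negative sampling idea concrete, the following is a minimal sketch of a GAN-style training step for n-ary facts: a generator proposes a replacement entity for one position of a true fact, the link predictor (discriminator) is trained with the resulting negative sample, and the generator is updated via REINFORCE using the predictor's score as the reward. This is only an illustration under assumed components, not the exact HyperGAN architecture; the class names (`Predictor`, `Generator`), the multiplicative scoring function, and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes; real datasets (e.g., JF17K, WikiPeople) are much larger.
NUM_ENTITIES, NUM_RELATIONS, DIM, MAX_ARITY = 1000, 50, 64, 4


class Predictor(nn.Module):
    """Discriminator: scores an n-ary fact (relation, e_1, ..., e_n)."""
    def __init__(self):
        super().__init__()
        self.ent = nn.Embedding(NUM_ENTITIES, DIM)
        self.rel = nn.Embedding(NUM_RELATIONS, DIM)

    def score(self, r, ents):               # ents: (batch, arity)
        e = self.ent(ents).prod(dim=1)       # simple multiplicative composition
        return (self.rel(r) * e).sum(-1)     # higher = more plausible


class Generator(nn.Module):
    """Proposes replacement entities for one position of a true fact."""
    def __init__(self):
        super().__init__()
        self.ent = nn.Embedding(NUM_ENTITIES, DIM)
        self.rel = nn.Embedding(NUM_RELATIONS, DIM)

    def logits(self, r, ents):
        ctx = self.rel(r) + self.ent(ents).sum(dim=1)    # fact context
        return ctx @ self.ent.weight.t()                 # (batch, NUM_ENTITIES)


def adversarial_step(pred, gen, opt_p, opt_g, r, ents, margin=1.0):
    # Pick one argument position of the fact to corrupt.
    position = torch.randint(0, ents.size(1), (1,)).item()

    # Generator samples a candidate negative entity for that position.
    prob = F.softmax(gen.logits(r, ents), dim=-1)
    fake_ent = torch.multinomial(prob, 1).squeeze(-1)
    neg = ents.clone()
    neg[:, position] = fake_ent

    # Train the predictor with a margin loss on (positive, generated negative).
    pos_s, neg_s = pred.score(r, ents), pred.score(r, neg)
    loss_p = F.relu(margin - pos_s + neg_s).mean()
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

    # Train the generator with REINFORCE: the reward is the predictor's score
    # of the generated negative (hard negatives get higher reward).
    reward = pred.score(r, neg).detach()
    log_prob = torch.log(prob.gather(1, fake_ent.unsqueeze(1)).squeeze(1) + 1e-9)
    loss_g = -(reward * log_prob).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_p.item(), loss_g.item()


# Example usage on a random batch of n-ary facts.
pred, gen = Predictor(), Generator()
opt_p = torch.optim.Adam(pred.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
r = torch.randint(0, NUM_RELATIONS, (32,))
ents = torch.randint(0, NUM_ENTITIES, (32, MAX_ARITY))
print(adversarial_step(pred, gen, opt_p, opt_g, r, ents))
```

In this sketch, adversarial training pushes the generator toward negatives the predictor cannot trivially reject, which is what keeps the margin loss from collapsing to zero; the paper's actual model and training objective are described in the later sections.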