Abstract:
Network representation learning aims to embed the network topology, vertex content, and other network information into a low-dimensional vector space, thereby providing an effective tool for network data mining, link prediction, recommendation systems, and related tasks. However, existing neural-network-based learning algorithms neglect the positional information of context vertices and ignore the semantic associations between vertices and their texts. To address these issues, this paper proposes a novel network representation learning algorithm based on neighboring-vertex optimization and relation modeling (NRNR). NRNR first uses neighboring vertices to optimize the learning procedure, so that the positions of vertices within the context window are embedded into the network representations. In addition, NRNR introduces, for the first time, relation modeling from knowledge representation learning to capture the structural features of the network, so that the text content between vertices is embedded into the network representations in the form of relational constraints. Moreover, NRNR provides a feasible and effective joint learning framework that integrates these two goals into a unified optimization objective. Experimental results show that NRNR outperforms a variety of baseline algorithms on the node classification tasks considered in this paper, and in network visualization tasks the representations learned by NRNR exhibit distinct clustering boundaries.
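For concreteness, the following is a minimal sketch, not the authors' implementation, of the kind of joint objective the abstract describes: a position-aware skip-gram loss over context vertices combined with a TransE-style relational constraint in which a relation vector derived from the text shared by two vertices ties their embeddings together. The class and parameter names, the per-position projection matrices, and the exact loss forms are illustrative assumptions.

```python
# Illustrative sketch of a joint structural + relational objective (assumed forms).
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, num_vertices, dim=128, window=5):
        super().__init__()
        self.vertex = nn.Embedding(num_vertices, dim)    # target vertex vectors
        self.context = nn.Embedding(num_vertices, dim)   # context vertex vectors
        # One projection per position in the context window, so the model can
        # distinguish where a context vertex appears relative to the target.
        self.pos_proj = nn.Parameter(torch.randn(2 * window, dim, dim) * 0.01)

    def structure_loss(self, target, context, position, negatives):
        """Position-aware skip-gram with negative sampling (assumed form)."""
        t = self.vertex(target)                                        # (B, d)
        c = torch.einsum('bd,bde->be', self.context(context),
                         self.pos_proj[position])                      # (B, d)
        pos_score = F.logsigmoid((t * c).sum(-1))
        n = self.context(negatives)                                    # (B, K, d)
        neg_score = F.logsigmoid(-(n @ t.unsqueeze(-1)).squeeze(-1)).sum(-1)
        return -(pos_score + neg_score).mean()

    def relation_loss(self, head, tail, neg_tail, relation, margin=1.0):
        """TransE-style constraint: head + relation should be close to tail,
        where `relation` is a precomputed vector built from the text shared
        by the two vertices (an assumption for this sketch)."""
        h, t, t_neg = self.vertex(head), self.vertex(tail), self.vertex(neg_tail)
        pos = (h + relation - t).norm(dim=-1)
        neg = (h + relation - t_neg).norm(dim=-1)
        return F.relu(margin + pos - neg).mean()

    def forward(self, batch, lam=0.5):
        # Unified objective: structural term plus a weighted relational term.
        return (self.structure_loss(batch['target'], batch['context'],
                                    batch['position'], batch['negatives'])
                + lam * self.relation_loss(batch['head'], batch['tail'],
                                           batch['neg_tail'], batch['relation']))
```

The weighting factor `lam` stands in for whatever trade-off the unified objective uses between the structural and relational terms; it is a placeholder, not a value taken from the paper.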