Learning graph embeddings is a crucial research problem in statistical relational learning and knowledge graph population, and has become important for the construction and application of knowledge graphs in recent years. In this paper, we present a comparative study of prevalent knowledge-representation-based reasoning models, with a detailed discussion of the potential problems inherent in their basic assumptions. We then propose a semantic symbol-sensory-projection neural network model for learning graph embeddings. Its basic idea is to use a recurrent neural network to encode the compositional representation of symbol strings (compositions of entities and relations) onto their target grounded symbols, according to the relational facts in the knowledge graph. In addition, we introduce the inverses of relations into the system to handle the symmetric/asymmetric properties of relations, which makes the proposed model more adaptable than existing solutions to different types of reasoning tasks on a variety of homogeneous and heterogeneous networks. The proposed model also scales to large knowledge graphs. Experimental results on benchmark datasets show that the proposed model achieves state-of-the-art performance on both knowledge base completion benchmarks and graph-based multi-label classification tasks.
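The core idea above can be illustrated with a minimal sketch: a recurrent network consumes a symbol string (an entity followed by one or more relations) and its final hidden state is matched against candidate target entities. This is an illustrative toy in plain numpy, not the authors' implementation; all names, dimensions, and the Elman-style recurrence are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding / hidden dimension (assumed for illustration)

# Toy vocabulary: a few entities, plus relations and their inverses.
# The inverse relations (e.g. "r0_inv") stand in for the inverse-image
# mechanism described above for symmetric/asymmetric relations.
entities = {f"e{i}": rng.normal(size=d) for i in range(4)}
relations = {name: rng.normal(size=d)
             for name in ["r0", "r0_inv", "r1", "r1_inv"]}

# Simple (Elman-style) recurrent parameters, randomly initialized;
# in a real system these would be trained on the relational data.
W_in = rng.normal(size=(d, d)) * 0.1
W_h = rng.normal(size=(d, d)) * 0.1


def encode(path):
    """Encode a path [entity, rel, rel, ...] into a vector with an RNN."""
    h = np.zeros(d)
    for token in path:
        x = entities.get(token, relations.get(token))
        h = np.tanh(W_in @ x + W_h @ h)
    return h


def score(path, target):
    """Similarity between the encoded path and a candidate target entity."""
    return float(encode(path) @ entities[target])


# Rank candidate grounded symbols for the composed query (e0, r0, r1, ?).
query = ["e0", "r0", "r1"]
ranking = sorted(entities, key=lambda e: -score(query, e))
```

Training would adjust the embeddings and recurrent weights so that, for each observed triple, the encoded path scores highest against the true tail entity; the inverse relations let the same machinery answer queries in both directions.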