Knowledge graphs are of great research value to artificial intelligence and have been widely applied in fields such as semantic search and question answering. Knowledge graph representation transforms a large-scale knowledge graph comprising entities and relations into a continuous vector space, and a number of models and methods have been proposed for this purpose. Among them, TransE is a classic translation-based method with low model complexity, high computational efficiency, and good capability of expressing knowledge. However, TransE has two flaws. First, it uses the inflexible Euclidean distance as its metric and treats every feature dimension identically, so irrelevant dimensions may degrade model accuracy. Second, it has limitations in handling complex relations, including reflexive, one-to-many, many-to-one, and many-to-many relations. No single existing method resolves both flaws simultaneously, and thus we propose a revised translation-based method for knowledge graph representation, named TransAH. To address the first flaw, TransAH adopts an adaptive metric, replacing Euclidean distance with a weighted Euclidean distance by adding a diagonal weight matrix that assigns a different weight to each feature dimension. To address the second, inspired by TransH, it introduces relation-specific hyperplanes, projecting head and tail entities onto the hyperplane of a given relation to distinguish them. Finally, empirical studies on public real-world knowledge graph datasets analyze and verify the effectiveness of the proposed method. Comprehensive comparative experiments on two tasks, link prediction and triplet classification, show that TransAH achieves remarkable improvements over existing models and methods in various aspects, demonstrating its superiority.
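The two modifications described above can be illustrated with a minimal sketch of the scoring function. This is an illustrative reconstruction, not the authors' code: it assumes the TransH-style projection onto a relation-specific hyperplane with unit normal `w_r`, followed by a weighted squared Euclidean distance whose per-dimension weights `a_r` form the diagonal of the weight matrix; the function and variable names are hypothetical.

```python
import numpy as np

def project_to_hyperplane(e, w):
    """Project entity embedding e onto the hyperplane with unit normal w
    (TransH-style relation-specific projection)."""
    return e - np.dot(w, e) * w

def transah_score(h, r, t, w_r, a_r):
    """Weighted translation distance; a lower score means a more plausible triplet.

    h, t : head/tail entity embeddings
    r    : relation translation vector
    w_r  : unit normal of the relation-specific hyperplane (assumed)
    a_r  : diagonal of the per-relation weight matrix, one weight per
           feature dimension (the adaptive-metric part; assumed)
    """
    h_p = project_to_hyperplane(h, w_r)
    t_p = project_to_hyperplane(t, w_r)
    diff = h_p + r - t_p
    # Weighted squared Euclidean distance: irrelevant dimensions can be
    # down-weighted instead of contributing equally, as in plain TransE.
    return float(np.sum(a_r * diff ** 2))
```

With all weights in `a_r` set to one, the score reduces to the unweighted TransH distance, making the diagonal weight matrix a strict generalization of the original metric.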