ISSN 1000-1239 CN 11-1777/TP

    Cross-Domain Adversarial Learning for Zero-Shot Classification
    Liu Huan, Zheng Qinghua, Luo Minnan, Zhao Hongke, Xiao Yang, Lü Yanzhang
    Journal of Computer Research and Development, 2019, 56(12): 2521-2535. DOI: 10.7544/issn1000-1239.2019.20190614
    Zero-shot learning (ZSL) aims to recognize novel categories that have few or even no samples for training and that follow a different distribution from the seen classes. With recent advances of deep neural networks in cross-modal generation, encouraging breakthroughs have been achieved in classifying unseen categories from their synthetic samples. Existing methods synthesize unseen samples by combining generative adversarial nets (GANs) and variational auto-encoders (VAEs) through a shared generator and decoder. However, because these two generative models produce different data distributions, the fake samples synthesized by the joint model follow a complex multi-domain distribution rather than a single model distribution. To address this issue, in this paper we propose a cross-domain adversarial generative network (CrossD-AGN) that integrates traditional GANs and VAEs into a unified framework able to generate unseen samples from class-level semantics for zero-shot classification. We propose two symmetric cross-domain discriminators, together with a cross-domain adversarial learning mechanism, that learn to determine whether a synthetic sample comes from the generator-domain or the decoder-domain distribution, thereby driving the generator/decoder of the joint model to improve its capacity for synthesizing fake samples. Extensive experimental results on several real-world datasets demonstrate the effectiveness and superiority of the proposed model on zero-shot visual classification.
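    To make the cross-domain adversarial idea concrete, below is a minimal PyTorch sketch of a joint GAN/VAE feature generator with two symmetric cross-domain discriminators, in the spirit of the abstract. All module names, dimensions, and loss weightings are illustrative assumptions, not the authors' released CrossD-AGN implementation.
```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """VAE encoder: maps a visual feature x and class semantics s to a latent code."""
    def __init__(self, feat_dim=2048, sem_dim=312, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + sem_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, z_dim)
        self.logvar = nn.Linear(512, z_dim)

    def forward(self, x, s):
        h = self.net(torch.cat([x, s], dim=1))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):
    """Shared generator/decoder: synthesizes a feature from a latent code and semantics."""
    def __init__(self, feat_dim=2048, sem_dim=312, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + sem_dim, 512), nn.ReLU(),
                                 nn.Linear(512, feat_dim))

    def forward(self, z, s):
        return self.net(torch.cat([z, s], dim=1))

class Discriminator(nn.Module):
    """One of the two symmetric cross-domain critics."""
    def __init__(self, feat_dim=2048, sem_dim=312):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + sem_dim, 512), nn.ReLU(),
                                 nn.Linear(512, 1))

    def forward(self, x, s):
        return self.net(torch.cat([x, s], dim=1))

def reparameterize(mu, logvar):
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

enc, gen = Encoder(), Generator()
d_gen, d_dec = Discriminator(), Discriminator()
bce = nn.BCEWithLogitsLoss()

# One illustrative step on a batch of seen-class features x with class semantics s.
x, s = torch.randn(32, 2048), torch.randn(32, 312)
mu, logvar = enc(x, s)
x_dec = gen(reparameterize(mu, logvar), s)   # decoder-domain synthetic sample
x_gen = gen(torch.randn(32, 64), s)          # generator-domain synthetic sample

# Each critic learns to separate the two synthetic domains ...
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)
d_loss = (bce(d_gen(x_gen.detach(), s), ones) + bce(d_gen(x_dec.detach(), s), zeros) +
          bce(d_dec(x_dec.detach(), s), ones) + bce(d_dec(x_gen.detach(), s), zeros))

# ... while the shared generator/decoder is trained to fool both critics,
# in addition to the usual VAE reconstruction and KL terms.
recon = nn.functional.mse_loss(x_dec, x)
kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
g_loss = recon + kld + bce(d_gen(x_dec, s), ones) + bce(d_dec(x_gen, s), ones)
# In practice, separate optimizers minimize d_loss (critics) and g_loss (encoder/generator).
```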
    End-to-end Knowledge Triplet Extraction Combined with Adversarial Training
    Huang Peixin, Zhao Xiang, Fang Yang, Zhu Huiming, Xiao Weidong
    Journal of Computer Research and Development, 2019, 56(12): 2536-2548. DOI: 10.7544/issn1000-1239.2019.20190640
    As a system for effectively representing the real world, the knowledge graph has attracted wide attention from academia and industry, and its ability to represent knowledge accurately is widely used in upper-layer applications such as information services, intelligent search, and automatic question answering. A fact (knowledge), in the form of a triplet (head_entity, relation, tail_entity), is the basic unit of a knowledge graph. Since the facts in existing knowledge graphs are far from sufficient to describe the real world, acquiring more knowledge for knowledge graph completion and construction is crucial. This paper investigates the problem of knowledge triplet extraction in the task of knowledge acquisition and proposes an end-to-end knowledge triplet extraction method combined with adversarial training. Traditional techniques, whether pipeline or joint extraction, fail to capture the link between the two subtasks of named entity recognition and relation extraction, which leads to error propagation and degraded extraction performance. To overcome these flaws, we adopt a joint entity and relation tagging strategy and leverage an end-to-end framework to automatically tag the text and classify the tagging results. In addition, a self-attention mechanism is added to assist the encoding of the text, an objective function with a bias term is introduced to increase the attention paid to relevant entities, and adversarial training is utilized to improve the robustness of the model. In experiments, we evaluate the proposed knowledge triplet extraction model with three evaluation metrics and analyze the results from four aspects. The experimental results verify that our model outperforms other state-of-the-art alternatives on knowledge triplet extraction.
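    The following is a minimal sketch of the joint-tagging idea described above: each token receives a single tag that jointly encodes entity role and relation, self-attention assists the encoding, a biased objective up-weights tokens belonging to triplets, and adversarial (FGM-style) perturbations of the word embeddings are added during training. The tag scheme, dimensions, loss weight, and perturbation size are illustrative assumptions rather than the paper's exact configuration.
```python
import torch
import torch.nn as nn

class JointTagger(nn.Module):
    """Tags every token with one label that jointly encodes entity role and relation type."""
    def __init__(self, vocab=10000, emb=128, hid=256, n_tags=25):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.attn = nn.MultiheadAttention(emb, num_heads=4, batch_first=True)
        self.lstm = nn.LSTM(emb, hid // 2, bidirectional=True, batch_first=True)
        self.out = nn.Linear(hid, n_tags)

    def forward(self, tokens, emb_noise=None):
        e = self.emb(tokens)
        if emb_noise is not None:        # adversarial perturbation added to the embeddings
            e = e + emb_noise
        a, _ = self.attn(e, e, e)        # self-attention assists the encoding
        h, _ = self.lstm(a)
        return self.out(h)

model = JointTagger()
tokens = torch.randint(0, 10000, (8, 40))
tags = torch.randint(0, 25, (8, 40))

# Biased objective: tag 0 is "outside any triplet"; tags of triplet tokens get extra weight.
weights = torch.ones(25)
weights[1:] = 10.0
loss_fn = nn.CrossEntropyLoss(weight=weights)

logits = model(tokens)
loss = loss_fn(logits.reshape(-1, 25), tags.reshape(-1))
loss.backward()

# FGM-style adversarial training: perturb the embedding table along its gradient
# direction and also minimize the loss on the perturbed input for robustness.
grad = model.emb.weight.grad
r_adv = 1e-2 * grad / (grad.norm() + 1e-12)
adv_noise = nn.functional.embedding(tokens, r_adv).detach()
adv_loss = loss_fn(model(tokens, emb_noise=adv_noise).reshape(-1, 25), tags.reshape(-1))
adv_loss.backward()                      # gradients accumulate; an optimizer step would follow
```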
    Open Knowledge Graph Representation Learning Based on Neighbors and Semantic Affinity
    Du Zhijuan, Du Zhirong, Wang Lu
    Journal of Computer Research and Development, 2019, 56(12): 2549-2561. DOI: 10.7544/issn1000-1239.2019.20190648
    Knowledge graph (KG) breaks the data isolation between different scenarios and provides basic support for practical applications. Representation learning transforms a KG into a low-dimensional vector space to facilitate its application. However, there are two problems in KG representation learning: 1) most methods adopt the closed-world assumption, which requires all entities to be visible during training, whereas in reality most KGs grow rapidly, e.g., at a rate of about 200 new entities per day in DBPedia; 2) complex semantic interactions, such as matrix projection and convolution, are used to improve the accuracy of the model but limit its scalability. To this end, we propose TransNS, a representation learning method for open KGs that allows new entities to appear. It selects related neighbors as the attributes of an entity to infer representations of new entities, and uses the semantic affinity between entities to select negative triples in the learning phase, enhancing the semantic interaction capability. We compare TransNS with state-of-the-art baselines on 5 traditional and 8 new datasets. The results show that TransNS performs well on open KGs and even outperforms existing models on the closed benchmark KGs.
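    Below is a minimal sketch, assuming a TransE-style score, of the two ideas in the abstract: representing an entity (including one unseen at training time) by aggregating its neighbors, and sampling negative triples in proportion to semantic affinity. The mean aggregation, cosine affinity, and margin value are illustrative assumptions, not the TransNS model itself.
```python
import torch

emb_dim, n_entities, n_relations = 64, 1000, 50
entity_emb = torch.nn.Embedding(n_entities, emb_dim)
relation_emb = torch.nn.Embedding(n_relations, emb_dim)

def entity_repr(entity_id, neighbor_ids):
    """Represent an entity by aggregating its neighbors, so a new entity that was
    never trained can still be embedded as long as its neighbors were."""
    if neighbor_ids:
        return entity_emb(torch.tensor(neighbor_ids)).mean(dim=0)
    return entity_emb(torch.tensor(entity_id))

def sample_negative_tail(true_tail, k=1):
    """Sample corrupting tails with probability proportional to their semantic
    affinity (cosine similarity) to the true tail, yielding harder negatives."""
    with torch.no_grad():
        sims = torch.cosine_similarity(
            entity_emb.weight, entity_emb(torch.tensor(true_tail)).unsqueeze(0), dim=1)
        sims[true_tail] = float("-inf")          # never draw the true tail itself
        return torch.multinomial(torch.softmax(sims, dim=0), k)

def score(h, r, t):
    """TransE-style plausibility score (smaller is more plausible)."""
    return torch.norm(h + r - t, p=2)

# Example: a margin loss for a triple whose head is a new entity known only by its neighbors.
h = entity_repr(entity_id=999, neighbor_ids=[3, 17, 256])
r = relation_emb(torch.tensor(5))
t = entity_emb(torch.tensor(42))
neg_t = entity_emb(sample_negative_tail(42)).squeeze(0)
margin_loss = torch.clamp(1.0 + score(h, r, t) - score(h, r, neg_t), min=0.0)
```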