ISSN 1000-1239 CN 11-1777/TP


    Journal of Computer Research and Development    2017, 54 (12): 2647-2648.  
    Training Deep Neural Networks for Image Applications with Noisy Labels by Complementary Learning
    Zhou Yucong, Liu Yi, Wang Rui
    Journal of Computer Research and Development    2017, 54 (12): 2649-2659.   DOI: 10.7544/issn1000-1239.2017.20170637
    In recent years, deep neural networks (DNNs) have made great progress in many fields such as image recognition, speech recognition and natural language processing. The rapid development of the Internet and mobile devices has popularized image applications and provides a large amount of data for training DNNs, and manually annotated data is the key to training them. However, as the scale of data grows, the cost of manual annotation rises and its quality is hard to guarantee, which damages the performance of DNNs. Combining the ideas of easy-example mining and transfer learning, we propose a method called complementary learning to train DNNs for image applications on large-scale noisy labels. Given a small number of clean labels and a large number of noisy labels, we jointly train two DNNs with complementary strategies while transferring knowledge from the auxiliary model to the main model. Experiments show that this method can efficiently train DNNs on noisy labels. Compared with current approaches, it can handle more complicated label noise, which demonstrates its value for image applications.
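    Two ingredients of the abstract above — small-loss ("easy") example selection and blending noisy labels with the auxiliary model's predictions — can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function names, the keep ratio and the use of per-sample loss as an easiness proxy are assumptions.

```python
import numpy as np

def select_easy_examples(losses, keep_ratio=0.7):
    """Easy-example mining: treat the samples with the smallest loss as
    likely-clean ("easy") and keep only those for the update step."""
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[:k]

def complementary_targets(aux_probs, noisy_onehot, alpha=0.5):
    """Blend the (possibly noisy) one-hot label with the auxiliary model's
    softmax output, transferring knowledge from auxiliary to main model."""
    return alpha * noisy_onehot + (1.0 - alpha) * aux_probs
```

    In a joint-training loop, each network could select easy examples for the other, while the main network fits the blended targets instead of the raw noisy labels.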
    Protein Function Prediction Based on Multiple Networks Collaborative Matrix Factorization
    Yu Guoxian, Wang Keyao, Fu Guangyuan, Wang Jun, Zeng An
    Journal of Computer Research and Development    2017, 54 (12): 2660-2673.   DOI: 10.7544/issn1000-1239.2017.20170644
    Accurately and automatically predicting the biological functions of proteins is one of the fundamental tasks in bioinformatics and a key application of artificial intelligence to biological data analysis. The wide application of high-throughput technologies produces various functional association networks of molecules. Integrating these networks contributes to a more comprehensive understanding of the functional mechanisms of proteins and improves the performance of protein function prediction. However, existing network-integration solutions cannot handle large numbers of functional labels, ignore the correlation between labels, or cannot integrate multiple networks differentially. This paper proposes a protein function prediction approach based on multiple-network collaborative matrix factorization (ProCMF). To explore the latent relationships between proteins and between labels, ProCMF first applies nonnegative matrix factorization to factorize the protein-label association matrix into two low-rank matrices. To exploit the correlation between labels and to guide the collaborative factorization with proteomic data, it defines two smoothness terms on these low-rank matrices. To integrate the networks differentially, ProCMF assigns a different weight to each of them. Finally, ProCMF combines these goals into a unified objective function and introduces an alternating optimization technique to jointly optimize the low-rank matrices and the weights. Experimental results on three model species (yeast, human and mouse) with multiple functional networks show that ProCMF outperforms related competitive methods, handles massive labels effectively and efficiently, and integrates multiple networks differentially.
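    The factorization core of such an approach — decomposing the protein-label association matrix into two nonnegative low-rank factors — can be sketched with standard Lee-Seung multiplicative updates. This is a minimal sketch of plain NMF only: the smoothness terms, per-network weights and alternating optimization of the actual ProCMF objective are omitted, and all names here are illustrative.

```python
import numpy as np

def nmf(Y, rank, iters=300, eps=1e-9, seed=0):
    """Factorize a nonnegative matrix Y (proteins x labels) as U @ V.T
    with multiplicative updates that keep both factors nonnegative."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = rng.random((n, rank))
    V = rng.random((m, rank))
    for _ in range(iters):
        U *= (Y @ V) / (U @ (V.T @ V) + eps)    # update protein factors
        V *= (Y.T @ U) / (V @ (U.T @ U) + eps)  # update label factors
    return U, V
```

    Predicted association scores for unannotated protein-label pairs are then read off the reconstruction U @ V.T.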
    EAE: Enzyme Knowledge Graph Adaptive Embedding
    Du Zhijuan, Zhang Yi, Meng Xiaofeng, Wang Qiuyue
    Journal of Computer Research and Development    2017, 54 (12): 2674-2686.   DOI: 10.7544/issn1000-1239.2017.20170638
    In recent years there has been a drastic rise in the construction of Web-scale knowledge graphs (KGs), and many practical problems now rely on them. Embedding learning of entities and relations has become a popular way to perform machine learning on relational data such as KGs. Based on embedding representations, knowledge analysis, inference, fusion, completion and even decision-making can be promoted. Constructing and embedding open-domain knowledge graphs (OKGs) has mushroomed, which greatly promotes the intelligent use of big data in the open domain. Meanwhile, specific-domain knowledge graphs (SKGs) have become an important resource for smart applications in specific domains. However, SKGs are still developing and their embedding is in an embryonic stage, mainly because the data distributions of OKGs and SKGs differ. More specifically: 1) In OKGs such as WordNet and Freebase, head and tail entities are about equally sparse, but in SKGs such as the enzyme KG and NCI-PID, the distribution is highly inhomogeneous; for example, tail entities are about 1000 times more numerous than head entities in the enzyme KG of the microbiology area. 2) Head and tail entities can exchange positions in OKGs, but not in SKGs, because most relations there are attributes; for example, the entity “Obama” can be either a head or a tail entity, whereas the entity “enzyme” always appears in the head position in the enzyme KG. 3) Relation breadth is only mildly skewed in OKGs but highly imbalanced in SKGs; for example, an enzyme entity can link to 31809 x-gene entities in the enzyme KG. Based on these observations, we propose a novel approach, EAE, to deal with these three issues. We evaluate it on link prediction and triple classification tasks. Experimental results show that our approach significantly outperforms the Trans family (TransE, TransH, TransR, TransD and TransSparse) and achieves state-of-the-art performance.
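    For reference, the translation-based baselines mentioned above score a triple (h, r, t) by the distance ||h + r - t||, and link prediction ranks the true tail among all candidate entities by that score. The sketch below shows only this TransE-style baseline step, not the proposed EAE model; embedding values and names are illustrative.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: lower ||h + r - t|| means a more likely triple."""
    return float(np.linalg.norm(h + r - t))

def rank_tail(h, r, entity_emb, true_tail_idx):
    """Link prediction: position of the true tail when all candidate
    entities are sorted by score (rank 1 is best)."""
    scores = np.linalg.norm(h + r - entity_emb, axis=1)
    order = np.argsort(scores)
    return int(np.where(order == true_tail_idx)[0][0]) + 1
```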
    A Link Prediction Model for Clinical Temporal Knowledge Graph
    Chen Dehua, Yin Suna, Le Jiajin, Wang Mei, Pan Qiao, Zhu Lifeng
    Journal of Computer Research and Development    2017, 54 (12): 2687-2697.   DOI: 10.7544/issn1000-1239.2017.20170640
    Link prediction on knowledge graphs is the main task of knowledge base completion: predicting whether a relationship missing from the knowledge base is likely to be true. However, traditional link prediction models are only appropriate for static data, not for temporal knowledge bases, which arise in many fields. Take the medical field as an example: diabetes is a typical chronic disease that evolves slowly, so link prediction on a clinical knowledge base such as one for diabetic complications requires analyzing the temporal characteristics of the knowledge base, which is a great challenge for traditional link prediction models. To address prediction over temporal knowledge bases, this paper proposes a long short-term memory (LSTM) based model. The proposed model adopts the memory cells of LSTM for sequential learning and builds an incremental learning layer on top of them; temporal characteristics are then extracted end-to-end, realizing prediction on the temporal knowledge base. In experiments on a clinical temporal knowledge base, the proposed model shows significant improvements over baselines including RESCAL, NTN, TransE, TransH, TransR and DNN.
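    The LSTM memory cell that such a model builds on can be sketched as a single forward step. This is a generic textbook LSTM cell, not the paper's incremental learning layer; the stacked gate layout and names are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,); gates stacked i, f, o, g."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])         # input gate
    f = sigmoid(z[H:2 * H])    # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:])     # candidate cell state
    c_new = f * c + i * g      # memory cell carries long-term state
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

    Unrolling this step over a patient's time-ordered clinical facts yields the hidden state from which a link score can be predicted.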