ISSN 1000-1239 CN 11-1777/TP

Most Downloaded Articles

    Big Data Management: Concepts, Techniques and Challenges
    Meng Xiaofeng and Ci Xiang
    Journal of Computer Research and Development   
    Accepted: 15 January 2020

    Knowledge Graph Construction Techniques
    Liu Qiao, Li Yang, Duan Hong, Liu Yao, Qin Zhiguang
    Journal of Computer Research and Development    2016, 53 (3): 582-600.   DOI: 10.7544/issn1000-1239.2016.20148228
    Abstract views: 7620 | HTML views: 97 | PDF (2414 KB) downloads: 13067
    Google’s knowledge graph technology has drawn a great deal of research attention in recent years. However, due to the limited public disclosure of technical details, people find it difficult to understand the connotation and value of this technology. In this paper, we introduce the key techniques involved in the construction of knowledge graphs in a bottom-up way, starting from a clearly defined concept and a technical architecture of the knowledge graph. Firstly, we describe in detail the definition and connotation of the knowledge graph, and then we propose a technical framework for knowledge graph construction, in which the construction process is divided into three levels according to the abstraction level of the input knowledge materials: the information extraction layer, the knowledge integration layer, and the knowledge processing layer. Secondly, the research status of the key technologies at each level is surveyed comprehensively and investigated critically, with the aim of gradually revealing the mysteries of knowledge graph technology, its state-of-the-art progress, and its relationship with related disciplines. Finally, five major research challenges in this area are summarized, and the corresponding key research issues are highlighted.
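    The three-layer framework described above is essentially a pipeline. The following Python skeleton is a purely illustrative sketch of that data flow; the function names and signatures are our assumptions, not from the paper:

```python
# Skeleton of the three-layer construction pipeline described above;
# names and signatures are illustrative assumptions, not from the paper.
def information_extraction(raw_sources):
    """Layer 1: extract entities, relations, and attributes from raw data."""
    ...

def knowledge_integration(extracted_facts, knowledge_base):
    """Layer 2: align and fuse the extracted facts with the existing KB."""
    ...

def knowledge_processing(knowledge_base):
    """Layer 3: quality assessment, inference, and ontology refinement."""
    ...

def build_knowledge_graph(raw_sources, knowledge_base):
    facts = information_extraction(raw_sources)
    knowledge_base = knowledge_integration(facts, knowledge_base)
    return knowledge_processing(knowledge_base)
```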
    Knowledge Representation Learning: A Review
    Liu Zhiyuan, Sun Maosong, Lin Yankai, Xie Ruobing
    Journal of Computer Research and Development    2016, 53 (2): 247-261.   DOI: 10.7544/issn1000-1239.2016.20160020
    Abstract views: 8695 | HTML views: 49 | PDF (3333 KB) downloads: 12467
    Knowledge bases are usually represented as networks with entities as nodes and relations as edges. With the network representation of knowledge bases, specific algorithms have to be designed to store and utilize them, which are usually time-consuming and suffer from the data sparsity issue. Recently, representation learning, represented by deep learning, has attracted much attention in natural language processing, computer vision, and speech analysis. Representation learning aims to project the objects of interest into a dense, real-valued, and low-dimensional semantic space, whereas knowledge representation learning focuses on the representation learning of entities and relations in knowledge bases. Representation learning can efficiently measure the semantic correlations of entities and relations, alleviate sparsity issues, and significantly improve the performance of knowledge acquisition, fusion, and inference. In this paper, we introduce the recent advances in representation learning, summarize the key challenges and possible solutions, and give an outlook on future research and application directions.
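    As a concrete instance of such a projection, translation-based models such as TransE (one representative method; the abstract does not single out a specific model) embed entities and relations in the same space and score a triple (h, r, t) by how well the relation vector translates the head to the tail:

$$ f_r(h, t) = \lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert_{L_1/L_2} $$

    where \mathbf{h}, \mathbf{r}, \mathbf{t} are the learned embedding vectors and a low score indicates a plausible triple.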
    Deep Learning: Yesterday, Today, and Tomorrow
    Yu Kai, Jia Lei, Chen Yuqiang, and Xu Wei
    Journal of Computer Research and Development    2013, 50 (9): 1799-1804.
    Abstract views: 3435 | HTML views: 55 | PDF (873 KB) downloads: 8982
    Machine learning is an important area of artificial intelligence. Since the 1980s, huge success has been achieved in terms of algorithms, theory, and applications. Since 2006, a new machine learning paradigm, named deep learning, has become popular in the research community and has grown into a huge wave of technology trend for big data and artificial intelligence. Deep learning simulates the hierarchical structure of the human brain, processing data from lower levels to higher levels and gradually composing more and more semantic concepts. In recent years, Google, Microsoft, IBM, and Baidu have invested a lot of resources into the R&D of deep learning, making significant progress on speech recognition, image understanding, natural language processing, and online advertising. In terms of contribution to real-world applications, deep learning is perhaps the most successful progress made by the machine learning community in the last 10 years. In this article, we give a high-level overview of the past and current state of deep learning, discuss the main challenges, and share our views on its future development.
    Towards Measuring Unobservability in Anonymous Communication Systems
    Tan Qingfeng, Shi Jinqiao, Fang Binxing, Guo Li, Zhang Wentao, Wang Xuebin, Wei Bingjie
    Journal of Computer Research and Development    2015, 52 (10): 2373-2381.   DOI: 10.7544/issn1000-1239.2015.20150562
    Abstract views: 10082 | HTML views: 15 | PDF (6861 KB) downloads: 4278
    Anonymous communication techniques are among the main privacy-preserving techniques and have been widely used to protect Internet users’ privacy. However, existing anonymous communication systems are particularly vulnerable to traffic analysis, and researchers have been improving the unobservability of systems against Internet censorship and surveillance. How to quantify the degree of unobservability, though, remains a key challenge in anonymous communication systems. We model anonymous communication systems as an alternating Turing machine and analyze the adversary's threat model. Based on this model, this paper proposes a relative entropy approach to quantify the degree of unobservability of anonymous communication systems, based on the probabilities of the flow patterns observed by attackers. We also apply this approach to measure the pluggable transports of Tor and show how to calculate the measure to compare the levels of unobservability of these systems. The experimental results show that the approach is useful for evaluating the unobservability of anonymous communication systems. Finally, we present conclusions and discuss future work on measuring unobservability in anonymous communication systems.
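    A minimal sketch of the relative entropy measure, assuming the observed flow patterns (e.g. packet-size histograms) have been binned into discrete distributions; the binning and variable names are our assumptions, not the paper's notation:

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """KL divergence D(P || Q) between two discrete flow-pattern
    distributions. A smaller value means the protocol's traffic is
    harder to distinguish from cover traffic, i.e. more unobservable."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy packet-size histograms: similar flows score near 0,
# easily distinguishable flows score high.
print(relative_entropy([10, 30, 60], [12, 28, 60]))  # ~0.002
print(relative_entropy([10, 30, 60], [60, 30, 10]))  # ~0.9
```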
    A Survey on Entity Alignment of Knowledge Base
    Zhuang Yan, Li Guoliang, Feng Jianhua
    Journal of Computer Research and Development    2016, 53 (1): 165-192.   DOI: 10.7544/issn1000-1239.2016.20150661
    Abstract views: 4431 | HTML views: 21 | PDF (3322 KB) downloads: 4142
    Entity alignment on knowledge bases has been a hot research topic in recent years. The goal is to link multiple knowledge bases effectively and create a large-scale, unified knowledge base from the top level, which can help machines understand data and build more intelligent applications. However, many research challenges remain regarding data quality and scalability, especially against the background of big data. In this paper, we present a survey of the techniques and algorithms for entity alignment on knowledge bases over the past decade, and expect to provide alternative options for further research by classifying and summarizing the existing methods. Firstly, the entity alignment problem is formally defined. Secondly, the overall architecture is summarized and the research progress is reviewed in detail from the aspects of algorithms, feature matching, and indexing. Entity alignment algorithms are the key to solving this problem and can be divided into pair-wise methods and collective methods; the most commonly used collective entity alignment algorithms are discussed in detail from both local and global perspectives. Some important experimental and real-world data sets are introduced as well. Finally, open research issues are discussed and possible future research directions are outlined.
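    To make the pair-wise family concrete, here is a toy sketch that aligns entities by attribute-set similarity with a fixed threshold; the Jaccard measure and the threshold are illustrative assumptions, not one of the specific algorithms surveyed:

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two attribute-value sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def pairwise_align(kb1: dict, kb2: dict, threshold: float = 0.5):
    """Match each entity in kb1 to its most similar entity in kb2.
    kb1/kb2 map entity names to sets of attribute values."""
    matches = []
    for e1, attrs1 in kb1.items():
        best = max(kb2, key=lambda e2: jaccard(attrs1, kb2[e2]))
        if jaccard(attrs1, kb2[best]) >= threshold:
            matches.append((e1, best))
    return matches

kb1 = {"Tsinghua_Univ": {"Beijing", "1911", "university"}}
kb2 = {"THU": {"Beijing", "1911", "university", "C9"},
       "PKU": {"Beijing", "1898", "university"}}
print(pairwise_align(kb1, kb2))  # [('Tsinghua_Univ', 'THU')]
```

    Collective methods differ in that they propagate such match decisions across related entity pairs instead of deciding each pair independently.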
    Stock Network Community Detection Method Based on Influence Calculating Model
    Wang Hao, Li Guohuan, Yao Hongliang, Li Junzhao
    Journal of Computer Research and Development    2014, 51 (10): 2137-2147.   DOI: 10.7544/issn1000-1239.2014.20130575
    Abstract views: 893 | HTML views: 3 | PDF (3150 KB) downloads: 4036
    Taking advantage of the energy characteristics of complex systems, the concept of influence is introduced into community detection research so that community structure can be discovered effectively. Based on stock closing prices, and with the definitions of influence and node centrality, a stock network is constructed in which influence is used as the edge weight. This paper proposes an algorithm for stock network hierarchical clustering based on the influence calculating model, referred to as the BCNHC algorithm. Firstly, the BCNHC algorithm introduces definitions of node activity and influence, and puts forward an influence calculating model for nodes in networks. Then, based on the node centrality criterion, the nodes with large centrality values are selected as center nodes, and the intimacy and influence models of nodes are utilized to estimate the influence of association between neighboring nodes. Furthermore, nodes with minimum degree are gathered toward the center nodes, so as to reduce the clustering errors caused by uncertainty about which community a neighboring node belongs to. On this basis, neighboring communities are merged according to their average influence of association, which guarantees that the influence of association is maximized for all nodes in a community, until the modularity of the entire network reaches its maximum. Finally, comparative experiments on stock networks demonstrate the feasibility of the BCNHC algorithm.
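    The stopping criterion above relies on modularity, presumably the standard Newman-Girvan measure for weighted networks:

$$ Q = \frac{1}{2m} \sum_{i,j} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j) $$

    where A_{ij} is the edge weight (here, the influence of association between stocks i and j), k_i = \sum_j A_{ij}, m is the total edge weight, and \delta(c_i, c_j) equals 1 when nodes i and j are in the same community; merging proceeds as long as it increases Q.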
    Using Maximum Entropy Model for Chinese Text Categorization
    Li Ronglu, Wang Jianhui, Chen Xiaoyun, Tao Xiaopeng, and Hu Yunfa
    Journal of Computer Research and Development    2005, 42 (1): 94-101.  
    Abstract views: 3074 | HTML views: 13 | PDF (409 KB) downloads: 3997
    With the rapid development of the World Wide Web, text classification has become a key technology for organizing and processing large amounts of document data. The maximum entropy model is a probability estimation technique widely used for a variety of natural language tasks. It offers a clean and flexible framework for combining diverse pieces of contextual information to estimate the probability of certain linguistic phenomena. For many NLP tasks, this approach performs at near state-of-the-art levels, or outperforms other competing probability methods when trained and tested under similar conditions. However, relatively little work has been done on applying the maximum entropy model to text categorization problems, and no previous work has focused on using it to classify Chinese documents. In this paper, the maximum entropy model is used for text categorization. Its categorization performance is compared and analyzed using different approaches to text feature generation, different numbers of features, and smoothing techniques. Moreover, in experiments it is compared with Bayes, KNN, and SVM classifiers, and it is shown that its performance is higher than Bayes and comparable with KNN and SVM. It is a promising technique for text categorization.
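    For reference, the conditional maximum entropy classifier takes the standard log-linear form (notation ours, but standard in the literature):

$$ P(c \mid d) = \frac{1}{Z(d)} \exp\Bigl( \sum_i \lambda_i f_i(d, c) \Bigr), \qquad Z(d) = \sum_{c'} \exp\Bigl( \sum_i \lambda_i f_i(d, c') \Bigr) $$

    where the f_i(d, c) are feature functions over document d and candidate class c, and the weights \lambda_i are trained to maximize the likelihood of the labeled data, which is equivalent to maximizing entropy subject to feature-expectation constraints.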
    Survey of Data-Centric Smart City
    Wang Jingyuan, Li Chao, Xiong Zhang, and Shan Zhiguang
    Journal of Computer Research and Development   
    Quantum Annealing Algorithms: State of the Art
    Du Weilin, Li Bin, and Tian Yu
    Journal of Computer Research and Development    2008, 45 (9): 1501-1508.  
    Abstract views: 1422 | HTML views: 0 | PDF (1382 KB) downloads: 3916
    In mathematics and applications, quantum annealing is a new method for finding solutions to combinatorial optimization problems and ground states of glassy systems using quantum fluctuations. Quantum fluctuations can be simulated in computers using various quantum Monte Carlo techniques, such as the path integral Monte Carlo method, and thus they can be used to obtain a new kind of heuristic algorithm for global optimization. It can be said that the idea of quantum annealing comes from the celebrated classical simulated thermal annealing invented by Kirkpatrick. However, unlike a simulated annealing algorithm, which utilizes thermal fluctuations to help the algorithm jump from a local optimum to the global optimum, quantum annealing algorithms utilize quantum fluctuations to help the algorithm tunnel through barriers directly from a local optimum to the global optimum. According to previous studies, although the quantum annealing algorithm is not capable, in general, of finding solutions to NP-complete problems in polynomial time, quantum annealing is still a promising optimization technique, which exhibits good performance on some typical optimization problems, such as the transverse Ising model and the traveling salesman problem. This paper provides an overview of the principles and research progress of quantum annealing algorithms in recent years; several different kinds of quantum annealing algorithms are presented in detail; both the advantages and disadvantages of each algorithm are analyzed; and prospects for future research directions of quantum annealing algorithms are given.
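    The transverse Ising model mentioned above also makes the contrast with thermal annealing concrete. In its textbook form (not specific to this survey), the annealed system evolves under

$$ H(t) = -\sum_{i<j} J_{ij}\, \sigma_i^z \sigma_j^z - \Gamma(t) \sum_i \sigma_i^x $$

    where the first term encodes the cost function of the optimization problem and the transverse field \Gamma(t) induces quantum tunneling; annealing slowly decreases \Gamma(t) from a large value toward zero, playing the role that the temperature schedule plays in simulated annealing.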
    Recent Advances in Bayesian Machine Learning
    Zhu Jun, Hu Wenbo
    Journal of Computer Research and Development    2015, 52 (1): 16-26.   DOI: 10.7544/issn1000-1239.2015.20140107
    Abstract views: 3160 | HTML views: 12 | PDF (2137 KB) downloads: 3733
    With the fast growth of big data, statistical machine learning has attracted tremendous attention from both industry and academia, with many successful applications in vision, speech, natural language, and biology. In particular, the last decades have seen the fast development of Bayesian machine learning, which now represents a very important class of techniques. In this article, we provide an overview of the recent advances in Bayesian machine learning, including the basics of Bayesian machine learning theory and methods, nonparametric Bayesian methods and inference algorithms, and regularized Bayesian inference. Finally, we highlight the challenges and recent progress in large-scale Bayesian learning for big data, and discuss some future directions.
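    All of these methods build on Bayes' rule, which updates a prior p(\theta) into a posterior given data \mathcal{D}:

$$ p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{\int p(\mathcal{D} \mid \theta')\, p(\theta')\, \mathrm{d}\theta'} $$

    Nonparametric Bayesian methods let \theta range over an infinite-dimensional space, regularized Bayesian inference augments this update with posterior constraints, and the generally intractable integral in the denominator is what the inference algorithms surveyed above approximate.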
    Summary of Storage System and Technology Based on Phase Change Memory
    Zhang Hongbin, Fan Jie, Shu Jiwu, Hu Qingda
    Journal of Computer Research and Development    2014, 51 (8): 1647-1662.   DOI: 10.7544/issn1000-1239.2014.20131123
    Abstract views: 1022 | HTML views: 2 | PDF (5502 KB) downloads: 3573
    With the increasing performance gap between CPU and memory, the “memory wall” problem has become more and more prominent. In order to bridge the gap, many DRAM-based solutions have been proposed. However, DRAM is approaching its bottleneck in density and energy cost, and how to design a practical memory architecture to settle this problem is an increasingly pressing question. In recent years, phase change memory (PCM) has gained great attention from researchers at home and abroad for its high density and low energy cost. In particular, its non-volatility and byte addressability are blurring the difference between memory and storage, which can bring significant changes to future memory architecture. This paper mainly discusses the architecture of main memory based on PCM and related techniques for tolerating slow writes, wear leveling, erasure codes, reuse of failed blocks, and software optimization. This paper also discusses the application of PCM in storage systems and its effects on the design of storage architecture and computer systems. After the discussion, the research works are summarized and possible research directions are pointed out.
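    To make the wear-leveling idea concrete, here is a toy rotation-based sketch in Python (our simplification for illustration only; the schemes discussed in the paper, and the data migration a real remapping incurs, are more involved):

```python
# Toy rotation-based wear leveling: logical lines are periodically
# remapped so repeated writes to one hot logical line spread across
# all physical lines. N and PERIOD are assumptions for the demo.
N = 8          # number of physical memory lines
PERIOD = 4     # writes between remappings

offset = 0
writes_since_rotate = 0
wear = [0] * N  # per-physical-line write counts

def write(logical_line: int) -> None:
    global offset, writes_since_rotate
    physical = (logical_line + offset) % N
    wear[physical] += 1
    writes_since_rotate += 1
    if writes_since_rotate == PERIOD:
        # A real device must migrate data when the mapping shifts;
        # that copy is elided here.
        offset = (offset + 1) % N
        writes_since_rotate = 0

for _ in range(64):
    write(0)   # hammer a single logical line
print(wear)    # [8, 8, 8, 8, 8, 8, 8, 8]: wear is evened out
```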
    Survey and Prospect of Intelligent Interaction-Oriented Image Recognition Techniques
    Jiang Shuqiang, Min Weiqing, Wang Shuhui
    Journal of Computer Research and Development    2016, 53 (1): 113-122.   DOI: 10.7544/issn1000-1239.2016.20150689
    Abstract views: 1409 | HTML views: 0 | PDF (969 KB) downloads: 3231
    Vision plays an important role in both human interaction and human-nature interaction. Equipping terminals with intelligent visual recognition and interaction is one of the core challenges in artificial intelligence and computer technology, and also one of its lofty goals. With the rapid development of visual recognition techniques, new techniques and problems have emerged in recent years. Correspondingly, intelligent interaction applications also present a few new characteristics, which are changing our original understanding of visual recognition and interaction. We give a survey of image recognition techniques, covering recent advances in visual recognition, visual description, and visual question answering (VQA). Specifically, we first focus on deep learning approaches for image recognition and scene classification. Next, the latest techniques in visual description and VQA are analyzed and discussed. Then we introduce visual recognition and interaction applications in mobile devices and robots. Finally, we discuss future research directions in this field.
    Detouring Matching Pursuit Algorithm in Compressed Sensing
    Pei Tingrui, Yang Shu, Li Zhetao, Xie Jingxiong
    Journal of Computer Research and Development    2014, 51 (9): 2101-2107.   DOI: 10.7544/issn1000-1239.2014.20131148
    Abstract views: 1028 | HTML views: 2 | PDF (1492 KB) downloads: 3198
    Detouring matching pursuit (DMP) is a greedy algorithm for reconstructing sparse signals, with low computational complexity, high accuracy, and a low column-correlation requirement on the sensing matrix. The increasing and decreasing formulas for the submatrix's inner product and the coefficient matrix in DMP are put forward and proved. By using the inverse of the submatrix's inner product and the coefficient matrix, DMP reduces the computation of the residual error's variation and achieves light computational complexity. In addition, by first decreasing and then optimally increasing the elements of the assumed support set one by one, DMP improves reconstruction accuracy and broadens the range of sparsity over which sparse signals can be reconstructed. The complexity analysis shows that the complexity of obtaining, decreasing, and increasing the assumed support set is O(K²N), O(b(K-b)N), and O(b(K-b)N), respectively. Experiments on indirectly reconstructing weighted 0-1 sparse signals show that the reconstruction accuracies of DMP, the greedy pursuit algorithm (GPA), subspace pursuit (SP), compressive sampling matching pursuit (CoSaMP), and orthogonal matching pursuit (OMP) are 99%, 65%, 0%, 0%, and 13%, respectively, for a 0-1 sparse signal with sparsity M/2. Experiments on sparse signals whose non-zero values obey a normal distribution also show that the reconstruction accuracy of DMP is clearly superior.
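    DMP's own update formulas are not reproduced here, but the greedy template it refines is visible in standard orthogonal matching pursuit, one of the baselines above. A minimal NumPy sketch:

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with
    y ~= A x. This is the standard baseline, not DMP itself."""
    m, n = A.shape
    residual = y.copy()
    support, x = [], np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))          # random sensing matrix
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]     # 3-sparse signal
x_hat = omp(A, A @ x_true, k=3)
print(np.allclose(x_hat, x_true, atol=1e-6))  # expected: True
```

    DMP departs from this template by maintaining the inverse of the support submatrix's inner product incrementally (the increasing and decreasing formulas above) instead of re-solving the least-squares problem from scratch at every iteration.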
    Survey on Privacy Preserving Techniques for Blockchain Technology
    Zhu Liehuang, Gao Feng, Shen Meng, Li Yandong, Zheng Baokun, Mao Hongliang, Wu Zhen
    Journal of Computer Research and Development    2017, 54 (10): 2170-2186.   DOI: 10.7544/issn1000-1239.2017.20170471
    Abstract views: 5464 | HTML views: 81 | PDF (3265 KB) downloads: 3114
    The core features of blockchain technology are “decentralization” and “trustlessness”. As a distributed ledger technology, smart contract infrastructure platform, and novel distributed computing paradigm, it can effectively build programmable currency, programmable finance, and a programmable society, which will have a far-reaching impact on finance and other fields and drive a new round of technological and application change. While blockchain technology can improve efficiency, reduce costs, and enhance data security, it still faces serious privacy issues, which have drawn wide concern from researchers. This survey first analyzes the technical characteristics of the blockchain, defines the concepts of identity privacy and transaction privacy, points out the advantages and disadvantages of blockchain technology in privacy protection, and introduces the attack methods in existing research, such as transaction tracing and account clustering. We then introduce a variety of privacy protection mechanisms, including malicious node detection and access restriction for the network layer; transaction mixing, encryption, and limited release for the transaction layer; and defense mechanisms for the blockchain application layer. In the end, we discuss the limitations of the existing technologies and envision future directions on this topic. In addition, regulatory approaches to malicious use of blockchain technology are discussed.
    Cited by Baidu Scholar: 8
    Online Learning Algorithms for Big Data Analytics: A Survey
    Li Zhijie, Li Yuanxiang, Wang Feng, He Guoliang, Kuang Li
    Journal of Computer Research and Development    2015, 52 (8): 1707-1721.   DOI: 10.7544/issn1000-1239.2015.20150185
    Abstract views: 3385 | HTML views: 8 | PDF (1700 KB) downloads: 3042
    The advent of big data has presented a large array of applications that require real-time processing of massive data arriving with high velocity. How to mine big data streams in a wide range of real-world applications is becoming more and more important. Conventional batch machine learning techniques suffer from many limitations when applied to big data analytics tasks. Online learning with a stream computing mode is a promising tool for data stream learning. In this survey, we first introduce the motivation and background of big data analytics, and then focus on presenting the family of classical and latest online learning methods and algorithms, which are promising for tackling the emerging challenges of mining big data in a wide range of real-world applications. The main technical content of this survey consists of three parts: 1) online learning for linear models; 2) kernel-based online learning for nonlinear models; and 3) non-traditional online learning methods. This is followed by a discussion of some key problems of large-scale machine learning for big data analytics applications. Finally, we present a few typical scenarios of online learning for big data streams and discuss possible directions for ongoing and future research in this area.
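    As a minimal example of the first family, online learning for linear models, the sketch below takes one stochastic gradient step per arriving example and never revisits past data; logistic loss and the fixed learning rate are illustrative choices, not a specific algorithm from the survey:

```python
import numpy as np

def online_linear_learner(stream, dim, lr=0.1):
    """Single pass over a data stream: update the weight vector w on
    each (x, y) pair as it arrives, with labels y in {-1, +1}."""
    w = np.zeros(dim)
    for x, y in stream:
        margin = y * (w @ x)
        grad = -y * x / (1.0 + np.exp(margin))  # logistic-loss gradient
        w -= lr * grad                          # on this example only
    return w

# Toy stream: 2-D points whose coordinates are centered on the label.
rng = np.random.default_rng(0)
stream = [(rng.normal(y, 1.0, size=2), y) for y in rng.choice([-1, 1], 500)]
print(online_linear_learner(stream, dim=2))  # both weights come out positive
```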
    The State of the Art and Future Tendency of “Internet+” Oriented Network Technology
    Wang Xingwei, Li Jie, Tan Zhenhua, Ma Lianbo, Li Fuliang, Huang Min
    Journal of Computer Research and Development    2016, 53 (4): 729-741.   DOI: 10.7544/issn1000-1239.2016.20151146
    Abstract views: 2566 | HTML views: 8 | PDF (3264 KB) downloads: 2957
    “Internet+” is the sublimation and development of the Internet, which aims at promoting the Internet's deep integration with the economy and society to propel economic and social innovation and development. Under “Internet+”, the Internet plays a role not only as a kind of information infrastructure, but also as a more important innovation element for improving the production, trade, and management of economic and social entities. Against this background, this paper analyzes the origin of “Internet+” and its meaning, and describes the national actions to push it forward. From the aspects of promoting mass information interconnection and access, improving network management and network performance, supporting convenient network access and interaction, and adapting to the integration of industrialization and informationization and to production orientation, the various kinds of new networking paradigms suitable for “Internet+” are presented. We then discuss the significant challenges faced by “Internet+” in the aspects of networking scalability, heterogeneity, performance, and security, as well as networked applications. In the end, we draw some conclusions.
    Fully Homomorphic Encryption and Its Applications
    Liu Mingjie, Wang An
    Journal of Computer Research and Development    2014, 51 (12): 2593-2603.   DOI: 10.7544/issn1000-1239.2014.20131168
    Abstract views: 2143 | HTML views: 4 | PDF (1802 KB) downloads: 2891
    With the development of the Internet, and especially the emergence of the concept of cloud computing, there is an increasing demand for searching over and processing encrypted data, which makes fully homomorphic encryption more and more important. The concept of fully homomorphic encryption was first introduced by Rivest et al. in the 1970s, and how to construct such schemes was a hard problem for cryptographers. Not until 2009 did Gentry present the first fully homomorphic scheme, based on ideal lattices, which was a breakthrough in this field. After that, many cryptographers have done interesting work which has pushed fully homomorphic schemes toward practicality, and fully homomorphic encryption has become a very active topic in cryptography. This paper discusses the main progress on fully homomorphic schemes, including the first homomorphic encryption scheme introduced by Gentry and its optimizations, as well as the fully homomorphic schemes based on integers and on the learning with errors (LWE) problem. Then, a general application framework for fully homomorphic schemes is provided. Cloud computing, electronic voting, and digital watermarking are taken as examples to introduce the significant application value of fully homomorphic encryption.
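    The defining property can be stated compactly. In standard notation (ours, not specific to this survey), a scheme (KeyGen, Enc, Dec, Eval) is fully homomorphic when, for any efficiently computable function f,

$$ \mathrm{Dec}\bigl(sk,\ \mathrm{Eval}(pk, f, \mathrm{Enc}(pk, m_1), \ldots, \mathrm{Enc}(pk, m_k))\bigr) = f(m_1, \ldots, m_k) $$

    so that, for example, a cloud server can evaluate f over ciphertexts without ever seeing the plaintexts, which is exactly the search-and-process-on-encrypted-data demand described above.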
    Outliers and Change-Points Detection Algorithm for Time Series
    Su Weixing, Zhu Yunlong, Liu Fang, and Hu Kunyuan
    Journal of Computer Research and Development   
    Survey of Internet of Things Security
    Zhang Yuqing, Zhou Wei, Peng Anni
    Journal of Computer Research and Development    2017, 54 (10): 2130-2143.   DOI: 10.7544/issn1000-1239.2017.20170470
    Abstract views: 2438 | HTML views: 23 | PDF (1747 KB) downloads: 2824
    With the development of smart homes, intelligent care, and smart cars, the application fields of the IoT are becoming more and more widespread, and its security and privacy have received increasing attention from researchers. Currently, research on the security of the IoT is still in its initial stage, and most existing results cannot solve the major security problems in the development of the IoT well. In this paper, we first introduce the three-layer logical architecture of the IoT and outline the security problems and research priorities of each level. Then we discuss the security issues, such as privacy preservation and intrusion detection, that need special attention in the main IoT application scenarios (smart homes, intelligent healthcare, vehicular networks, smart grid, and other industrial infrastructure). Through synthesizing and analyzing the deficiencies of existing research and the causes of security problems, we point out five major technical challenges in IoT security: privacy protection in data sharing, equipment security protection under limited resources, more effective intrusion detection and defense systems and methods, access control of automated equipment operations, and cross-domain authentication of mobile devices. We finally detail each technical challenge and point out future research hotspots in IoT security.
    Cited by Baidu Scholar: 13