ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 January 2019, Volume 56 Issue 1
A Survey of Artificial Intelligence Chips
Han Dong, Zhou Shengyuan, Zhi Tian, Chen Yunji, Chen Tianshi
2019, 56(1):  7-22.  doi:10.7544/issn1000-1239.2019.20180693
In recent years, artificial intelligence (AI) technologies have been widely used in many commercial fields. With the attention and investment of researchers and companies around the world, AI technologies have proved their irreplaceable value in traditional fields such as speech recognition, image recognition, and search/recommendation engines. At the same time, however, the amount of computation these technologies require has increased dramatically, posing a huge challenge to the computing power of hardware equipment. We first describe the basic algorithms of AI technologies and their applications, including their operation modes and operation characteristics. Then, we introduce the development directions of AI chips in recent years and analyze the main AI chip architectures. Furthermore, we emphatically introduce the DianNao series of processors, which represent the latest and most advanced research in the field of AI chips. Their architectures and designs target different technical features, including deep learning algorithms, large-scale deep learning algorithms, machine learning algorithms, deep learning algorithms for processing two-dimensional images, and sparse deep learning algorithms. In addition, Cambricon, a complete and efficient instruction set architecture (ISA) for deep learning algorithms, is proposed. Finally, we analyze the development directions of artificial neural network technologies from various angles, including network structures, operation characteristics, and hardware devices, and on this basis we predict the possible directions of future work.
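A minimal NumPy sketch (illustrative, not from the paper) of the computation pattern that motivates AI chips: even one tiny convolutional layer is dominated by regular multiply-accumulate (MAC) operations, which dedicated accelerators such as the DianNao series execute far more efficiently than general-purpose cores.

```python
import numpy as np

def conv2d_naive(x, w):
    """Direct 2-D convolution; every output pixel is one long MAC chain."""
    H, W = x.shape
    K, _ = w.shape
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+K, j:j+K] * w)  # K*K multiply-accumulates
    return out

x = np.random.rand(32, 32)                 # one tiny single-channel feature map
w = np.random.rand(3, 3)
macs = (32 - 2) * (32 - 2) * 9             # 8100 MACs for this layer alone
print(conv2d_naive(x, w).shape, macs)
```

Real networks repeat this pattern over millions of channels and layers, which is why the regular, data-parallel MAC dataflow is the prime target of specialized hardware.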
Revisiting the Architecture and System of Flash-Based Storage
Lu Youyou, Yang Zhe, Shu Jiwu
2019, 56(1):  23-34.  doi:10.7544/issn1000-1239.2019.20180772
Flash-based storage has been rapidly and widely adopted in recent years across different fields, from embedded systems and desktops to enterprise servers and data centers. How to exploit the potential of flash memory is an important direction in storage research. Legacy storage systems were designed for hard disks over more than 60 years, and optimizations to existing systems have limited effectiveness in exploiting the benefits of flash storage. New flash storage architectures and systems that re-architect flash storage show great potential and are being adopted in industry. This paper presents the research progress in this area. First, it introduces the characteristics of flash memory and solid-state drives, and analyzes the problems of the legacy flash storage architecture. Then, it describes the architectural evolution of flash storage, including device-based FTL, host-based FTL, and software-managed flash. After that, it surveys storage systems built on open-channel SSDs and on near-data processing, both of which are required for function relocation and cooperation in software-managed flash. Finally, it concludes with the challenges and remaining research problems.
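A toy page-level FTL sketch (illustrative only): flash forbids in-place updates, so each write is redirected to a fresh physical page and the old page is invalidated. This logical-to-physical remapping is exactly the function that device-based FTL, host-based FTL, and software-managed flash place at different layers of the stack.

```python
class ToyFTL:
    """Page-level flash translation layer, reduced to its mapping logic."""
    def __init__(self, num_pages):
        self.l2p = {}                        # logical page -> physical page
        self.free = list(range(num_pages))   # clean physical pages
        self.invalid = set()                 # stale pages awaiting garbage collection

    def write(self, lpn, data):
        if lpn in self.l2p:
            self.invalid.add(self.l2p[lpn])  # out-of-place update: old page is now stale
        ppn = self.free.pop(0)
        self.l2p[lpn] = ppn
        # a real FTL would program flash page `ppn` with `data` here

ftl = ToyFTL(num_pages=8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")          # rewrite: the old physical page becomes invalid
print(ftl.l2p, ftl.invalid)  # {0: 1} {0}
```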
Practice of Chip Agile Development: Labeled RISC-V
Yu Zihao, Liu Zhigang, Li Yiwei, Huang Bowen, Wang Sa, Sun Ninghui, Bao Yungang
2019, 56(1):  35-48.  doi:10.7544/issn1000-1239.2019.20180771
Current chip design projects require considerable manpower and time to carry out, and carry certain risks. These conditions have limited the development of open-source chip design to some extent. To further lower the threshold for chip development, research teams at the University of California, Berkeley designed the open RISC-V ISA, open-sourced Rocket Chip, an SoC implementation of RISC-V, and put forward Chisel, a new hardware construction language for agile development. How do RISC-V, Rocket Chip and Chisel enable open-source agile chip development? With case studies from the development of the Labeled RISC-V project led by the Institute of Computing Technology, Chinese Academy of Sciences, this article shows: 1) an open and active ISA ecosystem (such as RISC-V) is a necessary condition for promoting chip innovation; 2) Chisel features such as bulk connection, metaprogramming, object-oriented programming, and functional programming can greatly reduce the amount of code and improve code maintainability; 3) agile development can achieve an order-of-magnitude improvement in coding efficiency, while achieving comparable or even better performance, power consumption, and area than traditional hardware development models.
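Chisel itself is embedded in Scala; as a language-neutral analogy only (not Chisel's actual API), the Python sketch below shows how metaprogramming lets one parameterized generator emit many concrete circuit descriptions, which is part of why agile flows need far less code than hand-written RTL.

```python
def make_register_file(num_regs: int, width: int) -> str:
    """Emit a Verilog-like module from parameters instead of hand-writing each variant."""
    ports = ",\n  ".join(f"output [{width - 1}:0] r{i}" for i in range(num_regs))
    return f"module regfile(\n  input clk,\n  {ports}\n);\nendmodule"

# One generator covers every configuration; hand-written RTL would repeat it all.
print(make_register_file(num_regs=4, width=32))
print(make_register_file(num_regs=32, width=64))
```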
Parallel Learning Architecture of micROS Powering the Ability of Life-Long Autonomous Learning
Dai Huadong, Yi Xiaodong, Wang Yanzhen, Wang Zhiyuan, Yang Xuejun
2019, 56(1):  49-57.  doi:10.7544/issn1000-1239.2019.20180776
As the most important infrastructure of robotic platforms, the robot operating system plays an important role in improving the autonomy and intelligence of robots and unmanned systems. In this paper, a parallel learning architecture of micROS supporting life-long autonomous learning is presented. It is built to equip a wide variety of robots with the ability of contextual adaptation. In addition, two core concepts guiding the design of micROS are presented. One is the actor, the control abstraction of robot behaviors. The other is the semantic situation, which abstracts the dataflow in micROS. Some important techniques, including collective behavior control and ad hoc wireless networking, are also described.
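A minimal, hypothetical actor sketch (not micROS's actual API): each actor encapsulates a robot behavior behind a mailbox, so control logic reacts to incoming semantic situations rather than to raw dataflow.

```python
import queue
import threading
import time

class Actor:
    """Wrap a behavior behind a mailbox; messages are handled sequentially."""
    def __init__(self, behavior):
        self.inbox = queue.Queue()
        self.behavior = behavior
        threading.Thread(target=self._loop, daemon=True).start()

    def send(self, situation):
        self.inbox.put(situation)

    def _loop(self):
        while True:
            self.behavior(self.inbox.get())

mover = Actor(lambda s: print("adapting to situation:", s))
mover.send({"obstacle_ahead": True})   # a toy "semantic situation" message
time.sleep(0.2)                        # let the daemon thread handle it
```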
Research Situation and Prospects of Operating System Virtualization
Wu Song, Wang Kun, Jin Hai
2019, 56(1):  58-68.  doi:10.7544/issn1000-1239.2019.20180720
As a lightweight virtualization technology, containers have not only been widely used in resource management and DevOps on cloud computing platforms and in data centers in recent years, but have also gradually been applied to new fields such as edge computing and the Internet of Things. Containers show a promising development trend and application prospects, so operating system virtualization, the core technology behind containers, has received widespread attention in both industry and academia. Operating system virtualization allows multiple applications to run in isolated runtime environments while sharing the same host operating system kernel. It has the advantages of fast startup, convenient deployment, low resource consumption, and high running efficiency, but also deficiencies such as weak isolation, which have become a research hotspot in the field of virtualization. In this survey, we first introduce the technical architecture of operating system virtualization and compare it with traditional virtualization technology to summarize its characteristics. Then we analyze the current research status of operating system virtualization at the container instance layer, the container management layer, and the kernel resource layer. Finally, we lay out several challenges and research prospects of operating system virtualization.
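A minimal Linux sketch (requires root; it shells out to the standard unshare(1) utility rather than any container runtime's API) of the kernel mechanism underlying OS virtualization: new namespaces give a process an isolated view of PIDs and mounts while it still shares the host kernel.

```python
import subprocess

# New PID and mount namespaces; --fork makes the child PID 1 inside them.
subprocess.run([
    "unshare", "--pid", "--mount", "--fork",
    "sh", "-c", "echo PID inside new namespace: $$",   # prints 1
])
```

This is also where the weak-isolation concern in the abstract comes from: unlike a virtual machine, the namespaced process still exercises the shared host kernel directly.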
Edge Computing: State-of-the-Art and Future Directions
Shi Weisong, Zhang Xingzhou, Wang Yifan, Zhang Qingyang
2019, 56(1):  69-89.  doi:10.7544/issn1000-1239.2019.20180760
With the burgeoning of the Internet of everything, the amount of data generated by edge devices is increasing dramatically, resulting in higher network bandwidth requirements. Meanwhile, the emergence of novel applications calls for lower network latency. For cloud computing, guaranteeing quality of service while dealing with such massive amounts of data is an unprecedented challenge, which has driven the rise of edge computing. Edge computing processes data at the edge of the network and has developed rapidly since 2014, as it has the potential to reduce latency and bandwidth costs, address the limited computing capability of cloud data centers, increase availability, and protect data privacy and security. This paper discusses three questions about edge computing: where does it come from, what is its current status, and where is it going? It first sorts out the development of edge computing and divides it into three periods: technology preparation, rapid growth, and steady development. It then summarizes seven essential technologies that drive the rapid development of edge computing. After that, six typical applications that are widely used in edge computing are illustrated. Finally, the paper proposes six open problems that urgently need to be solved in future development.
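A back-of-envelope calculation (illustrative numbers, not from the paper) of the bandwidth argument: shipping raw sensor data to the cloud costs orders of magnitude more bits than shipping results extracted at the edge.

```python
# One uncompressed 1080p camera versus edge-side result extraction.
raw_frame_bits = 1920 * 1080 * 24            # 24-bit color per frame
fps = 30
raw_mbps = raw_frame_bits * fps / 1e6        # ~1493 Mb/s of raw upload per camera
result_mbps = 2_000 * 8 * fps / 1e6          # ~0.48 Mb/s if only ~2 KB of labels/frame leave the edge
print(f"cloud upload: {raw_mbps:.0f} Mb/s  vs  edge results: {result_mbps:.2f} Mb/s")
```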
Zone-Oriented Architecture: An Architectural Style for Smart Web of Everything
Xu Zhiwei, Zeng Chen, Chao Lu, Peng Xiaohui
2019, 56(1):  90-102.  doi:10.7544/issn1000-1239.2019.20180775
After the PC Internet and the mobile Internet, the world is entering an era of the smart Internet (Web) of everything, also called the human-cyber-physical ternary computing era. A main feature is that "computers" are no longer restricted to PCs or smartphones, but are embodied in the physical world as trillions of smart devices. These smart devices will need diverse policies to satisfy different needs in innovation freedom, security, privacy, governance, and user experience. One set of control policies is unlikely to satisfy the needs of the global smart Web of everything. However, the global smart Web can be divided into zones, each being a zone of control, a zone of rights, and a zone of governance. A zone has its own controlling scope and policies, such as whether the devices in it are tethered or tetherless. This paper proposes an architectural style for the smart Web of everything, called zone-oriented architecture (ZOA), learning from the experiences of service-oriented architecture (SOA) and the representational state transfer (REST) style. We present a zone algebra with four operators on zones, a set of three normal forms, and a set of two recommended and three optional architectural constraints. We discuss five problems of the smart Web of everything, use an application scenario to show how ZOA can help, and present open problems for future research.
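The abstract names a four-operator zone algebra without listing the operators, so the following sketch is purely hypothetical: it models a zone as a device set plus a policy set, with one invented composition operator that keeps only the policies both zones share.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Zone:
    devices: frozenset
    policy: frozenset          # e.g. {"tethered", "audit-logged"}

def merge(a: Zone, b: Zone) -> Zone:
    """Hypothetical operator: a merged zone enforces only the common policies."""
    return Zone(a.devices | b.devices, a.policy & b.policy)

home = Zone(frozenset({"lamp", "lock"}), frozenset({"tethered"}))
lab = Zone(frozenset({"robot"}), frozenset({"tethered", "audit-logged"}))
print(merge(home, lab))        # union of devices, intersection of policies
```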
Anonymous Communication and Darknet: A Survey
Luo Junzhou, Yang Ming, Ling Zhen, Wu Wenjia, Gu Xiaodan
2019, 56(1):  103-130.  doi:10.7544/issn1000-1239.2019.20180769
An anonymous communication system is an overlay network built on top of the Internet that integrates various anonymity technologies, such as data forwarding, content encryption, and traffic obfuscation, to conceal the relationships between communicating entities and hide their communication content. Since it is quite difficult to trace back and locate the communicating entities, considerable abuse problems have sprung up. In particular, the hidden services of anonymous networks are often abused to establish darknets where diverse illegal activities take place, bringing great harm to individuals and society. At present, the field of anonymous communication systems and darknets lacks a comprehensive and deep technical survey. Given this status quo, this paper first elaborates on the basic concepts of the two terms and their relationship, then demonstrates in detail the working mechanisms and three key technologies of anonymous communication systems, namely anonymous access, anonymous routing, and darknet services, exemplified by the four mainstream darknets Tor, I2P, Freenet, and ZeroNet. On this basis, the paper summarizes state-of-the-art attack and defense technologies for anonymous communication, and introduces research on current darknet governance. Finally, the development trends of next-generation anonymous communication systems are discussed, along with the challenges of darknet governance and relevant countermeasures.
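A minimal sketch of the layered-encryption idea behind anonymous routing in systems like Tor (illustrative only; real Tor uses circuit-level cryptography, not Fernet): the sender wraps the payload in one encryption layer per relay, so each relay learns only its neighbors, never the whole path.

```python
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # entry, middle, exit
onion = b"GET /hidden-service"

for key in reversed(relay_keys):     # innermost layer belongs to the exit relay
    onion = Fernet(key).encrypt(onion)

for key in relay_keys:               # each relay peels exactly one layer
    onion = Fernet(key).decrypt(onion)

print(onion)                         # b'GET /hidden-service'
```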
New Development of Information Security: For the 60th Anniversary of Journal of Computer Research and Development
Cao Zhenfu
2019, 56(1):  131-137.  doi:10.7544/issn1000-1239.2019.20180756
This article first identifies the most important trend in the development of information security: new security issues are found by introducing cryptographic techniques into the field of system security, so that cryptographic security is increasingly applied in almost every aspect of the computer system. We then present the new characteristics of modern cryptography resulting from this new application and from new types of service modes: the entity has been transformed from a single party to multiple parties, the position has been transformed from local to remote, and the security model has been transformed from the channel security model to a "channel security plus" model. Based on these new features, we focus on both the state of the art and future directions for theoretical research on ciphertext access control, secure outsourced computation, secure search, electronic currency and blockchain security, and privacy preservation in artificial intelligence and machine learning. Finally, we introduce some application results, including hardware development of a mobile device for encrypted data sharing and identity authentication based on biometric information.
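A toy additive secret-sharing sketch (illustrative only) of the secure outsourced computation setting surveyed here: two non-colluding servers each see a random-looking share, yet their local sums combine to the true result, so the data never leaves the client in the clear.

```python
import random

P = 2**61 - 1                          # arithmetic modulo a large prime

def share(x):
    """Split x into two additive shares; neither share alone reveals x."""
    r = random.randrange(P)
    return r, (x - r) % P

xs = [12, 7, 30]
shares = [share(x) for x in xs]
sum0 = sum(s[0] for s in shares) % P   # computed locally on server 0
sum1 = sum(s[1] for s in shares) % P   # computed locally on server 1
print((sum0 + sum1) % P)               # 49, reconstructed only by the client
```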
A Survey on Inductive Logic Programming
Dai Wangzhou, Zhou Zhihua
2019, 56(1):  138-154.  doi:10.7544/issn1000-1239.2019.20180759
Inductive logic programming (ILP) is a subfield of symbolic rule learning that is formalized in first-order logic and rooted in first-order logical induction theories. The model learned by ILP is a set of highly interpretable first-order rules rather than a black box; owing to the strong expressive power of first-order logic, it is relatively easy to exploit domain knowledge during learning; and the learned model can describe relationships between objects, rather than merely predicting the labels of independent objects. However, due to its huge and complicated underlying hypothesis space, it is difficult for ILP to learn models efficiently. This paper reviews most of the current research in this area. Mainstream ILP approaches are introduced according to different categorizations of first-order logical induction theories. The paper also reviews the most recent progress in ILP, including techniques based on second-order logical abduction, probabilistic inductive logic programming (PILP), and ILP approaches that introduce differentiable components. It then introduces some representative applications of ILP in practical problems, discusses its major challenges, and finally considers prospects for future research directions.
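A minimal sketch (invented example) of the coverage test at the core of ILP systems: a candidate first-order rule, here grandparent(X,Z) :- parent(X,Y), parent(Y,Z), is scored by whether it covers the positive examples and rejects the negative ones.

```python
# Background knowledge: the parent/2 relation.
parent = {("ann", "bob"), ("bob", "carl"), ("ann", "dora")}

def rule_covers(x, z):
    """Body of the candidate rule: exists Y with parent(x,Y) and parent(Y,z)."""
    return any(a == x and (b, z) in parent for (a, b) in parent)

positives = {("ann", "carl")}
negatives = {("bob", "ann")}
print(all(rule_covers(*e) for e in positives),   # True: rule covers the positives
      any(rule_covers(*e) for e in negatives))   # False: rule rejects the negatives
```

Searching the space of such rule bodies is exactly where the hypothesis-space explosion mentioned above arises.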
Deep Learning for Digital Geometry Processing and Analysis: A Review
Xia Qing, Li Shuai, Hao Aimin, Zhao Qinping
2019, 56(1):  155-182.  doi:10.7544/issn1000-1239.2019.20180709
With the rapid development of hardware sensors and reconstruction technologies, digital geometric models have become the fourth generation of digital multimedia after audio, images, and video, and have been widely used in many fields. Traditional digital geometry processing and analysis are mainly based on manually defined features that are only valid for specific problems or under specific conditions. Deep learning, especially the neural network model, has demonstrated through its success in natural language processing and image processing its power as a feature-extraction tool for data analysis, and is therefore gradually being adopted in digital geometry processing. In this paper, we review recent work on digital geometry processing and analysis based on deep learning, carefully analyzing the research progress in shape matching and retrieval, shape classification and segmentation, shape generation, shape completion and reconstruction, and shape deformation and editing, and we also point out some existing problems and possible directions for future work.
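A minimal NumPy sketch (PointNet-style, shown as one representative idea rather than any specific method from the survey): a shared per-point transform followed by symmetric max pooling yields a permutation-invariant feature, the common trick that lets neural networks consume unordered 3-D geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(1024, 3))     # one shape as an unordered point set
W = rng.normal(size=(3, 64))            # shared "per-point MLP" (one layer here)

feature = np.maximum(points @ W, 0).max(axis=0)   # ReLU, then symmetric max-pool
shuffled = rng.permutation(points)                # reorder the points arbitrarily
same = np.allclose(feature, np.maximum(shuffled @ W, 0).max(axis=0))
print(feature.shape, same)              # (64,) True: point order does not matter
```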
Current Research Status and Prospects on Multimedia Content Understanding
Peng Yuxin, Qi Jinwei, Huang Xin
2019, 56(1):  183-208.  doi:10.7544/issn1000-1239.2019.20180770
With the rapid development of multimedia and Internet technologies, a large amount of multimedia data, such as images, video, text, and audio, has been emerging rapidly. Data of different media types from multiple sources is heterogeneous in form but related in semantics. As indicated by research in cognitive science, humans perceive and understand the environment by fusing input across different sensory organs, a capability determined by the organization of the human brain. Therefore, performing semantic analysis and correlation modeling across different media types to achieve comprehensive multimedia content understanding has been a key challenge, drawing wide interest from both academia and industry. In this paper, the basic concepts, representative methods, and research status of five of the latest prominent research topics in multimedia content understanding are reviewed: fine-grained image classification and retrieval, video classification and object detection, cross-media retrieval, visual description and generation, and visual question answering. The paper further presents the major challenges of multimedia content understanding and outlines future development trends. The goal is to help readers gain a comprehensive understanding of the research status of multimedia content understanding, draw more attention from researchers to relevant topics, and provide technical insights that promote further development of this area.
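A minimal sketch (invented vectors, not any method from the paper) of the common-space idea behind cross-media retrieval: once learned encoders map images and text into one shared space, plain cosine similarity ranks items across media types.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Pretend outputs of learned image/text encoders sharing a 4-D space.
image_vec = np.array([0.9, 0.1, 0.0, 0.2])
captions = {
    "a cat": np.array([0.8, 0.2, 0.1, 0.1]),
    "a car": np.array([0.0, 0.9, 0.3, 0.0]),
}
best = max(captions, key=lambda t: cosine(image_vec, captions[t]))
print(best)   # "a cat": the semantically matching text ranks first
```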
The State of the Art and Future Tendency of Smart Education
Zheng Qinghua, Dong Bo, Qian Buyue, Tian Feng, Wei Bifan, Zhang Weizhan, Liu Jun
2019, 56(1):  209-224.  doi:10.7544/issn1000-1239.2019.20180758
At present, smart education supported by information technologies such as big data analytics and artificial intelligence has become the trend in the development of education informatization, as well as a popular academic research direction. First, we investigate and analyze data mining technologies for two kinds of educational big data: teaching behavior and massive knowledge resources. Second, we focus on four vital technologies in the teaching process, namely learning guidance, recommendation, Q&A, and evaluation, covering learning path generation and navigation, learner profiling and personalized recommendation, online smart Q&A, and precise evaluation. Then we compare and analyze the mainstream smart education platforms in China and abroad. Finally, we discuss the limitations of current smart education research and summarize research and development directions for online smart learning assistants, smart learner assessment, networked group cognition, causality discovery, and other aspects of smart education.
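A toy sketch (invented example, not from the paper) of learner profiling driving learning-path navigation: a per-topic mastery profile picks the weakest topic whose prerequisites are already mastered, a simplistic stand-in for the path-generation techniques the survey covers.

```python
mastery = {"loops": 0.9, "recursion": 0.4, "graphs": 0.2}
prereqs = {"loops": [], "recursion": ["loops"], "graphs": ["recursion"]}

# Topics are "ready" once every prerequisite is mastered above a threshold.
ready = [t for t, ps in prereqs.items()
         if all(mastery[p] >= 0.6 for p in ps)]
next_topic = min(ready, key=lambda t: mastery[t])
print(next_topic)   # "recursion": unlocked, but the learner's weakest topic
```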