ISSN 1000-1239 CN 11-1777/TP

Most Downloaded Articles


    In last 3 years
    Edge Computing: State-of-the-Art and Future Directions
    Shi Weisong, Zhang Xingzhou, Wang Yifan, Zhang Qingyang
    Journal of Computer Research and Development    2019, 56 (1): 69-89.   DOI: 10.7544/issn1000-1239.2019.20180760
    Abstract views: 5767 | HTML views: 281 | PDF (3670 KB) downloads: 4094
    With the burgeoning of the Internet of Everything, the amount of data generated by edge devices is increasing dramatically, resulting in higher network bandwidth requirements. Meanwhile, the emergence of novel applications calls for lower network latency. Guaranteeing quality of service while dealing with such a massive amount of data is an unprecedented challenge for cloud computing, and it has pushed forward the horizon of edge computing. Edge computing processes data at the edge of the network and has developed rapidly since 2014, as it has the potential to reduce latency and bandwidth charges, address the limited computing capability of cloud data centers, increase availability, and protect data privacy and security. This paper mainly discusses three questions about edge computing: where does it come from, what is its current status, and where is it going? The paper first sorts out the development of edge computing and divides it into three periods: the technology preparation period, the rapid growth period, and the steady development period. It then summarizes seven essential technologies that drive the rapid development of edge computing. After that, six typical applications that have been widely used in edge computing are illustrated. Finally, the paper proposes six open problems that need to be solved urgently in future development.
    Survey on Privacy-Preserving Machine Learning
    Liu Junxu, Meng Xiaofeng
    Journal of Computer Research and Development    2020, 57 (2): 346-362.   DOI: 10.7544/issn1000-1239.2020.20190455
    Abstract views: 3209 | HTML views: 141 | PDF (1684 KB) downloads: 3310
    Large-scale data collection has vastly improved the performance of machine learning and achieved a win-win of economic and social benefits, while personal privacy preservation is facing new and greater risks and crises. In this paper, we summarize the privacy issues in machine learning and the existing work on privacy-preserving machine learning. We discuss two settings of the model training process: centralized learning and federated learning. The former needs to collect all the user data before training; although this setting is easy to deploy, it carries serious hidden privacy and security risks. The latter allows massive numbers of devices to collaboratively train a global model while keeping their data local; as federated learning is still at an early stage of study, it also has many problems to be solved. The existing work on privacy-preserving techniques falls into two main lines: encryption methods, including homomorphic encryption and secure multi-party computation, and perturbation methods represented by differential privacy, each having its advantages and disadvantages. In this paper, we first focus on the design of differentially private machine learning algorithms, especially in the centralized setting, and discuss the differences between traditional machine learning models and deep learning models. Then, we summarize the problems existing in current federated learning research. Finally, we propose the main challenges for future work and point out the connections among privacy protection, model interpretation, and data transparency.
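    As a concrete illustration of the perturbation line of work mentioned in this abstract, the sketch below applies the Laplace mechanism, the standard building block of differential privacy, to a simple numeric query. The dataset, sensitivity bound, and epsilon value are illustrative assumptions, not settings taken from the survey.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a numeric query result with epsilon-differential privacy
    by adding Laplace noise of scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon          # noise grows with sensitivity, shrinks with epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Toy example: privately release the mean of a bounded attribute in [0, 1].
data = np.random.default_rng(0).uniform(0.0, 1.0, size=1000)
sensitivity = 1.0 / len(data)              # changing one record moves the mean by at most 1/n
print(laplace_mechanism(data.mean(), sensitivity, epsilon=0.5))
```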
    Privacy and Security Issues in Machine Learning Systems: A Survey
    He Yingzhe, Hu Xingbo, He Jinwen, Meng Guozhu, Chen Kai
    Journal of Computer Research and Development    2019, 56 (10): 2049-2070.   DOI: 10.7544/issn1000-1239.2019.20190437
    Abstract views: 2261 | HTML views: 75 | PDF (1644 KB) downloads: 2710
    Artificial intelligence has penetrated into every corner of our life and brought humans great convenience. Especially in recent years, with the vigorous development of the deep learning branch of machine learning, there are more and more related applications in our life. Unfortunately, machine learning systems suffer from many security hazards, and the popularity of machine learning systems further magnifies these hazards. In order to unveil these security hazards and assist in implementing robust machine learning systems, we conduct a comprehensive investigation of mainstream deep learning systems. At the beginning of the study, we devise an analytical model for dissecting deep learning systems and define our survey scope. Our surveyed deep learning systems span four fields: image classification, audio speech recognition, malware detection, and natural language processing. We distill four types of security hazards and characterize them along multiple dimensions such as complexity, attack success rate, and damage. Furthermore, we survey defensive techniques for deep learning systems as well as their characteristics. Finally, through the observation of these systems, we give practical proposals for constructing robust deep learning systems.
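    Adversarial examples are one of the hazard types such surveys commonly cover. The sketch below shows the widely known fast gradient sign method (FGSM) in a framework-agnostic form; the gradient argument and epsilon budget are assumptions for illustration, not details drawn from this paper.

```python
import numpy as np

def fgsm_perturb(x, grad_loss_wrt_x, epsilon=0.03):
    """One-step FGSM adversarial perturbation.

    x                : input sample (e.g. a normalized image array in [0, 1])
    grad_loss_wrt_x  : gradient of the loss w.r.t. x for the true label
    epsilon          : L-infinity budget of the perturbation
    """
    x_adv = x + epsilon * np.sign(grad_loss_wrt_x)
    return np.clip(x_adv, 0.0, 1.0)   # keep the perturbed input in the valid range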
    A Measurable Bayesian Network Structure Learning Method
    Qi Xiaolong, Gao Yang, Wang Hao, Song Bei, Zhou Chunlei, Zhang Youwei
    Journal of Computer Research and Development    2018, 55 (8): 1717-1725.   DOI: 10.7544/issn1000-1239.2018.20180197
    Abstract views: 748 | HTML views: 5 | PDF (1062 KB) downloads: 2704
    In this paper, a Bayesian network structure learning method via variable ordering based on mutual information (BNS^vo-learning) is presented, which includes two components: metric information matrix learning and a "lazy" heuristic strategy. The metric information matrix characterizes the degree of dependency among variables and implies comparisons of dependency strength, which effectively solves the problem of misjudgment caused by the order of variables in the independence testing process. Under the guidance of the metric information matrix, the "lazy" heuristic strategy selectively adds variables to the condition set in order to reduce high-order tests and the overall number of tests. We theoretically prove the reliability of the new method and experimentally demonstrate that it searches significantly faster than other search processes. BNS^vo-learning is also easily extended to small and sparse data sets without losing the quality of the learned structure.
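    To make the role of a mutual-information-based dependency matrix concrete, the sketch below computes pairwise mutual information between discrete variables. The data and variables are toy assumptions; the paper's actual matrix construction and ordering strategy may differ.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mutual_information_matrix(data):
    """Pairwise mutual information between discrete variables (columns of data),
    one way to quantify dependency strength before independence testing."""
    n_vars = data.shape[1]
    mi = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            mi[i, j] = mi[j, i] = mutual_info_score(data[:, i], data[:, j])
    return mi

# Toy example with three discrete variables: b depends on a, c is independent.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=500)
b = (a + rng.integers(0, 2, size=500)) % 2
c = rng.integers(0, 3, size=500)
print(mutual_information_matrix(np.column_stack([a, b, c])))
```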
    Blockchain Data Analysis: A Review of Status, Trends and Challenges
    Chen Weili, Zheng Zibin
    Journal of Computer Research and Development    2018, 55 (9): 1853-1870.   DOI: 10.7544/issn1000-1239.2018.20180127
    Abstract views: 4595 | HTML views: 105 | PDF (3117 KB) downloads: 2673
    Blockchain is an emerging technology that has the potential to revolutionize many traditional industries. Since the creation of Bitcoin, which represents blockchain 1.0, blockchain technology has been attracting extensive attention and a great amount of user transaction data has been accumulated. Furthermore, the birth of Ethereum, which represents blockchain 2.0, has further enriched the data types in blockchain. While the popularity of blockchain technology has brought about a great deal of technical innovation, it has also led to many new problems, such as user privacy disclosure and illegal financial activities. However, the public accessibility of blockchain data provides an unprecedented opportunity for researchers to understand and resolve these problems through blockchain data analysis. Thus, it is of great significance to summarize the existing research problems, the results obtained, the possible research trends, and the challenges faced in blockchain data analysis. To this end, a comprehensive review and summary of the progress of blockchain data analysis is presented. The review begins by introducing the architecture and key techniques of blockchain technology, together with the main data types in blockchain and the corresponding analysis methods. Then, the current research progress in blockchain data analysis is summarized under seven research problems: entity recognition, privacy disclosure risk analysis, network portrait, network visualization, market effect analysis, transaction pattern recognition, and illegal behavior detection and analysis. Finally, the directions, prospects and challenges for future research are explored based on the shortcomings of current research.
    Research Review of Knowledge Graph and Its Application in Medical Domain
    Hou Mengwei, Wei Rong, Lu Liang, Lan Xin, Cai Hongwei
    Journal of Computer Research and Development    2018, 55 (12): 2587-2599.   DOI: 10.7544/issn1000-1239.2018.20180623
    Abstract views: 3800 | HTML views: 134 | PDF (2825 KB) downloads: 2626
    With the advent of the medical big data era, knowledge interconnection has received extensive attention. How to extract useful medical knowledge from massive data is the key to medical big data analysis. Knowledge graph technology provides a means to extract structured knowledge from massive texts and images. The combination of knowledge graphs, big data technology and deep learning technology is becoming a core driving force for the development of artificial intelligence. Knowledge graph technology has broad application prospects in the medical domain, and its application will play an important role in easing the contradiction between the supply of high-quality medical resources and the continuously increasing demand for medical services. At present, research on medical knowledge graphs is still at an exploratory stage, and existing knowledge graph technology generally suffers from low efficiency, many restrictions and poor scalability in the medical domain. This paper first analyzes the architecture and construction technology of medical knowledge graphs in view of the highly specialized and complex structure of medical big data. Secondly, the key technologies and research progress of knowledge extraction, knowledge representation, knowledge fusion, and knowledge reasoning in medical knowledge graphs are summarized. In addition, the application status of medical knowledge graphs in clinical decision support, intelligent medical semantic retrieval, medical question answering systems and other medical services is introduced. Finally, the existing problems and challenges of current research are discussed and analyzed, and future development is prospected.
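    As a small illustration of how such a graph stores and serves medical knowledge, the following sketch keeps (head, relation, tail) triples in memory and answers a one-hop query. The entities, relations, and facts are purely illustrative and not taken from the paper.

```python
from collections import defaultdict

class TripleStore:
    """A minimal in-memory store of (head, relation, tail) knowledge triples."""
    def __init__(self):
        self.by_head = defaultdict(list)

    def add(self, head, relation, tail):
        self.by_head[head].append((relation, tail))

    def query(self, head, relation):
        """Return all tails connected to `head` by `relation` (one-hop lookup)."""
        return [t for r, t in self.by_head[head] if r == relation]

kg = TripleStore()
kg.add("Type 2 diabetes", "has_symptom", "polyuria")
kg.add("Type 2 diabetes", "treated_by", "metformin")
kg.add("metformin", "contraindicated_with", "severe renal impairment")

print(kg.query("Type 2 diabetes", "treated_by"))   # ['metformin']
```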
    Survey of Smart Contract Technology and Application Based on Blockchain
    He Haiwu, Yan An, Chen Zehua
    Journal of Computer Research and Development    2018, 55 (11): 2452-2466.   DOI: 10.7544/issn1000-1239.2018.20170658
    Abstract views: 3670 | HTML views: 151 | PDF (3644 KB) downloads: 2588
    With the flourishing development of blockchain technology represented by Bitcoin, blockchain has moved from the era of programmable currency into the era of the smart contract. A smart contract is an event-driven, stateful contract expressed in code, which has attracted wide attention and study as blockchain technology has developed in depth. A protocol and a user interface are used to complete all steps of the smart contract process. Smart contracts enable users to implement personalized logic on the blockchain. Blockchain-based smart contract technology has the characteristics of decentralization, autonomy, observability, verifiability and information sharing. It can be effectively applied to build programmable finance and a programmable society, and has been widely used in digital payment, financial asset disposal, multi-signature contracts, cloud computing, the Internet of things, the sharing economy and other fields. The survey describes the basic concepts of smart contract technology, its whole life cycle, basic classification and structure, key technologies, the state of the art, as well as its application scenarios and the main technology platforms. The problems it currently encounters are also discussed. Finally, based on this theoretical knowledge of smart contracts, we set up an Ethereum experimental environment and develop a crowdsale contract system. The survey aims to provide helpful guidance and reference for future research on blockchain-based smart contracts.
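    Real Ethereum contracts such as the crowdsale mentioned above are normally written in Solidity and executed on-chain; the Python sketch below only illustrates the event-driven, stateful logic of a crowdsale (contribute, finalize, refund). The class name, rules, and parameters are assumptions for illustration, not the paper's implementation.

```python
class CrowdsaleContract:
    """Conceptual sketch of crowdsale contract state: raise funds toward a goal,
    pay the beneficiary on success, allow refunds on failure."""
    def __init__(self, goal_wei, beneficiary):
        self.goal = goal_wei
        self.beneficiary = beneficiary
        self.raised = 0
        self.contributions = {}
        self.closed = False

    def contribute(self, sender, amount_wei):
        assert not self.closed, "sale is closed"
        self.contributions[sender] = self.contributions.get(sender, 0) + amount_wei
        self.raised += amount_wei

    def finalize(self):
        """Close the sale; funds go to the beneficiary only if the goal was met."""
        self.closed = True
        return self.beneficiary if self.raised >= self.goal else None

    def refund(self, sender):
        assert self.closed and self.raised < self.goal, "refunds only after a failed sale"
        amount, self.contributions[sender] = self.contributions.get(sender, 0), 0
        return amount
```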
    Security and Privacy Risks in Artificial Intelligence Systems
    Chen Yufei, Shen Chao, Wang Qian, Li Qi, Wang Cong, Ji Shouling, Li Kang, Guan Xiaohong
    Journal of Computer Research and Development    2019, 56 (10): 2135-2150.   DOI: 10.7544/issn1000-1239.2019.20190415
    Abstract views: 4324 | HTML views: 186 | PDF (1175 KB) downloads: 2153
    Human society is witnessing a wave of artificial intelligence (AI) driven by deep learning techniques, bringing a technological revolution for human production and life. In some specific fields, AI has achieved or even surpassed human-level performance. However, most previous machine learning theories have not considered open or even adversarial environments, and security and privacy issues are gradually rising. Besides insecure code implementations, biased models, adversarial examples and sensor spoofing can also lead to security risks that are hard to discover with traditional security analysis tools. This paper reviews previous work on AI system security and privacy, revealing potential security and privacy risks. Firstly, we introduce a threat model of AI systems, including attack surfaces, attack capabilities and attack goals. Secondly, we analyze security risks and countermeasures in terms of four critical components of AI systems: data input (sensors), data preprocessing, the machine learning model, and output. Finally, we discuss future research trends on the security of AI systems. The aim of this paper is to raise the attention of the computer security community and the AI community to the security and privacy of AI systems, so that they can work together to unlock AI's potential to build a bright future.
    A Survey on Machine Learning Based Routing Algorithms
    Liu Chenyi, Xu Mingwei, Geng Nan, Zhang Xiang
    Journal of Computer Research and Development    2020, 57 (4): 671-687.   DOI: 10.7544/issn1000-1239.2020.20190866
    Abstract views: 2780 | HTML views: 104 | PDF (2198 KB) downloads: 1922
    The rapid development of the Internet has given rise to many new applications, including real-time multimedia services, remote cloud services, etc. These applications require various types of service quality, which is a significant challenge for current best-effort routing algorithms. Following the recent huge success of machine learning in games, computer vision and natural language processing, many researchers have tried to design "smart" routing algorithms based on machine learning methods. In contrast to traditional model-based, decentralized routing algorithms (e.g. OSPF), machine learning based routing algorithms are usually data-driven, which enables them to adapt to dynamically changing network environments and accommodate different service quality requirements. Data-driven routing algorithms based on machine learning have shown great potential to become an important part of the next-generation network. However, research on artificially intelligent routing is still at a very early stage. In this paper we first introduce current research on data-driven routing algorithms based on machine learning, showing the main ideas, application scenarios, and pros and cons of the different works. Our analysis shows that current research mainly addresses the principles of machine learning based routing algorithms but is still far from deployment in real scenarios. We then analyze different training and deployment methods for machine learning based routing algorithms in real scenarios and propose two reasonable approaches to train and deploy such routing algorithms with low overhead and high reliability. Finally, we discuss the opportunities and challenges and show several potential research directions for machine learning based routing algorithms in the future.
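    One classic instance of the data-driven idea contrasted with OSPF above is Q-routing, where each node learns delay estimates per next hop from observed traffic. The sketch below shows that learning rule in a generic form; the topology interface, reward signal, and parameters are illustrative assumptions, not methods evaluated in the survey.

```python
import random
from collections import defaultdict

class QRouter:
    """Minimal Q-routing sketch: learn per-(node, destination, next-hop) delay estimates
    and forward packets along the neighbor with the lowest estimate."""
    def __init__(self, neighbors, alpha=0.1, epsilon=0.1):
        self.neighbors = neighbors              # {node: [neighbor, ...]}
        self.q = defaultdict(float)             # (node, dest, next_hop) -> estimated delay
        self.alpha, self.epsilon = alpha, epsilon

    def next_hop(self, node, dest):
        if random.random() < self.epsilon:      # occasional exploration
            return random.choice(self.neighbors[node])
        return min(self.neighbors[node], key=lambda n: self.q[(node, dest, n)])

    def update(self, node, dest, hop, observed_delay, best_downstream):
        """Move the estimate toward the observed hop delay plus the best estimate
        reported by the chosen next hop."""
        key = (node, dest, hop)
        target = observed_delay + best_downstream
        self.q[key] += self.alpha * (target - self.q[key])
```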
    Review of Entity Relation Extraction Methods
    Li Dongmei, Zhang Yang, Li Dongyuan, Lin Danqiong
    Journal of Computer Research and Development    2020, 57 (7): 1424-1448.   DOI: 10.7544/issn1000-1239.2020.20190358
    Abstract views: 2146 | HTML views: 69 | PDF (1404 KB) downloads: 1768
    Information extraction has long received attention from a large body of research in the field of natural language processing. Information extraction mainly includes three sub-tasks: entity extraction, relation extraction and event extraction, among which relation extraction is the core task and a highly significant part of information extraction. The main goal of entity relation extraction is to identify and determine the specific relation between entity pairs in large volumes of natural language text, which provides fundamental support for intelligent retrieval, semantic analysis, etc., and improves both search efficiency and the automatic construction of knowledge bases. We briefly describe the development of entity relation extraction and introduce several tools and evaluation systems for relation extraction in both Chinese and English. In addition, four main categories of entity relation extraction methods are covered in this paper: traditional relation extraction methods, and methods based respectively on traditional machine learning, deep learning and the open domain. More importantly, we summarize the mainstream research methods and corresponding representative results in different historical stages, and conduct a contrastive analysis of the different entity relation extraction methods. In the end, we forecast the content and trends of future research.
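    The earliest of the traditional methods the review groups together are pattern-based. The sketch below shows that idea in its simplest form, a hand-written pattern that pulls (entity, relation, entity) triples out of raw text; the pattern, relation name, and example sentence are illustrative assumptions.

```python
import re

# One hand-crafted pattern for a single relation type, "founded_by".
PATTERN = re.compile(r"(?P<head>[A-Z][\w\s]+?) was founded by (?P<tail>[A-Z][\w\s]+?)[.,]")

def extract_founded_by(text):
    """Return (head_entity, relation, tail_entity) triples matched by the pattern."""
    return [(m.group("head").strip(), "founded_by", m.group("tail").strip())
            for m in PATTERN.finditer(text)]

print(extract_founded_by("Microsoft was founded by Bill Gates and Paul Allen."))
```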
    Survey on Techniques, Applications and Security of Machine Learning Interpretability
    Ji Shouling, Li Jinfeng, Du Tianyu, Li Bo
    Journal of Computer Research and Development    2019, 56 (10): 2071-2096.   DOI: 10.7544/issn1000-1239.2019.20190540
    Abstract views: 2118 | HTML views: 66 | PDF (5499 KB) downloads: 1767
    While machine learning has achieved great success in various domains, the lack of interpretability has limited its widespread application in real-world tasks, especially security-critical tasks. To overcome this crucial weakness, intensive research on improving the interpretability of machine learning models has emerged, and a plethora of interpretation methods have been proposed to help end users understand their inner working mechanisms. However, the research on model interpretation is still in its infancy, and there are a large number of scientific issues to be resolved. Furthermore, different researchers have different perspectives on the interpretation problem and give different definitions of interpretability, and the proposed interpretation methods also have different emphases. To date, the research community still lacks a comprehensive understanding of interpretability as well as a scientific guide for research on model interpretation. In this survey, we review the interpretation problem in machine learning and make a systematic summary and scientific classification of the existing research works. At the same time, we discuss the potential applications of interpretation-related technologies, analyze the relationship between interpretability and the security of interpretable machine learning, and discuss the current research challenges and potential future research directions, aiming at providing necessary help for future researchers to facilitate the research and application of model interpretability.
    Survey on Accelerating Neural Network with Hardware
    Chen Guilin, Ma Sheng, Guo Yang
    Journal of Computer Research and Development    2019, 56 (2): 240-253.   DOI: 10.7544/issn1000-1239.2019.20170852
    Abstract views: 1816 | HTML views: 51 | PDF (3305 KB) downloads: 1645
    Artificial neural networks are widely used in artificial intelligence applications such as voice assistants, image recognition and natural language processing. As applications grow in complexity, their computational demands also increase dramatically. Traditional general-purpose processors are limited by memory bandwidth and energy consumption when dealing with complex neural networks, so designers have begun to improve the architecture of general-purpose processors to support the efficient processing of neural networks. In addition, the development of special-purpose accelerators has become another way to accelerate neural network processing; compared with general-purpose processors, they offer lower energy consumption and higher performance. This article introduces the designs of current general-purpose processors and special-purpose accelerators for supporting neural networks, and summarizes the latest design innovations and breakthroughs of neural network acceleration platforms. In particular, the article provides an overview of neural networks and discusses the improvements made by various general-purpose chips to support neural networks, which include supporting low-precision operations and adding calculation modules to speed up neural network processing. Then, from the viewpoint of computational structure and storage structure, the article summarizes the customized designs of special-purpose accelerators and describes the dataflows used by neural network chips based on the reuse of various types of data in neural networks. Through analyzing the advantages and disadvantages of these solutions, the article puts forward the future design trends and challenges of neural network accelerators.
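    To make the "low-precision operations" mentioned above concrete, the sketch below performs symmetric int8 quantization of a weight tensor, the kind of transformation that lets accelerators trade a small amount of accuracy for much cheaper arithmetic and storage. The example weights and scheme details are illustrative assumptions.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats to int8 plus one scale."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
print(np.max(np.abs(w - dequantize(q, s))))   # quantization error stays on the order of scale/2
```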
    Research Advances in the Interpretability of Deep Learning
    Cheng Keyang, Wang Ning, Shi Wenxi, Zhan Yongzhao
    Journal of Computer Research and Development    2020, 57 (6): 1208-1217.   DOI: 10.7544/issn1000-1239.2020.20190485
    Abstract views: 2203 | HTML views: 62 | PDF (1226 KB) downloads: 1614
    Research on the interpretability of deep learning is closely related to various disciplines such as artificial intelligence, machine learning, logic and cognitive psychology. It has important theoretical research significance and practical application value in many fields, such as information push, medical research, finance, and information security. In the past few years a great deal of work has been carried out in this field, but various issues remain. In this paper, we review the history of research on deep learning interpretability and related work. Firstly, we introduce the history of interpretable deep learning from three aspects: the origin of interpretable deep learning, the research exploration stage, and the model construction stage. Then, the state of research is presented from three aspects, namely visual analysis, robust perturbation analysis and sensitivity analysis. Research on the construction of interpretable deep learning models is introduced from four aspects: model agents, logical reasoning, network node association analysis, and traditional machine learning models. Moreover, the limitations of current research are analyzed and discussed. Finally, we list typical applications of interpretable deep learning and forecast possible future research directions of this field, along with reasonable and suitable suggestions.
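    As a tiny example of the perturbation/sensitivity analyses mentioned above, the sketch below occludes image patches one at a time and records how much the model's score drops, producing a coarse sensitivity map. The `model` callable is an assumed interface returning class scores, not an API from any particular library.

```python
import numpy as np

def occlusion_sensitivity(model, image, target_class, patch=8):
    """Perturbation-based sensitivity map: blank each patch and record the score drop."""
    base_score = model(image[None])[0, target_class]
    h, w = image.shape[:2]
    heatmap = np.zeros(((h + patch - 1) // patch, (w + patch - 1) // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0      # remove one patch of evidence
            score = model(occluded[None])[0, target_class]
            heatmap[i // patch, j // patch] = base_score - score
    return heatmap   # large values mark regions the prediction depends on
```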
    Interpretation and Understanding in Machine Learning
    Chen Kerui, Meng Xiaofeng
    Journal of Computer Research and Development    2020, 57 (9): 1971-1986.   DOI: 10.7544/issn1000-1239.2020.20190456
    Abstract views: 1875 | HTML views: 73 | PDF (1315 KB) downloads: 1599
    In recent years, machine learning has developed rapidly, especially deep learning, where remarkable achievements have been obtained in image, voice, natural language processing and other fields. The expressive ability of machine learning algorithms has been greatly improved; however, with the increase of model complexity, their interpretability has deteriorated. So far, the interpretability of machine learning remains a challenge. Trained models are regarded as black boxes, which seriously hampers the use of machine learning in certain fields, such as medicine, finance and so on. Presently, only a few works emphasize the interpretability of machine learning. Therefore, this paper aims to classify, analyze and compare the existing interpretation methods; on the one hand, it expounds the definition and measurement of interpretability, while on the other hand, for different objects of interpretation, it summarizes and analyzes various interpretation techniques for machine learning from three aspects: model understanding, prediction result interpretation, and mimic model understanding. Moreover, the paper also discusses the challenges and opportunities faced by interpretable machine learning methods and possible development directions in the future. The surveyed interpretation methods should also be useful for putting many open research questions in perspective.
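    A minimal sketch of the "mimic model understanding" aspect listed above: fit an interpretable decision tree to the predictions of a black-box model and read the tree as an approximate explanation. The synthetic data, model choices, and depth limit are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Train an opaque model, then a shallow tree that mimics its predictions.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

mimic = DecisionTreeClassifier(max_depth=3, random_state=0)
mimic.fit(X, black_box.predict(X))            # fit on the black box's outputs, not the true labels

print("fidelity:", mimic.score(X, black_box.predict(X)))   # how faithfully the tree mimics the model
print(export_text(mimic))                                    # human-readable surrogate rules
```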
    A Survey of Artificial Intelligence Chip
    Han Dong, Zhou Shengyuan, Zhi Tian, Chen Yunji, Chen Tianshi
    Journal of Computer Research and Development    2019, 56 (1): 7-22.   DOI: 10.7544/issn1000-1239.2019.20180693
    Abstract views: 2257 | HTML views: 45 | PDF (4014 KB) downloads: 1569
    In recent years, artificial intelligence (AI) technologies have been widely used in many commercial fields. With the attention and investment of scientific researchers and research companies around the world, AI technologies have proved their irreplaceable value in traditional speech recognition, image recognition, search/recommendation engines and other fields. At the same time, however, the amount of computation required by AI technologies has increased dramatically, which poses a huge challenge to the computing power of hardware. We first describe the basic algorithms of AI technologies and their application algorithms, including their operation modes and operation characteristics. Then, we introduce the development directions of AI chips in recent years and analyze the main architectures of AI chips. Furthermore, we emphatically introduce the DianNao series of processors, which are among the latest and most advanced research results in the field of AI chips. Their architectures and designs are proposed for different technical features, including deep learning algorithms, large-scale deep learning algorithms, machine learning algorithms, deep learning algorithms for processing two-dimensional images, and sparse deep learning algorithms. In addition, a complete and efficient instruction set architecture (ISA) for deep learning algorithms, Cambricon, is proposed. Finally, we analyze the development directions of artificial neural network technologies from various angles, including network structures, operation characteristics and hardware devices, and on this basis we predict and prospect possible directions of future work.
    Survey on Machine Learning for Database Systems
    Meng Xiaofeng, Ma Chaohong, Yang Chen
    Journal of Computer Research and Development    2019, 56 (9): 1803-1820.   DOI: 10.7544/issn1000-1239.2019.20190446
    Abstract views: 1431 | HTML views: 50 | PDF (1227 KB) downloads: 1561
    As one of the most popular technologies, database systems have been developed for more than 50 years and are mature enough to support many real scenarios. Although much research still focuses on traditional database optimization tasks, the resulting performance improvements are small. In fact, with the advent of big data, we have encountered a new gap obstructing further performance improvement of database systems. Database systems face challenges in two respects. Firstly, the increase in data volume requires the database system to process tasks more quickly. Secondly, the rapid change of query workloads and their diversity make it impossible for database systems to adjust their knobs to the optimal configuration in real time. Fortunately, machine learning may bring an unprecedented opportunity for traditional database systems and lead us to a new optimization direction. In this paper, we introduce how to incorporate machine learning into the further development of database management systems. We focus on current research on machine learning for database systems, mainly including machine learning for storage management and query optimization, as well as automatic database management systems. This area has also opened up various challenges and problems to be solved. Thus, based on the analysis of existing technologies, the future challenges that may be encountered in machine learning for database systems are pointed out.
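    One widely cited instance of machine learning for storage management is the "learned index" idea: a model maps a key to its approximate position in sorted data, and a bounded local search corrects the estimate. The sketch below illustrates that idea only; the linear model, key distribution, and error-bound handling are assumptions, not the survey's design.

```python
import numpy as np

# Sorted keys and their positions form the "training data" for the index model.
keys = np.sort(np.random.default_rng(0).integers(0, 10**6, size=10_000))
positions = np.arange(len(keys))

slope, intercept = np.polyfit(keys, positions, deg=1)        # the learned "model"
max_err = int(np.ceil(np.max(np.abs(slope * keys + intercept - positions)))) + 1

def lookup(key):
    """Predict the position, then search only within the model's known error bound."""
    guess = int(slope * key + intercept)
    lo, hi = max(0, guess - max_err), min(len(keys), guess + max_err)
    idx = lo + np.searchsorted(keys[lo:hi], key)
    return idx if idx < len(keys) and keys[idx] == key else None

print(lookup(int(keys[1234])))   # returns 1234 (or the first position holding that key value)
```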
    Security Issues and Privacy Preserving in Machine Learning
    Wei Lifei, Chen Congcong, Zhang Lei, Li Mengsi, Chen Yujiao, Wang Qin
    Journal of Computer Research and Development    2020, 57 (10): 2066-2085.   DOI: 10.7544/issn1000-1239.2020.20200426
    Abstract views: 1826 | HTML views: 54 | PDF (2361 KB) downloads: 1550
    In recent years, machine learning has developed rapidly and is widely used in many aspects of work and life, which brings not only convenience but also great security risks. Security and privacy issues have become a stumbling block in the development of machine learning. The training and inference of machine learning models are based on large amounts of data, which always contain some sensitive information. With the frequent occurrence of data privacy leakage events and the annual growth of the leakage scale, how to ensure the security and privacy of data has attracted the attention of researchers from academia and industry. In this paper we introduce fundamental concepts such as the adversary model in privacy-preserving machine learning, and summarize the common security threats and privacy threats in the training and inference phases of machine learning, such as privacy leakage of training data, poisoning attacks, adversarial attacks, privacy attacks, etc. Subsequently, we introduce common security protection and privacy preserving methods, focusing especially on homomorphic encryption, secure multi-party computation and differential privacy, and compare the typical schemes and applicable scenarios of the three technologies. At the end, the future development trends and research directions of privacy preserving in machine learning are prospected.
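    As a small illustration of the secure multi-party computation line compared in this survey, the sketch below uses additive secret sharing over a prime field: each value is split into random shares, and sums can be computed on shares without revealing the inputs. The field size and three-party setting are illustrative assumptions.

```python
import secrets

PRIME = 2**61 - 1   # field modulus for the shares (an illustrative choice)

def share(value, n_parties=3):
    """Split a value into additive shares that individually reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

a_shares, b_shares = share(123), share(456)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]   # each party adds locally
print(reconstruct(sum_shares))   # 579, recovered only when all shares are combined
```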
    Review on the Development of Microservice Architecture
    Feng Zhiyong, Xu Yanwei, Xue Xiao, Chen Shizhan
    Journal of Computer Research and Development    2020, 57 (5): 1103-1122.   DOI: 10.7544/issn1000-1239.2020.20190460
    Abstract views: 2816 | HTML views: 147 | PDF (3960 KB) downloads: 1523
    With the rapid development of cloud computing and the Internet of things, users' demands on software systems are becoming increasingly diverse. Service-oriented architecture (SOA) needs to strike a balance between stable service integration and flexible adaptation to requirements. Against this background, microservice technology, in which each service runs as an independent process and can be deployed independently, has emerged as the times require. It has a slew of advantages, such as distributed storage, high availability, scalability, and intelligent operation and maintenance, which can make up for the shortcomings of the traditional SOA architecture. From the perspective of system integration, the paper first describes the application background of microservices, including the core components of microservices, software technology development, and architecture evolution to ensure the availability of microservice infrastructure. Secondly, in view of problems existing in practical applications, the paper analyzes the key technologies used in specific applications of the microservice architecture from the aspects of distributed communication, distributed data storage, distributed call chains, and testing complexity; then, a specific application case is given to confirm the technical feasibility of microservices. Finally, this paper explores the challenges faced by microservices from the aspects of infrastructure, information exchange, data security, and network security. Meanwhile, the future development trend is analyzed so as to provide valuable theoretical and technical reference for the future innovation and development of microservices.
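    The sketch below illustrates the independently deployable unit described above: a single small service owning its own data and exposing a REST endpoint plus a health check over HTTP. Flask, the route names, and the in-memory data store are illustrative assumptions; the review does not prescribe a particular framework.

```python
from flask import Flask, jsonify

app = Flask(__name__)
ORDERS = {"1001": {"item": "sensor", "quantity": 3}}   # stand-in for this service's own data store

@app.route("/orders/<order_id>")
def get_order(order_id):
    """REST endpoint other services call over the network instead of sharing a database."""
    order = ORDERS.get(order_id)
    return (jsonify(order), 200) if order else (jsonify({"error": "not found"}), 404)

@app.route("/health")
def health():
    return jsonify({"status": "ok"})   # probed by the orchestrator for liveness

if __name__ == "__main__":
    app.run(port=8080)                  # one process, deployed and scaled independently
```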
    TensorFlow Lite: On-Device Machine Learning Framework
    Li Shuangfeng
    Journal of Computer Research and Development    2020, 57 (9): 1839-1853.   DOI: 10.7544/issn1000-1239.2020.20200291
    Abstract views: 1069 | HTML views: 36 | PDF (1882 KB) downloads: 1477
    TensorFlow Lite (TFLite) is a lightweight, fast and cross-platform open source machine learning framework specifically designed for mobile and IoT devices. It is part of TensorFlow and supports multiple platforms such as Android, iOS, embedded Linux, and MCUs. It greatly lowers the barrier for developers, accelerates the development of on-device machine learning (ODML), and makes ML run everywhere. This article introduces the trends, challenges and typical applications of ODML; the origin and system architecture of TFLite; best practices and tool chains suitable for ML beginners; and the roadmap of TFLite.
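    The sketch below shows the basic TFLite workflow the article is about: convert a trained model to the TFLite format and run it with the TFLite interpreter. The tiny Keras model and input shape are placeholders chosen for illustration, not examples from the article.

```python
import numpy as np
import tensorflow as tf

# A placeholder model standing in for a real trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Convert the model to the TFLite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run on-device-style inference with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.zeros((1, 8), dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```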
    Principle and Research Progress of Quantum Computation and Quantum Cryptography
    Wang Yongli, Xu Qiuliang
    Journal of Computer Research and Development    2020, 57 (10): 2015-2026.   DOI: 10.7544/issn1000-1239.2020.20200615
    Abstract views: 1503 | HTML views: 41 | PDF (967 KB) downloads: 1472
    Quantum computation and quantum cryptography are based on the principles of quantum mechanics. In 1984, Bennett and Brassard proposed the first quantum key distribution protocol, called BB84, which started the study of quantum cryptography. Since then, a great deal of work has been carried out in fields such as quantum encryption and quantum signatures. In 1994, Shor designed the first practical quantum algorithm that can factor large integers in polynomial time. Shor's algorithm uses the Quantum Fourier Transform, which is the kernel of most quantum algorithms. In 1996, Grover designed a new algorithm that can search unstructured data and obtain the required result in time roughly proportional to the square root of the size of the data. Shor's algorithm and Grover's algorithm not only embody the advantages of quantum computing, but also pose a threat to traditional cryptography based on mathematical hardness assumptions, such as RSA. After half a century of development, quantum computing and quantum cryptography have achieved fruitful results in theory and practice. In this paper, we summarize these topics from the perspectives of the mathematical framework of quantum mechanics, basic concepts and principles, the basic ideas of quantum computing, and the research progress and main ideas of quantum cryptography.
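    To make the BB84 protocol mentioned above a little more tangible, the sketch below is a purely classical simulation of its key-sifting step, with no eavesdropper and no channel noise; the qubit count and random-basis model are illustrative assumptions.

```python
import secrets

def bb84_sift(n_qubits=32):
    """Classical simulation of BB84 sifting: keep only positions where bases match.
    Bases: 0 = rectilinear, 1 = diagonal."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n_qubits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_qubits)]
    bob_bases   = [secrets.randbelow(2) for _ in range(n_qubits)]

    # If Bob measures in Alice's basis he recovers her bit; otherwise his outcome is random.
    bob_bits = [b if ab == bb else secrets.randbelow(2)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Publicly compare bases and keep only the matching positions.
    return [(a, o) for a, o, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
            if ab == bb]   # about half the positions survive, and the kept bits agree

print(bb84_sift(16))
```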