ISSN 1000-1239 CN 11-1777/TP

Most Downloaded Articles


    In last 2 years
    Survey on Privacy-Preserving Machine Learning
    Liu Junxu, Meng Xiaofeng
    Journal of Computer Research and Development    2020, 57 (2): 346-362.   DOI: 10.7544/issn1000-1239.2020.20190455
    Abstract views: 3209 | HTML views: 141 | PDF (1684 KB) downloads: 3310
    Large-scale data collection has vastly improved the performance of machine learning and achieved a win-win situation for both economic and social benefits, while personal privacy preservation faces new and greater risks. In this paper, we summarize the privacy issues in machine learning and the existing work on privacy-preserving machine learning. We discuss two settings of the model training process: centralized learning and federated learning. The former needs to collect all the user data before training; although this setting is easy to deploy, it still carries enormous hidden privacy and security risks. The latter allows massive numbers of devices to collaboratively train a global model while keeping their data local; as this line of work is still at an early stage, many problems remain to be solved. Existing privacy-preserving techniques fall into two main threads: encryption methods, including homomorphic encryption and secure multi-party computation, and perturbation methods represented by differential privacy, each having its advantages and disadvantages. In this paper, we first focus on the design of differentially private machine learning algorithms, especially in the centralized setting, and discuss the differences between traditional machine learning models and deep learning models. Then, we summarize the problems existing in current federated learning research. Finally, we present the main challenges for future work and point out the connections among privacy protection, model interpretation and data transparency.
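    As a concrete illustration of the perturbation line of work mentioned above, the sketch below shows the Laplace mechanism, the basic building block of differential privacy. It is a minimal, generic example; the function names, bounds and parameters are illustrative and not taken from the surveyed algorithms.

```python
# Minimal sketch of the Laplace mechanism (epsilon-differential privacy).
# Names and parameters are illustrative, not those of the surveyed work.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy answer for a query with L1 sensitivity `sensitivity`."""
    scale = sensitivity / epsilon          # noise scale b = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release the mean of a small dataset whose values lie in [0, 10].
data = np.array([4.0, 7.0, 5.0, 9.0])
sensitivity = 10.0 / len(data)             # changing one record moves the mean by at most this
noisy_mean = laplace_mechanism(data.mean(), sensitivity, epsilon=1.0)
print(noisy_mean)
```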
    Privacy and Security Issues in Machine Learning Systems: A Survey
    He Yingzhe, Hu Xingbo, He Jinwen, Meng Guozhu, Chen Kai
    Journal of Computer Research and Development    2019, 56 (10): 2049-2070.   DOI: 10.7544/issn1000-1239.2019.20190437
    Abstract views: 2261 | HTML views: 75 | PDF (1644 KB) downloads: 2710
    Artificial intelligence has penetrated into every corner of our life and brought humans great convenience. Especially in recent years, with the vigorous development of the deep learning branch of machine learning, there are more and more related applications in our life. Unfortunately, machine learning systems suffer from many security hazards, and the popularity of machine learning systems further magnifies these hazards. In order to unveil these security hazards and assist in implementing robust machine learning systems, we conduct a comprehensive investigation of mainstream deep learning systems. At the beginning of the study, we devise an analytical model for dissecting deep learning systems and define our survey scope. The surveyed deep learning systems span four fields: image classification, audio speech recognition, malware detection, and natural language processing. We distill four types of security hazards and characterize them along multiple dimensions such as complexity, attack success rate, and damage. Furthermore, we survey defensive techniques for deep learning systems as well as their characteristics. Finally, based on our observation of these systems, we offer practical proposals for constructing robust deep learning systems.
    Security and Privacy Risks in Artificial Intelligence Systems
    Chen Yufei, Shen Chao, Wang Qian, Li Qi, Wang Cong, Ji Shouling, Li Kang, Guan Xiaohong
    Journal of Computer Research and Development    2019, 56 (10): 2135-2150.   DOI: 10.7544/issn1000-1239.2019.20190415
    Abstract views: 4324 | HTML views: 186 | PDF (1175 KB) downloads: 2153
    Human society is witnessing a wave of artificial intelligence (AI) driven by deep learning techniques, bringing a technological revolution to human production and life. In some specific fields, AI has achieved or even surpassed human-level performance. However, most previous machine learning theories have not considered open or even adversarial environments, and security and privacy issues are gradually rising. Besides insecure code implementations, biased models, adversarial examples, and sensor spoofing can also lead to security risks that are hard to discover with traditional security analysis tools. This paper reviews previous work on AI system security and privacy, revealing potential security and privacy risks. Firstly, we introduce a threat model of AI systems, including attack surfaces, attack capabilities and attack goals. Secondly, we analyze security risks and countermeasures in terms of four critical components of AI systems: data input (sensors), data preprocessing, the machine learning model and the output. Finally, we discuss future research trends on the security of AI systems. The aim of this paper is to raise the attention of the computer security and AI communities to the security and privacy of AI systems, so that they can work together to unlock AI’s potential to build a bright future.
    A Survey on Machine Learning Based Routing Algorithms
    Liu Chenyi, Xu Mingwei, Geng Nan, Zhang Xiang
    Journal of Computer Research and Development    2020, 57 (4): 671-687.   DOI: 10.7544/issn1000-1239.2020.20190866
    Abstract views: 2780 | HTML views: 104 | PDF (2198 KB) downloads: 1922
    The rapid development of the Internet has brought many new applications, including real-time multimedia services, remote cloud services, etc. These applications require various types of service quality, which poses a significant challenge to current best-effort routing algorithms. Following the recent huge success of machine learning in games, computer vision and natural language processing, many researchers have tried to design “smart” routing algorithms based on machine learning methods. In contrast to traditional model-based, decentralized routing algorithms (e.g., OSPF), machine learning based routing algorithms are usually data-driven, which allows them to adapt to dynamically changing network environments and accommodate different service quality requirements. Data-driven routing algorithms based on machine learning have shown great potential to become an important part of the next-generation network. However, research on intelligent routing is still at a very early stage. In this paper we first survey current research on data-driven routing algorithms based on machine learning, presenting the main ideas, application scenarios, and pros and cons of the different works. Our analysis shows that current research mainly addresses the principles of machine learning based routing algorithms but is still far from deployment in real scenarios. We then analyze different training and deployment methods for machine learning based routing algorithms in real scenarios and propose two reasonable approaches to train and deploy such routing algorithms with low overhead and high reliability. Finally, we discuss the opportunities and challenges and identify several potential research directions for machine learning based routing algorithms in the future.
    Review of Entity Relation Extraction Methods
    Li Dongmei, Zhang Yang, Li Dongyuan, Lin Danqiong
    Journal of Computer Research and Development    2020, 57 (7): 1424-1448.   DOI: 10.7544/issn1000-1239.2020.20190358
    Abstract views: 2146 | HTML views: 69 | PDF (1404 KB) downloads: 1768
    Information extraction has long attracted extensive research attention in the field of natural language processing. Information extraction mainly includes three sub-tasks: entity extraction, relation extraction and event extraction, among which relation extraction is the core task and a highly significant part of information extraction. The main goal of entity relation extraction is to identify and determine the specific relation between entity pairs in large amounts of natural language text, which provides fundamental support for intelligent retrieval, semantic analysis, etc., and improves both search efficiency and the automatic construction of knowledge bases. We briefly review the development of entity relation extraction and introduce several tools and evaluation systems for relation extraction in both Chinese and English. In addition, four main classes of entity relation extraction methods are covered in this paper: traditional relation extraction methods, and methods based respectively on traditional machine learning, deep learning and the open domain. More importantly, we summarize the mainstream research methods and corresponding representative results in different historical stages, and conduct a contrastive analysis of the different entity relation extraction methods. In the end, we forecast the content and trends of future research.
    Survey on Techniques, Applications and Security of Machine Learning Interpretability
    Ji Shouling, Li Jinfeng, Du Tianyu, Li Bo
    Journal of Computer Research and Development    2019, 56 (10): 2071-2096.   DOI: 10.7544/issn1000-1239.2019.20190540
    Abstract views: 2118 | HTML views: 66 | PDF (5499 KB) downloads: 1767
    While machine learning has achieved great success in various domains, the lack of interpretability has limited its widespread application to real-world tasks, especially security-critical tasks. To overcome this crucial weakness, intensive research on improving the interpretability of machine learning models has emerged, and a plethora of interpretation methods have been proposed to help end users understand their inner working mechanisms. However, the research on model interpretation is still in its infancy, and a large number of scientific issues remain to be resolved. Furthermore, different researchers have different perspectives on the interpretation problem and give different definitions of interpretability, and the proposed interpretation methods also have different emphases. To date, the research community still lacks a comprehensive understanding of interpretability as well as a scientific guide for research on model interpretation. In this survey, we review the interpretability problem in machine learning and make a systematic summary and scientific classification of the existing research. At the same time, we discuss the potential applications of interpretation-related technologies, analyze the relationship between interpretability and the security of interpretable machine learning, and discuss the current research challenges and potential future research directions, aiming to provide necessary help for future researchers and to facilitate the research and application of model interpretability.
    Research Advances in the Interpretability of Deep Learning
    Cheng Keyang, Wang Ning, Shi Wenxi, Zhan Yongzhao
    Journal of Computer Research and Development    2020, 57 (6): 1208-1217.   DOI: 10.7544/issn1000-1239.2020.20190485
    Abstract views: 2203 | HTML views: 62 | PDF (1226 KB) downloads: 1614
    Research on the interpretability of deep learning is closely related to various disciplines such as artificial intelligence, machine learning, logic and cognitive psychology. It has important theoretical significance and practical application value in many fields, such as information push, medical research, finance, and information security. In the past few years a great deal of work has been done in this field, but many issues remain. In this paper, we review the history of deep learning interpretability research and related work. Firstly, we introduce the history of interpretable deep learning from three aspects: the origin of interpretable deep learning, the research exploration stage and the model construction stage. Then, the state of research is presented from three aspects, namely visual analysis, robust perturbation analysis and sensitivity analysis. Research on the construction of interpretable deep learning models is introduced from four aspects: model agents, logical reasoning, network node association analysis and traditional machine learning models. Moreover, the limitations of current research are analyzed and discussed. Finally, we list typical applications of interpretable deep learning and forecast possible future research directions of this field, along with reasonable and suitable suggestions.
    Interpretation and Understanding in Machine Learning
    Chen Kerui, Meng Xiaofeng
    Journal of Computer Research and Development    2020, 57 (9): 1971-1986.   DOI: 10.7544/issn1000-1239.2020.20190456
    Abstract views: 1875 | HTML views: 73 | PDF (1315 KB) downloads: 1599
    In recent years, machine learning has developed rapidly, especially deep learning, where remarkable achievements have been obtained in image, voice, natural language processing and other fields. The expressive ability of machine learning algorithms has been greatly improved; however, with the increase in model complexity, the interpretability of machine learning algorithms has deteriorated. So far, the interpretability of machine learning remains a challenge. The models trained by these algorithms are regarded as black boxes, which seriously hampers the use of machine learning in certain fields, such as medicine, finance and so on. Presently, only a few works focus on the interpretability of machine learning. Therefore, this paper aims to classify, analyze and compare the existing interpretation methods: on the one hand, it expounds the definition and measurement of interpretability, while on the other hand, for different objects of interpretation, it summarizes and analyzes various interpretation techniques for machine learning from three aspects: model understanding, prediction result interpretation and mimic model understanding. Moreover, the paper also discusses the challenges and opportunities faced by machine learning interpretation methods and possible directions of future development. The proposed interpretation methods should also be useful for putting many open research questions in perspective.
    Survey on Machine Learning for Database Systems
    Meng Xiaofeng, Ma Chaohong, Yang Chen
    Journal of Computer Research and Development    2019, 56 (9): 1803-1820.   DOI: 10.7544/issn1000-1239.2019.20190446
    Abstract views: 1431 | HTML views: 50 | PDF (1227 KB) downloads: 1561
    As one of the most popular technologies, database systems have been developed for more than 50 years and are mature enough to support many real scenarios. Although much research still focuses on traditional database optimization tasks, the resulting performance improvements are small. In fact, with the advent of big data, we have hit a new gap obstructing further performance improvement of database systems. Database systems face challenges in two aspects. Firstly, the increase in data volume requires the database system to process tasks more quickly. Secondly, the rapid change of query workloads and their diversity make it impossible for database systems to adjust their knobs to the optimal configuration in real time. Fortunately, machine learning may bring an unprecedented opportunity for traditional database systems and lead us in a new optimization direction. In this paper, we introduce how to incorporate machine learning into the further development of database management systems. We focus on current research on machine learning for database systems, mainly including machine learning for storage management and query optimization, as well as automatic database management systems. This area also opens up various challenges and problems to be solved. Thus, based on the analysis of existing technologies, we point out the future challenges that may be encountered in machine learning for database systems.
    Security Issues and Privacy Preserving in Machine Learning
    Wei Lifei, Chen Congcong, Zhang Lei, Li Mengsi, Chen Yujiao, Wang Qin
    Journal of Computer Research and Development    2020, 57 (10): 2066-2085.   DOI: 10.7544/issn1000-1239.2020.20200426
    Abstract views: 1826 | HTML views: 54 | PDF (2361 KB) downloads: 1550
    In recent years, machine learning has developed rapidly and is widely used in work and life, which brings not only convenience but also great security risks. Security and privacy issues have become a stumbling block in the development of machine learning. The training and inference of machine learning models are based on large amounts of data, which often contain sensitive information. With the frequent occurrence of data privacy leakage events and the annual aggravation of the leakage scale, how to ensure the security and privacy of data has attracted the attention of researchers from academia and industry. In this paper we introduce some fundamental concepts, such as the adversary model in privacy-preserving machine learning, and summarize the common security threats and privacy threats in the training and inference phases of machine learning, such as privacy leakage of training data, poisoning attacks, adversarial attacks, privacy attacks, etc. Subsequently, we introduce the common security protection and privacy preservation methods, focusing especially on homomorphic encryption, secure multi-party computation and differential privacy, and compare the typical schemes and applicable scenarios of the three technologies. At the end, future development trends and research directions of privacy preservation in machine learning are discussed.
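    To make the secure multi-party computation thread above concrete, the toy sketch below shows additive secret sharing, one of the simplest primitives in that family: each party holds a random-looking share, yet sums can be computed without revealing individual inputs. The modulus and helper names are illustrative, not drawn from the surveyed schemes.

```python
# Toy sketch of additive secret sharing over a public modulus P.
# Illustrative only; not a scheme from the surveyed literature.
import random

P = 2**61 - 1  # public modulus; all arithmetic is performed mod P

def share(secret: int, n_parties: int) -> list:
    """Split `secret` into n random shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % P

# Two parties each hold one share of x and one of y. Adding shares locally
# yields shares of x + y, so only the sum is ever reconstructed.
x_shares, y_shares = share(25, 2), share(17, 2)
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
print(reconstruct(sum_shares))  # 42
```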
    Review on the Development of Microservice Architecture
    Feng Zhiyong, Xu Yanwei, Xue Xiao, Chen Shizhan
    Journal of Computer Research and Development    2020, 57 (5): 1103-1122.   DOI: 10.7544/issn1000-1239.2020.20190460
    Abstract views: 2816 | HTML views: 147 | PDF (3960 KB) downloads: 1523
    With the rapid development of cloud computing and the Internet of things, users’ demands on software systems are becoming increasingly diverse. Service-oriented architecture (SOA) needs to strike a balance between stable service integration and flexible adaptation to requirements. Against this background, microservice technology, featuring independent processes and independent deployment, has emerged as the times require. It has a slew of advantages, such as distributed storage, high availability, scalability, and intelligent operation and maintenance, which can make up for the shortcomings of the traditional SOA architecture. From the perspective of system integration, the paper first describes the application background of microservices, including the core components of microservices, software technology development and architecture evolution, to ensure the availability of the microservice infrastructure. Secondly, in view of problems existing in practical applications, the paper analyzes the key technologies used in concrete applications of the microservice architecture from the aspects of distributed communication, distributed data storage, distributed call chains, and testing complexity; then, a specific application case is given to confirm the technical feasibility of microservices. Finally, this paper explores the challenges faced by microservices from the aspects of infrastructure, information exchange, data security, and network security. Meanwhile, future development trends are analyzed so as to provide valuable theoretical and technical reference for the future innovation and development of microservices.
    TensorFlow Lite: On-Device Machine Learning Framework
    Li Shuangfeng
    Journal of Computer Research and Development    2020, 57 (9): 1839-1853.   DOI: 10.7544/issn1000-1239.2020.20200291
    Abstract views: 1069 | HTML views: 36 | PDF (1882 KB) downloads: 1477
    TensorFlow Lite (TFLite) is a lightweight, fast and cross-platform open source machine learning framework specifically designed for mobile and IoT. It is part of TensorFlow and supports multiple platforms such as Android, iOS, embedded Linux, and MCUs. It greatly lowers the barrier for developers, accelerates the development of on-device machine learning (ODML), and makes ML run everywhere. This article introduces the trends, challenges and typical applications of ODML; the origin and system architecture of TFLite; best practices and tool chains suitable for ML beginners; and the roadmap of TFLite.
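    The snippet below is a minimal sketch of running inference with the TFLite Python interpreter, the workflow the article describes; the model file name and the dummy input are placeholders.

```python
# Minimal sketch of TFLite inference from Python.
# "model.tflite" and the zero-filled input are placeholders.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```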
    Principle and Research Progress of Quantum Computation and Quantum Cryptography
    Wang Yongli, Xu Qiuliang
    Journal of Computer Research and Development    2020, 57 (10): 2015-2026.   DOI: 10.7544/issn1000-1239.2020.20200615
    Abstract views: 1503 | HTML views: 41 | PDF (967 KB) downloads: 1472
    Quantum computation and quantum cryptography are based on principles of quantum mechanics. In 1984, Bennett and Brassard proposed the first quantum key distribution protocol, called BB84, which started the study of quantum cryptography. Since then, a great deal of work has been carried out in various fields such as quantum encryption and quantum signatures. In 1994, Shor designed the first practical quantum algorithm, which can factor large integers in polynomial time. Shor’s algorithm uses the quantum Fourier transform, which is the kernel of most quantum algorithms. In 1996, Grover designed a new algorithm which can search unstructured data and obtain the required result in time approximately proportional to the square root of the size of the data. Shor’s algorithm and Grover’s algorithm not only embody the advantages of quantum computing, but also pose a threat to traditional cryptography based on mathematical hardness assumptions, such as RSA. After half a century of development, quantum computing and quantum cryptography have achieved fruitful results in theory and practice. In this paper, we summarize these topics from the perspectives of the mathematical framework of quantum mechanics, basic concepts and principles, basic ideas of quantum computing, and the research progress and main ideas of quantum cryptography.
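    For reference, the two results named above can be summarized with the standard textbook formulations below (shown as general background, not reproduced from the paper): the quantum Fourier transform at the core of Shor's algorithm, and the query complexity of Grover's search.

```latex
% Quantum Fourier transform on an N-dimensional register (kernel of Shor's algorithm):
\mathrm{QFT}\,|x\rangle \;=\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i\,xk/N}\,|k\rangle
% Grover's search over N unstructured items needs O(\sqrt{N}) oracle queries,
% versus \Theta(N) queries for classical exhaustive search.
```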
    Research Advances on Privacy Preserving in Recommender Systems
    Zhou Jun, Dong Xiaolei, Cao Zhenfu
    Journal of Computer Research and Development    2019, 56 (10): 2033-2048.   DOI: 10.7544/issn1000-1239.2019.20190541
    Abstract views: 2074 | HTML views: 51 | PDF (1868 KB) downloads: 1452
    A recommender system is a type of intelligent platform based on massive dataset mining, which can establish a recommendation model, predict users’ preferences on unrated items, and achieve individualized information services and strategic support by exploiting the techniques of statistical analysis, machine learning and artificial intelligence, according to the unique profiles of users and the characteristics of various items, such as users’ interests, historical consumption behaviors, and the quality and prices of items. Unfortunately, the historical dataset, prediction model and recommendation results are closely related to users’ privacy. How to provide accurate prediction results under the conditions that users’ privacy is well protected and the correctness of recommendation results is efficiently verified has become a challenging issue. State-of-the-art work mainly focuses on solving this problem using the techniques of data perturbation and public key fully homomorphic encryption (FHE). However, most of this work cannot satisfy all the requirements of accuracy, efficiency and types of privacy preservation required by recommender systems. This article elaborates the existing work from the following four aspects: the operation mode, formal security models, generic constructions of lightweight privacy-preserving recommender systems, and the verification and accountability of recommendation results; it also identifies the unaddressed challenging problems together with convincing solutions. For security models, we focus on formalizing the security models with respect to user data privacy, prediction model privacy and recommendation result privacy, under the standard model or the universally composable (UC) model. For efficiency, without exploiting public key FHE, we study generic constructions of efficient privacy-preserving recommender systems, in the single-user, multiple-data setting and the multiple-user, multiple-data setting respectively, by reducing the number of public key encryption and decryption operations (i.e. to only once when optimized). Last but not least, we also address the generic theoretical issue of efficient correctness verifiability and auditability of recommendation results by exploiting the technique of batch verification. All the techniques and solutions discussed above would significantly contribute to both theoretical breakthroughs and the practicality of privacy preservation in recommender systems.
    Data Center Energy Consumption Models and Energy Efficient Algorithms
    Wang Jiye, Zhou Biyu, Zhang Fa, Shi Xiang, Zeng Nan, Liu Zhiyong
    Journal of Computer Research and Development    2019, 56 (8): 1587-1603.   DOI: 10.7544/issn1000-1239.2019.20180574
    Abstract views: 1433 | HTML views: 47 | PDF (1055 KB) downloads: 1346
    With the rapid development of cloud computing and virtualization technology, the number as well as the scale of data centers is growing rapidly around the world, which results in huge energy consumption by data centers. At the same time, however, the resource utilization of data centers is very low, leading to a great deal of wasted energy. Due to this contradiction between huge energy consumption and extremely low resource utilization, optimizing the energy efficiency of data centers has become a hot topic in both academia and industry in recent years. Aiming at the basic problem of energy efficiency in data centers, we study the key technologies of energy saving in data centers based on resource and task scheduling. From the viewpoints of energy efficiency models and energy efficiency algorithms, the research progress and latest achievements in data center energy efficiency are summarized, mainly for server systems and network systems. This paper first decomposes and analyzes the energy consumption sources of data centers. Then the typical energy consumption models and classification standards are introduced. Based on the models and standards, the scheduling strategies and algorithms are summarized. The development trend of energy efficiency optimization in data centers is also discussed.
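    Two formulations appear throughout this literature and may help frame the survey (they are common background models, not necessarily the exact ones used in the paper): the linear server power model driven by CPU utilization, and the power usage effectiveness (PUE) of a facility.

```latex
% Linear server power model as a function of CPU utilization u in [0,1]:
P(u) \;=\; P_{\mathrm{idle}} + \bigl(P_{\mathrm{peak}} - P_{\mathrm{idle}}\bigr)\,u
% Power usage effectiveness of a data center (closer to 1 is better):
\mathrm{PUE} \;=\; \frac{E_{\mathrm{total\ facility}}}{E_{\mathrm{IT\ equipment}}}
```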
    Survey on Automatic Text Summarization
    Li Jinpeng, Zhang Chuang, Chen Xiaojun, Hu Yue, Liao Pengcheng
    Journal of Computer Research and Development    2021, 58 (1): 1-21.   DOI: 10.7544/issn1000-1239.2021.20190785
    Abstract views: 1380 | HTML views: 39 | PDF (1756 KB) downloads: 1290
    In recent years, the rapid development of Internet technology has greatly facilitated people’s daily life, and it is inevitable that massive amounts of information erupt in a blowout. How to quickly and effectively obtain the required information on the Internet is an urgent problem. Automatic text summarization can effectively alleviate this problem. As one of the most important areas in natural language processing and artificial intelligence, it can automatically produce, by computer, a concise and coherent summary from a long text or a text set, in which the summary should accurately reflect the central themes of the source text. In this paper, we explain the notion of automatic summarization, review the development of automatic text summarization techniques and introduce the two main techniques in detail: extractive and abstractive summarization, including feature scoring, classification methods, linear programming, submodular functions, graph ranking, sequence labeling, heuristic algorithms, deep learning, etc. We also analyze the datasets and evaluation metrics that are commonly used in automatic summarization. Finally, the challenges ahead and the future trends of research and application are discussed.
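    The toy sketch below illustrates the feature-scoring flavor of extractive summarization mentioned in the abstract: sentences are scored by the frequency of their words and the top-ranked ones are kept. It is purely illustrative and not one of the surveyed systems.

```python
# Toy frequency-based extractive summarizer (illustrative only).
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Keep the selected sentences in their original order.
    return " ".join(s for s in sentences if s in top)

doc = ("Automatic summarization condenses a long text into a short summary. "
       "Extractive methods select salient sentences from the source text. "
       "Abstractive methods instead generate new sentences.")
print(extractive_summary(doc, 2))
```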
    Program Comprehension Based on Deep Learning
    Liu Fang, Li Ge, Hu Xing, Jin Zhi
    Journal of Computer Research and Development    2019, 56 (8): 1605-1620.   DOI: 10.7544/issn1000-1239.2019.20190185
    Abstract views: 1343 | HTML views: 33 | PDF (1562 KB) downloads: 1127
    Program comprehension is the process of obtaining relevant information about programs by analyzing, abstracting, and reasoning about them. It plays an important role in software development, maintenance, migration, and other processes, and has received extensive attention in academia and industry. Traditional program comprehension relies heavily on the experience of developers. However, as the scale and complexity of software continue to grow, it is time-consuming and laborious to rely solely on the developer’s prior knowledge to extract program features, and it is difficult to fully exploit the features hidden in the program. Deep learning is a data-driven end-to-end method. It builds deep neural networks based on existing data to mine the hidden features in data, and has been successfully applied in many fields. By applying deep learning technology to program comprehension, we can automatically learn the features implied in programs, which makes it possible to fully exploit the knowledge implied in the program and improve the efficiency of program comprehension. This paper surveys recent research on program comprehension based on deep learning. Firstly, we analyze the properties of programs, and then introduce mainstream program comprehension models, including sequential models, structural models, and execution-trace-based models. Furthermore, the applications of deep learning based program comprehension to program analysis are introduced, mainly focusing on code completion, code summarization and code search. Finally, we summarize the challenges in program comprehension research.
    Causal Relation Extraction Based on Graph Attention Networks
    Xu Jinghang, Zuo Wanli, Liang Shining, Wang Ying
    Journal of Computer Research and Development    2020, 57 (1): 159-174.   DOI: 10.7544/issn1000-1239.2020.20190042
    Abstract views: 1355 | HTML views: 57 | PDF (1344 KB) downloads: 1064
    Causality represents a kind of correlation between cause and effect, where the occurrence of the cause leads to the occurrence of the effect. As the most important type of relationship between entities, causality plays a vital role in many fields such as automatic reasoning and scenario generation. Therefore, extracting causal relations is a basic task in natural language processing and text mining. Unlike traditional text classification or relation extraction methods, this paper proposes a sequence labeling method to extract causal entities from text and identify the direction of causality, without relying on feature engineering or causal background knowledge. The main contributions of this paper can be summarized as follows: 1) we extend the syntactic dependency tree to a syntactic dependency graph, adopt graph attention networks for natural language processing, and introduce the concept of S-GAT (graph attention network based on the syntactic dependency graph); 2) a Bi-LSTM+CRF+S-GAT model for causal extraction is proposed, which generates a causal label for each word in a sentence from the input word vectors; 3) the SemEval dataset is modified and extended, and rules are defined to relabel the experimental data, with the aim of overcoming defects of the original labeling scheme. Extensive experiments are conducted on the expanded SemEval dataset, which show that our model achieves a 0.064 improvement in prediction accuracy over the state-of-the-art model Bi-LSTM+CRF+self-ATT.
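    For readers unfamiliar with graph attention, the standard GAT update that S-GAT builds on is sketched below (the generic formulation; the paper's variant over the syntactic dependency graph may differ in details).

```latex
% Generic graph attention update (S-GAT applies attention of this kind over
% a syntactic dependency graph; details in the paper may differ).
e_{ij} = \mathrm{LeakyReLU}\bigl(\mathbf{a}^{\top}[\mathbf{W}\mathbf{h}_i \,\Vert\, \mathbf{W}\mathbf{h}_j]\bigr), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}(i)} \exp(e_{ik})}, \qquad
\mathbf{h}_i' = \sigma\Bigl(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\,\mathbf{W}\mathbf{h}_j\Bigr)
```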
    A Survey of Data Consistency Research for Non-Volatile Memory
    Xiao Renzhi, Feng Dan, Hu Yuchong, Zhang Xiaoyi, Cheng Liangfeng
    Journal of Computer Research and Development    2020, 57 (1): 85-101.   DOI: 10.7544/issn1000-1239.2020.20190062
    Abstract views: 1024 | HTML views: 29 | PDF (1043 KB) downloads: 1057
    As DRAM technology faces a bottleneck in density scaling and the problem of high leakage power, novel non-volatile memory (NVM) has drawn extensive attention from academia and industry due to its non-volatility, high density, byte addressability, and low static power consumption. Novel non-volatile memories such as phase change memory (PCM) are likely to substitute for or complement DRAM as system main memory. However, because of the non-volatility of NVM, when the system fails, the data stored in NVM may be left inconsistent as a result of partial updates or write reordering by the memory controller. In order to guarantee the consistency of data in NVM, it is essential to ensure the ordering and persistence of NVM write operations. NVM also has inherent drawbacks, such as limited write endurance and high write latency, so reducing the number of writes can help prolong the lifetime of NVM and improve the performance of NVM-based systems, as long as data consistency in NVM is guaranteed. This paper focuses on NVM data consistency, especially for persistent indexes, file systems and persistent transactions, and aims to provide better solutions and ideas for achieving low data consistency overhead. Finally, possible research directions for NVM-based data consistency are pointed out.
    An Access Control Method Using Smart Contract for Internet of Things
    Du Ruizhong, Liu Yan, Tian Junfeng
    Journal of Computer Research and Development    2019, 56 (10): 2287-2298.   DOI: 10.7544/issn1000-1239.2019.20190416
    Abstract views: 1516 | HTML views: 49 | PDF (2976 KB) downloads: 991
    While Internet of things (IoT) technology has been widely recognized as an essential part of our daily life, it also brings new challenges in terms of privacy and security. In view of the limited resources, large number of connections and strong dynamics of IoT devices, traditional centralized access control technology is not fully applicable, and how to achieve secure and efficient access control authorization in the IoT environment has become an urgent problem. In this regard, a distributed architecture based on a hierarchical blockchain for the Internet of Things (DAHB) is proposed, which includes a device layer, an edge layer and a cloud layer. In this architecture, we combine the advantages of blockchain technology to realize flexible, dynamic and automatic access control for IoT devices based on the ABAC (attribute-based access control) model, within and across domains, by means of smart contracts. At the same time, credit value and honesty are added to the attribute metrics to dynamically evaluate the trust relationship between different domains and devices. The theoretical analysis and experimental results show that this scheme is more effective than existing schemes in meeting the requirements of lightweight operation, flexibility, fine granularity and security in IoT access control.
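    The sketch below shows, in plain Python, the shape of an ABAC-style decision of the kind the smart contracts described above encode on-chain; the attribute names and the example policy are hypothetical and chosen only for illustration.

```python
# Hypothetical ABAC-style policy check (illustrative; not the paper's contract).
def abac_decide(subject: dict, resource: dict, action: str, environment: dict) -> bool:
    """Grant access only if every attribute condition of the policy holds."""
    return (
        subject.get("role") == "maintainer"                    # subject attribute
        and subject.get("credit", 0.0) >= 0.8                  # trust/credit threshold
        and subject.get("domain") == resource.get("domain")    # same-domain rule
        and action in {"read", "configure"}                    # permitted actions
        and environment.get("hour", 0) < 22                    # environmental constraint
    )

request = {
    "subject": {"role": "maintainer", "credit": 0.9, "domain": "building-A"},
    "resource": {"domain": "building-A", "type": "smart-lock"},
    "action": "configure",
    "environment": {"hour": 14},
}
print(abac_decide(**request))  # True
```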