ISSN 1000-1239 CN 11-1777/TP

Most Downloaded Articles


    Most Downloaded in Recent Year
    Knowledge Graph Construction Techniques
    Liu Qiao, Li Yang, Duan Hong, Liu Yao, Qin Zhiguang
    Journal of Computer Research and Development    2016, 53 (3): 582-600.   DOI: 10.7544/issn1000-1239.2016.20148228
    Abstract views: 11122 | HTML views: 368 | PDF (2414KB) downloads: 18412
    Google’s knowledge graph technology has drawn a lot of research attention in recent years. However, due to the limited public disclosure of technical details, people find it difficult to understand the connotation and value of this technology. In this paper, we introduce the key techniques involved in the construction of knowledge graphs in a bottom-up way, starting from a clearly defined concept and a technical architecture of the knowledge graph. Firstly, we describe in detail the definition and connotation of the knowledge graph, and then we propose the technical framework for knowledge graph construction, in which the construction process is divided into three levels according to the abstraction level of the input knowledge materials: the information extraction layer, the knowledge integration layer, and the knowledge processing layer. Secondly, the research status of the key technologies for each level is surveyed comprehensively and investigated critically for the purpose of gradually revealing the mysteries of knowledge graph technology, the state-of-the-art progress, and its relationship with related disciplines. Finally, five major research challenges in this area are summarized, and the corresponding key research issues are highlighted.
    Related Articles | Metrics
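    The abstract above describes an information extraction layer that turns raw text into structured knowledge. The following is a minimal, illustrative Python sketch (not from the paper) of that idea: hand-written patterns pull (head, relation, tail) triples out of sentences. The patterns, relation names and example text are invented for illustration; real pipelines would use trained entity and relation extractors.

        import re

        # Hypothetical hand-written extraction patterns; each maps a surface form
        # to a relation label. Real systems learn these rather than hard-coding them.
        PATTERNS = [
            (re.compile(r"(?P<head>[A-Z][\w ]+?) was founded by (?P<tail>[A-Z][\w ]+)"), "founded_by"),
            (re.compile(r"(?P<head>[A-Z][\w ]+?) is located in (?P<tail>[A-Z][\w ]+)"), "located_in"),
        ]

        def extract_triples(sentence):
            """Return (head, relation, tail) triples matched by any pattern."""
            triples = []
            for pattern, relation in PATTERNS:
                for m in pattern.finditer(sentence):
                    triples.append((m.group("head").strip(), relation, m.group("tail").strip()))
            return triples

        text = "Acme Corp was founded by Jane Doe. Acme Corp is located in Shenzhen."
        for sent in text.split("."):
            for triple in extract_triples(sent.strip()):
                print(triple)   # e.g. ('Acme Corp', 'founded_by', 'Jane Doe')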
    Knowledge Representation Learning: A Review
    Liu Zhiyuan, Sun Maosong, Lin Yankai, Xie Ruobing
    Journal of Computer Research and Development    2016, 53 (2): 247-261.   DOI: 10.7544/issn1000-1239.2016.20160020
    Abstract views: 10124 | HTML views: 112 | PDF (3333KB) downloads: 15824
    Knowledge bases are usually represented as networks with entities as nodes and relations as edges. With network representation of knowledge bases, specific algorithms have to be designed to store and utilize knowledge bases, which are usually time consuming and suffer from data sparsity issues. Recently, representation learning, exemplified by deep learning, has attracted much attention in natural language processing, computer vision and speech analysis. Representation learning aims to project the objects of interest into a dense, real-valued and low-dimensional semantic space, whereas knowledge representation learning focuses on representation learning of entities and relations in knowledge bases. Representation learning can efficiently measure semantic correlations of entities and relations, alleviate sparsity issues, and significantly improve the performance of knowledge acquisition, fusion and inference. In this paper, we will introduce the recent advances in representation learning, summarize the key challenges and possible solutions, and further give a future outlook on the research and application directions.
    Related Articles | Metrics
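    The survey above concerns embedding entities and relations in a low-dimensional semantic space. As a hedged illustration (the abstract names no specific model), the sketch below scores a triple in the spirit of TransE, a well-known translation-based model: a triple (h, r, t) is plausible when the vector h + r lies close to t. The entities, relation and dimensionality are made up, and the vectors are untrained.

        import numpy as np

        # Toy embedding table: entities and relations share one low-dimensional space.
        rng = np.random.default_rng(0)
        dim = 8
        entities = {name: rng.normal(size=dim) for name in ["Beijing", "China", "Paris", "France"]}
        relations = {"capital_of": rng.normal(size=dim)}

        def score(head, relation, tail):
            """TransE-style score: lower distance between (head + relation) and tail is better."""
            return float(np.linalg.norm(entities[head] + relations[relation] - entities[tail]))

        # With random (untrained) vectors the scores are meaningless; training would
        # minimize the score of observed triples against corrupted (negative) ones.
        print(score("Beijing", "capital_of", "China"))
        print(score("Paris", "capital_of", "China"))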
    Big Data Management: Concepts, Techniques and Challenges
    Meng Xiaofeng and Ci Xiang
    Journal of Computer Research and Development   
    Accepted: 15 January 2020

    Survey on Privacy-Preserving Machine Learning
    Liu Junxu, Meng Xiaofeng
    Journal of Computer Research and Development    2020, 57 (2): 346-362.   DOI: 10.7544/issn1000-1239.2020.20190455
    Abstract views: 3209 | HTML views: 141 | PDF (1684KB) downloads: 3310
    Large-scale data collection has vastly improved the performance of machine learning, and achieved a win-win situation for both economic and social benefits, while personal privacy preservation is facing new and greater risks and crises. In this paper, we summarize the privacy issues in machine learning and the existing work on privacy-preserving machine learning. We respectively discuss two settings of the model training process: centralized learning and federated learning. The former needs to collect all the user data before training. Although this setting is easy to deploy, it still entails enormous hidden privacy and security risks. The latter allows a massive number of devices to collaboratively train a global model while keeping their data local. As it is still at an early stage of study, it also has many problems to be solved. The existing work on privacy-preserving techniques falls into two main lines: encryption methods, including homomorphic encryption and secure multi-party computation, and perturbation methods represented by differential privacy, each having its advantages and disadvantages. In this paper, we first focus on the design of differentially-private machine learning algorithms, especially under the centralized setting, and discuss the differences between traditional machine learning models and deep learning models. Then, we summarize the problems existing in the current federated learning study. Finally, we propose the main challenges in the future work and point out the connection among privacy protection, model interpretation and data transparency.
    Related Articles | Metrics
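    Differential privacy is named above as the representative perturbation method. Below is a minimal sketch, assuming values clipped to [0, 1], of an epsilon-differentially-private mean computed with the Laplace mechanism; the function name and data are illustrative, not taken from the surveyed papers.

        import numpy as np

        def dp_mean(values, epsilon, rng=np.random.default_rng(0)):
            """Epsilon-DP mean of values clipped to [0, 1] via the Laplace mechanism."""
            clipped = np.clip(values, 0.0, 1.0)             # bound each record's contribution
            # Sensitivity of the clipped sum is 1, so Laplace noise with scale 1/epsilon suffices.
            noisy_sum = clipped.sum() + rng.laplace(scale=1.0 / epsilon)
            return float(noisy_sum / len(values))

        data = np.random.default_rng(1).uniform(size=1000)
        print("true mean:", data.mean())
        print("dp mean  :", dp_mean(data, epsilon=0.5))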
    Edge Computing: State-of-the-Art and Future Directions
    Shi Weisong, Zhang Xingzhou, Wang Yifan, Zhang Qingyang
    Journal of Computer Research and Development    2019, 56 (1): 69-89.   DOI: 10.7544/issn1000-1239.2019.20180760
    Abstract views: 5767 | HTML views: 281 | PDF (3670KB) downloads: 4094
    With the burgeoning of the Internet of everything, the amount of data generated by edge devices increases dramatically, resulting in higher network bandwidth requirements. Meanwhile, the emergence of novel applications calls for lower network latency. It is an unprecedented challenge to guarantee the quality of service while dealing with a massive amount of data for cloud computing, which has pushed the horizon of edge computing. Edge computing calls for processing the data at the edge of the network and has developed rapidly since 2014, as it has the potential to reduce latency and bandwidth charges, address the limitation of computing capability of cloud data centers, increase availability as well as protect data privacy and security. This paper mainly discusses three questions about edge computing: where does it come from, what is the current status, and where is it going? This paper first sorts out the development process of edge computing and divides it into three periods: the technology preparation period, the rapid growth period and the steady development period. This paper then summarizes seven essential technologies that drive the rapid development of edge computing. After that, six typical applications that have been widely used in edge computing are illustrated. Finally, this paper proposes six open problems that need to be solved urgently in future development.
    Related Articles | Metrics
    Deep Learning: Yesterday, Today, and Tomorrow
    Yu Kai, Jia Lei, Chen Yuqiang, and Xu Wei
    Journal of Computer Research and Development    2013, 50 (9): 1799-1804.  
    Abstract views: 4956 | HTML views: 206 | PDF (873KB) downloads: 10748
    Machine learning is an important area of artificial intelligence. Since the 1980s, huge success has been achieved in terms of algorithms, theory, and applications. Since 2006, a new machine learning paradigm named deep learning has been popular in the research community, and has become a major technology trend for big data and artificial intelligence. Deep learning simulates the hierarchical structure of the human brain, processing data from lower level to higher level, and gradually composing more and more semantic concepts. In recent years, Google, Microsoft, IBM, and Baidu have invested a lot of resources into the R&D of deep learning, making significant progress on speech recognition, image understanding, natural language processing, and online advertising. In terms of the contribution to real-world applications, deep learning is perhaps the most successful progress made by the machine learning community in the last 10 years. In this article, we will give a high-level overview of the past and current stage of deep learning, discuss the main challenges, and share our views on the future development of deep learning.
    Related Articles | Metrics
    Review of Entity Relation Extraction Methods
    Li Dongmei, Zhang Yang, Li Dongyuan, Lin Danqiong
    Journal of Computer Research and Development    2020, 57 (7): 1424-1448.   DOI: 10.7544/issn1000-1239.2020.20190358
    Abstract views: 2147 | HTML views: 69 | PDF (1404KB) downloads: 1768
    Information extraction has long attracted substantial research attention in the field of natural language processing. Information extraction mainly includes three sub-tasks: entity extraction, relation extraction and event extraction, among which relation extraction is the core task and a highly significant part of information extraction. Furthermore, the main goal of entity relation extraction is to identify and determine the specific relation between entity pairs from plenty of natural language texts, which provides fundamental support for intelligent retrieval, semantic analysis, etc., and improves both search efficiency and the automatic construction of knowledge bases. We briefly expound the development of entity relation extraction and introduce several tools and evaluation systems of relation extraction in both Chinese and English. In addition, four main methods of entity relation extraction are covered in this paper: traditional relation extraction methods, and three other methods based respectively on traditional machine learning, deep learning and the open domain. More importantly, we summarize the mainstream research methods and corresponding representative results in different historical stages, and conduct a contrastive analysis of different entity relation extraction methods. In the end, we forecast the contents and trend of future research.
    Related Articles | Metrics
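    Among the method families listed above, the traditional machine-learning approach treats relation extraction as classifying the context between a marked entity pair. A toy sketch with scikit-learn follows; the training contexts and relation labels are invented and far too small to be meaningful, serving only to show the shape of the approach.

        # Feature-based relation classification: words between an entity pair are the
        # features, and a linear classifier predicts the relation type.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        train_contexts = [
            "was born in",                          # PERSON ... LOCATION
            "grew up in",
            "is the chief executive officer of",    # PERSON ... ORGANIZATION
            "founded and still leads",
        ]
        train_labels = ["born_in", "born_in", "works_for", "works_for"]

        model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
        model.fit(train_contexts, train_labels)

        print(model.predict(["was raised in", "serves as chief executive officer of"]))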
    Research Review of Knowledge Graph and Its Application in Medical Domain
    Hou Mengwei, Wei Rong, Lu Liang, Lan Xin, Cai Hongwei
    Journal of Computer Research and Development    2018, 55 (12): 2587-2599.   DOI: 10.7544/issn1000-1239.2018.20180623
    Abstract views: 3800 | HTML views: 134 | PDF (2825KB) downloads: 2626
    With the advent of the medical big data era, knowledge interconnection has received extensive attention. How to extract useful medical knowledge from massive data is the key for medical big data analysis. Knowledge graph technology provides a means to extract structured knowledge from massive texts and images. The combination of knowledge graph, big data technology and deep learning technology is becoming the core driving force for the development of artificial intelligence. Knowledge graph technology has a broad application prospect in the medical domain. The application of knowledge graph technology in the medical domain will play an important role in solving the contradiction between the supply of high-quality medical resources and the continuous increase of demand for medical services. At present, the research on medical knowledge graphs is still in the exploratory stage. Existing knowledge graph technology generally suffers from several problems in the medical domain, such as low efficiency, numerous restrictions and poor extensibility. In view of the strong professionalism and complex structure of big data in the medical domain, this paper first analyzes the architecture and construction technologies of medical knowledge graphs. Secondly, the key technologies and research progress of knowledge extraction, knowledge representation, knowledge fusion and knowledge reasoning in medical knowledge graphs are summarized. In addition, the application status of medical knowledge graphs in clinical decision support, intelligent medical semantic retrieval, medical question answering systems and other medical services is introduced. Finally, the existing problems and challenges of current research are discussed and analyzed, and its development is prospected.
    Related Articles | Metrics
    A Measurable Bayesian Network Structure Learning Method
    Qi Xiaolong, Gao Yang, Wang Hao, Song Bei, Zhou Chunlei, Zhang Youwei
    Journal of Computer Research and Development    2018, 55 (8): 1717-1725.   DOI: 10.7544/issn1000-1239.2018.20180197
    Abstract views: 748 | HTML views: 5 | PDF (1062KB) downloads: 2704
    In this paper, a Bayesian network structure learning method via variable ordering based on mutual information (BNS^vo-learning) is presented, which includes two components: metric information matrix learning and a “lazy” heuristic strategy. The metric information matrix characterizes the degree of dependency among variables and implies a comparison of dependency strength, which effectively solves the problem of misjudgment caused by the order of variables in the independence test process. Under the guidance of the metric information matrix, the “lazy” heuristic strategy selectively adds variables to the conditioning set in order to effectively reduce both the order and the number of independence tests. We theoretically prove the reliability of the new method and experimentally demonstrate that the new method searches significantly faster than other search processes. BNS^vo-learning is also easily extended to small and sparse data sets without losing the quality of the learned structure.
    Related Articles | Metrics
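    The metric information matrix described above is built from pairwise dependency measures such as mutual information. The sketch below, an illustrative re-implementation rather than the authors' code, estimates mutual information between discrete variables from samples; in a structure learner, such scores would guide the ordering of (conditional) independence tests.

        import numpy as np
        from collections import Counter

        def mutual_information(x, y):
            """Plug-in estimate of I(X; Y) in nats for two discrete sample sequences."""
            n = len(x)
            px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
            mi = 0.0
            for (a, b), c in pxy.items():
                p_ab = c / n
                mi += p_ab * np.log(p_ab / ((px[a] / n) * (py[b] / n)))
            return mi

        rng = np.random.default_rng(0)
        a = rng.integers(0, 2, size=5000)
        b = np.where(rng.random(5000) < 0.9, a, 1 - a)   # strongly dependent on a
        c = rng.integers(0, 2, size=5000)                # independent of a

        print("MI(a, b) =", round(mutual_information(a, b), 4))   # clearly positive
        print("MI(a, c) =", round(mutual_information(a, c), 4))   # close to zero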
    Interpretation and Understanding in Machine Learning
    Chen Kerui, Meng Xiaofeng
    Journal of Computer Research and Development    2020, 57 (9): 1971-1986.   DOI: 10.7544/issn1000-1239.2020.20190456
    Abstract views: 1875 | HTML views: 73 | PDF (1315KB) downloads: 1599
    In recent years, machine learning has developed rapidly, especially in deep learning, where remarkable achievements have been obtained in image, voice, natural language processing and other fields. The expressive ability of machine learning algorithms has been greatly improved; however, with the increase of model complexity, the interpretability of machine learning algorithms has deteriorated. So far, the interpretability of machine learning remains a challenge. The models trained by these algorithms are regarded as black boxes, which seriously hampers the use of machine learning in certain fields, such as medicine and finance. Presently, only a few works emphasize the interpretability of machine learning. Therefore, this paper aims to classify, analyze and compare the existing interpretable methods; on the one hand, it expounds the definition and measurement of interpretability, while on the other hand, for different interpretable objects, it summarizes and analyses various interpretability techniques of machine learning from three aspects: model understanding, prediction result interpretation and mimic model understanding. Moreover, the paper also discusses the challenges and opportunities faced by interpretable machine learning methods and the possible development directions in the future. The interpretation methods discussed should also be useful for putting many open research questions in perspective.
    Related Articles | Metrics
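    One of the three aspects named above, mimic model understanding, approximates a black-box model with an interpretable surrogate trained on the black box's own predictions. A toy sketch follows; the random-forest "black box", the synthetic data and the depth-3 tree are illustrative choices, not taken from the surveyed work.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

        # Opaque model and an interpretable surrogate trained to mimic its outputs.
        black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box.predict(X))

        # Fidelity: how often the surrogate reproduces the black box's decisions.
        fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
        print(f"surrogate fidelity to black box: {fidelity:.2f}")
        print(export_text(surrogate))   # a human-readable rule set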
    Security Issues and Privacy Preserving in Machine Learning
    Wei Lifei, Chen Congcong, Zhang Lei, Li Mengsi, Chen Yujiao, Wang Qin
    Journal of Computer Research and Development    2020, 57 (10): 2066-2085.   DOI: 10.7544/issn1000-1239.2020.20200426
    Abstract views: 1826 | HTML views: 54 | PDF (2361KB) downloads: 1550
    In recent years, machine learning has developed rapidly and is widely used in many aspects of work and life, which brings not only convenience but also great security risks. Security and privacy issues have become a stumbling block in the development of machine learning. The training and inference of machine learning models are based on a large amount of data, which always contains some sensitive information. With the frequent occurrence of data privacy leakage events and the annual aggravation of the leakage scale, how to ensure the security and privacy of data has attracted the attention of researchers from academia and industry. In this paper we introduce some fundamental concepts such as the adversary model in privacy-preserving machine learning, and summarize the common security threats and privacy threats in the training and inference phases of machine learning, such as privacy leakage of training data, poisoning attacks, adversarial attacks, privacy attacks, etc. Subsequently, we introduce the common security protection and privacy preserving methods, focusing especially on homomorphic encryption, secure multi-party computation and differential privacy, and compare the typical schemes and applicable scenarios of the three technologies. In the end, the future development trends and research directions of privacy preserving in machine learning are discussed.
    Related Articles | Metrics
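    Secure multi-party computation is one of the protection methods compared above. The sketch below shows one of its simplest building blocks, additive secret sharing over a public prime field: two parties learn the sum of their private values without revealing the values themselves. The salary figures and two-party setting are illustrative.

        import secrets

        P = 2**61 - 1   # a public prime modulus

        def share(value, n_parties=2):
            """Split a value into additive shares that sum to the value modulo P."""
            shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % P)
            return shares

        def reconstruct(shares):
            return sum(shares) % P

        alice_salary, bob_salary = 52000, 61000
        a_shares, b_shares = share(alice_salary), share(bob_salary)

        # Each party adds the shares it holds locally; no party ever sees a raw salary.
        sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
        print("joint sum:", reconstruct(sum_shares))   # 113000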
    Survey on Privacy Preserving Techniques for Blockchain Technology
    Zhu Liehuang, Gao Feng, Shen Meng, Li Yandong, Zheng Baokun, Mao Hongliang, Wu Zhen
    Journal of Computer Research and Development    2017, 54 (10): 2170-2186.   DOI: 10.7544/issn1000-1239.2017.20170471
    Abstract views: 6962 | HTML views: 191 | PDF (3265KB) downloads: 4368
    The core features of blockchain technology are “decentralization” and “trustlessness”. As a distributed ledger technology, smart contract infrastructure platform and novel distributed computing paradigm, it can effectively build programmable currency, programmable finance and a programmable society, which will have a far-reaching impact on finance and other fields, and drive a new round of technological and application change. While blockchain technology can improve efficiency, reduce costs and enhance data security, it still faces serious privacy issues that have drawn wide concern from researchers. The survey first analyzes the technical characteristics of the blockchain, defines the concepts of identity privacy and transaction privacy, points out the advantages and disadvantages of blockchain technology in privacy protection, and introduces the attack methods in existing research, such as transaction tracing technology and account clustering technology. We then introduce a variety of privacy mechanisms, including malicious node detection and access restriction technology for the network layer, transaction mixing technology, encryption technology and limited release technology for the transaction layer, and some defense mechanisms for the blockchain application layer. In the end, we discuss the limitations of the existing technologies and envision future directions on this topic. In addition, the regulatory approach to malicious use of blockchain technology is discussed.
    Related Articles | Metrics
    Cited: Baidu(8)
    Privacy and Security Issues in Machine Learning Systems: A Survey
    He Yingzhe, Hu Xingbo, He Jinwen, Meng Guozhu, Chen Kai
    Journal of Computer Research and Development    2019, 56 (10): 2049-2070.   DOI: 10.7544/issn1000-1239.2019.20190437
    Abstract views: 2261 | HTML views: 75 | PDF (1644KB) downloads: 2711
    Artificial intelligence has penetrated into every corner of our lives and brought humans great convenience. Especially in recent years, with the vigorous development of the deep learning branch of machine learning, there are more and more related applications in our lives. Unfortunately, machine learning systems suffer from many security hazards. Even worse, the popularity of machine learning systems further magnifies these hazards. In order to unveil these security hazards and assist in implementing robust machine learning systems, we conduct a comprehensive investigation of mainstream deep learning systems. At the beginning of the study, we devise an analytical model for dissecting deep learning systems, and define our survey scope. Our surveyed deep learning systems span four fields: image classification, audio speech recognition, malware detection, and natural language processing. We distill four types of security hazards and manifest them in multiple dimensions such as complexity, attack success rate, and damage. Furthermore, we survey defensive techniques for deep learning systems as well as their characteristics. Finally, through the observation of these systems, we offer practical proposals for constructing robust deep learning systems.
    Related Articles | Metrics
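    Adversarial attacks are one of the hazard types distilled above. As a hedged illustration, the NumPy sketch below perturbs an input in the direction of the sign of the loss gradient, in the spirit of the fast gradient sign method, against a hand-set logistic-regression "model"; the weights, input and epsilon are invented for demonstration.

        import numpy as np

        w = np.array([1.5, -2.0, 0.5])      # model weights (assumed known to the attacker)
        b = 0.1

        def predict_prob(x):
            """Probability of class 1 under the toy logistic-regression model."""
            return 1.0 / (1.0 + np.exp(-(x @ w + b)))

        x = np.array([0.2, -0.4, 0.3])      # a clean input with true label y = 1
        y = 1.0

        # Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
        grad_x = (predict_prob(x) - y) * w
        epsilon = 0.5
        x_adv = x + epsilon * np.sign(grad_x)

        print("clean prob(y=1)      :", round(float(predict_prob(x)), 3))    # about 0.79
        print("adversarial prob(y=1):", round(float(predict_prob(x_adv)), 3))  # drops below 0.5, flipping the decision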
    TensorFlow Lite: On-Device Machine Learning Framework
    Li Shuangfeng
    Journal of Computer Research and Development    2020, 57 (9): 1839-1853.   DOI: 10.7544/issn1000-1239.2020.20200291
    Abstract views: 1069 | HTML views: 36 | PDF (1882KB) downloads: 1477
    TensorFlow Lite (TFLite) is a lightweight, fast and cross-platform open source machine learning framework specifically designed for mobile and IoT. It’s part of TensorFlow and supports multiple platforms such as Android, iOS, embedded Linux, and MCU etc. It greatly reduces the barrier for developers, accelerates the development of on-device machine learning (ODML), and makes ML run everywhere. This article introduces the trend, challenges and typical applications of ODML; the origin and system architecture of TFLite; best practices and tool chains suitable for ML beginners; and the roadmap of TFLite.
    Related Articles | Metrics
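    A minimal sketch of the TFLite workflow the article describes, assuming TensorFlow 2.x is installed: build a toy Keras model, convert it to a .tflite flatbuffer with the TFLiteConverter, and run it with the on-device interpreter. The model architecture and input shape are illustrative only.

        import numpy as np
        import tensorflow as tf

        # A tiny, untrained Keras model standing in for a real on-device model.
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(4,)),
            tf.keras.layers.Dense(16, activation="relu"),
            tf.keras.layers.Dense(3, activation="softmax"),
        ])

        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        tflite_model = converter.convert()               # bytes of the flatbuffer

        interpreter = tf.lite.Interpreter(model_content=tflite_model)
        interpreter.allocate_tensors()
        inp = interpreter.get_input_details()[0]
        out = interpreter.get_output_details()[0]

        interpreter.set_tensor(inp["index"], np.random.rand(1, 4).astype(np.float32))
        interpreter.invoke()
        print(interpreter.get_tensor(out["index"]))      # the model's (untrained) output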
    Principle and Research Progress of Quantum Computation and Quantum Cryptography
    Wang Yongli, Xu Qiuliang
    Journal of Computer Research and Development    2020, 57 (10): 2015-2026.   DOI: 10.7544/issn1000-1239.2020.20200615
    Abstract views: 1503 | HTML views: 41 | PDF (967KB) downloads: 1472
    Quantum computation and quantum cryptography are based on principles of quantum mechanics. In 1984, Bennett and Brassard proposed the first quantum key distribution protocol, called BB84, which started the study of quantum cryptography. Since then, a great deal of work has been carried out in various fields such as quantum encryption and quantum signatures. In 1994, Shor designed the first practical quantum algorithm, which can factor large integers in polynomial time. Shor’s algorithm uses the Quantum Fourier Transform, which is the kernel of most quantum algorithms. In 1996, Grover designed a new algorithm which can search unstructured data for the required result in time approximately proportional to the square root of the total amount of data. Shor’s algorithm and Grover’s algorithm not only embody the advantages of quantum computing, but also pose a threat to traditional cryptography based on mathematical hardness, such as RSA. After half a century’s development, quantum computing and quantum cryptography have achieved fruitful results in theory and practice. In this paper, we summarize these topics from the perspectives of the mathematical framework of quantum mechanics, basic concepts and principles, basic ideas of quantum computing, research progress and main ideas of quantum cryptography, etc.
    Related Articles | Metrics
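    For reference, the two ingredients mentioned above can be written in their standard textbook forms (these formulas are not taken from the paper itself): the Quantum Fourier Transform on n qubits with N = 2^n, and the roughly square-root query count of Grover's search.

        \[
          \mathrm{QFT}\,\lvert x\rangle \;=\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i\,xk/N}\,\lvert k\rangle,
          \qquad N = 2^{n}.
        \]
        \[
          k_{\mathrm{opt}} \;\approx\; \left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor
          \ \text{Grover iterations, versus } \Theta(N) \text{ classical queries for unstructured search.}
        \]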
    Research Advances in the Interpretability of Deep Learning
    Cheng Keyang, Wang Ning, Shi Wenxi, Zhan Yongzhao
    Journal of Computer Research and Development    2020, 57 (6): 1208-1217.   DOI: 10.7544/issn1000-1239.2020.20190485
    Abstract views: 2203 | HTML views: 62 | PDF (1226KB) downloads: 1614
    Research on the interpretability of deep learning is closely related to various disciplines such as artificial intelligence, machine learning, logic and cognitive psychology. It has important theoretical research significance and practical application value in many fields, such as information push, medical research, finance, and information security. In the past few years, there has been a great deal of work in this field, but various issues remain. In this paper, we review the history of deep learning interpretability research and related work. Firstly, we introduce the history of interpretable deep learning from the following three aspects: the origin of interpretable deep learning, the research exploration stage and the model construction stage. Then, the state of research is presented from three aspects, namely visual analysis, robust perturbation analysis and sensitivity analysis. The research on the construction of interpretable deep learning models is introduced from four aspects: model agents, logical reasoning, network node association analysis and traditional machine learning models. Moreover, the limitations of current research are analyzed and discussed in this paper. At last, we list typical applications of interpretable deep learning and forecast possible future research directions of this field, along with reasonable and suitable suggestions.
    Related Articles | Metrics
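    Sensitivity analysis, one of the three research lines mentioned above, asks how much a model's output moves when each input feature is perturbed. The sketch below uses a fixed function as a stand-in for a trained network and finite differences as the perturbation; both choices are purely illustrative.

        import numpy as np

        def model(x):
            """Arbitrary fixed function standing in for a trained network."""
            return float(np.tanh(3.0 * x[0] - 0.5 * x[1] + 0.01 * x[2]))

        def sensitivity(x, delta=1e-3):
            """Finite-difference sensitivity of the model output to each input feature."""
            base = model(x)
            scores = np.zeros_like(x)
            for i in range(len(x)):
                perturbed = x.copy()
                perturbed[i] += delta
                scores[i] = abs(model(perturbed) - base) / delta
            return scores

        x = np.array([0.2, -0.1, 0.4])
        print(sensitivity(x))   # feature 0 dominates, feature 2 barely matters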
    Survey on Automatic Text Summarization
    Li Jinpeng, Zhang Chuang, Chen Xiaojun, Hu Yue, Liao Pengcheng
    Journal of Computer Research and Development    2021, 58 (1): 1-21.   DOI: 10.7544/issn1000-1239.2021.20190785
    Abstract views: 1380 | HTML views: 39 | PDF (1756KB) downloads: 1290
    In recent years, the rapid development of Internet technology has greatly facilitated daily life, and massive amounts of information inevitably erupt in a blowout. How to quickly and effectively obtain the required information on the Internet is an urgent problem. Automatic text summarization technology can effectively alleviate this problem. As one of the most important tasks in natural language processing and artificial intelligence, it can automatically produce a concise and coherent summary from a long text or a text set by computer, where the summary should accurately reflect the central themes of the source text. In this paper, we expound the connotation of automatic summarization, review the development of automatic text summarization techniques and introduce the two main techniques in detail: extractive and abstractive summarization, including feature scoring, classification methods, linear programming, submodular functions, graph ranking, sequence labeling, heuristic algorithms, deep learning, etc. We also analyze the datasets and evaluation metrics that are commonly used in automatic summarization. Finally, the challenges ahead and the future trends of research and application are discussed.
    Related Articles | Metrics
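    Feature scoring, the first extractive technique listed above, can be illustrated with a few lines of Python: score each sentence by the frequency of its content words and keep the top-scoring sentences in document order. The stop-word list and example document are invented, and real systems use far richer features.

        import re
        from collections import Counter

        STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "it", "this", "that"}

        def summarize(text, n_sentences=2):
            """Extractive summary: keep the n highest-scoring sentences in original order."""
            sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
            words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]
            freq = Counter(words)
            def score(sent):
                tokens = [w for w in re.findall(r"[a-z']+", sent.lower()) if w not in STOP]
                return sum(freq[t] for t in tokens) / (len(tokens) or 1)
            ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
            return [s for s in sentences if s in ranked]    # restore original order

        doc = ("Automatic summarization produces a short summary of a long text. "
               "Extractive methods select existing sentences from the text. "
               "Abstractive methods generate new sentences. "
               "Feature scoring ranks sentences by simple statistics of the text.")
        print(summarize(doc))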
    Review on the Development of Microservice Architecture
    Feng Zhiyong, Xu Yanwei, Xue Xiao, Chen Shizhan
    Journal of Computer Research and Development    2020, 57 (5): 1103-1122.   DOI: 10.7544/issn1000-1239.2020.20190460
    Abstract views: 2816 | HTML views: 147 | PDF (3960KB) downloads: 1524
    With the rapid development of cloud computing and the Internet of things, users’ demands for software systems tend to be diversified. Service oriented architecture (SOA) needs to strike a balance between stable service integration and flexible adaptation to requirements. Against this background, microservice technology, in which services run as independent processes and can be deployed independently, has emerged as the times require. It has a slew of advantages, such as distributed storage, high availability, scalability, and intelligent operation and maintenance, which can make up for the shortcomings of the traditional SOA architecture. From the perspective of system integration, the paper first describes the application background of microservices, including the core components of microservices, software technology development and architecture evolution to ensure the availability of the microservice infrastructure. Secondly, in view of problems existing in practical applications, the paper analyzes the key technologies utilized in the application of the microservice architecture through the aspects of distributed communication, distributed data storage, distributed call chains, and testing complexity; then, a specific application case is given to confirm the technical feasibility of microservices. Finally, this paper explores the challenges faced by microservices in terms of infrastructure, information exchange, data security, and network security. Meanwhile, the future development trend is analyzed so as to provide valuable theoretical and technical reference for the future innovation and development of microservices.
    Related Articles | Metrics
    Edge Computing—An Emerging Computing Model for the Internet of Everything Era
    Shi Weisong, Sun Hui, Cao Jie, Zhang Quan, Liu Wei
    Journal of Computer Research and Development    2017, 54 (5): 907-924.   DOI: 10.7544/issn1000-1239.2017.20160941
    Abstract views: 3110 | HTML views: 102 | PDF (4113KB) downloads: 2937
    With the proliferation of Internet of things (IoT) and the burgeoning of 4G/5G network, we have seen the dawning of the IoE (Internet of everything) era, where there will be a huge volume of data generated by things that are immersed in our daily life, and hundreds of applications will be deployed at the edge to consume these data. Cloud computing as the de facto centralized big data processing platform is not efficient enough to support these applications emerging in IoE era, i.e., 1) the computing capacity available in the centralized cloud cannot keep up with the explosive growing computational needs of massive data generated at the edge of the network; 2) longer user-perceived latency caused by the data movement between the edge and the cloud;3) privacy and security concerns from data owners in the edge; 4) energy constraints of edge devices. These issues in the centralized big data processing era have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Leveraging the power of cloud computing, edge computing has the potential to address the limitation of computing capability, the concerns of response time requirement, bandwidth cost saving, data safety and privacy, as well as battery life constraint. “Edge” in edge computing is defined as any computing and network resources along the path between data sources and cloud data centers. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.
    Related Articles | Metrics
    Survey of Smart Contract Technology and Application Based on Blockchain
    He Haiwu, Yan An, Chen Zehua
    Journal of Computer Research and Development    2018, 55 (11): 2452-2466.   DOI: 10.7544/issn1000-1239.2018.20170658
    Abstract views: 3670 | HTML views: 151 | PDF (3644KB) downloads: 2588
    With the flourishing development of blockchain technology represented by Bitcoin, blockchain technology has moved from the era of programmable currency into the era of the smart contract. A smart contract is an event-driven, state-based contract of code and algorithms, which has received wide attention and study with the deepening development of blockchain technology. Protocols and user interfaces are applied to complete all steps of the smart contract process. Smart contracts enable users to implement personalized logic on the blockchain. Blockchain-based smart contract technology has the characteristics of decentralization, autonomy, observability, verifiability and information sharing. It can also be effectively applied to build programmable finance and a programmable society, and has been widely used in digital payment, financial asset disposal, multi-signature contracts, cloud computing, the Internet of things, the sharing economy and other fields. The survey describes the basic concepts of smart contract technology, its whole life cycle, basic classification and structure, key technologies, the state of the art, as well as its application scenarios and the main technology platforms. The problems it currently encounters are also discussed. Finally, based on the theoretical knowledge of smart contracts, we set up an Ethereum experimental environment and develop a crowdsale contract system. The survey aims at providing helpful guidance and reference for future research on smart contracts based on blockchain technology.
    Related Articles | Metrics