ISSN 1000-1239 CN 11-1777/TP

Most Downloaded Articles


    Published in last 1 year
    Interpretation and Understanding in Machine Learning
    Chen Kerui, Meng Xiaofeng
    Journal of Computer Research and Development    2020, 57 (9): 1971-1986.   DOI: 10.7544/issn1000-1239.2020.20190456
    Abstract views: 1875 | HTML views: 73 | PDF (1315KB) downloads: 1599
    In recent years, machine learning has developed rapidly, especially deep learning, where remarkable achievements have been obtained in image, voice, natural language processing and other fields. The expressive ability of machine learning algorithms has been greatly improved; however, with the increase of model complexity, the interpretability of machine learning algorithms has deteriorated. So far, the interpretability of machine learning remains a challenge. Models trained by these algorithms are regarded as black boxes, which seriously hampers the use of machine learning in certain fields, such as medicine and finance. Presently, only a few works emphasize the interpretability of machine learning. Therefore, this paper aims to classify, analyze and compare the existing interpretation methods: on the one hand, it expounds the definition and measurement of interpretability; on the other hand, for the different objects of interpretation, it summarizes and analyzes various interpretation techniques of machine learning from three aspects: model understanding, prediction result interpretation and mimic model understanding. Moreover, the paper discusses the challenges and opportunities faced by interpretation methods in machine learning and their possible future development directions. The proposed interpretation methods should also be useful for putting many open research questions in perspective.
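    To make one family of the surveyed techniques concrete, the sketch below computes permutation feature importance, a widely used model-agnostic method for prediction result interpretation; the model, data and metric objects are placeholders rather than anything from the paper.
    import numpy as np

    def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
        """Estimate feature importance by measuring how much the score drops
        when a single feature column is randomly shuffled (model-agnostic)."""
        rng = np.random.default_rng(seed)
        baseline = metric(y, model.predict(X))
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the feature-target link
                drops.append(baseline - metric(y, model.predict(X_perm)))
            importances[j] = np.mean(drops)  # larger drop => more important feature
        return importances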
    Security Issues and Privacy Preserving in Machine Learning
    Wei Lifei, Chen Congcong, Zhang Lei, Li Mengsi, Chen Yujiao, Wang Qin
    Journal of Computer Research and Development    2020, 57 (10): 2066-2085.   DOI: 10.7544/issn1000-1239.2020.20200426
    Abstract views: 1826 | HTML views: 54 | PDF (2361KB) downloads: 1550
    In recent years, machine learning has developed rapidly and is widely used in work and life, which brings not only convenience but also great security risks. Security and privacy issues have become a stumbling block in the development of machine learning. The training and inference of machine learning models are based on large amounts of data, which often contain sensitive information. As data privacy leakage events occur frequently and the scale of leakage grows year by year, how to ensure the security and privacy of data has attracted the attention of researchers from academia and industry. In this paper we introduce fundamental concepts such as the adversary model in privacy-preserving machine learning and summarize the common security threats and privacy threats in the training and inference phases of machine learning, such as privacy leakage of training data, poisoning attacks, adversarial attacks, privacy attacks, etc. Subsequently, we introduce common security protection and privacy preservation methods, focusing in particular on homomorphic encryption, secure multi-party computation and differential privacy, and compare the typical schemes and applicable scenarios of the three technologies. Finally, the future development trends and research directions of privacy preservation in machine learning are discussed.
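    As a concrete illustration of one of the privacy techniques surveyed here, the sketch below applies the standard Laplace mechanism of differential privacy to a counting query; the data, predicate and epsilon value are illustrative assumptions, not taken from the paper.
    import numpy as np

    def laplace_count(data, predicate, epsilon):
        """Release a differentially private count: a counting query has
        sensitivity 1, so Laplace noise with scale 1/epsilon suffices."""
        true_count = sum(1 for x in data if predicate(x))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Illustrative usage: count records above a threshold with epsilon = 0.5.
    ages = [23, 35, 41, 52, 29, 63]
    print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))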
    TensorFlow Lite: On-Device Machine Learning Framework
    Li Shuangfeng
    Journal of Computer Research and Development    2020, 57 (9): 1839-1853.   DOI: 10.7544/issn1000-1239.2020.20200291
    Abstract views: 1069 | HTML views: 36 | PDF (1882KB) downloads: 1477
    TensorFlow Lite (TFLite) is a lightweight, fast and cross-platform open source machine learning framework specifically designed for mobile and IoT devices. It is part of TensorFlow and supports multiple platforms such as Android, iOS, embedded Linux, and microcontrollers (MCUs). It greatly reduces the barrier for developers, accelerates the development of on-device machine learning (ODML), and makes ML run everywhere. This article introduces the trends, challenges and typical applications of ODML; the origin and system architecture of TFLite; best practices and tool chains suitable for ML beginners; and the roadmap of TFLite.
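    A minimal on-device inference sketch using the TFLite Python interpreter is shown below; the model file name and the all-zero input are placeholders for illustration.
    import numpy as np
    import tensorflow as tf

    # Load a converted .tflite model (the file name is a placeholder).
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed one input tensor of the expected shape and dtype, then run inference.
    dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy_input)
    interpreter.invoke()
    prediction = interpreter.get_tensor(output_details[0]["index"])
    print(prediction.shape)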
    Principle and Research Progress of Quantum Computation and Quantum Cryptography
    Wang Yongli, Xu Qiuliang
    Journal of Computer Research and Development    2020, 57 (10): 2015-2026.   DOI: 10.7544/issn1000-1239.2020.20200615
    Abstract views: 1503 | HTML views: 41 | PDF (967KB) downloads: 1472
    Quantum computation and quantum cryptography are based on the principles of quantum mechanics. In 1984, Bennett and Brassard proposed the first quantum key distribution protocol, called BB84, which started the study of quantum cryptography. Since then, a great deal of work has been carried out in fields such as quantum encryption and quantum signatures. In 1994, Shor designed the first practical quantum algorithm, which can factor large integers in polynomial time. Shor's algorithm uses the quantum Fourier transform, which is the kernel of most quantum algorithms. In 1996, Grover designed a new algorithm which can search unstructured data and obtain the required result in time approximately proportional to the square root of the total amount of data. Shor's algorithm and Grover's algorithm not only embody the advantages of quantum computing, but also pose a threat to traditional cryptography based on mathematical hardness assumptions, such as RSA. After half a century of development, quantum computing and quantum cryptography have achieved fruitful results in theory and practice. In this paper, we summarize these topics from the perspectives of the mathematical framework of quantum mechanics, basic concepts and principles, basic ideas of quantum computing, and the research progress and main ideas of quantum cryptography.
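    For reference, the quantum Fourier transform at the heart of Shor's algorithm and the quadratic query advantage of Grover's algorithm can be written in their standard textbook forms (stated here for orientation, not reproduced from the paper):
        \mathrm{QFT}:\; |j\rangle \;\mapsto\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i\, jk/N}\, |k\rangle
        \text{Grover search over } N \text{ unstructured items: } O(\sqrt{N}) \text{ quantum queries vs. } O(N) \text{ classical queries.}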
    Survey on Automatic Text Summarization
    Li Jinpeng, Zhang Chuang, Chen Xiaojun, Hu Yue, Liao Pengcheng
    Journal of Computer Research and Development    2021, 58 (1): 1-21.   DOI: 10.7544/issn1000-1239.2021.20190785
    Abstract views: 1380 | HTML views: 39 | PDF (1756KB) downloads: 1290
    In recent years, the rapid development of Internet technology has greatly facilitated people's daily lives, and it is inevitable that massive amounts of information are generated explosively. How to quickly and effectively obtain the required information on the Internet is an urgent problem. Automatic text summarization technology can effectively alleviate this problem. As one of the most important fields in natural language processing and artificial intelligence, it can automatically produce a concise and coherent summary from a long text or text collection by computer, where the summary should accurately reflect the central themes of the source text. In this paper, we expound the connotation of automatic summarization, review the development of automatic text summarization techniques and introduce two main techniques in detail: extractive and abstractive summarization, including feature scoring, classification methods, linear programming, submodular functions, graph ranking, sequence labeling, heuristic algorithms, deep learning, etc. We also analyze the datasets and evaluation metrics that are commonly used in automatic summarization. Finally, we discuss the challenges ahead and predict future trends of research and application.
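    As a toy illustration of the extractive, feature-scoring approach mentioned above, the sketch below ranks sentences by average word frequency; the tokenization and scoring rule are deliberately simplified assumptions, not a method from the survey.
    import re
    from collections import Counter

    def extractive_summary(text, num_sentences=2):
        """Score each sentence by the average corpus frequency of its words
        and return the top-scoring sentences in their original order."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"[a-z']+", text.lower()))
        def score(sentence):
            tokens = re.findall(r"[a-z']+", sentence.lower())
            return sum(freq[t] for t in tokens) / max(len(tokens), 1)
        top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
        return " ".join(s for s in sentences if s in top)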
    Deep Neural Architecture Search: A Survey
    Meng Ziyao, Gu Xue, Liang Yanchun, Xu Dong, Wu Chunguo
    Journal of Computer Research and Development    2021, 58 (1): 22-33.   DOI: 10.7544/issn1000-1239.2021.20190851
    Abstract views: 1023 | HTML views: 19 | PDF (1178KB) downloads: 975
    Deep learning has achieved excellent results on data tasks with multiple modalities such as images, speech, and text. However, designing networks manually for specific tasks is time-consuming and requires a certain level of expertise and design experience from the designer. In the face of today's increasingly complex network architectures, relying on manual design alone becomes increasingly impractical. For this reason, automatic architecture search of neural networks with the help of algorithms has become a hot research topic. Neural architecture search involves three aspects: search space, search strategy, and performance evaluation strategy. The search strategy samples a network architecture from the search space; the performance evaluation strategy evaluates the sampled architecture and feeds the result back to the search strategy to guide it toward better architectures; the optimal network architecture is obtained through continuous iteration. In order to better sort out the methods of neural architecture search, we summarize the common methods of recent years in terms of search space, search strategy and performance evaluation strategy, and analyze their strengths and weaknesses.
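    The search-space / search-strategy / performance-evaluation loop described above can be sketched as a random-search baseline; the candidate choices and the proxy evaluation below are placeholders, not the methods compared in the survey.
    import random

    SEARCH_SPACE = {                      # toy search space (placeholder choices)
        "num_layers": [2, 4, 6],
        "width": [32, 64, 128],
        "activation": ["relu", "gelu"],
    }

    def sample_architecture():
        """Search strategy: sample one candidate architecture from the space."""
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

    def train_and_evaluate(arch):
        """Performance evaluation strategy (placeholder): in practice this would
        train the candidate briefly and return its validation accuracy."""
        return random.random()

    def random_search(num_trials=20):
        best_arch, best_score = None, float("-inf")
        for _ in range(num_trials):
            arch = sample_architecture()
            score = train_and_evaluate(arch)  # feedback guides the next selection
            if score > best_score:
                best_arch, best_score = arch, score
        return best_arch, best_score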
    Research Advances on Privacy Preserving in Edge Computing
    Zhou Jun, Shen Huajie, Lin Zhongyun, Cao Zhenfu, Dong Xiaolei
    Journal of Computer Research and Development    2020, 57 (10): 2027-2051.   DOI: 10.7544/issn1000-1239.2020.20200614
    Abstract views: 1122 | HTML views: 28 | PDF (3203KB) downloads: 913
    The wide exploitation of mobile communication and big data has enabled the flourishing of outsourced systems, where resource-constrained local users delegate batches of files and time-consuming evaluation tasks to the cloud server for outsourced storage and outsourced computation. Unfortunately, a single cloud server tends to become the target of compromise attacks and, owing to its long distance from local users, brings about huge response delays in the multi-user, multi-task setting where large quantities of inputs and outputs are respectively fed to and derived from the function evaluation. To address this bottleneck of outsourced systems, edge computing has emerged, in which several edge nodes located between the cloud server and users collaborate to fulfill the tasks of outsourced storage and outsourced computation, meeting real-time requirements but incurring new challenging issues of security and privacy preservation. This paper first introduces the unique network architecture and security model of edge computing. Then, the state-of-the-art works in the field of privacy preservation in edge computing are elaborated, classified, and summarized based on the cryptographic techniques of data perturbation, fully homomorphic encryption, secure multiparty computation, fully homomorphic data encapsulation mechanisms, and verifiability and accountability, in the following three areas: privacy-preserving data aggregation, privacy-preserving outsourced computation, and their applications including private set intersection, privacy-preserving machine learning, privacy-preserving image processing, biometric authentication and secure encrypted search. Finally, several open research problems in privacy-preserving edge computing are discussed together with promising solutions, which sheds light on its future development and applications.
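    One of the cryptographic building blocks listed above, secure multiparty computation, can be illustrated with additive secret sharing for privacy-preserving data aggregation; the modulus, the three-party setting and the sample readings are illustrative assumptions.
    import random

    MODULUS = 2**32  # illustrative modulus for additive shares

    def share(secret, num_parties=3):
        """Split a secret into additive shares that sum to the secret mod MODULUS."""
        shares = [random.randrange(MODULUS) for _ in range(num_parties - 1)]
        shares.append((secret - sum(shares)) % MODULUS)
        return shares

    def aggregate(all_shares):
        """Each edge node sums the shares it received; combining those partial sums
        reveals only the aggregate, never an individual user's value."""
        per_party = [sum(col) % MODULUS for col in zip(*all_shares)]
        return sum(per_party) % MODULUS

    # Three users share their private readings; only the total is reconstructed.
    readings = [17, 42, 8]
    print(aggregate([share(r) for r in readings]) == sum(readings) % MODULUS)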
    An Asynchronous Federated Learning Mechanism for Edge Network Computing
    Lu Xiaofeng, Liao Yuying, Pietro Lio, Pan Hui
    Journal of Computer Research and Development    2020, 57 (12): 2571-2582.   DOI: 10.7544/issn1000-1239.2020.20190754
    Abstract views: 1052 | HTML views: 30 | PDF (2431KB) downloads: 873
    With the continuous improvement of the performance of IoT and mobile devices, a new type of computing architecture, edge computing, came into being. The emergence of edge computing has changed the situation in which data must be uploaded to the cloud for processing, fully utilizing the computing and storage capabilities of edge IoT devices. Edge nodes process private data locally and no longer need to upload large amounts of data to the cloud for processing, reducing transmission delay. The demand for implementing artificial intelligence frameworks on edge nodes is also increasing day by day. Because the federated learning mechanism does not require centralized data for model training, it is well suited to edge network machine learning scenarios where the average amount of data per node is limited. This paper proposes an efficient asynchronous federated learning mechanism for edge network computing (EAFLM), which compresses the redundant communication between the nodes and the parameter server during training according to a self-adaptive threshold. A gradient update algorithm based on dual-weight correction allows nodes to join or withdraw from federated learning at any point during learning. Experimental results show that when the gradient communication is compressed to 8.77% of the original number of communications, the accuracy on the test set is reduced by only 0.03%.
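    A simplified sketch of threshold-based gradient compression in the spirit of EAFLM is given below; the fixed threshold and residual handling are assumptions for illustration and do not reproduce the paper's self-adaptive threshold or dual-weight correction.
    import numpy as np

    def compress_gradient(grad, threshold):
        """Transmit only gradient entries whose magnitude exceeds the threshold;
        the remaining entries are kept locally as a residual for later rounds."""
        mask = np.abs(grad) >= threshold
        sparse_update = np.where(mask, grad, 0.0)  # what gets communicated
        residual = np.where(mask, 0.0, grad)       # accumulated on the edge node
        return sparse_update, residual

    grad = np.array([0.002, -0.4, 0.01, 0.75, -0.03])
    update, residual = compress_gradient(grad, threshold=0.05)
    print(update)  # only the large components are transmitted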
    Research Advances on Knowledge Tracing Models in Educational Big Data
    Hu Xuegang, Liu Fei, Bu Chenyang
    Journal of Computer Research and Development    2020, 57 (12): 2523-2546.   DOI: 10.7544/issn1000-1239.2020.20190767
    Abstract views: 984 | HTML views: 25 | PDF (2358KB) downloads: 860
    With the in-depth advancement of information-based education and the rapid development of online education, a large amount of fragmented educational data is generated during students' learning processes. How to mine and analyze these educational big data has become an urgent problem in the fields of education and big data knowledge engineering. For such dynamic educational data, knowledge tracing models trace students' cognitive states over time by analyzing the exercise data generated during learning, so as to predict students' performance on future exercises. In this paper, knowledge tracing models in educational big data are reviewed, analyzed, and discussed. Firstly, knowledge tracing models are introduced in detail from the perspective of their principles, steps, and model variants, including the two mainstream families based on Bayesian methods and deep learning methods. Then, the application scenarios of knowledge tracing models are explained from five aspects: student performance prediction, cognitive state assessment, psychological factor analysis, exercise sequencing, and programming practice. The strengths and weaknesses of Bayesian knowledge tracing models and deep knowledge tracing models are discussed through the two classic models, BKT and DKT. Finally, some future directions for knowledge tracing models are given.
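    The Bayesian knowledge tracing (BKT) update at the core of the classic BKT model follows the standard published formulas sketched below; the slip, guess and learning probabilities are illustrative values, not parameters from the paper.
    def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
        """One BKT step: condition the mastery probability on the observed
        answer, then apply the learning transition."""
        if correct:
            num = p_know * (1 - p_slip)
            den = num + (1 - p_know) * p_guess
        else:
            num = p_know * p_slip
            den = num + (1 - p_know) * (1 - p_guess)
        p_posterior = num / den
        return p_posterior + (1 - p_posterior) * p_learn

    # Trace a student who answers correct, wrong, correct, starting from P(L0) = 0.3.
    p = 0.3
    for obs in (True, False, True):
        p = bkt_update(p, obs)
        print(round(p, 3))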
    Blockchain-Based Data Transparency: Issues and Challenges
    Meng Xiaofeng, Liu Lixin
    Journal of Computer Research and Development    2021, 58 (2): 237-252.   DOI: 10.7544/issn1000-1239.2021.20200017
    Abstract views: 1131 | HTML views: 20 | PDF (1812KB) downloads: 832
    With the high-speed development of the Internet of things, wearable devices and mobile communication technology, large-scale data are continuously generated and converge to multiple data collectors, which influences people's lives in many ways. Meanwhile, this also causes increasingly severe privacy leaks. Traditional privacy-aware mechanisms such as differential privacy, encryption and anonymization are not enough to deal with this serious situation. Moreover, data convergence leads to data monopoly, which seriously hinders the realization of the value of big data. Besides, tampered data, single points of failure in data quality management and so on may cause untrustworthy data-driven decision-making. How to use big data correctly has become an important issue. For these reasons, we propose data transparency, aiming to provide a solution for the correct use of big data. Blockchain, which originated from digital currency, has the characteristics of decentralization, transparency and immutability, and it provides an accountable and secure solution for data transparency. In this paper, we first propose the definition and research dimensions of data transparency from the perspective of the big data life cycle, and we also analyze and summarize the methods for realizing data transparency. Then, we summarize the research progress of blockchain-based data transparency. Finally, we analyze the challenges that may arise in the process of blockchain-based data transparency.
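    The immutability property that makes blockchain attractive for data transparency can be illustrated with a minimal hash chain; this is a conceptual sketch only, not the ledger or consensus design discussed in the paper.
    import hashlib
    import json

    def block_hash(block):
        """Hash the block's content together with the previous block's hash,
        so altering any earlier record invalidates every later hash."""
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    chain, prev = [], "0" * 64
    for record in ["collect: sensor data", "share: hospital A -> lab B"]:
        block = {"data": record, "prev_hash": prev}
        prev = block_hash(block)
        chain.append((block, prev))

    # Verification: recompute each hash and check that the links are unbroken.
    print(all(h == block_hash(b) for b, h in chain))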
    Internet Data Transfer Protocol QUIC: A Survey
    Li Xuebing, Chen Yang, Zhou Mengying, Wang Xin
    Journal of Computer Research and Development    2020, 57 (9): 1864-1876.   DOI: 10.7544/issn1000-1239.2020.20190693
    Abstract views: 1175 | HTML views: 46 | PDF (929KB) downloads: 826
    QUIC is an Internet data transfer protocol proposed by Google as an alternative to TCP (transmission control protocol). Compared with TCP, QUIC introduces many new features that, in theory, allow it to outperform TCP in many scenarios. For example, it supports multiplexing to solve the problem of head-of-line blocking, introduces a 0-RTT handshake to reduce handshake latency, and supports connection migration to be mobility-friendly. However, QUIC's performance in the real world may not be as good as expected, because network environments and network devices are diverse and the protocol's security is challenged by potential attackers. Therefore, evaluating QUIC's impact on existing network services is quite important. This paper carries out a comprehensive survey of QUIC. We first introduce the development history and the main characteristics of QUIC. Secondly, taking the two most widely used application scenarios, Web browsing and video streaming, as examples, we introduce and summarize research from China and abroad that analyzes the data transmission performance of QUIC under different network environments. Thirdly, we enumerate existing QUIC-enhancement work from the aspects of protocol design and system design. Fourthly, we summarize existing work on the security analysis of QUIC, enumerating the security issues currently recognized by the academic community as well as the researchers' efforts to address them. Lastly, we come up with several potential improvements on existing research outcomes and look forward to new research topics and challenges brought by QUIC.
    Edge Computing in Smart Homes
    Huang Qianyi, Li Zhiyang, Xie Wentao, Zhang Qian
    Journal of Computer Research and Development    2020, 57 (9): 1800-1809.   DOI: 10.7544/issn1000-1239.2020.20200253
    Abstract views: 1020 | HTML views: 37 | PDF (2403KB) downloads: 791
    In recent years, smart speakers and robotic vacuum cleaners have played important roles in many people's daily lives. With the development of technology, more and more intelligent devices will become part of the home infrastructure, making life more convenient and comfortable for residents. When different types of specialized intelligent devices are connected and operated over the Internet, how to minimize network latency and guarantee data privacy are open issues. In order to solve these problems, edge computing in smart homes becomes the future trend. In this article, we present our research work along this direction, covering the topics of edge sensing, communication and computation. As for sensing, we focus on the pervasive sensing capability of the edge node and present our work on contactless breath monitoring; as for communication, we work on the joint design of sensing and communication, so that sensing and communication systems can work harmoniously on limited spectrum resources; as for computation, we devote our efforts to personalized machine learning at the edge, building a personalized model for each individual while guaranteeing their data privacy.
    Overview of Threat Intelligence Sharing and Exchange in Cybersecurity
    Lin Yue, Liu Peng, Wang He, Wang Wenjie, Zhang Yuqing
    Journal of Computer Research and Development    2020, 57 (10): 2052-2065.   DOI: 10.7544/issn1000-1239.2020.20200616
    Abstract views: 822 | HTML views: 28 | PDF (1049KB) downloads: 770
    The emerging threats in cyberspace are endangering the interests of individuals, organizations and governments with complex and changeable attack methods. When traditional network security defense methods are not strong enough, threat intelligence sharing and exchange mechanisms bring hope to the protection of cyberspace security. Cybersecurity threat intelligence is a collection of information that can cause potential or direct harm to organizations and institutions. This information can help organizations and institutions assess the cybersecurity threats they face, and make decisions and mount defenses accordingly. The exchange and sharing of threat intelligence can maximize the value of threat intelligence, reduce the cost of intelligence search and alleviate the problem of information islands, thereby improving the threat detection and emergency response capabilities of all parties involved in the sharing. This article first introduces the concept of cybersecurity threat intelligence and mainstream threat intelligence sharing standards; secondly, it surveys the literature on threat intelligence sharing and exchange from China and abroad over the past 10 years, and analyzes and summarizes the current situation and development trends of threat intelligence sharing and exchange. The article focuses on in-depth analysis from three perspectives: sharing models and mechanisms, the distribution of benefits in exchange mechanisms, and the privacy protection of shared data. The problems in these three areas and related solutions are pointed out, and the advantages and disadvantages of each solution are analyzed and discussed. Finally, future research trends and directions of threat intelligence sharing and exchange are prospected.
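    For illustration, a threat-intelligence indicator loosely following the STIX 2.x style (one family of the mainstream sharing standards mentioned above) might look like the sketch below; all field values are invented examples and the exact required fields should be checked against the specification.
    import json
    import uuid
    from datetime import datetime, timezone

    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    indicator = {                      # illustrative, STIX-style object
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": "Known C2 server",
        "pattern": "[ipv4-addr:value = '198.51.100.23']",
        "pattern_type": "stix",
        "valid_from": now,
    }
    print(json.dumps(indicator, indent=2))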
    Adversarial Attacks and Defenses for Deep Learning Models
    Li Minghui, Jiang Peipei, Wang Qian, Shen Chao, Li Qi
    Journal of Computer Research and Development    2021, 58 (5): 909-926.   DOI: 10.7544/issn1000-1239.2021.20200920
    Abstract views: 529 | HTML views: 0 | PDF (1577KB) downloads: 674
    Deep learning is one of the main representatives of artificial intelligence technology, and it is quietly enhancing our daily lives. However, the deployment of deep learning models has also brought potential security risks. Studying the basic theories and key technologies of attacks and defenses for deep learning models is of great significance for a deep understanding of the inherent vulnerability of the models, comprehensive protection of intelligent systems, and widespread deployment of artificial intelligence applications. This paper discusses the development and future challenges of adversarial attacks and defenses for deep learning models from the perspective of confrontation. We first introduce the potential threats faced by deep learning at different stages. Afterwards, we systematically summarize the progress of existing attack and defense technologies in artificial intelligence systems from the perspectives of the essential mechanism of adversarial attacks, the methods of adversarial attack generation, defensive strategies against the attacks, and the framework of attacks and defenses. We also discuss the limitations of related research and propose an attack framework and a defense framework to guide the construction of better adversarial attacks and defenses. Finally, we discuss several potential future research directions and challenges for adversarial attacks and defenses against deep learning models.
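    One of the classic attack methods covered by such surveys, the fast gradient sign method (FGSM), can be sketched as follows; the model, loss function and epsilon value are placeholders, and the clipping range assumes inputs normalized to [0, 1].
    import torch

    def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
        """Craft an adversarial example with one signed-gradient step that
        increases the loss, then clip back to the valid input range."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return torch.clamp(x_adv, 0.0, 1.0).detach()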
    Webpage Fingerprinting Identification on Tor: A Survey
    Sun Xueliang, Huang Anxin, Luo Xiapu, Xie Yi
    Journal of Computer Research and Development   
    Available online: 05 February 2021

    SCONV: A Financial Market Trend Forecast Method Based on Emotional Analysis
    Lin Peiguang, Zhou Jiaqian, Wen Yulian
    Journal of Computer Research and Development    2020, 57 (8): 1769-1778.   DOI: 10.7544/issn1000-1239.2020.20200494
    Abstract views: 643 | HTML views: 29 | PDF (1623KB) downloads: 653
    The stock market plays a critical role in the economic development of countries, and it is also a market closely related to our daily lives. Shareholder sentiment can be regarded as one of the factors affecting stock prices. This paper proposes a deep learning model for stock price prediction based on sentiment analysis and convolutional long short-term memory, named semantic convolution (SCONV). The model uses a long short-term memory model and word2vec to analyze sentiment, extract sentiment vectors, and calculate a sentiment weight for each day. The corresponding weights are then applied to the daily stock prices to form averages over the previous day, the previous three days, and the previous week, which are fed, together with the stock price, into the ConvLSTM. A dropout layer is placed between the ConvLSTM and an additional LSTM to avoid over-fitting. In this paper, BABA.us, 000001.sh, and 000651.sz are used as experimental data, covering about 3 years, about 1.5 years, and about 5 months respectively. Compared with traditional models, the experimental results show that SCONV predicts the trend of the stock price more precisely, even on a smaller sample set.
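    A hedged Keras sketch of the ConvLSTM-dropout-LSTM stack described above is shown below; the input dimensions, layer sizes and dropout rate are assumptions for illustration, not the configuration reported in the paper.
    import tensorflow as tf

    # Illustrative shapes: 10 time steps, a 4x4 grid of weighted price/sentiment features, 1 channel.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(10, 4, 4, 1)),
        tf.keras.layers.ConvLSTM2D(filters=16, kernel_size=(2, 2),
                                   padding="same", return_sequences=True),
        tf.keras.layers.TimeDistributed(tf.keras.layers.Flatten()),
        tf.keras.layers.Dropout(0.3),   # dropout between the ConvLSTM and the LSTM
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),       # next-step price (trend) output
    ])
    model.compile(optimizer="adam", loss="mse")
    model.summary()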
    Review of Automatic Image Annotation Technology
    Ma Yanchun, Liu Yongjian, Xie Qing, Xiong Shengwu, Tang Lingli
    Journal of Computer Research and Development    2020, 57 (11): 2348-2374.   DOI: 10.7544/issn1000-1239.2020.20190793
    Abstract views: 866 | HTML views: 25 | PDF (1358KB) downloads: 616
    As one of the most effective ways to reduce the “semantic gap” between image data and its content, automatic image annotation (AIA) technology is of great significance in helping people understand image contents and retrieve target information from massive image data. This paper summarizes the general framework of AIA models by investigating the literature on image annotation from the last 20 years, and analyzes the general problems to be solved in AIA by combining the framework with various specific works. In this paper, the main methods used in AIA models are classified into 9 types: correlation models, hidden Markov models, topic models, matrix factorization models, neighbor-based models, SVM-based models, graph-based models, CCA (KCCA) models and deep learning models. For each type of image annotation model, this paper provides a detailed study and analysis in terms of “basic principle introduction - specific model differences - model summary”. In addition, this paper summarizes some commonly used datasets and evaluation metrics, and compares the performance of some important image annotation models, with analysis of the advantages and disadvantages of the various types of AIA models. Finally, some open problems and research directions in the field of image annotation are proposed and suggested.
    Fairness Research on Deep Learning
    Chen Jinyin, Chen Yipeng, Chen Yiming, Zheng Haibin, Ji Shouling, Shi Jie, Cheng Yao
    Journal of Computer Research and Development    2021, 58 (2): 264-280.   DOI: 10.7544/issn1000-1239.2021.20200758
    Abstract views: 842 | HTML views: 20 | PDF (1752KB) downloads: 592
    Deep learning is an important field of machine learning research, and it is widely used in industry for its powerful feature extraction capabilities and advanced performance in many applications. However, due to bias in training data labeling and model design, research shows that deep learning may aggravate human bias and discrimination in some applications, which results in unfairness during decision-making and thereby causes negative impacts on both individuals and society. To improve the reliability of deep learning and promote its development in the field of fairness, we review the sources of bias in deep learning, debiasing methods for different types of biases, fairness metrics for measuring the effect of debiasing, and currently popular debiasing platforms, based on existing research work. Finally, we explore the open issues in the existing fairness research field and future development trends.
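    One commonly used fairness metric of the kind surveyed here, the demographic parity difference, can be computed as in the sketch below; the predictions and group labels are toy data.
    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute difference in positive-prediction rates between two groups;
        a value of 0 means the classifier satisfies demographic parity."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Toy example: binary decisions for 8 individuals from two demographic groups.
    y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_difference(y_pred, group))  # 0.5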
    A Sequence-to-Sequence Spatial-Temporal Attention Learning Model for Urban Traffic Flow Prediction
    Du Shengdong, Li Tianrui, Yang Yan, Wang Hao, Xie Peng, Horng Shi-Jinn
    Journal of Computer Research and Development    2020, 57 (8): 1715-1728.   DOI: 10.7544/issn1000-1239.2020.20200169
    Abstract views: 684 | HTML views: 21 | PDF (5462KB) downloads: 591
    Urban traffic flow prediction is a key technology for studying the behavior of traffic-related big data and predicting future traffic flow, and it is crucial for guiding early warning of traffic congestion in intelligent transportation systems. However, effective traffic flow prediction is very challenging because it is affected by many complex factors, e.g. the spatial-temporal dependency and temporal dynamics of traffic networks. In the literature, some research works applied convolutional neural networks (CNN) or recurrent neural networks (RNN) for traffic flow prediction. However, it is difficult for these models to capture the spatial-temporal correlation features of traffic-flow-related temporal data. In this paper, we propose a novel sequence-to-sequence spatial-temporal attention framework to deal with the urban traffic flow forecasting task. It is an end-to-end deep learning model based on convolutional LSTM layers and LSTM layers with an attention mechanism, which adaptively learns the spatial-temporal dependency and non-linear correlation features of urban traffic flow related multivariate sequence data. Extensive experimental results on three real-world traffic flow datasets show that our model has the best forecasting performance compared with state-of-the-art methods.
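    The attention over encoder time steps used in such sequence-to-sequence models can be sketched in NumPy as additive (Bahdanau-style) attention; the scoring form, dimensions and random parameters are illustrative assumptions rather than the paper's exact formulation.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def temporal_attention(hidden_states, query, W_h, W_q, v):
        """Score each encoder hidden state against the decoder query, normalize
        the scores over time steps, and return the weighted context vector."""
        scores = np.array([v @ np.tanh(W_h @ h + W_q @ query) for h in hidden_states])
        weights = softmax(scores)                      # one weight per time step
        context = (weights[:, None] * hidden_states).sum(axis=0)
        return context, weights

    # Toy dimensions: 6 time steps, hidden size 8, attention size 4.
    rng = np.random.default_rng(0)
    H, q = rng.normal(size=(6, 8)), rng.normal(size=8)
    W_h, W_q, v = rng.normal(size=(4, 8)), rng.normal(size=(4, 8)), rng.normal(size=4)
    print(temporal_attention(H, q, W_h, W_q, v)[1])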
    CATS: Cost Aware Task Scheduling in Multi-Tier Computing Networks
    Liu Zening, Li Kai, Wu Liantao, Wang Zhi, Yang Yang
    Journal of Computer Research and Development    2020, 57 (9): 1810-1822.   DOI: 10.7544/issn1000-1239.2020.20200198
    Abstract views: 709 | HTML views: 7 | PDF (2103KB) downloads: 590
    With more data and more powerful computing power and algorithms, IoT (Internet of things) applications are becoming increasingly intelligent, shifting from simple data sensing, collection, and representation tasks towards complex information extraction and analysis. This continuing trend requires multi-tier computing resources and networks. Multi-tier computing networks involve collaboration between cloud computing, fog computing, edge computing, and sea computing technologies, which have been developed for regional, local, and device levels, respectively. However, due to the different features of computing technologies and the diverse requirements of tasks, how to effectively schedule tasks is a key challenge in multi-tier computing networks. Besides, how to motivate multi-tier computing resources is also a key problem, which is the premise of the formation of multi-tier computing networks. To address these challenges, in this paper we propose a multi-tier computing network and a computation offloading system with hybrid cloud and fog, define a weighted cost function consisting of delay, energy, and payment, and formulate a cost aware task scheduling (CATS) problem. Furthermore, we propose a computation-load-based payment model to motivate cloud and fog, and include the payment-related cost in the overall cost. Specifically, based on the different features and requirements of cloud and fog, we propose a static payment model for cloud and a dynamic payment model for fog, which together constitute the hybrid payment model. To solve the CATS problem, we propose a potential game based analytic framework and develop a distributed task scheduling algorithm called the CATS algorithm. Numerical simulation results show that the CATS algorithm offers near-optimal performance in system average cost and achieves a larger number of beneficial UEs (user equipments) compared with the centralized optimal method. Besides, the results show that the dynamic payment model may help fog obtain more income than the static payment model.
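    The weighted cost that CATS trades off can be written generically as follows; the symbols and the normalization of the weights are illustrative assumptions, and the paper's exact formulation may differ:
        C_i = w_{\mathrm{delay}} T_i + w_{\mathrm{energy}} E_i + w_{\mathrm{pay}} P_i, \qquad w_{\mathrm{delay}} + w_{\mathrm{energy}} + w_{\mathrm{pay}} = 1
    Here T_i, E_i and P_i denote the delay, energy consumption and payment incurred by offloading task i, and the weights express a user's relative preference among the three.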