Journal of Computer Research and Development (计算机研究与发展)
       ISSN 1000-1239   CN 11-1777/TP  
2018 Vol. 55, No. 3
Published: 2018-03-01

   
 
449 Integrated Trust Based Resource Cooperation in Edge Computing
Deng Xiaoheng, Guan Peiyuan, Wan Zhiwen, Liu Enlu, Luo Jie, Zhao Zhihui, Liu Yajun, Zhang Honggang
DOI: 10.7544/issn1000-1239.2018.20170800
Edge computing, as a new computing paradigm, is designed to share the resources of edge devices, such as CPU computing ability, bandwidth and storage capacity, to meet the requirements of real-time response, privacy and security, and computing autonomy. With the development of the Internet of things (IoT) and mobile Internet technology, edge computing has great potential to be widely used. This paper investigates the basic features, concepts and definitions, the state of the art, and the challenges and trends of edge computing. Given the key challenge of guaranteeing users’ quality of experience (QoE), privacy and security in edge computing, we focus on user requirements and optimize the edge computing system around users’ quality of experience. We integrate three trust properties, namely identity trust, behavior trust and ability trust, to evaluate resources and users so as to ensure successful resource sharing and collaborative optimization in edge computing. This paper also investigates various computing modes such as cloud computing, P2P computing, client/server (C/S) computing and grid computing, and constructs a multi-layer, self-adaptive, uniform computing model to dynamically match different application scenarios. This model makes four contributions: 1) it reveals the mechanism of parameter mapping between quality of service (QoS) and quality of experience; 2) it constructs identity trust and behavior trust evaluation mechanisms for resources and users; 3) it forms an integrated trust evaluation architecture and model; 4) it designs a resource scheduling algorithm for stream processing scenarios that considers computing ability, storage capacity and the dynamic channel capacity that depends on mobility, so as to improve users’ quality of experience. Through this model and mechanism, resources at the three levels of end point, edge network and cloud center are expected to be shared in a trusted manner and used optimally, and users’ QoE needs are well satisfied. Finally, simulation results show the validity of the model.
2018 Vol. 55 (3): 449-477
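The integrated trust evaluation described in the paper above combines identity, behavior and ability trust into one score used for resource selection. The abstract does not give the aggregation formula, so the following Python sketch assumes a simple weighted combination; the weights, thresholds and attribute names are illustrative, not the authors' values.

```python
# Minimal sketch of integrated trust scoring for edge resources.
# Weights, thresholds and attribute names are illustrative assumptions,
# not the aggregation actually used in the paper.

def integrated_trust(identity, behavior, ability, weights=(0.3, 0.4, 0.3)):
    """Combine the three trust dimensions (each in [0, 1]) into one score."""
    w_id, w_be, w_ab = weights
    return w_id * identity + w_be * behavior + w_ab * ability

def select_resources(candidates, threshold=0.6):
    """Keep candidates whose integrated trust exceeds a threshold, best first."""
    scored = [
        (integrated_trust(c["identity"], c["behavior"], c["ability"]), c["name"])
        for c in candidates
    ]
    return sorted([s for s in scored if s[0] >= threshold], reverse=True)

candidates = [
    {"name": "edge-node-A", "identity": 0.9, "behavior": 0.7, "ability": 0.8},
    {"name": "edge-node-B", "identity": 0.6, "behavior": 0.4, "ability": 0.9},
]
print(select_resources(candidates))
```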
478 MEC Coordinated Future 5G Mobile Wireless Networks
Qi Yanli, Zhou Yiqing, Liu Ling, Tian Lin, Shi Jinglin
DOI: 10.7544/issn1000-1239.2018.20170801
Future 5G wireless networks are confronted with various challenges such as exponentially increasing mobile traffic and new services requiring high backhaul bandwidth and low latency. Integrating mobile edge computing (MEC) into 5G network architectures may be a promising solution. First of all, this paper introduces the functional framework of MEC systems. Then the standardization progress of MEC in 5G is presented, and the functionalities of the 5G core network that support MEC are described in detail. Given MEC deployment strategies and the mobile network architectures of future 5G, a MEC coordinated 5G network architecture is proposed, which demonstrates that 5G will be a network featured by the coordination of communications and multi-level computing. The proposed network architecture can support various communication modes adaptively and enable efficient resource sharing with virtualization technologies. Some research has been carried out on MEC coordinated 5G, such as basic theorems related to 5G network capacity concerning both communication and computing resources, and key technologies including the joint optimization of communication and computing resources, multicast based on computing and caching, and bandwidth-saving transmission. It can be seen that much more effort needs to be put into MEC coordinated 5G before the network can be fully understood.
2018 Vol. 55 (3): 478-486
487 Standardization Progress and Case Analysis of Edge Computing
Lü Huazhang, Chen Dan, Fan Bin, Wang Youxiang, Wu Yunxiao
DOI: 10.7544/issn1000-1239.2018.20170778
Edge computing is a new network architecture and open platform that integrates network, computing, storage and application core capabilities at the edge of the network. Edge computing changes the traditional centralized cloud computing model by moving computing and storage capabilities to the network edge. Because data no longer needs to be hauled back to the core, edge computing can greatly reduce the congestion and burden of the core and transmission networks, lower delay and provide high bandwidth, while quickly responding to users’ requests and improving service quality. Edge computing has become an important enabling technology for future 5G and has been written into the 3GPP standard, and more and more operators, equipment vendors and chip manufacturers are joining to build the edge computing ecosystem. How to build a unified, standardized edge computing platform is therefore very important for future ecosystem construction. This paper focuses on the current standardization progress of edge computing, from the edge computing architecture first proposed by ETSI, to its listing in 3GPP as a key technology for future 5G development, to the approval of edge computing work items in CCSA, with detailed analysis and explanation of the standard content in each part. Finally, this paper introduces China Unicom’s edge computing research achievements in recent years, including important edge computing experimental projects, China Unicom’s pilot scheme for future edge computing, and the exploration of network deployment plans for edge computing. We look forward to discussing edge computing commercial cooperation modes with all sectors so as to jointly build the network edge ecosystem and comprehensively accelerate the vigorous development of 5G services.
2018 Vol. 55 (3): 487-511
512 Application Driven Network Latency Measurement Analysis and Optimization Techniques in Edge Computing Environment: A Survey
Fu Yongquan, Li Dongsheng
DOI: 10.7544/issn1000-1239.2018.20170793
The technical advances of the Internet, mobile computing and the Internet of things (IoT) have been pushing the deep integration of humans, machines and things, which has fostered many end-user oriented applications such as network search, online social networks, online business, video surveillance and intelligent assistant tools, typically referred to as online data-intensive applications. These applications are large in scale and sensitive to service quality, requiring stringent latency performance. However, end-user requests traverse heterogeneous environments including the edge network, the wide-area network and the data center, which naturally incurs a long-tail latency issue that significantly degrades users’ experience quality. This paper surveys the architectural characteristics of edge-computing applications, analyzes causes of the long-tail latency issue, categorizes key theories and methods of network latency measurement, summarizes long-tail latency optimization techniques, and finally proposes the idea of constructing an online optimization runtime environment and discusses some open challenges.
2018 Vol. 55 (3): 512-523
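The long-tail latency issue discussed in the survey above is usually quantified with high percentiles of end-to-end response time rather than the mean. The sketch below is not from the paper; it only shows, on synthetic samples, how a p99 tail latency can be estimated and why it diverges from the average.

```python
# Illustrative sketch: quantifying long-tail latency with high percentiles.
# The sample data are synthetic; the survey itself does not prescribe this code.
import random

def percentile(samples, p):
    """Return the p-th percentile (0-100) using nearest-rank on sorted data."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, int(round(p / 100.0 * len(ordered))) - 1))
    return ordered[rank]

# Synthetic request latencies (ms): mostly fast, 2% slow stragglers.
latencies = [random.gauss(20, 3) for _ in range(980)] + \
            [random.uniform(200, 500) for _ in range(20)]

print("mean  %.1f ms" % (sum(latencies) / len(latencies)))
print("p50   %.1f ms" % percentile(latencies, 50))
print("p99   %.1f ms" % percentile(latencies, 99))  # dominated by the tail
```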
524 Edge Computing Application: Real-Time Anomaly Detection Algorithm for Sensing Data
Zhang Qi, Hu Yupeng, Ji Cun, Zhan Peng, Li Xueqing
DOI: 10.7544/issn1000-1239.2018.20170804
With the rapid development of the Internet of things (IoT), we have gradually entered the IoE (Internet of everything) era. To address the low quality of sensor data gathered in real time in the IoT, this paper proposes a novel real-time anomaly detection algorithm based on edge computing for streaming sensor data. The algorithm first expresses the corresponding sensor data as time series and establishes a distributed sensing-data anomaly detection model based on edge computing. It then utilizes the continuity of single-source time series and the correlation between multi-source time series to detect anomalous data in the streaming sensor data, producing an anomaly detection result set for each. Finally, the two result sets are fused to obtain a more accurate detection result; in other words, the algorithm achieves a higher detection rate than traditional methods. Extensive experiments on a real-world household heating dataset from the Jinan municipal steam heating system, which collects monitoring data from 3084 apartments in 394 buildings, demonstrate the advantages of our algorithm.
2018 Vol. 55 (3): 524-536
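The algorithm above combines a single-source continuity check with a multi-source correlation check and then fuses the two result sets. The sketch below only illustrates that two-stage idea; the window sizes, thresholds, toy data and the intersection-based fusion rule are assumptions, not the paper's parameters.

```python
# Sketch of the two-stage idea: continuity-based detection on a single series,
# correlation-based detection against a related series, then fusion of results.
# Thresholds, data and the fusion rule are illustrative assumptions.

def continuity_anomalies(series, max_jump=5.0):
    """Flag points that jump too far from the previous reading."""
    return {i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > max_jump}

def correlation_anomalies(series, reference, max_dev=4.0):
    """Flag points that deviate too far from a correlated reference series."""
    return {i for i in range(min(len(series), len(reference)))
            if abs(series[i] - reference[i]) > max_dev}

def fuse(set_a, set_b):
    """Simple fusion: keep points flagged by both detectors (high confidence)."""
    return set_a & set_b

sensor = [20.1, 20.3, 35.0, 20.2, 20.4, 20.5, 30.0]
neighbor = [20.0, 20.2, 20.4, 20.3, 20.5, 20.4, 20.6]
single = continuity_anomalies(sensor)
multi = correlation_anomalies(sensor, neighbor)
print("fused anomaly indices:", fuse(single, multi))
```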
537 Joint Task Offloading and Base Station Association in Mobile Edge Computing
Yu Bowen, Pu Lingjun, Xie Yuting, Xu Jingdong, Zhang Jianzhong
DOI: 10.7544/issn1000-1239.2018.20170714
In order to narrow the gap between the requirements of IoT applications and the restricted resources of IoT devices and to achieve device energy efficiency, in this paper we design COMED, a novel mobile edge computing framework for ultra-dense mobile networks. In this context, we formulate an online optimization problem that jointly takes task offloading, base station (BS) sleeping and device-BS association into account, aiming to minimize the total energy consumption of both devices and BSs while satisfying applications’ QoS. To tackle this problem, we devise an online Lyapunov-based algorithm, JOSA, which exploits the system information of the current time slot only. As the core component of this algorithm, we resort to the loose-duality framework and propose an optimal joint task offloading, BS sleeping and device-BS association policy for each time slot. Extensive simulation results corroborate that the COMED framework performs well: 1) it saves more than 30% energy compared with local computing, and on average 10%-50% energy compared with the state-of-the-art algorithm DualControl (energy efficiency); 2) the algorithm running time is approximately linear in the number of devices (scalability).
2018 Vol. 55 (3): 537-550
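JOSA makes a joint offloading, BS-sleeping and association decision in each time slot using only current-slot information. The Lyapunov machinery and the loose-duality step cannot be reconstructed from the abstract, so the sketch below only illustrates the per-slot energy comparison between local execution and offloading to an awake BS; all constants, names and the energy model are invented for illustration.

```python
# Per-slot decision sketch: offload a task to the most energy-efficient awake BS,
# or compute locally if that is cheaper. This mirrors only the flavor of the
# per-slot optimization in JOSA; all parameters are illustrative assumptions.

def local_energy(cycles, energy_per_cycle=1e-9):
    return cycles * energy_per_cycle

def offload_energy(bits, tx_power, rate, bs_busy_power=2.0, exec_time=0.01):
    """Device transmission energy plus the BS's incremental processing energy."""
    return tx_power * (bits / rate) + bs_busy_power * exec_time

def decide(task, base_stations):
    """Return ('local', None, energy) or ('offload', bs_name, energy) for one slot."""
    best = ("local", None, local_energy(task["cycles"]))
    for bs in base_stations:
        if not bs["awake"]:
            continue  # sleeping BSs are skipped in this slot
        e = offload_energy(task["bits"], task["tx_power"], bs["rate"])
        if e < best[2]:
            best = ("offload", bs["name"], e)
    return best

task = {"cycles": 2e8, "bits": 1e6, "tx_power": 0.5}
bss = [{"name": "BS-1", "rate": 5e6, "awake": True},
       {"name": "BS-2", "rate": 2e7, "awake": False}]
print(decide(task, bss))
```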
551 Convolutional Neural Network Construction Method for Embedded FPGAs Oriented Edge Computing
Lu Ye, Chen Yao, Li Tao, Cai Ruichu, Gong Xiaoli
DOI: 10.7544/issn1000-1239.2018.20170715
At present, applications and services with high computational demands are migrating gradually from centralized cloud computing centers to embedded environments at the network edge. FPGAs are widely used in embedded systems for edge computing because of their flexibility and high efficiency. Conventional FPGA-based convolutional neural network construction methods have shortcomings, such as a long design cycle and a small optimization space, which lead to ineffective exploration of the design space of the targeted hardware accelerator, especially in network edge embedded environments. To overcome these issues, a general high-level synthesis (HLS) based method for constructing convolutional neural networks on embedded FPGAs for edge computing is proposed. A highly reusable accelerator function is designed to construct the optimized convolutional neural network with lower hardware resource consumption. A scalable design methodology, memory optimization and data flow enhancement are implemented on the accelerator core with the HLS design strategy. The convolutional neural network is built on embedded FPGA platforms. The results show advantages in performance and power compared with a Xeon E5-1620 CPU and a K80 GPU, making the approach suitable for edge computing environments.
2018 Vol. 55 (3): 551-562
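The HLS-based construction method above trades hardware resource consumption against throughput for the convolution accelerator. The paper's actual design flow is not reproducible from the abstract; the sketch below only illustrates the kind of back-of-the-envelope design-space exploration such a flow relies on, with an invented cost model (one MAC per DSP per cycle) and invented tiling parameters.

```python
# Rough design-space exploration sketch for a tiled convolution accelerator.
# The cost model and parameter ranges are illustrative assumptions,
# not the paper's model.

def estimate(tile_out_ch, tile_in_ch, fmap=32, kernel=3, out_ch=64, in_ch=64,
             clock_mhz=100):
    macs = fmap * fmap * kernel * kernel * out_ch * in_ch    # total MACs per layer
    parallel_macs = tile_out_ch * tile_in_ch                 # MACs per cycle
    cycles = macs / parallel_macs
    latency_ms = cycles / (clock_mhz * 1e6) * 1e3
    dsp = parallel_macs                                      # assume 1 DSP per parallel MAC
    return {"tile": (tile_out_ch, tile_in_ch), "dsp": dsp,
            "latency_ms": round(latency_ms, 2)}

# Sweep a few tiling choices and keep designs fitting a small FPGA's DSP budget.
designs = [estimate(to, ti) for to in (2, 4, 8, 16) for ti in (2, 4, 8, 16)]
feasible = [d for d in designs if d["dsp"] <= 128]
print(min(feasible, key=lambda d: d["latency_ms"]))
```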
563 Power Optimization Based on Dynamic Content Refresh in Mobile Edge Computing
Guo Yanchao, Gao Ling, Wang Hai, Zheng Jie, Ren Jie
DOI: 10.7544/issn1000-1239.2018.20170716
Nowadays, with the rapid development of the mobile Internet and related technologies, social applications have become one of the mainstream application types. At the same time, the functions of mobile applications are becoming richer and richer, and their energy consumption and information processing demands are growing accordingly. To address the high energy consumption and computational load caused by mobile social platforms that ignore network status and frequently refresh content (text, pictures, videos, etc.), an energy consumption optimization model based on a Markov decision process (MDP) in edge computing is proposed. The model considers the network status in different environments and performs data processing through the local edge computing layer (simulating the local edge computing mode and completing data processing) according to the current battery level of the phone and the user refresh rate. It selects the optimal strategy from the decision tables generated by the MDP, dynamically choosing the best network access and the best picture format to download on refresh. The model not only reduces refresh time, but also reduces the power consumption of the mobile platform. The experimental results show that, compared with a picture refresh mode that uses a single network, the proposed energy consumption optimization model reduces energy consumption by about 12.1% without reducing the number of user refresh cycles.
2018 Vol. 55 (3): 563-571
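The model above picks a network and a picture format per refresh from decision tables generated by an MDP. The abstract does not give the states, actions or rewards, so the sketch below invents them (battery level and network type as state, refresh choice as action) just to show how such a decision table can be computed by value iteration.

```python
# Value-iteration sketch for a refresh-policy MDP. The state space, actions,
# costs and transition probabilities are illustrative assumptions; the paper's
# actual MDP is not given in the abstract.

states = [(batt, net) for batt in ("low", "high") for net in ("wifi", "4g")]
actions = ["refresh_full", "refresh_compressed", "defer"]

def cost(state, action):
    """Net cost = energy spent minus assumed utility of showing fresh content."""
    batt, net = state
    energy = {"refresh_full": 3.0, "refresh_compressed": 1.5, "defer": 0.5}[action]
    utility = {"refresh_full": 2.5, "refresh_compressed": 1.8, "defer": 0.0}[action]
    if net == "4g" and action != "defer":
        energy *= 2.0          # cellular refresh assumed costlier
    if batt == "low" and action == "refresh_full":
        energy *= 1.5          # penalize heavy refreshes on low battery
    return energy - utility

def transition(state, action):
    """Assume the next state equals the current one with prob. 0.8, else uniform."""
    return [(0.8 if s == state else 0.2 / (len(states) - 1), s) for s in states]

def value_iteration(gamma=0.9, sweeps=100):
    v = {s: 0.0 for s in states}
    for _ in range(sweeps):
        v = {s: min(cost(s, a) + gamma * sum(p * v[s2] for p, s2 in transition(s, a))
                    for a in actions) for s in states}
    # Decision table: the cheapest action per state given converged values.
    return {s: min(actions, key=lambda a: cost(s, a) +
                   gamma * sum(p * v[s2] for p, s2 in transition(s, a)))
            for s in states}

print(value_iteration())
```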
572 Web Enabled Things Computing System
Peng Xiaohui, Zhang Xingzhou, Wang Yifan, Chao Lu
DOI: 10.7544/issn1000-1239.2018.20170867
The rising edge computing paradigm shifts some computing tasks from the cloud to devices, which reduces the computing load of the cloud and the traffic load of the Internet. A things computing system consists of devices that are oriented to the physical world and have physical functionalities. Designing a unified system architecture for things computing systems is a great challenge because of system diversity. The architecture of the modern Web is an efficient solution to the diversity issue; however, due to resource constraints, extending the Web architecture to things computing systems is also very difficult. In this paper, we first introduce the concepts of edge computing systems and things computing systems, and summarize the challenges brought by the diversity and resource-constrained features of things computing systems. Then, a detailed study of the state-of-the-art technologies for extending the Web to things computing systems is presented, including the REST principle, scripting languages and debugging techniques. Most of the related work tried to modify the “Uniform Interface” principle to adapt it to edge systems. We conclude from the examined literature that things computing systems represent a massive market, but there is still no unified system architecture that supports both the Web and intelligence. Finally, we present some future research directions for things computing systems, including a unified system architecture, efficient Web technologies, support for intelligence, and debugging techniques.
2018 Vol. 55 (3): 572-584
585 Process Model Repair Based on Firing Sequences
Wang Lu, Du Yuyue, Qi Hongda
DOI: 10.7544/issn1000-1239.2018.20160838
As business processes are increasingly supported by information systems, both the availability of event logs generated by these systems and the demand for appropriate process models are growing. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Conformance checking techniques can be used to detect and diagnose the differences between observed and modeled behavior, but they cannot repair the actual model; the information from conformance checking can, however, be used for model repair. By means of the firing sequences of event logs, process models can be repaired in three ways: removing behavior, adding behavior and changing behavior. When a process model needs repairing, the structure containing the deleted activity, the relationship between additional activities and their adjoining activities, and the nonconforming sub-process in the model should be identified. With the proposed techniques, the repaired model can replay (most of) the event logs and remains as similar to the original model as possible. The methods in this paper are simulated manually. A real-world hospitalization process model from a hospital and the corresponding event logs are employed to evaluate the proposed approaches, and the correctness and effectiveness of the proposed methods are illustrated through experiments.
2018 Vol. 55 (3): 585-601
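Conformance checking replays the firing sequences from the log on the model to locate where observed and modeled behavior disagree; repair operations (removing, adding or changing behavior) are then applied at those locations. The sketch below replays one sequence on a tiny place/transition net and reports nonconforming events; the net, log and marking are toy examples, and the repair operations themselves are not shown.

```python
# Replay sketch: fire a logged activity sequence on a small Petri net and report
# the events the model cannot reproduce. Toy model and log for illustration only.

net = {  # transition -> (input places, output places)
    "register": ({"start"}, {"p1"}),
    "check":    ({"p1"},    {"p2"}),
    "decide":   ({"p2"},    {"end"}),
}

def replay(trace, initial_marking):
    marking = dict(initial_marking)
    deviations = []
    for event in trace:
        inputs, outputs = net.get(event, (None, None))
        if inputs is None or any(marking.get(p, 0) < 1 for p in inputs):
            deviations.append(event)   # not enabled (or unknown): nonconforming
            continue
        for p in inputs:
            marking[p] -= 1
        for p in outputs:
            marking[p] = marking.get(p, 0) + 1
    return deviations

log_trace = ["register", "archive", "check", "decide"]  # "archive" is not in the model
print("nonconforming events:", replay(log_trace, {"start": 1}))
```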
602 Task Completion Prediction Method in Cloud Scientific Workflow
Wu Xiuguo, Su Wei
DOI: 10.7544/issn1000-1239.2018.20160899
Cloud scientific workflow provides a collaborative research platform for scholars in different regions to carry out tasks such as scientific computing, using the various resources and services provided by the cloud computing environment. To address the uncertainty of task completion before a cloud scientific workflow instance starts, a novel task completion prediction model based on data availability/unavailability is proposed in this paper, together with the positive/negative data availability relationships among data items. In this way, the possibility of task completion can be estimated in advance using the data availability/unavailability propagation rules, which greatly improves awareness of task completion in cloud scientific workflows. Furthermore, the proposed task completion prediction model has advantages such as strong description and judgement ability. Experiments show that the proposed prediction method reflects actual task accomplishment and avoids, as far as possible, the influence of early task failures on subsequent tasks. In other words, it improves the task completion rate while reducing the resource rental expenses of the cloud scientific workflow system.
2018 Vol. 55 (3): 602-612
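The prediction model above propagates data availability/unavailability through the workflow before execution so that tasks whose inputs cannot become available are flagged early. The abstract does not give the propagation rules, so the sketch below assumes a simple one: a task can complete only if all its input data items are available, and its outputs become available only then. The workflow and data names are invented.

```python
# Fixed-point propagation sketch over a small workflow DAG. The rule assumed
# here (all inputs available => task completable => outputs available) is an
# illustration, not the paper's exact propagation rules.

tasks = {  # task -> (input data items, output data items)
    "t1": ({"d_in"},      {"d1"}),
    "t2": ({"d1"},        {"d2"}),
    "t3": ({"d_missing"}, {"d3"}),
    "t4": ({"d2", "d3"},  {"d4"}),
}

def predict_completion(initially_available):
    available = set(initially_available)
    completable = set()
    changed = True
    while changed:                      # iterate until no new task becomes completable
        changed = False
        for task, (ins, outs) in tasks.items():
            if task not in completable and ins <= available:
                completable.add(task)
                available |= outs
                changed = True
    return {t: (t in completable) for t in tasks}

print(predict_completion({"d_in"}))    # t3 and t4 are predicted to fail
```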
613 Interactive Service Recommendation Based on Composition History
Pan Weifeng, Jiang Bo, Li Bing, Hu Bo, Song Beibei
DOI: 10.7544/issn1000-1239.2018.20160521
With the rapidly increasing number and variety of services, discovering composable services that can meet users’ requirements is one of the key issues that needs to be resolved. Service recommendation has become one of the effective ways to deal with service resource overload. However, existing service recommendation techniques usually rely on service data that are hard to collect, and they neglect the usability and composability of the services to be recommended. To avoid these limitations, this paper utilizes service composition histories, draws on the theory and methodology of complex network research, and proposes an interactive service recommendation approach. It uses an affiliation network to abstract service composition histories (i.e., composite services, atomic services, and the affiliation relationships between them), obtains the service composition relationships by one-mode projection, and introduces backbone network extraction to filter out invalid composition relationships; it uses degree and degree distribution to mine service usage patterns; it takes the failure of services into account and finally proposes several service recommendation algorithms for three usage scenarios. Real service data crawled from ProgrammableWeb are used to demonstrate the correctness and feasibility of the proposed approach.
2018 Vol. 55 (3): 613-628
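The approach above abstracts the composition history as an affiliation (bipartite) network between composite services and atomic services, and obtains service-to-service composition relationships by one-mode projection. A minimal sketch of that projection on toy data follows; the backbone filtering, degree-based pattern mining and recommendation algorithms themselves are omitted, and the service names are invented.

```python
# One-mode projection sketch: from a bipartite composition history
# (composite service -> atomic services it uses) to a weighted network of
# atomic services that were composed together. Toy data for illustration.
from itertools import combinations
from collections import Counter

history = {
    "mashup_weather_map": {"weather_api", "map_api"},
    "mashup_travel":      {"weather_api", "map_api", "hotel_api"},
    "mashup_booking":     {"hotel_api", "payment_api"},
}

def one_mode_projection(affiliation):
    """Edge weight = number of composite services using both atomic services."""
    weights = Counter()
    for atoms in affiliation.values():
        for a, b in combinations(sorted(atoms), 2):
            weights[(a, b)] += 1
    return weights

for (a, b), w in one_mode_projection(history).most_common():
    print(a, "<->", b, "weight", w)
# The pair composed together most often is a natural co-composition candidate
# for recommendation.
```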
629 A Region Adaptive Image Interpolation Algorithm Based on the NSCT
Fan Qinglan, Zhang Yunfeng, Bao Fangxun, Shen Xiaohong, Yao Xunxiang
DOI: 10.7544/issn1000-1239.2018.20160942
Image interpolation plays a vital role in digital image processing. In order to preserve image texture detail and edge sharpness, a new region adaptive image interpolation method based on the NSCT (nonsubsampled contourlet transform) is proposed, in which the image is divided into different regions that are interpolated by different methods. First, a new C^2 continuous rational function interpolation model is constructed, and its error estimates are given. Second, image edge contour information is captured by the NSCT, and the image is divided adaptively into edge and non-edge regions according to a preset threshold. Finally, edge-directed interpolation is used in the edge regions to obtain the high-resolution image, while the rational function interpolation algorithm is used in the non-edge regions. A target image with higher resolution than the input image is obtained by this adaptive interpolation. Compared with classical image interpolation algorithms, the proposed method is highly competitive not only in PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index) but also in visual effect. Experimental results show that the proposed algorithm has lower time complexity, preserves image details, eliminates edge aliasing, and produces high-quality interpolated images.
2018 Vol. 55 (3): 629-642
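The method above splits the image into edge and non-edge regions using NSCT coefficients against a preset threshold and interpolates each region with a different scheme. The NSCT and the rational interpolation model go beyond what the abstract specifies, so the skeleton below only shows the region-adaptive dispatch: a gradient magnitude stands in for the edge measure, and both interpolators are placeholders.

```python
# Region-adaptive dispatch skeleton. The edge measure is a simple gradient
# magnitude standing in for NSCT coefficients, and both interpolators are
# placeholders; only the "split by threshold, interpolate per region" structure
# reflects the method described above.
import numpy as np

def edge_map(img, threshold=30.0):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > threshold           # True where the pixel is edge-like

def interpolate_block(block, is_edge):
    # Placeholders: an edge-directed scheme would go in the edge branch and the
    # rational-function scheme in the other; here both just repeat pixels 2x.
    return np.kron(block, np.ones((2, 2)))

def adaptive_upscale(img, block=8):
    edges = edge_map(img)
    out = np.zeros((img.shape[0] * 2, img.shape[1] * 2))
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            patch = img[i:i + block, j:j + block]
            is_edge = edges[i:i + block, j:j + block].mean() > 0.2
            out[2 * i:2 * (i + patch.shape[0]), 2 * j:2 * (j + patch.shape[1])] = \
                interpolate_block(patch, is_edge)
    return out

img = np.tile(np.linspace(0, 255, 16), (16, 1))   # toy 16x16 gradient image
print(adaptive_upscale(img).shape)                # (32, 32)
```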
643 LBP and Multilayer DCT Based Anti-Spoofing Countermeasure in Face Liveness Detection
Tian Ye, Xiang Shijun
DOI: 10.7544/issn1000-1239.2018.20160417
As security has become the tightest bottleneck in the application of face recognition systems, making such systems robust against spoofing attacks is of great significance. In this paper, aimed at video-based facial spoofing attacks, an innovative face anti-spoofing algorithm based on local binary patterns (LBP) and multilayer discrete cosine transform (DCT) is proposed. First, we extract face images from a target video at a fixed time interval. Second, the low-level descriptors, i.e., the LBP features, are generated for each extracted face image. After that, we perform multilayer DCT on the low-level descriptors to obtain the high-level descriptors (LBP-MDCT features). More precisely, in each layer the DCT operation is applied along the ordinate axis of the obtained low-level descriptors, namely the time axis of the target video. In the last stage, the high-level descriptors are fed into a support vector machine (SVM) classifier to determine whether the target video is a spoofing attack or a valid access. The outstanding experimental results attained by the proposed approach on two widely used datasets (the Replay-Attack dataset and the CASIA-FASD dataset) demonstrate its performance superiority over existing approaches as well as its low complexity and high efficiency.
2018 Vol. 55 (3): 643-650
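The pipeline above extracts LBP features per sampled frame, applies DCT along the time axis across frames to obtain the high-level descriptor, and classifies with an SVM. A hedged sketch of that pipeline using scikit-image, SciPy and scikit-learn follows; the multilayer DCT structure, LBP parameters and the random toy training data are simplified assumptions, not the authors' exact feature construction.

```python
# Pipeline sketch: per-frame LBP histograms, DCT along the time axis, SVM.
# Parameters, the single DCT layer and the toy data are simplifying assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from scipy.fft import dct
from sklearn.svm import SVC

def video_descriptor(frames, P=8, R=1, n_keep=5):
    """frames: list of grayscale face images sampled at a fixed interval."""
    hists = []
    for f in frames:
        lbp = local_binary_pattern(f, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(hist)
    hists = np.asarray(hists)                  # shape: (n_frames, n_bins)
    coeffs = dct(hists, axis=0, norm="ortho")  # DCT along the time axis
    return coeffs[:n_keep].ravel()             # keep a few low-frequency coefficients

# Toy training data: random "videos" standing in for real and attack samples.
rng = np.random.default_rng(0)
videos = [[rng.integers(0, 256, (32, 32)).astype(np.uint8) for _ in range(8)]
          for _ in range(6)]
X = np.stack([video_descriptor(v) for v in videos])
y = np.array([0, 0, 0, 1, 1, 1])               # 0 = valid access, 1 = spoof attack
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:2]))
```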
651 Parallel Algorithms for RDF Type-Isomorphism on GPU
Feng Jiaying, Zhang Xiaowang, Feng Zhiyong
DOI: 10.7544/issn1000-1239.2018.20160845
Resource description framework (RDF), officially recommended by the World Wide Web Consortium (W3C), describes resources and the relationships among them on the Web. With the volume of RDF data rapidly increasing, a high-performance method is necessary to efficiently process SPARQL (simple protocol and RDF query language) queries over RDF data, a problem that can be reduced to the classical problem of subgraph isomorphism. As an important class of subgraph isomorphism, type-isomorphism enables high performance for many interesting queries over RDF data, such as those with star or linear structures. However, most existing approaches to type-isomorphism depend on the computational capabilities of the CPU. In recent years, graphics processing units (GPUs) have been widely adopted to accelerate graph data processing, offering better computational performance, superior scalability and more reasonable prices. Considering the limited capability of the CPU in handling large-scale RDF data, we propose an algorithm that solves the type-isomorphism problem on a parallel GPU architecture over RDF datasets. In this paper, we implement the algorithm and evaluate it on the Lehigh University Benchmark (LUBM) datasets through extensive experiments. The experimental results show that our algorithm significantly outperforms CPU-based algorithms.
2018 Vol. 55 (3): 651-661
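Type-isomorphism restricts matching so that query nodes are matched only to data nodes of the same type, which is what makes star and linear queries cheap to evaluate. The GPU kernel itself cannot be reconstructed from the abstract; the sequential sketch below only shows the type-based candidate filtering and edge check that such a kernel would parallelize, on a toy RDF graph and query.

```python
# CPU-side sketch of type-based candidate filtering for RDF matching: each query
# variable only considers data nodes of the required type. The paper's GPU
# algorithm parallelizes this kind of per-candidate work; toy data only.
from itertools import product

data_types = {            # data node -> type
    "alice": "Student", "bob": "Student", "csdept": "Department",
}
data_edges = {("alice", "memberOf", "csdept"), ("bob", "memberOf", "csdept")}

query_types = {"?x": "Student", "?y": "Department"}       # typed query variables
query_edges = [("?x", "memberOf", "?y")]

def candidates(var):
    return [n for n, t in data_types.items() if t == query_types[var]]

def type_isomorphic_matches():
    variables = list(query_types)
    results = []
    for assignment in product(*(candidates(v) for v in variables)):
        binding = dict(zip(variables, assignment))
        if all((binding[s], p, binding[o]) in data_edges for s, p, o in query_edges):
            results.append(binding)
    return results

print(type_isomorphic_matches())
```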
662 Bayesian Current Disaggregation: Sensing the Current Waveforms of Household Appliances Using One Sensor
Liu Jingjie, Nie Lei
DOI: 10.7544/issn1000-1239.2018.20150311
An important application of pervasive computing is to obtain the electricity usage information of each appliance in a household using one sensor. The key problem in this application is current disaggregation, i.e., estimating the currents of individual appliances from the total current waveform. Existing methods can be classified into two classes: steady-state estimation methods and linear disaggregation methods. Based on the steady-state load assumption, methods in the first class estimate the current of a running appliance using its steady-state current waveform; they avoid interference between appliances, but their results cannot reflect real-time changes in the total current. Methods in the second class reduce the dimensionality of the current waveforms of a specific appliance using model or data constraints and disaggregate the total current into low-dimensional linear spaces; their results reflect real-time changes in the total current, but similar appliances reduce the accuracy of the disaggregation results. From the perspective of Bayesian statistics, this paper relaxes the key assumptions of the above methods into a prior on position vectors and a prior on noise, and proposes a Bayesian current disaggregation method based on these two priors. Using electricity usage data generated by actual appliances, we conduct several simulation experiments to evaluate our method. The results show that the accuracy of the proposed method is higher than that of previous methods; it not only reflects real-time changes in the total current, but also reduces the effect of similar appliances on the disaggregation results.
2018 Vol. 55 (3): 662-672
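Current disaggregation estimates how much each appliance contributes to the total current waveform. The Bayesian formulation with the two priors is not reproducible from the abstract; the sketch below only shows the underlying linear view, expressing one cycle of the total current as a non-negative combination of known per-appliance signatures, which the priors in the paper refine. The signatures and activations are synthetic.

```python
# Sketch of the linear view of current disaggregation: one cycle of the total
# current as a non-negative combination of known appliance signatures. The
# Bayesian priors described above refine this basic decomposition; not shown here.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0, 2 * np.pi, 200)                 # one mains cycle, 200 samples
signatures = np.column_stack([
    np.sin(t),                                     # e.g. resistive appliance
    0.5 * np.sin(t - 0.6),                         # e.g. motor load (phase shifted)
    np.clip(np.sin(3 * t), 0, None),               # e.g. electronic load (distorted)
])

true_activation = np.array([1.0, 2.0, 0.0])        # appliance 3 is off
total = signatures @ true_activation + 0.02 * np.random.randn(len(t))

estimated, residual = nnls(signatures, total)      # per-appliance contributions
print(np.round(estimated, 2))                      # close to [1.0, 2.0, 0.0]
```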
Copyright © Editorial Board of Journal of Computer Research and Development
Supported by: Beijing Magtech