ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 April 2017, Volume 54 Issue 4
Key Techniques for Phased and Multi-Mode Hybrid Simulation of Multi-Energy Micro Grid
Zhang Shuqing, Tang Shaopu, Zhu Yaping, Yu Siqi, Sun Yubo
2017, 54(4):  683-694.  doi:10.7544/issn1000-1239.2017.20161011
The Energy Internet has become a powerful driving force of the energy production and consumption revolution all over China. Two core technologies, smart micro-grids and multi-energy complementarity, are widely integrated into the micro energy Internet. Energy sources, loads and storage of various energy forms are constructed and integrated as a whole, which implies complicated energy conversion and transmission mechanisms and complex operation characteristics. However, existing simulation tools for energy systems are typically aimed at a single, independent energy system, so they cannot be directly applied to the coupling and interaction mechanisms of different energy forms, or to the analysis of the operation and control characteristics of multi-energy systems. This paper summarizes the current development and achievements of simulation for micro-grids and other types of energy systems, which form the foundation of multi-energy system simulation. It then analyzes and summarizes the basic characteristics of multi-energy systems and the difficulties and main points of implementing their simulation. Furthermore, a phased and multi-mode hybrid simulation is designed and presented together with its key techniques. The hybrid simulation is expected to make full use of existing energy equipment-system simulation models and algorithms and to establish dynamic and transient simulation of multi-energy systems. Its application is illustrated by a typical example of a combined cooling, heating and power system.
The Energy Access Administration Research Based on a Linkage Routing Network
Ren Guang, Yang Gang, Hua Haochen, Yang Fang, Bai Cuifen, Qiu Zhongtao, Cao Junwei
2017, 54(4):  695-702.  doi:10.7544/issn1000-1239.2017.20161022
Aiming at the application requirements of energy routers in energy Internet demonstration projects, this paper proposes a type of chain routing network and an edge diffusion strategy, which together solve the energy management issue at the user level in demonstration projects. Firstly, after introducing a single energy router, a chain routing network consisting of several energy routers is designed, and the whole demonstration project is divided into regions accordingly. Then the edge diffusion strategy is given, in which each energy router acts as a decision center and achieves power balance by passing its power imbalance to an adjacent energy router, and ultimately to the external low-voltage grid. Finally, a communication network and an application scenario are illustrated, which preliminarily demonstrate the feasibility of the chain routing network in demonstration projects.
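The edge diffusion strategy described above can be sketched as follows. This is a toy model under our own assumptions, not the paper's algorithm: each router balances its own bus and passes any residual surplus or deficit down the chain, and whatever remains at the end is exchanged with the external low-voltage grid. The router values are illustrative.

```python
def edge_diffusion(imbalances):
    """imbalances: per-router net power (generation - load, kW) along the chain.

    Returns (per-router residual after diffusion, net power exchanged with
    the external grid: negative means power drawn from the grid).
    """
    carried = 0.0
    for i, p in enumerate(imbalances):
        carried += p          # absorb the imbalance passed from upstream
        imbalances[i] = 0.0   # this router's bus is balanced after passing it on
    return imbalances, carried

routers = [5.0, -12.0, 3.0]            # kW surplus/deficit per router
balanced, external = edge_diffusion(routers)
```

Here the 4 kW overall deficit ends up at the edge of the chain and is covered by the external grid, matching the idea of pushing imbalance outward rather than resolving it centrally.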
Energy Efficiency Optimization Based on Storage Scheduling and Multi-Source Power Supplying of Data Center in Energy Internet
Sun Chunlei, Wen Xiangming, Lu Zhaoming, Sheng Wanxing, Zeng Nan, Li Yang
2017, 54(4):  703-710.  doi:10.7544/issn1000-1239.2017.20161016
Recently, the problem of optimizing the energy efficiency of data centers has received widespread attention. In this paper, we investigate this problem from a new perspective under the background of the energy Internet, where subscribers, especially industrial subscribers, are equipped with storage and smart energy management devices. In addition, there are large numbers of clean-energy generation and electricity-sale companies, which means that industrial subscribers can purchase electricity from multiple suppliers to cut down their energy cost and improve their energy efficiency based on real-time price, pollution index, etc. It is assumed that data centers always attempt to choose cheaper and cleaner energy in each hour and to buy more electricity in valley hours, whose price is lower than in peak hours, to reduce the energy cost. Thus both the pollution index function and the real-time price are adopted to formulate the multi-source energy-purchasing cost, and both the operation cost and the potential cost are adopted to model the charging and discharging cost of storage devices, or storage cost for short. Based on this, the energy cost model with storage is formulated and compared with the one without storage. We then give the related algorithms to solve these problems and analyze their performance. Simulation results confirm that the proposed model can greatly reduce the daily cost of electricity and encourage the utilization of renewable energy resources by choosing optimal strategies for energy source selection and daily storage scheduling.
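The hourly multi-source purchasing idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's model: the pollution weight, supplier data and the linear blend of price and pollution index are our own assumptions.

```python
def purchase_plan(demand, suppliers, pollution_weight=0.5):
    """For each hour, pick the supplier minimizing price + weighted pollution.

    demand: list of hourly loads (kWh)
    suppliers: list of dicts with hourly 'price' and 'pollution' lists
    Returns (total purchasing cost, chosen supplier index per hour).
    """
    total_cost = 0.0
    choices = []
    for h, load in enumerate(demand):
        # Effective cost blends the real-time price with a pollution penalty,
        # so a slightly pricier but cleaner supplier can win the hour.
        best = min(range(len(suppliers)),
                   key=lambda i: suppliers[i]['price'][h]
                   + pollution_weight * suppliers[i]['pollution'][h])
        choices.append(best)
        total_cost += load * suppliers[best]['price'][h]
    return total_cost, choices

suppliers = [
    {'price': [0.30, 0.60], 'pollution': [0.8, 0.8]},   # coal-heavy supplier
    {'price': [0.35, 0.50], 'pollution': [0.1, 0.1]},   # cleaner supplier
]
cost, choices = purchase_plan([100, 100], suppliers)
```

With these numbers the cleaner supplier is chosen in both hours despite its higher valley-hour price, illustrating how the pollution term steers selection toward renewable energy.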
Performance Evaluation of Power Quantum Secure Communication System for Energy Internet
Chen Zhiyu, Gao Dequan, Wang Dong, Li Guochun, Wei Xiaojing
2017, 54(4):  711-719.  doi:10.7544/issn1000-1239.2017.20161024
With the rapid development of the energy Internet, more and more distributed energy systems are being connected, and the demand for network and information security of energy information is increasingly urgent. Theoretically, quantum secure communication can achieve unconditional security in information communication; however, its application in power grids is still at an exploratory stage. This paper focuses on the performance evaluation of power quantum secure communication in the global energy Internet field. Firstly, considering the diversity of grid business scenarios and transmission losses, a system architecture for power quantum secure communication is proposed. Then, by simulating the power communication transmission environment and actual business scenarios, the quantum channel and the data exchange channel of the power quantum secure communication system are evaluated in six aspects (e.g. distance loss, pendulum loss and connection loss). Finally, the feasibility and security of quantum secure communication technology in the power communication field are verified by simulation experiments, which helps assure the development of the energy Internet.
Multi-Agent Multi-Criticality Scheduling Based Self-Healing System of Power Grid
Wang Juanjuan, Wang Hongan
2017, 54(4):  720-730.  doi:10.7544/issn1000-1239.2017.20161026
With the rapid development of the smart grid, information technology and new energy technology, the concept of the energy Internet has gained great research attention recently, and flexible and secure coordination among generation, grid, load and energy storage has become a hot topic. From the smart grid to the energy Internet, the self-healing system of the power grid remains a safety-critical real-time system. When a fault happens, a great number of correlated and deadline-constrained operations need to be executed in time so that the system can recover. Based on the multi-agent architecture of the self-healing system of the power grid towards the energy Internet, a multi-agent real-time multi-criticality task model and scheduling approach are proposed in this paper. Fault chains and secure trees with deadline constraints are added into the self-healing process in order to protect the system from unpredicted sequences of fault-handling operations, resulting in fewer successive faults with security assurance. The experimental results show that the proposed method can effectively increase the success ratio of self-healing tasks by 17.74% on average and lower the successive fault occurrence ratio by 7.72% on average, which is promising for the self-healing system of the power grid.
Threat Propagation Based Security Situation Quantitative Assessment in Multi-Node Network
Tian Jianwei, Tian Zheng, Qi Wenhui, Hao Hanyong, Li Renfa, Li Xi, Qiao Hong, Xue Haiwei
2017, 54(4):  731-741.  doi:10.7544/issn1000-1239.2017.20161015
Traditional security situation assessment mainly focuses on small-scale network systems and neglects the risk correlation among network nodes. In view of the complex network structure of the energy Internet, a quantitative assessment of multi-node network security situation based on threat propagation is proposed. The method first gives the concept and definition of network nodes in the energy Internet and models the energy Internet network structure using graph theory. Secondly, a quantitative method based on threat propagation probability is proposed to calculate the node security situation, and a multi-node weighting method called LR-NodeRank is put forward to evaluate the fused network security situation. Finally, a security situation improvement based on the simplest threat graph is proposed to determine the network borders that need to be reinforced. Experimental results show that the proposed method can accurately assess the security situation of multi-node networks and effectively identify the border connections to reinforce.
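The propagation-then-fusion structure of such an assessment can be sketched as follows. This is only an illustration of the general scheme: the abstract does not specify LR-NodeRank's weighting, so the degree-style node weights and the single propagation round below are our own assumptions.

```python
def node_situation(base_threat, prop_prob):
    """One round of threat propagation over a directed graph.

    base_threat: dict node -> intrinsic threat value in [0, 1]
    prop_prob: dict (src, dst) -> probability that src's threat propagates to dst
    Returns dict node -> situation value (intrinsic plus propagated-in threat).
    """
    situation = dict(base_threat)
    for (src, dst), p in prop_prob.items():
        situation[dst] += p * base_threat[src]
    return situation

def fused_situation(situation, weights):
    """Weighted fusion of per-node situations into one network-level value."""
    total_w = sum(weights.values())
    return sum(situation[n] * w for n, w in weights.items()) / total_w

base = {'a': 0.5, 'b': 0.2, 'c': 0.0}
prob = {('a', 'b'): 0.4, ('b', 'c'): 0.5}
sit = node_situation(base, prob)          # b gains 0.2, c gains 0.1
net = fused_situation(sit, {'a': 1, 'b': 2, 'c': 1})
```

Note how node c, intrinsically safe, acquires a nonzero situation value purely through the correlation with its upstream neighbor, which is exactly the effect single-node assessments miss.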
Block Chain Based Data Security Sharing Network Architecture Research
Wang Jiye, Gao Lingchao, Dong Aiqiang, Guo Shaoyong, Chen Hui, Wei Xin
2017, 54(4):  742-749.  doi:10.7544/issn1000-1239.2017.20160991
In the process of internal and external data sharing in energy Internet enterprises, centralized deployment, non-unique identification, and theft or tampering affect the efficiency of data asset sharing. Based on the decentralized, peer-to-peer and tamper-resistant character of the block chain, we construct a block-chain-based data security sharing network architecture to support information exchange within and between enterprises, achieving a trusted network environment with distributed storage. Firstly, we propose a block chain based data security sharing network architecture, including decentralized data unified naming technology, authorized data distributed storage and a data distribution protocol. Secondly, an open data index naming structure (ODIN) is designed, including single-level basic ODIN and multi-level extended ODIN, and the ODIN operation mechanism is described. Thirdly, we design a decentralized DNS (domain name server) resolution module based on ODIN. Part of the system is then implemented and its performance analyzed.
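The single-level versus multi-level naming idea can be illustrated with a small resolver. The abstract does not give ODIN's concrete syntax, so the slash-separated form "root-id/sub-id/..." and the prefix-fallback rule below are purely hypothetical, chosen only to show how an extended identifier can resolve through its parent levels.

```python
class OdinResolver:
    """Toy decentralized-naming resolver mapping identifiers to storage locators."""

    def __init__(self):
        self.registry = {}                 # identifier -> storage locator

    def register(self, odin_id, locator):
        self.registry[odin_id] = locator

    def resolve(self, odin_id):
        """Resolve the longest registered prefix: a multi-level extended
        identifier falls back to its parent levels when the full id is
        not registered anywhere."""
        parts = odin_id.split('/')
        for i in range(len(parts), 0, -1):
            prefix = '/'.join(parts[:i])
            if prefix in self.registry:
                return self.registry[prefix]
        return None

r = OdinResolver()
r.register('ppk:1234', 'node-a')               # single-level basic identifier
r.register('ppk:1234/meter/07', 'node-b')      # multi-level extended identifier
loc = r.resolve('ppk:1234/meter/07/reading')   # resolves via its parent level
```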
The Data-Flow Block Based Spatial Instruction Scheduling Method
Liu Bingtao, Wang Da, Ye Xiaochun, Fan Dongrui, Zhang Zhimin, Tang Zhimin
2017, 54(4):  750-763.  doi:10.7544/issn1000-1239.2017.20160138
Clustered superscalar processors partition hardware resources to circumvent the energy and cycle-time penalties incurred by large, monolithic structures. Dynamic multi-core processors fuse the hardware resources of several physical cores to provide computation capability that adapts to applications. Energy-efficient computation is achieved in these architectures with a carefully orchestrated utilization of spatially distributed hardware resources. Problems such as instruction load imbalance and operand forwarding latency between partitions may cause performance penalties, so an effective spatial instruction scheduling method is needed to distribute the computation among the partitions of spatial architectures. We present the data-flow block (DFB) based spatial instruction scheduling method. DFBs are dynamically constructed, cached and reused schedule patterns for one or more sequentially executed basic blocks. The DFB scheduling algorithm models the data-flow constraints of the dynamic instruction stream and the scheduling space defined by hardware resources, then makes scheduling decisions according to relative criticality, a quantitative measure of the scheduling slack of instructions. We present the framework and algorithms related to DFB scheduling. By experimenting with various microarchitecture parameters closely related to the scheduling method, such as partition count, inter-partition latency and schedule window capacity, we show that ideal DFB scheduling performs better and more stably than round-robin and dependence-based scheduling. Finally, we show that the scheduling performance of a DFB cache implementation example is close to that of ideal DFB scheduling.
A Shared-Forwarding State Based Multiple-Tier Cache Coherency Protocol
Chen Jicheng, Li Yihan, Zhao Yaqian, Wang Endong, Shi Hongzhi, Tang Shibin
2017, 54(4):  764-774.  doi:10.7544/issn1000-1239.2017.20160141
In CC-NUMA systems, in order to reduce the overhead of cache coherency maintenance, large-scale systems usually employ a multi-tier cache coherency domain method, which effectively alleviates the contradiction between system scalability and coherency maintenance overhead. The traditional MESI, MESIF and MOESI protocols mainly target a single-tier coherency domain and do not take into account that queries dominate in large database applications; as a result, these protocols suffer from a high frequency of cross-domain operations and low execution efficiency when used in multi-tier cache coherency domains. To address these problems, this paper presents a multi-tier cache coherency protocol called MESI-SF based on a shared-forwarding state. The protocol introduces a shared-forwarding state called Share-F, so that readable remote data copies in the forwarding state can exist in multiple coherency domains at the same time. In this way, a data copy in the shared-forwarding state can directly respond to read requests within its own domain, effectively reducing the number of cross-domain operations and enhancing system performance. Simulation with the SPLASH-2 test suites shows that, for a two-tier cache coherency domain system, compared with the MESI protocol, MESI-SF reduces cross-clump accesses by 23.0% and the average instruction execution cycles by 7.5%; compared with the MESIF protocol, MESI-SF reduces cross-clump accesses by 12.2% and the average cycles per instruction (CPI) by 5.95%.
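The benefit of an in-domain forwarding copy can be shown with a toy trace counter. This is a deliberately simplified model, not the MESI-SF protocol itself: it only contrasts "each domain keeps a readable forwarding copy" against "only the home domain can supply data", ignoring writes, invalidations and the full state machine.

```python
def cross_domain_reads(reads, share_f=True):
    """Count cross-domain transfers for a read-only access sequence.

    reads: list of (domain, line) pairs. With Share-F, each domain keeps a
    readable forwarding copy, so only its first read of a line crosses
    domains. Without it, only the line's home domain can answer, so every
    read from a non-home domain crosses.
    """
    home = {}            # line -> first domain that touched it (its "home")
    cached = set()       # (domain, line) pairs holding a forwarding copy
    cross = 0
    for domain, line in reads:
        if line not in home:
            home[line] = domain           # initial fetch from memory: no crossing
            cached.add((domain, line))
        elif share_f:
            if (domain, line) not in cached:
                cross += 1                # one fetch, then the in-domain copy serves
                cached.add((domain, line))
        else:
            if domain != home[line]:
                cross += 1                # every remote read crosses the boundary
    return cross

trace = [('d0', 'x'), ('d1', 'x'), ('d1', 'x'), ('d1', 'x')]
with_sf = cross_domain_reads(trace, share_f=True)
without = cross_domain_reads(trace, share_f=False)
```

In this trace, domain d1 re-reads the same line repeatedly; with the forwarding copy only the first read crosses domains, which is the query-dominated pattern the protocol targets.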
MPD: A CC-NUMA System with Clump Having Multiple Parallel Cache Coherency Domains
Chen Jicheng, Zhao Yaqian, Li Yihan, Wang Endong, Shi Hongzhi, Tang Shibin
2017, 54(4):  775-786.  doi:10.7544/issn1000-1239.2017.20160142
Large-scale CC-NUMA systems usually employ a two-tier architecture to reduce the overhead of cache coherence and enhance system performance. In a two-tier system, several processors and a coherence chip are located in an intra-clump cache coherency domain, and the coherence chips are interconnected by a system interconnection network to form an inter-clump cache coherency domain. Since every processor occupies at least one processor ID in the cache coherency domain, and the number of processor IDs that each processor can distinguish is limited, a CC-NUMA system can expand its scale only by increasing the number of clumps, not the scale of each clump. This leads to an over-large number of clumps and a complicated topology in a multi-processor system, thereby increasing the bandwidth demand and latency of cross-clump memory access. To solve this problem, we propose a new method of constructing multi-processor systems, called MPD, in which a clump has multiple parallel cache coherency domains. This method overcomes the limit on clump scale imposed by the limited number of processors that a processor can support within one domain. Compared with traditional CC-NUMA systems, an MPD system not only significantly reduces the topological complexity but also effectively improves performance. Theoretical analysis and simulation results show that, compared with a 32-way CC-NUMA system, an MPD system constructed with the same processors achieves a 75% reduction in the number of nodes, more than 40% savings in coherency directory storage, a 27.9% average reduction in access latency and about a 14.4% improvement in system performance.
Partial Data Shuffled First Strategy for In-Memory Computing Framework
Bian Chen, Yu Jiong, Xiu Weirong, Qian Yurong, Ying Changtian, Liao Bin
2017, 54(4):  787-803.  doi:10.7544/issn1000-1239.2017.20160049
In-memory computing frameworks have greatly improved the computing efficiency of clusters, but the low performance of the Shuffle operation cannot be ignored. Wide dependencies force a compulsory synchronization in an in-memory computing framework: most executors are obliged to delay their computing tasks to wait for the results of the slowest worker. This synchronization not only wastes computing resources but also extends job completion time and reduces execution efficiency, and the phenomenon is even worse in heterogeneous cluster environments. In this paper, we establish a resource requirement model, a job execution efficiency model, and a task allocation and scheduling model, give the definitions of allocation efficiency entropy (AEE) and worker contribution degree (WCD), and propose the optimization objective of the algorithm. To solve the optimization problem, we design a partial data shuffled first algorithm (PDSF) which includes several innovative approaches, such as priority scheduling of efficient executors, a strategy for minimizing executor wait time, and moderately inclined task allocation. PDSF breaks through the restriction of the parallel computing model, releases the high performance of efficient executors to decrease the duration of the synchronization, and establishes an adaptive task scheduling scheme to improve job execution efficiency. We further analyze the relevant properties of our algorithm and prove that PDSF achieves Pareto optimality. Experimental results demonstrate that our algorithm optimizes the computational efficiency of in-memory computing frameworks, and that PDSF contributes to improved cluster resource utilization.
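The "moderately inclined task allocation" idea, i.e. giving faster executors proportionally more partitions so all workers reach the shuffle barrier at about the same time, can be sketched as follows. The speed numbers and the proportional sizing rule are illustrative assumptions, not PDSF's actual models.

```python
def allocate(partitions, speeds):
    """Split `partitions` equal-sized tasks among executors in proportion
    to their speeds (partitions processed per unit time)."""
    total = sum(speeds)
    counts = [round(partitions * s / total) for s in speeds]
    counts[-1] += partitions - sum(counts)   # absorb rounding drift
    return counts

def sync_wait(counts, speeds):
    """Barrier wait = slowest finish time minus fastest finish time."""
    finish = [c / s for c, s in zip(counts, speeds)]
    return max(finish) - min(finish)

speeds = [4.0, 2.0, 2.0]                   # heterogeneous executors
even = sync_wait([4, 4, 4], speeds)        # round-robin allocation
prop = sync_wait(allocate(12, speeds), speeds)
```

With round-robin allocation the fast executor idles at the barrier for half the stage; with speed-proportional allocation all three finish together, eliminating the synchronization wait in this toy case.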
Extending Global Arrays on Heterogeneous System
Cheng Peng, Lu Yutong, Gao Tao, Wang Chenxu
2017, 54(4):  804-812.  doi:10.7544/issn1000-1239.2017.20151059
The increasing requirement for computational performance has led to the rapid development of heterogeneous computing. However, heterogeneous programming is more complicated since there is no shared memory between CPUs and accelerators. Besides, programmers must distinguish local from remote data accesses and transmit data between computing devices explicitly. Global Arrays (GA) can provide an asynchronous, one-sided, shared-memory programming environment for distributed memory systems, but creating an efficient and scalable implementation of GA for a new system is a challenge because of the sophistication of the communication library inside GA. In this paper, we present CoGA, an extension of GA for heterogeneous systems consisting of CPUs and the Intel many integrated core (MIC) architecture. CoGA, built on top of the symmetric communication interface (SCIF), provides a shared-memory abstraction between CPU and MIC and simplifies programming by allowing programmers to access shared data regardless of where the referenced data is located. Furthermore, CoGA takes advantage of SCIF remote memory access to optimize data transmission performance between CPU and MIC. Evaluations of data transmission bandwidth, communication latency and a sparse matrix-vector multiplication problem prove that CoGA is practical and effective.
Design of a Pipeline-Coupled Instruction Loop Cache for Many-Core Processors
Zhang Kun, Guo Feng, Zheng Fang, Xie Xianghui
2017, 54(4):  813-820.  doi:10.7544/issn1000-1239.2017.20160116
Energy efficiency is a great challenge in the design of future high-performance computers. Since many-core processors have become a key choice for future high-performance computers, the optimization of their micro-architecture is very important for improving energy efficiency. This paper proposes a pipeline-coupled instruction loop cache for many-core processors. The instruction loop cache is small, so it provides more energy-efficient instruction storage. As an attempt at implementation-aware micro-architecture research, the loop cache is designed under hardware cost constraints from the beginning. In order to alleviate the impact on pipeline performance, the loop cache adopts a prefetching technique: when a loop is detected, the exit path of the loop is prefetched into the cache. The prefetching mechanism guarantees that placing the loop cache in the pipeline improves energy efficiency. The instruction loop cache is implemented in the gem5 simulator. Experiments on a set of SPEC2006 benchmarks show that a typical configuration can reduce instruction fetch power by 27% on average and pipeline front-end power by 31.5%.
Debugging Multi-Core Parallel Programs by Gradually Refined Snapshot Sequences
Wang Bohong, Liu Yi, Zhang Guozhen, Qian Depei
2017, 54(4):  821-831.  doi:10.7544/issn1000-1239.2017.20151060
Debugging multi-core parallel programs is a well-known difficult problem; the key issue is that parallelism introduces many non-deterministic factors. Replay debugging is a promising method to eliminate non-determinism. However, state-of-the-art replay debugging solutions are not suitable for commercial software and hardware architectures, and as the degree of concurrency grows, current replay debugging methods may incur unacceptable overhead. We propose a practical and novel replay debugging scheme named SDT (snapshot debug tool). The key innovation of SDT is the use of offline breakpoints and abstracted replay execution instead of a typical physical replay execution. SDT works on commercial operating systems and hardware, while also providing a gradually refined debugging method. According to the experimental results, SDT introduces 5188% extra execution time on average when using 8 threads. When the thread count increases from 1x to 4x, the overhead of SDT debugging only increases from 1x to 2x, which shows that SDT has strong scalability. Recording a large amount of data is a great challenge for SDT; the incremental snapshot capture used in our experiments has proved effective in reducing the time and the amount of data that need to be recorded, thereby improving SDT performance.
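The incremental snapshot capture mentioned above can be sketched in one function. This is a hedged illustration of the general technique (record only state that changed since the last snapshot), with a page-granularity memory model chosen by us; the abstract does not describe SDT's actual capture format.

```python
def incremental_snapshot(prev, curr):
    """Return the delta between two snapshots of a paged address space.

    prev / curr: dict page_id -> page contents (bytes).
    Only pages that changed, or are new, since `prev` need to be recorded;
    unchanged pages are recovered from earlier snapshots at replay time.
    """
    return {pid: data for pid, data in curr.items()
            if prev.get(pid) != data}

s0 = {0: b'aaaa', 1: b'bbbb', 2: b'cccc'}                 # full base snapshot
s1 = {0: b'aaaa', 1: b'BBBB', 2: b'cccc', 3: b'dddd'}     # next program state
delta = incremental_snapshot(s0, s1)                      # record pages 1 and 3 only
```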
A Similarity Measure for Process Models Based on Task Occurrence Relations
Song Jinfeng, Wen Lijie, Wang Jianmin
2017, 54(4):  832-843.  doi:10.7544/issn1000-1239.2017.20151176
Similarity measures between process models are increasingly important for the management, reuse and analysis of process models in modern enterprises. So far, several approaches have been proposed, and the behavioral profile (BP), which describes the observable relations between tasks, is a good concept for judging the behavioral consistency of process models. However, all these approaches have their own advantages and disadvantages. Towards the hard problem of behavioral similarity measurement between process models, and especially to improve the effectiveness of BP, a new method named TOR for measuring behavioral similarity based on the occurrence relations among tasks is proposed. Based on the complete prefix unfolding (CPU) technique of Petri nets, we propose algorithms for numbering the nodes in a CPU and computing the least common precursors for each pair of nodes. We then define three basic occurrence relations between tasks: the causal relation, the concurrent relation and the conflict relation. An algorithm for efficiently computing these relations and the formula for computing the similarity are also given. TOR can handle both invisible tasks and non-free-choice constructs. The experimental results show the effectiveness and efficiency of TOR. Compared with existing mainstream behavioral similarity algorithms for process models, TOR satisfies all five properties that a good similarity algorithm should have.
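The comparison step can be illustrated once the three occurrence relations have been extracted. This sketch assumes each model is summarized as a set of (task, task, relation) triples and compares the sets with a Jaccard coefficient; TOR's actual formalism, built on complete prefix unfoldings, is richer than this.

```python
def relation_similarity(rels_a, rels_b):
    """rels_*: set of (task1, task2, relation) triples, where relation is
    one of 'causal', 'concurrent', 'conflict'. Returns Jaccard similarity."""
    if not rels_a and not rels_b:
        return 1.0
    return len(rels_a & rels_b) / len(rels_a | rels_b)

m1 = {('A', 'B', 'causal'), ('B', 'C', 'causal'), ('C', 'D', 'conflict')}
m2 = {('A', 'B', 'causal'), ('B', 'C', 'concurrent'), ('C', 'D', 'conflict')}
sim = relation_similarity(m1, m2)   # B and C run concurrently in m2 only
```

The two models here agree on two of four distinct relation triples, so the similarity is 0.5; disagreeing on whether B and C are sequential or concurrent is exactly the kind of behavioral difference a purely structural comparison would miss.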
Two Level Parallel Data Read Acceleration Method for Visualization in Scientific Computing
Shi Liu, Xiao Li, Cao Liqiang, Mo Zeyao
2017, 54(4):  844-854.  doi:10.7544/issn1000-1239.2017.20150923
In order to match the overall computing capability of a supercomputer, its storage subsystem usually has good I/O performance scalability; consequently, the I/O access concurrency at which the storage subsystem performs best and the total number of compute cores (tens of thousands to several million) are usually of the same order of magnitude. However, the number of processes (equal to the I/O access concurrency) used in visualization in scientific computing (ViSC) applications is usually relatively small (empirically set to 1% of the number of computing processes, typically several to hundreds), so the best I/O performance of the storage subsystem cannot be achieved. In this paper we propose a two-level parallel data read acceleration method for ViSC applications. Multi-threaded parallel data access is introduced into the visualization process; through the two-level parallel read, i.e. parallelism among processes combined with parallelism among threads within each process, the I/O access concurrency presented to the storage subsystem is increased and the data read rate of visualization applications is improved. The test results show that, under various visualization process scales, the peak data read rate of the two-level parallel mode is 33.5%-269.5% higher than that of the single-level parallel mode, while the mean data read rate is 26.6%-232.2% higher; for diverse scientific computing applications and various process scales, the two-level parallel data read method accelerates the overall peak running speed of visualization applications by 19.5%-225.7% and the mean speed by 15.8%-197.6%.
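The second (thread) level of the read path can be sketched as follows: one visualization process splits its assigned file into segments and reads them with a thread pool. In the paper's setting the first level is multi-process parallelism across ranks; here a single process with a temporary file stands in for one rank, and the segment-splitting rule is our own.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_segment(path, offset, length):
    """Read one byte range of the file (each thread opens its own handle)."""
    with open(path, 'rb') as f:
        f.seek(offset)
        return f.read(length)

def parallel_read(path, n_threads=4):
    """Read the whole file as n_threads concurrent segment reads."""
    size = os.path.getsize(path)
    seg = (size + n_threads - 1) // n_threads
    offsets = [(i * seg, min(seg, size - i * seg))
               for i in range(n_threads) if i * seg < size]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        # map() preserves submission order, so the segments join back correctly
        parts = pool.map(lambda a: read_segment(path, *a), offsets)
    return b''.join(parts)

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b'0123456789' * 100)          # 1000-byte stand-in for a data set
data = parallel_read(tf.name, n_threads=4)
```

Multiplying the per-rank thread count into the process count is what raises the aggregate I/O concurrency seen by the storage subsystem toward its scalable operating point.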
Image-Based Interactive Visualization of Large-Scale Data Sets
Wang Hongkun, Cao Yi, Xiao Li
2017, 54(4):  855-860.  doi:10.7544/issn1000-1239.2017.20151056
With the development of supercomputers, the parallel scale of numerical simulations is increasing exponentially, and visualizing the ever larger data sets they output is a time-consuming task. It is difficult or even impossible to analyze such data sets smoothly, even on a high-performance visualization server. To achieve good interactivity and display quality, an efficient image-based approach is proposed in this paper, and a corresponding software system is developed. Firstly, typical visual analysis patterns are summarized based on domain knowledge; these patterns can aptly depict the inner physical characteristics of a simulated data field. Secondly, based on these patterns, a set of high-resolution images is pre-generated from large-scale time-varying data sets on a supercomputer. These images are rendered at various viewpoints around the data field and then organized according to the patterns and the time steps. Thirdly, we analyze the simulated data field interactively from these visualization images on a common computer. The images can be observed from different directions by changing the viewpoint interactively, and shown at a resolution appropriate to the size of the observation window. As a result, the entire time-varying evolution of physical characteristics can be interactively explored at different viewpoints and proper resolutions. The proposed approach does not suffer from the large scale of the data sets or the high computational complexity of visualization, and thus significantly improves the visual analysis efficiency for large-scale simulation data sets.
Voronoi-Based Group Reverse k Nearest Neighbor Query in Obstructed Space
Zhang Liping, Liu Lei, Hao Xiaohong, Li Song, Hao Zhongxiao
2017, 54(4):  861-871.  doi:10.7544/issn1000-1239.2017.20151111
In order to solve the problem that existing methods cannot deal with the group reverse k nearest neighbor (GRkNN) query in obstructed space, a Voronoi-diagram-based group reverse k nearest neighbor query method in obstructed space (OGRkNN) is put forward. The method finds the data points that take any point in the query object set as one of their obstructed k nearest neighbors. In practical applications, the OGRkNN query can be used to evaluate the influence of a group of query objects. According to the changes of the obstacle set, the OGRkNN query in a static obstacle environment (STA_OGRkNN query) and in a dynamic obstacle environment (DYN_OGRkNN query) are given. In the pruning stage, the STA_OGRkNN query method uses the properties of the Voronoi diagram to greatly reduce the number of non-candidates and to quickly narrow the search range, which improves query efficiency; it also improves the accuracy of the algorithm in the refining stage. Three DYN_OGRkNN query algorithms are then given: for dynamically added obstacles, for dynamically removed obstacles and for dynamically moving obstacles. Theoretical study and experiments show that the proposed algorithms are highly efficient.
Method for Optimizing Task Allocation in Workflow System Based on Cooperative Compatibility
Hu Haiyang, Ji Chaopei, Hu Hua, Ge Jidong
2017, 54(4):  872-885.  doi:10.7544/issn1000-1239.2017.20151174
The task allocation strategy has an important influence on the efficiency of a workflow system. When allocating tasks among executors, both the capability of each executor and the cooperative compatibility between executors need to be considered. Traditional methods for assigning tasks usually consider only the technical skills of executors and ignore the social cooperative compatibility among them; the few research works that have considered cooperative compatibility fail to consider how to maintain load balancing among executors when allocating tasks. Based on the workflow log, the cooperative compatibilities among executors are modeled and computed, and the relations of interacting tasks are also taken into account. By analyzing the current workload of each executor, a multi-objective joint optimization framework for maintaining load balancing while maximizing the cooperative compatibility among executors is proposed. In this framework, when a new task is assigned, the current workload of each executor that can perform the task is analyzed, and its cooperative compatibility with the executors already assigned the tasks that interact with the new task is computed. Several corresponding algorithms are designed for optimizing the different objectives, and their time complexity is analyzed. Extensive experiments comparing the proposed methods demonstrate the correctness and effectiveness of our approaches.
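The joint objective described above can be sketched as a single scoring rule. This is a hypothetical illustration, not the paper's algorithms: the linear trade-off between compatibility and load, the weights, and all the data below are our own assumptions.

```python
def pick_executor(candidates, load, compat, collaborators, alpha=1.0, beta=1.0):
    """Pick the executor maximizing cooperative compatibility with the
    executors of interacting tasks, penalized by current workload.

    candidates: executor names skilled for the new task
    load: name -> number of pending tasks (workload)
    compat: (a, b) -> cooperative compatibility in [0, 1] (symmetric lookup)
    collaborators: executors already assigned the tasks that interact
                   with the new task
    """
    def score(e):
        coop = sum(compat.get((e, c), compat.get((c, e), 0.0))
                   for c in collaborators)
        return alpha * coop - beta * load[e]
    return max(candidates, key=score)

load = {'u1': 2, 'u2': 0, 'u3': 0}
compat = {('u1', 'u9'): 0.9, ('u2', 'u9'): 0.6, ('u3', 'u9'): 0.1}
best = pick_executor(['u1', 'u2', 'u3'], load, compat, collaborators=['u9'])
```

Here u1 cooperates best with u9 but is already loaded, so the rule prefers the idle and still reasonably compatible u2, showing how the two objectives trade off rather than skill alone deciding the assignment.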
Hypermedia Oriented Privacy Modeling Method for RESTful Service
Wang Jin, Huang Zhiqiu
2017, 54(4):  886-905.  doi:10.7544/issn1000-1239.2017.20151122
Representational state transfer (RESTful) services have gained widespread acceptance as a simpler alternative to SOAP/WS-* Web services. Acknowledging the hypermedia nature of RESTful services, a RESTful response usually contains links that can be used as the engine to fire new resource requests. The complex internal state transitions in the service request/response process can lead to greater privacy risks, and how to accurately depict privacy actions in this dynamic, hypermedia-driven interactive context is a fundamental issue in RESTful service privacy protection research. In this paper we present a RESTful application state privacy model based on single-event finite automata and discuss the automatic transformation from a RESTful service description to this formal model. We establish a privacy action meta-model to depict atomic privacy actions with accurate semantics, and formally define the kernel elements of a RESTful service and the relationships among them. We then discuss how to transform RESTful service resources into the corresponding privacy actions. In addition, we propose a new data structure called the resource link mapping tree to represent the relationship between RESTful service resources and links. A transformation method based on the resource link mapping tree is introduced to generate the corresponding privacy actions from the RESTful service definition and further generate the formal single-event automata, with an algorithm considering both protocol links and hypermedia links. We finally use a case study of the eBay "add to watch list" service and experiments based on our prototype tools to show the feasibility of our approach.