ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 September 2017, Volume 54 Issue 9
Cross-Organizational Workflow Task Allocation Algorithms for Socially Aware Collaborative Computing
Sun Yong, Tan Wenan
2017, 54(9):  1865-1879.  doi:10.7544/issn1000-1239.2017.20160513
Human interactions have recently become a substantial part of Web service-oriented collaborations and cross-organizational business processes. Social networks can help distribute crowdsourced workflow tasks among people more effectively. However, it is challenging to identify a group of productive collaborative partners, together with a leader, to work on joint cross-organizational workflow tasks promptly and efficiently, especially when the number of candidates in the collaborative network is large. This paper therefore proposes a new, efficient algorithm for finding an optimal group in a social network to process crowdsourced workflow tasks. First, a set of new concepts is defined to remodel the social graph; then, a sub-graph connector-based betweenness centrality algorithm is enhanced to efficiently identify the leader who serves as the host manager of the joint workflow tasks; finally, an efficient algorithm finds the workflow task members associated with the selected leader by confining the search space to the set of connector nodes. Theoretical analysis and extensive experiments are conducted for validation, and the results on real data show that the proposed algorithms outperform several existing algorithms in computation time as the number of candidate workflow executors grows.
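The leader-selection step rests on betweenness centrality. As a minimal illustrative sketch (plain Brandes' algorithm on an unweighted graph, not the paper's enhanced sub-graph connector-based variant), the leader can be chosen as the candidate with the highest centrality; the `pick_leader` helper and the dictionary graph format are assumptions for illustration:

```python
from collections import deque

def betweenness(graph):
    """Brandes' betweenness centrality for an unweighted, undirected graph
    given as {node: [neighbors]}. Returns {node: centrality}."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # BFS from s, counting numbers of shortest paths (sigma)
        stack, preds = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # back-propagate dependencies in reverse BFS order
        delta = {v: 0.0 for v in graph}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def pick_leader(graph, candidates=None):
    """Return the candidate with the highest betweenness centrality."""
    bc = betweenness(graph)
    pool = candidates if candidates is not None else list(graph)
    return max(pool, key=lambda v: bc[v])
```

Restricting `candidates` to the set of connector nodes mirrors the paper's idea of confining the search space.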
Retrieval of Similar Semantic Workflows Based on Behavioral and Structural Characteristics
Sun Jinyong, Gu Tianlong, Wen Lijie, Qian Junyan, Meng Yu
2017, 54(9):  1880-1891.  doi:10.7544/issn1000-1239.2017.20160755
Workflow reuse is an important way for modern enterprises and organizations to improve the efficiency of business process management (BPM). Semantic workflows are domain knowledge-based workflows, and retrieving similar semantic workflows is the first step toward reusing them. Existing retrieval algorithms for similar semantic workflows focus only on structural characteristics while ignoring behavioral ones, which degrades the overall quality of the retrieved workflows and increases the cost of reuse. To address this issue, a two-phase retrieval algorithm based on both behavioral and structural characteristics is put forward. A set of task adjacency relations (TARs) expresses a semantic workflow's behavior. A TAR-tree index named TARTreeIndex and a data index named DataIndex are constructed over the semantic workflow case base in combination with domain knowledge. For a given query semantic workflow, candidate workflows are first obtained by filtering the case base with the TARTreeIndex and DataIndex; the candidates are then verified and ranked with a graph-matching similarity algorithm. Experiments show that the proposed algorithm improves retrieval performance compared with existing popular algorithms and can thus provide high-quality semantic workflows for reuse.
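The behavioral fingerprint used here is the TAR set. A minimal sketch of extracting TARs from execution traces and comparing two workflows by Jaccard similarity over their TAR sets (the paper's two-phase indexing and graph-matching stages are omitted; the Jaccard measure is an assumption for illustration):

```python
def tars(traces):
    """Task adjacency relations: the set of ordered pairs (a, b) such that
    task b directly follows task a in some execution trace."""
    rels = set()
    for trace in traces:
        rels.update(zip(trace, trace[1:]))
    return rels

def tar_similarity(traces_a, traces_b):
    """Jaccard similarity of the two workflows' TAR sets."""
    ta, tb = tars(traces_a), tars(traces_b)
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)
```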
An Approach to Extract Public Process From Private Process for Building Business Collaboration
Mo Qi, Dai Fei, Zhu Rui, Da Jian, Lin Leilei, Li Tong, Xie Zhongwen, Zheng Ming
2017, 54(9):  1892-1908.  doi:10.7544/issn1000-1239.2017.20160754
Business process collaboration allows organizations to communicate, interact, and cooperate in order to achieve specific business objectives. To ensure the correctness and consistency of its implementation, the collaboration must be modeled and analyzed. To address the problem of building business process collaboration by extracting the public process (the collaborative process of an organization) from the private process (the organization's complete process), a business process model is first defined to represent an organization's private process; the model consists of internal views and public views, where the internal view is a free-choice net. Second, business process models are abstracted into four basic blocks: sequence, selection, concurrency, and iteration. Extraction rules for each of the four blocks are put forward to obtain the public process of an organization. We prove theoretically that these rules guarantee interface consistency, and thus that each extraction is context-free. The approach is validated by modeling a supply chain in collaborative manufacturing and by comparison with current typical work; the analysis shows that, while respecting privacy-protection principles, business process collaboration can be modeled and analyzed more effectively than with existing approaches.
Multi-Objective Optimization for Task Scheduling in Mobile Cloud Computing
Hu Haiyang, Liu Runhua, Hu Hua
2017, 54(9):  1909-1919.  doi:10.7544/issn1000-1239.2017.20160757
Mobile cloud computing helps mobile users migrate workflow tasks to cloud servers for execution, compensating for the mobile device's limited hardware capability and battery energy. When scheduling workflow tasks between mobile devices and cloud servers, both the energy consumed by the mobile device and the total completion time of the workflow application must be considered. Traditional scheduling methods in mobile cloud computing usually address only one of these issues, saving energy or minimizing completion time, and fail to optimize both jointly. Based on the dependency relations among workflow tasks, the completion time of the application is computed under schedules that distribute tasks between cloud servers and mobile devices employing dynamic voltage and frequency scaling. The energy consumed by executing tasks on cloud servers and on mobile devices is modeled and computed. A scheduling scheme and an objective function that jointly optimize completion time and energy consumption are proposed, and algorithms based on simulated annealing are designed for mobile devices, with their time complexities analyzed. Extensive experiments compare the proposed methods with other work, and the results demonstrate the correctness and effectiveness of the approach.
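A toy version of the joint time/energy objective can be searched with simulated annealing. This sketch simplifies heavily: tasks are assumed independent (no workflow dependencies, no DVFS), each either runs locally (with a local time and energy) or is offloaded (with a cloud time and a transmission energy), and the objective is a weighted sum; all names and cost figures are illustrative assumptions:

```python
import math
import random

def cost(assign, local_t, local_e, cloud_t, send_e, w=0.5):
    """Weighted sum of total time and mobile-side energy for one assignment.
    assign[i] == 1 means task i is offloaded to the cloud."""
    time = sum(cloud_t[i] if a else local_t[i] for i, a in enumerate(assign))
    energy = sum(send_e[i] if a else local_e[i] for i, a in enumerate(assign))
    return w * time + (1 - w) * energy

def anneal(local_t, local_e, cloud_t, send_e, w=0.5,
           t0=10.0, cooling=0.95, steps=2000, seed=0):
    """Simulated annealing over task assignments, flipping one task at a
    time and accepting worse moves with the Metropolis probability."""
    rng = random.Random(seed)
    n = len(local_t)
    cur = [0] * n
    cur_c = cost(cur, local_t, local_e, cloud_t, send_e, w)
    best, best_c = cur[:], cur_c
    t = t0
    for _ in range(steps):
        cand = cur[:]
        cand[rng.randrange(n)] ^= 1       # move one task local <-> cloud
        cand_c = cost(cand, local_t, local_e, cloud_t, send_e, w)
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur[:], cur_c
        t *= cooling                       # geometric cooling schedule
    return best, best_c
```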
Alignment Based Conformance Checking Algorithm for BPMN 2.0 Model
Wang Yuquan, Wen Lijie, Yan Zhiqiang
2017, 54(9):  1920-1930.  doi:10.7544/issn1000-1239.2017.20160756
Process mining is an emerging discipline that provides comprehensive tool sets for fact-based insights and process improvement. It builds on process model-driven approaches and data-centric analysis techniques such as machine learning and data mining. Conformance checking, i.e., techniques that compare and relate event logs and process models, is one of the three core process mining techniques: it quantifies conformance and diagnoses deviations. The BPMN 2.0 model is expressive enough to capture complex patterns such as multi-instance activities, sub-processes, OR gateways, and boundary events, yet no existing conformance checking algorithm supports these patterns. To solve this problem, this paper proposes Acorn, a conformance checking algorithm for BPMN 2.0 models that supports the aforesaid complex patterns. Acorn uses the A* algorithm to find the minimum-cost alignment, from which fitness between a BPMN 2.0 model and a log is calculated. In addition, virtual cost and expected cost are introduced for optimization. Experimental evaluations show that Acorn finds the best alignment by exploiting the semantics of BPMN 2.0 elements correctly and efficiently, and that the introduction of virtual cost and expected cost indeed reduces the search space.
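The core of alignment-based conformance checking can be sketched for the simplest possible model: a strictly sequential activity list. The state space is the pair of positions consumed in trace and model; synchronous moves are free, moves only in the log or only in the model cost one. With the zero heuristic used here this is plain Dijkstra rather than informed A*; the sequential-model restriction, unit costs, and the `fitness` normalization are assumptions for illustration, not Acorn's actual machinery:

```python
import heapq

def align(trace, model, move_cost=1):
    """Minimum-cost alignment of a log trace against a purely sequential
    model (a list of activity names). States (i, j) are the numbers of
    trace and model steps already consumed."""
    start, goal = (0, 0), (len(trace), len(model))
    heap = [(0, start)]
    seen = {}
    while heap:
        cost, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            return cost
        if seen.get((i, j), float("inf")) <= cost:
            continue
        seen[(i, j)] = cost
        if i < len(trace) and j < len(model) and trace[i] == model[j]:
            heapq.heappush(heap, (cost, (i + 1, j + 1)))          # synchronous move
        if i < len(trace):
            heapq.heappush(heap, (cost + move_cost, (i + 1, j)))  # move on log only
        if j < len(model):
            heapq.heappush(heap, (cost + move_cost, (i, j + 1)))  # move on model only
    return None

def fitness(trace, model):
    """1 - (alignment cost / worst-case cost), the usual alignment fitness."""
    return 1 - align(trace, model) / (len(trace) + len(model))
```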
Redundant Instruction Optimization Algorithm in Binary Translation
Tan Jie, Pang Jianmin, Shan Zheng, Yue Feng, Lu Shuaibing, Dai Tao
2017, 54(9):  1931-1944.  doi:10.7544/issn1000-1239.2017.20151110
Binary translation is a principal method for software migration. Dynamic binary translation is constrained by on-the-fly execution and cannot be deeply optimized, resulting in low efficiency. Traditional static binary translation has difficulty handling indirect branches, and conventional optimizations mostly operate at the intermediate-code layer, paying little attention to the large number of redundant instructions that remain in the target code. To address this, the paper presents a static binary translation framework, SQEMU, and a target-code optimization algorithm that deletes redundant instructions within that framework. The algorithm generates an instruction-specific data dependence graph (IDDG) from analysis of the target code, then combines liveness analysis with peephole optimization over the IDDG to effectively remove redundant instructions. Experimental results show that with the optimization, execution efficiency increases significantly, by up to 42% in the best case; overall performance tests show that optimized translation efficiency improves by about 20% on average for nbench and about 17% on average for SPEC CINT2006.
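The heart of redundant-instruction removal is a backward liveness pass: an instruction whose destination is never subsequently used can be dropped. A minimal sketch over straight-line code (the IDDG construction and peephole rules are omitted, and side-effecting instructions are not modeled; the `(dest, srcs)` tuple encoding is an assumption for illustration):

```python
def eliminate_dead(block, live_out):
    """Remove instructions whose destination register is not live.
    Each instruction is (dest, srcs); live_out is the set of registers
    still needed after the block. One backward pass suffices for
    straight-line code."""
    live = set(live_out)
    kept = []
    for dest, srcs in reversed(block):
        if dest in live:
            kept.append((dest, srcs))
            live.discard(dest)       # this definition kills the register
            live.update(srcs)        # its operands become live
        # else: result never used -> redundant, dropped
    kept.reverse()
    return kept
```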
Parallel of Decision Tree Classification Algorithm for Stream Data
Ji Yimu, Zhang Yongpan, Lang Xianbo, Zhang Dianchao, Wang Ruchuan
2017, 54(9):  1945-1957.  doi:10.7544/issn1000-1239.2017.20160554
With the rise of cloud computing, the Internet of things, and other technologies, streaming data is widespread in telecommunications, the Internet, finance, and other fields as a new form of big data. Compared with traditional static data, stream data is rapid, continuous, and time-varying, and the implicit distribution of a data stream gives rise to the concept-drift problem. To satisfy the requirements of stream data classification in big data, traditional offline classification algorithms for static data must be improved; this paper proposes the parallel P-HT algorithm on the distributed computing platform Storm. To fit Storm's stream-processing model, the algorithm's flexibility and versatility are improved through a sliding-window mechanism, an alternative-tree mechanism, and parallel processing, so that it adapts well to concept drift in data streams. Finally, the validity and efficiency of the algorithm are verified experimentally. The results show that the improved P-HT algorithm achieves better throughput and faster processing than the traditional C4.5 algorithm with no reduction in accuracy.
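P-HT builds on the Hoeffding tree, whose defining decision is statistical: split a leaf only when the observed gain gap between the two best attributes exceeds the Hoeffding bound ε = sqrt(R² ln(1/δ) / (2n)). A minimal sketch of that test (the tie-breaking threshold and parameter values are conventional assumptions, not the paper's settings):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """With probability 1 - delta, the true mean of a random variable with
    range R lies within epsilon of its observed mean over n samples."""
    return math.sqrt(value_range ** 2 * math.log(1 / delta) / (2 * n))

def should_split(best_gain, second_gain, value_range, delta, n, tie=0.05):
    """Split when the best attribute beats the runner-up by more than
    epsilon, or when epsilon has shrunk below the tie-breaking threshold
    (the two attributes are then practically interchangeable)."""
    eps = hoeffding_bound(value_range, delta, n)
    return (best_gain - second_gain > eps) or (eps < tie)
```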
Optimization Algorithm of Association Rule Mining for EMU Operation and Maintenance Efficiency
Zhang Chun, Zhou Jing
2017, 54(9):  1958-1965.  doi:10.7544/issn1000-1239.2017.20160498
With increasing EMU operation time and mileage, EMU operation and maintenance systems have accumulated large amounts of data. Using high-performance association rule mining algorithms to quickly extract useful information from these data is of significant importance for improving the maintenance efficiency of key EMU components. In view of the characteristics of EMU operation and maintenance data, huge volume and low value density, we design the AMPHP algorithm based on an approximate minimal perfect hash function. Compared with the traditional DHP algorithm, it filters out all infrequent itemsets without additional database scans. To break the limitations of a single-machine algorithm and further improve performance, we draw on the idea of the SON algorithm to parallelize AMPHP, yielding the AMPHP-SON algorithm. Experiments on the operation and maintenance data of EMU traction motors show that AMPHP-SON has good time performance and that the mined rules can effectively guide optimization of EMU repair classes and the repair system, improving the efficiency of EMU operation and maintenance.
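The DHP mechanism AMPHP refines can be sketched directly: while counting single items, hash every 2-itemset into a bucket array; a pair can be a candidate only if both items are frequent and its bucket count reaches the minimum support. This sketch uses an ordinary modular hash, so colliding buckets can only over-approximate the candidate set; AMPHP's approximate minimal perfect hash removes those collisions. The example data and bucket count are assumptions for illustration:

```python
from collections import Counter
from itertools import combinations

def dhp_candidate_pairs(transactions, min_support, n_buckets=97):
    """DHP-style candidate generation: bucket-count all 2-itemsets in the
    same pass that counts single items, then keep only pairs of frequent
    items whose bucket reached min_support."""
    item_count = Counter()
    bucket = [0] * n_buckets
    for t in transactions:
        item_count.update(t)
        for pair in combinations(sorted(t), 2):
            bucket[hash(pair) % n_buckets] += 1
    frequent = {i for i, c in item_count.items() if c >= min_support}
    candidates = set()
    for t in transactions:
        for pair in combinations(sorted(set(t) & frequent), 2):
            if bucket[hash(pair) % n_buckets] >= min_support:
                candidates.add(pair)
    return candidates
```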
A Sampling Algorithm Based on Frequent Edges in Single Large-Scale Graph Under Spark
Li Longyang, Dong Yihong, Yan Yuliang, Chen Huahui, Qian Jiangbo
2017, 54(9):  1966-1978.  doi:10.7544/issn1000-1239.2017.20160546
With the popularity of social networks, the demand for frequent subgraph mining on them has grown intense. In the big data era, social networks keep expanding and frequent subgraph mining becomes increasingly difficult. In practice, exact mining is often unnecessary, so sampling methods can improve the efficiency of frequent subgraph mining while preserving a given accuracy. Most existing sampling algorithms are unsuited to frequent subgraph mining because they rely on vertex transfer or must first compute the topology of the original graph, which takes substantial time. This paper proposes DIMSARI (distributed Monte Carlo sampling algorithm based on random jump and graph induction), a new frequent-edge-based sampling algorithm that runs on the distributed framework Spark. The algorithm builds on Monte Carlo sampling, adds random jumps, and applies a subgraph induction step to the result to improve accuracy; the algorithm is proved unbiased. Experiments show that the accuracy of frequent subgraph mining with DIMSARI is greatly improved while the algorithm spends only a little more time than other algorithms, and the vertices sampled at different sampling rates maintain a lower normalized mean square error.
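The two moves DIMSARI combines can be sketched on a single machine: a Monte Carlo random walk that jumps to a uniformly random vertex with some probability (escaping dead ends and local regions), followed by an induction step that keeps every original edge between sampled vertices. The Spark distribution, the frequent-edge bias, and the unbiasedness correction are omitted; the 0.15 jump probability is a conventional assumption:

```python
import random

def sample_induced(graph, target_size, jump_p=0.15, seed=0):
    """Random walk with jumps over {node: [neighbors]}, then induce the
    subgraph on the sampled vertex set."""
    rng = random.Random(seed)
    nodes = list(graph)
    current = rng.choice(nodes)
    sampled = {current}
    while len(sampled) < target_size:
        if rng.random() < jump_p or not graph[current]:
            current = rng.choice(nodes)           # random jump (restart)
        else:
            current = rng.choice(graph[current])  # walk to a neighbor
        sampled.add(current)
    # induction step: keep all original edges among sampled vertices
    return {v: [w for w in graph[v] if w in sampled] for v in sampled}
```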
Core Vector Regression for Attribute Effect Control on Large Scale Dataset
Liu Jiefang, Wang Shitong, Wang Jun, Deng Zhaohong
2017, 54(9):  1979-1991.  doi:10.7544/issn1000-1239.2017.20160519
Attribute effect is a form of data bias caused by sensitive attributes and widely exists in the real world; left uncontrolled, it seriously degrades the learning performance of regression models. To control the attribute effect in nonlinear regression on large-scale biased datasets, a novel fast equal-mean core vector regression (FEM-CVR) is proposed. First, an equal-mean support vector regression (EM-SVR) based on the margin-maximization criterion is derived by imposing an equal-mean constraint. On this basis, the optimization problem of EM-SVR is shown to be equivalent to a center-constrained minimum enclosing ball (CC-MEB) problem. A fast minimum-enclosing-ball-based nonlinear regression algorithm for attribute effect control on large-scale biased datasets, FEM-CVR, is then obtained by applying approximate minimum enclosing ball theory and reducing the original input dataset to its core set. Fundamental theoretical properties are discussed in depth. Finally, extensive experiments on synthetic and real datasets show that FEM-CVR effectively controls the attribute effect in nonlinear regression on large-scale biased datasets with good generalization ability, and that the upper bound of its time complexity is independent of dataset size, depending only on the approximation parameter ε of the minimum enclosing ball.
Context Based Service Recommendation Middleware in VANET
Yang Qian, Luo Juan, Liu Chang
2017, 54(9):  1992-2000.  doi:10.7544/issn1000-1239.2017.20160640
VANET (vehicular ad hoc network) is an important part of the smart city, required to implement a myriad of services related to vehicle safety, traffic efficiency, and comfortable driving. Current research on service discovery in VANET focuses mainly on service quality and latency, but as the number and types of services grow, information overload in VANET becomes increasingly serious, so there is an urgent need for services that account for users' individual requirements. This paper presents a context-based service recommendation middleware architecture for VANET that recommends services based on vehicles' rich context information and users' service histories. Building on offline analysis, a context-based service recommendation approach is provided: only services that satisfy both the vehicle's context constraints and the user's preference model are recommended. Experimental results show that the recommended services are reasonable, match users' preferences, and reduce the probability of detours caused by services.
A Two-Tier Aggregation Based Tracking Algorithm in Wireless Sensor Networks
Ren Qianqian, Liu Hongyang, Liu Yong, Li Jinbao, Wang Nan
2017, 54(9):  2001-2010.  doi:10.7544/issn1000-1239.2017.20160638
Mobile target tracking is an important issue in wireless sensor networks. This paper addresses energy-efficient tracking. We first construct a grid-based network model in which nodes near the vertices of grid cells stay awake and the others sleep to save energy while guaranteeing tracking quality. We analyze the relationship between the target's position and the grid cells, classify target detection into three cases, and give a general target localization method applicable to each case. We then propose a two-tier aggregation-based target tracking algorithm, which aggregates partial localization results to obtain an optimized final localization result. After that, a shortest-path selection algorithm based on a clockwise/anticlockwise scheme is presented to transmit the localization result to the sink with a minimum number of involved sensor nodes. Finally, a comprehensive set of simulations shows that the proposed algorithm achieves excellent tracking accuracy and energy savings in wireless sensor networks.
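The second-tier aggregation step, fusing partial localization results into one fix, can be sketched as a weighted centroid. This is a generic illustration, not the paper's specific optimization; the `(x, y, weight)` encoding and the idea that weights derive from, e.g., signal strength are assumptions:

```python
def aggregate(estimates):
    """Fuse partial localization results (x, y, weight) from first-tier
    nodes into a single weighted-centroid position estimate."""
    total = sum(w for _, _, w in estimates)
    x = sum(x * w for x, _, w in estimates) / total
    y = sum(y * w for _, y, w in estimates) / total
    return x, y
```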
A Traceable and Anonymous Authentication Scheme Based on Elliptic Curve for Wireless Sensor Network
Chang Fen, Cui Jie, Wang Liangmin
2017, 54(9):  2011-2020.  doi:10.7544/issn1000-1239.2017.20160635
In a wireless sensor network (WSN), sensor nodes are deployed in their application fields to observe the environment and send observations to the sink, and the message source should be protected during transmission between nodes and the sink. On one hand, message authentication is one of the most effective ways to keep unauthorized and corrupted messages from being forwarded; on the other hand, anonymous communication can hide the identity of sensitive nodes and thus protect node location privacy. Anonymity, however, raises problems of its own, for example giving attackers an opportunity to exploit it for illegal activities, so tracing the identity of malicious nodes becomes particularly important. To solve these problems, a traceable and anonymous authentication scheme based on elliptic curves is proposed in this paper. The scheme combines elliptic curve cryptography with ring signatures, achieving anonymous communication between nodes while allowing intermediate nodes to authenticate messages. Simulation results show that the scheme matches existing schemes in signing and verification cost, while the linkability of the ring signature enables tracing of malicious nodes and improves the performance and security of the network.
Multiple Attribute Decision Making-Based Prediction Approach of Critical Node for Opportunistic Sensor Networks
Liu Linlan, Zhang Jiang, Shu Jian, Guo Kai, Meng Lingchong
2017, 54(9):  2021-2031.  doi:10.7544/issn1000-1239.2017.20160645
If critical nodes can be predicted, the network can be optimized according to information about them; furthermore, maintenance time and cost can be dramatically reduced by checking the critical nodes first when the network fails. Unfortunately, existing methods for predicting critical nodes in static wireless sensor networks are unsuitable for opportunistic sensor networks (OSNs). Considering the dynamic topology and high latency of multi-region OSNs (MOSNs) with hierarchical structure, this paper analyzes the message transfer process. Stage contribution is defined to reflect the contribution of ferry nodes during message transmission, and region contribution is defined to reflect their contribution to regions. Based on the comprehensive contribution of ferry nodes, a critical node prediction method is proposed using multiple attribute decision making, specifically the technique for order preference by similarity to ideal solution (TOPSIS). Experimental results show that the prediction method with the improved TOPSIS algorithm achieves better accuracy, and experiments on a purpose-built test bed confirm this result.
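Standard TOPSIS, the decision procedure underlying the prediction method, can be sketched over a matrix whose rows are ferry nodes and whose columns are attributes such as stage and region contributions. This is textbook TOPSIS with benefit criteria only, not the paper's improved variant; the example matrix and weights are assumptions:

```python
import math

def topsis(matrix, weights):
    """Score alternatives (rows) over benefit criteria (columns):
    vector-normalize each column, apply weights, find the ideal and
    anti-ideal solutions, and rank by relative closeness to the ideal."""
    ncols = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal solution
        d_neg = math.dist(row, anti)    # distance to the anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

The node with the highest score would be reported as the most critical.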
Security Countermeasures for Time Synchronization in IEEE802.15.4e-Based Industrial IoT
Yang Wei, He Jie, Wan Yadong, Wang Qin, Li Chong
2017, 54(9):  2032-2043.  doi:10.7544/issn1000-1239.2017.20160636
IEEE 802.15.4e is the latest MAC-layer standard for the industrial Internet of things, enabling highly reliable, ultra-low-power wireless networking through time synchronization. In a cyberspace where adversaries may attack networks in various ways, time synchronization is an attractive target because of its importance: a successful attack paralyzes network communication, node localization, and data fusion applications. However, the time synchronization protocol is insufficiently protected in the IEEE 802.15.4e standard, so designing a secure time synchronization protocol is crucial. First, we develop secure single-hop ASN synchronization and secure single-hop device-to-device synchronization using hardware-assisted encryption and authentication, together with the 2s+1 method and a threshold filter algorithm. Second, we develop a secure multi-hop time synchronization mechanism that adopts a rank-based intrusion detection algorithm. Third, theoretical analysis and experiments show that the proposed countermeasures successfully defend against both external and insider attacks while maintaining high clock accuracy and low power consumption.
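One of the building blocks, the threshold filter, can be sketched generically: reject clock-offset samples that deviate too far from the sample median before they adjust the local clock, so a bogus synchronization message cannot drag the clock arbitrarily. This is an illustrative sketch of the filtering idea, not the standard's or the paper's exact algorithm; the median anchor and mean of survivors are assumptions:

```python
def threshold_filter(offsets, threshold):
    """Discard time-offset samples deviating from the sample median by
    more than `threshold`; return the mean of the survivors, or None if
    every sample is rejected."""
    s = sorted(offsets)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    kept = [o for o in offsets if abs(o - median) <= threshold]
    return sum(kept) / len(kept) if kept else None
```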
Digital Video Stabilization Techniques: A Survey
Wei Shanshan, Xie Wei, He Zhiqiang
2017, 54(9):  2044-2058.  doi:10.7544/issn1000-1239.2017.20160078
Digital video stabilization (DVS) techniques have been developing for over 30 years. Improvements in device computing power, research on related algorithms, and market needs have continually driven the development of DVS: from simple early solutions aimed at computational simplicity, to complex solutions aimed at stabilization effect, and more recently to advanced solutions that try to achieve both. In this survey, we first analyze existing DVS techniques chronologically and classify them into two basic categories: traditional techniques and emerging techniques. Traditional techniques are strictly based on typical motion models and rely on image processing algorithms for motion estimation; emerging techniques relax the motion models and introduce novel techniques for motion estimation. According to the motion model adopted, traditional techniques are further divided into traditional 2D and traditional 3D techniques; similarly, emerging techniques are divided into emerging 2D and sensor-based techniques. For each family, we first analyze the key techniques it relies on and then list its applications in DVS. Finally, we summarize existing DVS techniques and look into the challenges and future development trends of the field.
Research Advances in Screen Content Coding Methods
Liu Dan, Chen Guisheng, Song Chuanming, He Xing, Wang Xianghai
2017, 54(9):  2059-2076.  doi:10.7544/issn1000-1239.2017.20160649
With the widespread adoption of applications such as cloud computing and virtual desktops, screen content images have become an integral part of the new generation of cloud and mobile computing. Designing screen content coding methods with high compression efficiency, good real-time performance, and moderate computational complexity is one of the hot issues in video coding. After introducing the statistical characteristics of screen content images in the spatial, frequency, and temporal domains as well as in color space, this study focuses on typical coding methods for discontinuous-tone images. State-of-the-art methods are classified into seven categories: palette-index map based methods, template matching based methods, block matching based methods, dictionary-based methods, shape representation based methods, temporal-domain coding methods, and chroma component coding methods. Screen content coding methods using hybrid frameworks are then summarized, and the advantages and disadvantages of the various methods are compared, analyzed, and discussed. On this basis, the progress of the international HEVC-SCC coding standard is introduced, and the development trends of screen content coding in the near future are forecast.
Adaptive Interpolation Scheme Based on Texture Features
Zhang Yunfeng, Yao Xunxiang, Bao Fangxun, Zhang Caiming
2017, 54(9):  2077-2091.  doi:10.7544/issn1000-1239.2017.20160520
A new interpolation model is proposed based on bivariate rational interpolation. The model contains both rational fractal interpolation and bivariate rational interpolation and is identified uniquely by the parameters of its iterated function system (scaling factors and shape parameters). Because fractals efficiently describe complex phenomena, the fractal dimension is employed for texture analysis. Based on analysis of the local fractal dimension (LFD), a new locally adaptive thresholding method is proposed, dividing images into texture and non-texture regions. In texture regions, rational fractal interpolation is used to obtain the high-resolution image; in non-texture regions, rational interpolation is used. For the parameters of the rational fractal interpolation model, a new method for calculating the scaling factor is proposed, and shape-parameter optimization is applied to further improve the quality of the interpolated image. Experimental results show that the presented model is highly competitive with state-of-the-art interpolation algorithms.
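The texture/non-texture decision rests on estimating a fractal dimension locally. A minimal box-counting estimate on a binary patch illustrates the idea (the paper's LFD computation and adaptive threshold are more elaborate; the box sizes and least-squares fit are conventional assumptions, and the patch must contain at least one set pixel):

```python
import math

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary 2D patch by box
    counting: count occupied s-by-s boxes for several s, then fit the
    slope of log(count) against log(1/s) by least squares."""
    h, w = len(mask), len(mask[0])
    xs, ys = [], []
    for s in sizes:
        count = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if any(mask[a][b]
                       for a in range(i, min(i + s, h))
                       for b in range(j, min(j + s, w))):
                    count += 1
        xs.append(math.log(1 / s))
        ys.append(math.log(count))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A patch whose estimated dimension exceeds a threshold would be treated as texture and handed to the fractal interpolant.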
Continuous Queries Privacy Protection Algorithm Based on Spatial-Temporal Similarity Over Road Networks
Pan Xiao, Chen Weizhang, Sun Yige, Wu Lei
2017, 54(9):  2092-2101.  doi:10.7544/issn1000-1239.2017.20160551
Continuous queries are among the most common queries in location-based services (LBSs); although particularly useful, they raise serious privacy concerns. Most existing location cloaking approaches over road networks are applicable only to snapshot queries: if applied directly to continuous queries, the frequent location updates disclose continuous query privacy. Moreover, attackers equipped with knowledge of the network topology and other network parameters (speed limits, etc.) can easily infer precise locations. We observe that, because of the network topology, mobile objects exhibit similar spatial and temporal features. To resist continuous query attacks and location-dependent attacks simultaneously, we propose a continuous query privacy protection algorithm based on spatial-temporal similarity over road networks. The algorithm adopts user grouping and K-sharing privacy requirement strategies to form cloaking user sets, which resist continuous query attacks. Then, under the same cloaking user sets, a continuous cloaking segment set generation algorithm is proposed to resist location-dependent attacks, balancing location privacy against service quality. Finally, a series of experiments with four evaluation measures verifies the effectiveness of the proposed algorithm.