ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 July 2017, Volume 54 Issue 7
Survey on Development of Solving Methods and State-of-the-Art Applications of Satisfiability Modulo Theories
Wang Chong, Lü Yinrun, Chen Li, Wang Xiuli, Wang Yongji
2017, 54(7):  1405-1425.  doi:10.7544/issn1000-1239.2017.20160303
The satisfiability modulo theories (SMT) problem is the decision problem of determining the satisfiability of a first-order logic formula with respect to combinations of background theories. SMT supports many background theories, so it can describe many practical problems in industry and academia. Moreover, both the expressive power of SMT and the efficiency of its decision algorithms are better than those of SAT (Boolean satisfiability). With its efficient satisfiability decision algorithms, SMT has been widely used in many fields, in particular test-case generation, program defect detection, register transfer level (RTL) verification, program analysis and verification, and solving linear optimization over arithmetic constraint formulas. In this paper, we first summarize the fundamental problems and decision procedures of SAT. After that, we give a brief overview of SMT, including its fundamental concepts and decision algorithms. Then we detail different types of decision algorithms, including the eager and lazy algorithms studied in the past five years. Moreover, some state-of-the-art SMT solvers, including Z3, Yices2 and CVC4, are analyzed and compared based on the results of the SMT competition. Additionally, we introduce applications of SMT solving in some typical communities. Finally, we give a preliminary outlook on the research focus and research trends of SMT.
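The Boolean core that both eager and lazy SMT algorithms reduce to can be illustrated with a minimal DPLL-style satisfiability check. This is a toy sketch over DIMACS-style clause lists (negative integers denote negated literals), not any solver discussed in the survey:

```python
# A minimal DPLL-style SAT check, the Boolean core that lazy SMT
# solvers build on. Clauses are lists of nonzero ints (negative =
# negated), as in DIMACS. Illustrative only, not a production solver.

def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}
    # Simplify the clause set under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # empty clause: conflict
        simplified.append(rest)
    if not simplified:
        return assignment  # all clauses satisfied
    # Unit propagation: a one-literal clause forces its value.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(clauses, {**assignment, abs(l): l > 0})
    # Branch on the first unassigned variable.
    v = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, v: value})
        if result is not None:
            return result
    return None
```

In a lazy SMT solver this loop would additionally consult a theory solver whenever a propositional model is found, adding a blocking clause when the model is theory-inconsistent.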
An Efficient Collaborative Filtering Algorithm Based on Graph Model and Improved KNN
Meng Huanyu, Liu Zhen, Wang Fang, Xu Jiadong, Zhang Guoqiang
2017, 54(7):  1426-1438.  doi:10.7544/issn1000-1239.2017.20160302
With the rapid development of the Internet, recommender systems have become a typical means of dealing with Internet information overload. A recommender system can partially relieve the user's burden of information filtering and discover valuable information for the active user. Collaborative filtering has the advantages of domain independence and support for users' potential interests, and for these reasons it has been widely used. Because the user-item rating matrix is sparse and large-scale, recommender systems face big challenges in precision and performance. This paper puts forward the GK-CF algorithm. By building a graph-based rating data model, it integrates traditional collaborative filtering, graph algorithms and an improved KNN algorithm. Through message propagation in the graph and an improved user similarity calculation model, candidate similar users are selected before the full user similarity computation. Based on the topology of the bipartite graph, GK-CF quickly and precisely locates the candidate items through a shortest-path algorithm. The algorithm is designed and implemented in parallel under a parallel graph framework. Experiments on real-world clusters show that, compared with traditional collaborative filtering algorithms, GK-CF achieves better recommendation precision and rating accuracy, as well as good scalability and real-time performance.
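The two-phase idea of restricting candidates via the bipartite graph before computing similarities can be sketched as follows. The data layout and function names are illustrative, not taken from the paper:

```python
import math

# A toy user-based CF sketch in the spirit of GK-CF: candidate
# neighbors are first restricted to users reachable over the
# user-item bipartite graph (i.e. sharing at least one rated item),
# then ranked by cosine similarity over the rating vectors.

ratings = {
    "u1": {"i1": 5, "i2": 3},
    "u2": {"i1": 4, "i2": 2, "i3": 4},
    "u3": {"i3": 5},
}

def candidates(user):
    # Users two hops away on the bipartite graph (co-rated some item).
    items = set(ratings[user])
    return {v for v in ratings if v != user and items & set(ratings[v])}

def cosine(u, v):
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
    nu = math.sqrt(sum(r * r for r in ratings[u].values()))
    nv = math.sqrt(sum(r * r for r in ratings[v].values()))
    return dot / (nu * nv)

def knn(user, k=1):
    ranked = sorted(candidates(user), key=lambda v: cosine(user, v),
                    reverse=True)
    return ranked[:k]
```

Restricting similarity computation to graph-reachable candidates is what lets the full pairwise comparison be skipped on sparse rating data.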
A Collaborative Filtering Recommendation Method Based on Differential Privacy
He Ming, Chang Mengmeng, Wu Xiaofei
2017, 54(7):  1439-1451.  doi:10.7544/issn1000-1239.2017.20160207
Collaborative filtering over large amounts of user data raises serious privacy risks for individuals, and protecting private information from disclosure has become one of the greatest challenges for recommender systems. Differential privacy has emerged as a new paradigm for privacy protection, with strong privacy guarantees against adversaries holding arbitrary background knowledge. Although several studies have explored privacy-enhanced neighborhood-based recommendation, little attention has been paid to privacy-preserving latent factor models. To address privacy preservation in recommender systems, a new collaborative filtering recommendation algorithm based on differential privacy is proposed in this paper, which achieves a trade-off between recommendation accuracy and privacy through matrix factorization. Firstly, user and item latent feature matrices are constructed to decrease sparsity. After that, a noisy matrix factorization model is generated by adding differential noise with the objective perturbation method, and stochastic gradient descent is used to minimize the regularized squared error and learn the model parameters. Finally, we apply the differentially private matrix factorization model to predict ratings, and conduct experiments on the MovieLens and Netflix datasets to evaluate its effectiveness. The experimental results demonstrate that our proposal is efficient and has only a limited side effect on recommendation precision.
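The general shape of noisy matrix factorization trained by SGD can be sketched as below. Note this toy uses gradient perturbation (Laplace noise folded into each residual) as a stand-in for the paper's objective perturbation scheme; all names and parameters are illustrative:

```python
import random, math

# Schematic differentially private matrix factorization: plain SGD on
# the regularized squared error, with Laplace noise injected into each
# update (a gradient-perturbation stand-in for objective perturbation).

def laplace(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_mf(ratings, n_users, n_items, k=2, lr=0.01, reg=0.02,
          epsilon=1.0, epochs=50, seed=0):
    random.seed(seed)
    P = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    noise = 1.0 / epsilon  # noise scale shrinks as the privacy budget grows
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred + laplace(noise) * lr  # noisy residual
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q
```

Larger `epsilon` means a looser privacy budget and therefore less noise, which is exactly the accuracy-versus-privacy trade-off the abstract describes.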
Mining Top-k Distinguishing Sequential Patterns Using Spark
Zhang Peng, Duan Lei, Qin Pan, Zuo Jie, Tang Changjie, Yuan Chang’an, Peng Jian
2017, 54(7):  1452-1464.  doi:10.7544/issn1000-1239.2017.20160553
A DSP (distinguishing sequential pattern) is a sequence that occurs frequently in the sequence set of a target class but infrequently in the sequence set of the non-target class. Since distinguishing sequential patterns describe the differences between two sets of sequences, mining them has wide applications, such as building sequence classifiers, characterizing biological features of DNA sequences, and analyzing the behavior of specified groups of people. Compared with mining distinguishing sequential patterns that satisfy predefined support thresholds, mining with a top-k contrast measure avoids users setting improper support thresholds and is thus more user-friendly. However, the conventional algorithm for mining top-k DSPs cannot handle large-scale sequence data sets. To break this limitation, a parallel mining method using Spark, named SP-kDSP-Miner (Spark based top-k DSP miner), is designed for mining top-k distinguishing sequential patterns from large-scale sequence data sets. Furthermore, to improve the efficiency of SP-kDSP-Miner, a novel candidate pattern generation strategy, several pruning strategies, and a parallel computing method for the contrast scores of candidate patterns are proposed, all tailored to the characteristics of the Spark framework. Experiments on both real-world and synthetic data sets demonstrate that SP-kDSP-Miner is effective, efficient and scalable.
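The contrast measure at the heart of DSP mining can be sketched on a single machine as follows, using support-difference as the contrast score and naive candidate enumeration in place of the paper's pruned, Spark-parallelized search:

```python
from itertools import product

# Sequential toy of top-k distinguishing sequential pattern mining:
# a pattern's contrast score is its support in the target sequence set
# minus its support in the non-target set. Candidate generation here is
# deliberately naive; SP-kDSP-Miner prunes and parallelizes this search.

def is_subseq(pattern, seq):
    it = iter(seq)
    return all(c in it for c in pattern)  # ordered, with gaps allowed

def support(pattern, seqs):
    return sum(is_subseq(pattern, s) for s in seqs) / len(seqs)

def top_k_dsp(target, nontarget, k=2, max_len=2):
    alphabet = sorted({c for s in target for c in s})
    scored = []
    for length in range(1, max_len + 1):
        for pat in product(alphabet, repeat=length):
            score = support(pat, target) - support(pat, nontarget)
            scored.append((score, pat))
    scored.sort(key=lambda x: (-x[0], x[1]))
    return scored[:k]
```

Because only a top-k result is wanted, no support threshold has to be guessed, which is the usability advantage the abstract highlights.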
An Audience Targeting Algorithm Based on Neighbor Choosing Strategy
Zhou Meng, Zhu Fuxi
2017, 54(7):  1465-1476.  doi:10.7544/issn1000-1239.2017.20160360
Audience targeting, which discovers prospective target users by analyzing seed users' behavior, is an important technology in online advertising recommendation systems, and existing audience targeting techniques mostly rely on collaborative filtering algorithms. However, traditional collaborative filtering algorithms suffer from low precision and weak anti-attack capability. To solve these problems, an audience targeting algorithm based on a neighbor choosing strategy is proposed. Firstly, users whose behavior is similar to that of the seed audiences are chosen dynamically by means of user behavior similarity. Then, based on users' features and behavior, the neighbors of each seed user are chosen from these behavior-similar audiences by user similarity, and all the neighbors are taken as candidate audiences. Finally, the prospective audiences are chosen from the candidate users by the neighbor-choosing-based audience targeting algorithm, completing the audience targeting task. Experimental results on real-world advertisement datasets show that, compared with existing methods, the proposed algorithm not only improves precision but also enhances anti-attack capability.
A Method for Monotonic Classification Based on Decision Forest
Xu Hang, Wang Wenjian, Ren Lifang
2017, 54(7):  1477-1487.  doi:10.7544/issn1000-1239.2017.20160154
Monotonic classification is an ordinal classification problem in which a monotonic constraint holds between the features and the class. Some existing methods deal well with monotonic classification on nominal datasets, but for monotonic classification on numeric datasets, their classification accuracy and running efficiency are limited. In this paper, a monotonic classification method based on decision forest (MCDF) is proposed. A sampling strategy is designed to generate decision trees so that the sampled training subsets have a distribution consistent with the original training dataset, while sample weights curb the influence of non-monotonic noise data. This effectively improves running efficiency while maintaining high classification performance, and it can also determine the number of trees in the decision forest automatically. A solution is also provided for resolving classification conflicts among different trees when the decision forest assigns the class of a sample. The proposed method can handle not only nominal data but also numeric data. Experimental results on artificial, UCI and real datasets demonstrate that the proposed method improves monotonic classification performance and running efficiency, reduces the length of classification rules, and can solve monotonic classification problems on large datasets.
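The monotonic constraint itself is easy to state in code: if one sample is dominated feature-wise by another, its label must not be higher. A small illustrative checker (data layout assumed, not from the paper) for the non-monotonic noise that MCDF's sample weights guard against:

```python
# A dataset is monotonically consistent when no sample dominated
# feature-wise by another carries a strictly higher class label.
# Samples are (feature_tuple, label) pairs; layout is illustrative.

def dominated(x, y):
    # x is dominated by y if every feature of x is <= that of y.
    return all(a <= b for a, b in zip(x, y))

def monotonic_violations(samples):
    # Return every ordered pair (x, y) that breaks the constraint.
    return [(x, y) for x, lx in samples for y, ly in samples
            if dominated(x, y) and lx > ly]
```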
Analyzing Rating Data and Modeling Dynamic Behaviors of Users Based on the Bayesian Network
Wang Fei, Yue Kun, Sun Zhengbao, Wu Hao, Feng Hui
2017, 54(7):  1488-1499.  doi:10.7544/issn1000-1239.2017.20160556
With the rapid development of Web 2.0 and e-commerce applications, large-scale online rating data are generated, which makes it possible to analyze users' behavior data and model user behaviors. Considering the dynamic nature of rating data and user behaviors, in this paper we adopt a Bayesian network with a latent variable (abbreviated as the latent variable model) as the framework for describing mutual dependencies and the corresponding uncertainties, and construct a model that reflects not only the uncertainty of dependence relationships among attributes in rating data but also the dynamic nature of user behaviors. We first adopt the Bayesian information criterion (BIC) as the fitness measure between a candidate model and the rating data, and propose a scoring-and-search based method to construct the latent variable model. Then, we give a method for filling latent variable values based on the expectation maximization (EM) algorithm. Further, we propose a method for constructing the latent variable model between adjacent time slices based on conditional mutual information and the irreversibility of time series. Finally, experimental results on the MovieLens data set verify the efficiency and effectiveness of the proposed method.
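The BIC score used in such scoring-and-search construction balances log-likelihood against model complexity. A minimal single-variable illustration (the paper scores full candidate structures; this reduced form is only to show the shape of the criterion):

```python
import math
from collections import Counter

# BIC for one discrete variable: maximized log-likelihood of the data
# under the empirical categorical distribution, minus (d/2) * log N,
# where d is the number of free parameters. Higher is better.

def bic_score(data):
    n = len(data)
    counts = Counter(data)
    log_lik = sum(c * math.log(c / n) for c in counts.values())
    d = len(counts) - 1  # free parameters of the categorical distribution
    return log_lik - 0.5 * d * math.log(n)
```

In structure search, each candidate edge set gets such a score (summed over its local conditional distributions), and the search keeps the highest-scoring structure.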
Local Optimal Granularity Selections in Incomplete Multi-Granular Decision Systems
Gu Shenming, Gu Jinyan, Wu Weizhi, Li Tongjun, Chen Chaojun
2017, 54(7):  1500-1509.  doi:10.7544/issn1000-1239.2017.20160349
Granular computing is an approach to knowledge representation and data mining. From the viewpoint of granular computing, the notion of a granule is interpreted as one of the numerous small particles forming a larger unit. In many real-life applications, data sets with hierarchical scale structures contain different granules at different levels of scale. Many people apply granular computing to problem solving by considering multiple levels of granularity, which allows us to focus on solving a problem at the most appropriate level of granularity, ignoring unimportant and irrelevant details. In this paper, rule extraction based on local optimal granularities in incomplete multi-granular decision systems is explored. Firstly, the concept of an incomplete multi-granular decision system is introduced. Then the notions of the optimal granularity and the local optimal granularity in consistent incomplete multi-granular decision systems are defined, and approaches to attribute reduction and rule extraction based on local optimal granularities are illustrated. Generalized decisions in inconsistent incomplete multi-granular decision systems are further introduced, the generalized optimal granularity and the generalized local optimal granularity are defined, and approaches to attribute reduction and rule extraction based on generalized local optimal granularities are investigated. Finally, experimental results on public datasets are discussed.
Lattice-Based Forward Secure and Certificateless Signature Scheme
Xu Qian, Tan Chengxiang, Feng Jun, Fan Zhijie, Zhu Wenye
2017, 54(7):  1510-1524.  doi:10.7544/issn1000-1239.2017.20160427
Certificateless signature schemes solve the key escrow problem inherent in traditional identity-based signature schemes. The secret key of a user in a certificateless signature scheme consists of two parts: a partial secret key generated by the key generation centre, and a secret value chosen by the user. However, there are still three points to be improved in such schemes. Firstly, although some lattice-based certificateless signature schemes in the random oracle model have been proposed to achieve post-quantum security, their standard model counterparts remain unrealized. Secondly, most existing lattice-based certificateless signature schemes only consider outside attackers and neglect threats from semi-trusted users. Thirdly, existing certificateless signature schemes all rely on the secrecy of the signing key, which cannot be guaranteed due to the key exposure problem. In this paper, building on a forward secure and certificateless signature scheme in the random oracle model, we propose the first lattice-based certificateless signature scheme that is provably secure in the standard model, eliminating the key exposure and key escrow problems without introducing a third-party proxy. Based on the small integer solution problem, we prove that our schemes guarantee forward security and strong existential unforgeability against adaptive chosen message and adaptive chosen identity attacks.
Impossible Differential Attack on Crypton
Cui Jingyi, Guo Jiansheng, Liu Yipeng
2017, 54(7):  1525-1536.  doi:10.7544/issn1000-1239.2017.20160415
Crypton, an AES candidate designed on the basis of Square, is an SP-network block cipher that has attracted worldwide attention for its excellent hardware performance. The security of the Crypton block cipher under impossible differential attack is studied in this paper. By analyzing the properties of Crypton's diffusion layer and nonlinear layer, and combining the quick sort technique, the divide-and-conquer strategy and the early abort technique, the impossible differential attack on 7-round Crypton is improved with lower data and time complexity. Using 4 impossible differential distinguishers in parallel, combined with properties of the key schedule, the master key of 7-round Crypton is recovered. Extending the 7-round attack by one more round, an attack on 8-round Crypton-256 recovers the 256-bit key with a data complexity of 2^103 chosen plaintexts, a time complexity of 2^214 8-round encryptions, and a memory complexity of 2^154.4 B. The results show that, with these techniques and the properties of Crypton, the attacks proposed in this paper are the best known impossible differential attacks on Crypton. These techniques can also be used to analyze other SP-network block ciphers.
The Malware Detection Based on Data Breach Actions
Wang Lina, Tan Cheng, Yu Rongwei, Yin Zhengguang
2017, 54(7):  1537-1548.  doi:10.7544/issn1000-1239.2017.20160436
The advanced persistent threat (APT) attack is a big challenge to enterprise and government data protection. The use of 0-day exploits is prevalent in malware capable of APT attacks, and traditional security systems relying on known features can hardly detect it. In order to detect malware that steals sensitive information, we first analyze existing APT malware and describe the steps of their attacks. Based on this analysis, we propose a malware detection method for this kind of malware that focuses on data breach actions. Combining anomaly detection with misuse detection, the method enables persistent monitoring, protecting hosts and the network at low cost. Also proposed are inference rulesets that describe the high-level malicious events observed in attack steps. Once suspicious events are detected, low-level actions from the hosts and the network are further collected and correlated to high-level malicious events by the inference rules. Eventually we reconstruct the data breach attack procedure to judge whether an attack exists. Simulation experiments verify the effectiveness of the method.
Protocols for Secure Computation of Set-Inclusion with the Unencrypted Method
Chen Zhenhua, Li Shundong, Wang Daoshun, Huang Qiong, Dong Lihong
2017, 54(7):  1549-1556.  doi:10.7544/issn1000-1239.2017.20160034
Most existing protocols for secure computation of set-inclusion are based on multiple public-key encryption algorithms and cannot be computed publicly, which makes the computational complexity high and the range of applications limited. To cope with these problems, we design two protocols for secure computation of set-inclusion with an unencrypted method. Protocol 1 first transforms the original problem into the scalar-product problem, and then solves it with a hard problem in cryptography. Next, we present the practical protocol 2 for a scenario with an untrusted third party, in which we solve the set-inclusion problem by combining bilinear pairing with a hard problem in cryptography. Lastly, we give the security proofs of the two protocols, a performance analysis, and a comparison with other protocols. Neither of our protocols employs a public-key encryption algorithm or multiple matching searches, thus avoiding the intricate procedures of key generation, encryption and decryption. This makes our protocols more efficient and concise than previously known ones. In addition, we extend secure computation of set-inclusion to a new application scenario, in which the set-inclusion relation can be determined publicly without revealing the privacy of either set even if an untrusted third party exists.
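The reduction behind protocol 1 can be shown in miniature: over a public universe, encode each set as a 0/1 characteristic vector; then A ⊆ B exactly when the scalar product of the two vectors equals |A|. The secure protocol evaluates that scalar product privately; the sketch below computes it in the clear:

```python
# Set-inclusion via scalar product: A is a subset of B iff the dot
# product of their characteristic vectors over a shared universe
# equals |A|. The privacy-preserving part of the protocol is omitted.

def char_vec(s, universe):
    return [1 if x in s else 0 for x in universe]

def subset_via_dot(a, b, universe):
    va, vb = char_vec(a, universe), char_vec(b, universe)
    return sum(x * y for x, y in zip(va, vb)) == len(a)
```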
Dynamically Detecting Multiple Types of Deadlocks Using Lock Allocation Graphs
Yu Zhen, Su Xiaohong, Qiu Jing
2017, 54(7):  1557-1568.  doi:10.7544/issn1000-1239.2017.20160369
Deadlock bugs are hard to expose, reproduce and diagnose. Once triggered, they cause increased response time, decreased throughput, or even crashes in the target multithreaded programs. However, current deadlock detection techniques can detect only one mutex-caused deadlock at a time. In order to detect, in a single run, all possible deadlocks caused by multiple threads and multiple mutexes or rwlocks, this paper proposes the concept of the multiple-type lock allocation graph (MLAG) and its construction method, together with an MLAG-based dynamic algorithm that detects multiple types of deadlocks. By instrumenting all lock/unlock operations on mutexes and rwlocks, our method dynamically constructs and updates in real time an MLAG that reflects the synchronization state of the target program. It detects deadlock bugs by finding cycles in the MLAG and checking whether each cycle is a deadlock cycle. When a deadlock is detected, the method outputs information about the bug to assist debugging. The experimental results on benchmarks show that our method is effective, successfully detecting all 13 deadlock bugs of 5 types; lightweight, incurring at most a 10.15% slowdown on openldap-2.2.20; and scalable, since the overhead increases gently with an exponentially increasing number of threads.
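The cycle check at the core of lock-allocation-graph detection can be sketched as a DFS over held-by and waiting-for edges. This single-type analogue omits the read/write lock modes that a real MLAG distinguishes:

```python
# Nodes are threads and locks; an edge lock -> thread means "held by",
# thread -> lock means "waiting for". A cycle in this directed graph
# is a potential deadlock. Simplified: no rwlock modes.

def has_cycle(edges):
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                return True          # back edge: cycle found
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False
    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in graph)

# Classic ABBA deadlock: T1 holds L1 and waits for L2,
# while T2 holds L2 and waits for L1.
deadlock = has_cycle([("L1", "T1"), ("T1", "L2"),
                      ("L2", "T2"), ("T2", "L1")])
```

A dynamic detector additionally has to decide whether a found cycle is a genuine deadlock cycle (e.g. two read holders of an rwlock do not block each other), which is where the multiple-type information matters.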
QEMU-Based Dynamic Function Call Tracing
Xiang Yong, Cao Ruidong, Mao Yingming
2017, 54(7):  1569-1576.  doi:10.7544/issn1000-1239.2017.20160094
Function calls have always been an important research topic in Linux kernel analysis. There are two main approaches to obtaining function calls: static analysis and dynamic analysis. Dynamic tracing provides accurate and real-time function call information, which is of great help in analyzing and debugging software. Considering that existing tools need particular compile options or produce incomplete tracing data, a new dynamic function call tracing tool supporting multiple CPU architectures is designed and implemented based on the open source emulator QEMU. It provides function call and function return information, including that of the Linux kernel booting phase, on three architectures: x86_32, x86_64 and ARM. While the system is running, the tool intercepts procedure call and return assembly instructions and logs the necessary state information to a file. Based on the property that such instructions must be the last instruction of a QEMU translation block, the number of checked instructions is reduced and efficiency is improved. Only the symbol table of the program, not its source code, is needed to parse the function call data. Test results show that the behavior indicated by the tracing data matches the corresponding source code. The tool has higher performance and supports more CPU architectures than S2E, and it is easy to extend to other architectures.
Web Database top-k Diverse Keyword Query Suggestion Approach
Meng Xiangfu, Bi Chongchun, Zhang Xiaoyan, Tang Xiaoliang, Tang Yanhuan
2017, 54(7):  1577-1591.  doi:10.7544/issn1000-1239.2017.20160005
Web database users often express their query intentions with keywords familiar to them, which may lead to unsatisfactory results due to the limitations of the users' knowledge. Providing the top-k diverse and relevant queries can broaden users' knowledge scope and thus help them formulate more effective queries. To address this problem, this paper proposes a top-k diverse keyword query suggestion approach. It first leverages the co-occurrence frequency of, and correlations between, keywords in the query history to measure intra- and inter-keyword couplings, and then generates a semantic matrix that preserves the coupling relationships between keywords. Based on the semantic matrix, the semantic similarities between keyword queries can be measured using a kernel function. To quickly provide the top-k diverse and semantically related queries, the approach first finds typical queries in the query history using probabilistic density estimation, then finds representative queries among the typical queries, and creates an order for each representative query according to the similarities of the remaining typical queries to it. When a new query arrives, the similarities between the given query and the representative queries are computed, and the top-k diverse and semantically related queries are selected by running the threshold algorithm (TA) over the orders of the representative queries. The experimental results demonstrate that both the keyword coupling relationship and the query semantic similarity measures achieve high accuracy, and the effectiveness of the top-k diverse query selection method is also demonstrated.
A Multi-Way Spatial Join Querying Processing Algorithm Based on Spark
Qiao Baiyou, Zhu Junhai, Zheng Yujie, Shen Muchuan, Wang Guoren
2017, 54(7):  1592-1602.  doi:10.7544/issn1000-1239.2017.20160558
Aiming at spatial join query processing in cloud computing systems, a multi-way spatial join query processing algorithm named BSMWSJ, based on the Spark platform, is proposed. In this algorithm, the whole data space is divided into equal-sized grid cells by a grid partition method, and the spatial objects of each data set are distributed into these grid cells according to their spatial locations; objects in different grid cells are then processed in parallel. For multi-way spatial join processing, a boundary filtering method is proposed to discard useless data: it computes the MBRs of the candidate results generated by the previous join step and uses these MBRs to filter the data sets of subsequent joins, thereby filtering out useless spatial objects and reducing redundant projection and replication. At the same time, a duplication avoidance strategy reduces the output of redundant results and further minimizes the cost of subsequent join processing. Extensive experiments on synthetic and real data sets show that the proposed BSMWSJ algorithm has obvious advantages and better performance than existing multi-way spatial join query processing algorithms.
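The grid partitioning and duplication avoidance steps can be shown in a toy two-way join. Each rectangle is replicated to every overlapping grid cell, and a joined pair is reported only by the cell owning a designated reference point of the intersection, so each result appears exactly once. Cell size and data layout are illustrative:

```python
# Rectangles are (xmin, ymin, xmax, ymax). A pair is reported only in
# the cell containing the bottom-left corner of the intersection,
# which is the standard reference-point duplicate-avoidance trick.

def cells_of(rect, cell):
    xmin, ymin, xmax, ymax = rect
    return {(cx, cy)
            for cx in range(int(xmin // cell), int(xmax // cell) + 1)
            for cy in range(int(ymin // cell), int(ymax // cell) + 1)}

def overlaps(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def grid_join(r_set, s_set, cell=10):
    buckets = {}
    for tag, rects in (("R", r_set), ("S", s_set)):
        for rect in rects:
            for c in cells_of(rect, cell):
                buckets.setdefault(c, {"R": [], "S": []})[tag].append(rect)
    results = set()
    for (cx, cy), b in buckets.items():  # each cell joined independently
        for r in b["R"]:
            for s in b["S"]:
                if overlaps(r, s):
                    px, py = max(r[0], s[0]), max(r[1], s[1])
                    if int(px // cell) == cx and int(py // cell) == cy:
                        results.add((r, s))  # owning cell only
    return results
```

On Spark, each grid cell's bucket would become one partition's work, and the per-cell loops would run as parallel tasks.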
An Access Control Method Supporting Multi-User Collaborative Edit in Cloud Storage
Shi Jiaoli, Huang Chuanhe, He Kai, Shen Xieyang, Hua Chao
2017, 54(7):  1603-1616.  doi:10.7544/issn1000-1239.2017.20151135
In attribute-based access control for cloud storage, most research focuses on read permission control when multiple users read the same outsourced data simultaneously; it does not consider write permission control when multiple users modify the same data simultaneously. The multi-user collaborative edit scenario raises several challenges: 1) a data owner with limited computation, storage and communication capabilities would like the cloud to assist with write permission control, but without letting it learn the content of the data, the matching results, or even predictions of users' write permissions; 2) Boolean formulas cannot describe write permission policies; 3) bilinear pairing operations bring great computational cost. In this work, a collaborative edit access control method for cloud storage is presented: a data owner defines a write permission policy represented by a circuit, and the semi-trusted cloud decides whether a write succeeds by matching the write policy, without being able to predict the acceptability of the next edit request. Analyses and simulations show that our method provides multi-user collaborative access control for cloud storage, and that with cloud-aided decryption, both the storage cost and the encryption/decryption computation cost at the user end are low for read permission control.
A Distributed Deadline Propagation Approach to Reduce Long-Tail in Datacenters
Ren Rui, Ma Jiuyue, Sui Xiufeng, Bao Yungang
2017, 54(7):  1617-1628.  doi:10.7544/issn1000-1239.2017.20160247
Long-tail latency is inevitable, and may be amplified, for highly modular datacenter applications such as Bing, Facebook, and Amazon's retail platform, due to resource sharing, queuing, background maintenance activities, etc. How to tolerate latency variability in shared environments is therefore crucial in datacenters. This paper proposes a distributed deadline propagation (D²P) approach for datacenter applications to reduce long-tail latency. The key idea of D²P is inspired by the traffic light system in Manhattan, New York City, where one can enjoy a chain of green lights after one stop at a red light: it allows local nodes to perceive global deadline information and propagates that information among distributed nodes, so that local nodes can leverage it to schedule requests and adjust processing speed. We then propose a stage-service model and a parallel-unit model to describe the sequential/dependent pattern and the partition/aggregate pattern, and implement a distributed deadline propagation framework. Finally, on top of this framework, we apply a D²P-enabled deadline-aware scheduling algorithm in our experiments; the preliminary results show that D²P has the potential to reduce long-tail latency in datacenters by letting local nodes leverage the propagated deadline information.
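The propagation idea can be sketched for a single stage: each request carries its remaining global deadline, the stage serves the most urgent request first (earliest-deadline-first here, one possible deadline-aware policy), and forwards it with the deadline reduced by the stage's service time. Names and the EDF policy are illustrative, not the paper's exact algorithm:

```python
import heapq

# One stage in a deadline-propagation pipeline: requests arrive as
# (remaining_deadline, request_id); the stage drains its queue in
# earliest-deadline-first order and forwards each request with its
# remaining deadline decremented by the stage's service time.

def run_stage(queue, service_time):
    heapq.heapify(queue)                  # order by remaining deadline
    forwarded = []
    for _ in range(len(queue)):
        deadline, req = heapq.heappop(queue)
        forwarded.append((deadline - service_time, req))
    return forwarded
```

Chaining such stages is what lets a downstream node see how much global slack a request still has, instead of scheduling blindly on local arrival order.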