ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 January 2020, Volume 57 Issue 1
A Flexible Accuracy-Controllable Searchable Symmetric Encryption Scheme
Li Ximing, Tao Ruyu, Su Chen, Huang Qiong, Huang Xinyi
2020, 57(1):  3-16.  doi:10.7544/issn1000-1239.2020.20190233
In traditional keyword-based searchable symmetric encryption, the keyword set is usually produced by a keyword extraction algorithm, so the content and number of keywords are limited by that algorithm. Consequently, in a keyword-based encrypted search system, users cannot search for content beyond the keyword set generated when the system was first built, which limits the applicability of encrypted search technology. To address this problem, this paper proposes a flexible accuracy-controllable searchable symmetric encryption scheme (FASSE). By flexibly generating keywords and indexes from document summaries during system operation, FASSE reduces the dependence on a fixed keyword set and effectively improves the flexibility of encrypted search. FASSE provides three basic searches, namely one-shot search, reinforcement search, and filter search, corresponding respectively to the cases where the keyword record is found in the dictionary exactly once, where the keyword record is not found in the dictionary, and where keyword records are found in the dictionary and abstracts multiple times. The system also combines the three searches into a fuzzy reinforcement search to further enhance its practicability. FASSE is implemented in Java, and the final experiments show that FASSE takes 114.26 ms on average to search each paper in the https://eprint.iacr.org/complete/ paper dataset.
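As an illustration of the dictionary lookups behind the three search cases, here is a minimal token-based encrypted-index sketch (a generic searchable-symmetric-encryption pattern, not the actual FASSE construction; the key, index layout, and document set are hypothetical):

```python
import hashlib
import hmac

def token(key: bytes, keyword: str) -> str:
    """Deterministic search token: the server never sees the plaintext keyword."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(key: bytes, docs: dict) -> dict:
    """Map each keyword's token to the ids of documents containing it."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(token(key, w), []).append(doc_id)
    return index

def search(index: dict, key: bytes, keyword: str):
    """One-shot search is a single dictionary lookup; an empty result
    models the 'keyword record not found' case."""
    return index.get(token(key, keyword), [])

key = b"secret-key"  # hypothetical key for illustration
idx = build_index(key, {1: ["cloud", "nvm"], 2: ["cloud", "skyline"]})
```

The "found / not found / found multiple times" distinction of the three FASSE searches corresponds to how many index entries such a lookup hits.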
A Survey on Many-Lights Rendering Methods
Liu Yifan, Xu Kun
2020, 57(1):  17-31.  doi:10.7544/issn1000-1239.2020.20190208
Many-lights rendering has always been an important research topic in computer graphics. It is one of the important methods for achieving global illumination effects, and it is in great demand in related industries such as games, movies, and animation. However, efficient many-lights rendering remains a big challenge in both off-line rendering and real-time rendering. This paper reviews the main progress of many-lights rendering methods in recent years, whose major goal is to improve rendering efficiency. In the field of off-line rendering, we first review algorithms that accelerate visibility test computation to reduce the average rendering time of a single light source. Then, we discuss light source clustering algorithms and review accelerated rendering algorithms directly based on light source clustering. Different strategies for light source clustering are discussed, including strategies using hierarchical structures and strategies based on matrix analysis. After that, we review importance sampling methods based on light clustering. In the field of real-time rendering, we introduce several rendering algorithms based on light culling. We also compare and analyze the advantages and disadvantages of the various methods, and summarize the research trends and challenges of many-lights rendering.
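The importance sampling idea behind light clustering can be sketched in a few lines: lights are drawn with probability proportional to their estimated contribution (a textbook Monte Carlo pattern, not any specific surveyed method; the contribution values are made up):

```python
import random

def sample_lights(contribs, num_samples, seed=0):
    """Monte Carlo estimate of the total light contribution.

    Lights are sampled with probability proportional to their (known)
    contribution; the unbiased estimator averages contrib / pdf.
    """
    rng = random.Random(seed)
    total = sum(contribs)
    pdf = [c / total for c in contribs]
    est = 0.0
    for _ in range(num_samples):
        i = rng.choices(range(len(contribs)), weights=pdf)[0]
        est += contribs[i] / pdf[i]
    return est / num_samples
```

When the sampling density is exactly proportional to the contribution, every sample returns the true total, which is why good clustering-based importance sampling dramatically reduces variance.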
Recent Progress in Large-Scale Ridesharing Algorithms
Xu Yi, Tong Yongxin, Li Wei
2020, 57(1):  32-52.  doi:10.7544/issn1000-1239.2020.20190239
Ridesharing, a new shared mobility service in which passengers with different origins and destinations agree to take the same vehicle and share the cost, is experiencing widespread adoption with the development of the sharing economy. In the era of the mobile Internet and ubiquitous computing, ridesharing has become large-scale and exhibits new characteristics, including enormous data, dynamic scenarios, diverse objectives, and varied applications. These characteristics fundamentally complicate the ridesharing problem and have spurred extensive new research on large-scale ridesharing algorithms. Research on the social aspects of large-scale ridesharing has also attracted increasing attention. This paper aims at a systematic and comprehensive survey of recent advances in large-scale ridesharing algorithms. We first introduce the basic concepts and workflow of ridesharing, and then systematically review existing algorithms for route planning, the core algorithmic problem in large-scale ridesharing. We also discuss social aspects critical for practical large-scale ridesharing applications, such as incentive mechanisms, privacy, and safeguard measures, and finally point out potential future research directions.
A Survey of Smart Health: System Design from the Cloud to the Edge
Qiu Yu, Wang Chi, Qi Kaiyue, Shen Yao, Li Chao, Zhang Chengmi, Guo Minyi
2020, 57(1):  53-73.  doi:10.7544/issn1000-1239.2020.20190002
Smart health is a real-time, intelligent, ubiquitous healthcare service based on IoT-aware networks and sensing infrastructure. Thanks to the rapid development of related technologies such as cloud computing, fog computing, and IoT, research on smart health is gradually getting on the right track. This paper analyzes research on smart health in recent years and then discusses the development of smart health from both the cloud and the edge, covering cloud computing, fog computing, IoT sensors, blockchain, and privacy and security. At present, research on the cloud side focuses on how to use the cloud to address the challenges of massive health data and improve service performance, including the storage, retrieval, and computation of health big data in the cloud. At the edge, research focuses on the collection, transmission, and computation of health data, including sensors and wearable devices for collecting health data, various sensor networks, and how to process health data and improve service performance at the edge. As an emerging technology, blockchain has a wide range of applications in smart health. We discuss typical smart health applications, blockchain in smart health, and the privacy and security issues related to smart health. Finally, we present challenges and opportunities for smart health in the edge computing era.
Transport Protocols for Data Center Networks: A Survey
Zeng Gaoxiong, Hu Shuihai, Zhang Junxue, Chen Kai
2020, 57(1):  74-84.  doi:10.7544/issn1000-1239.2020.20190519
Driven by the needs of prevailing Web applications and services (e.g., search, online retailing, and cloud computing), data centers (DCs) have been built at an unforeseen rate and scale around the globe in recent decades. In particular, data center networks (DCNs) have drawn great attention from both academia and industry. Under this background, this paper surveys one of the key aspects of DCNs: the transport-layer protocol. While transport protocols have a long history on the Internet, they were seldom systematically explored in the DCN context until the 2010s. DCNs present characteristics different from the Internet (e.g., a single administrative domain and a homogeneous network structure), which brings both opportunities and challenges for transport protocol design. Motivated by this, a number of transport protocols have been proposed. This paper classifies the early work (2010—2015) on DCN transport design into three categories—endhost-based congestion control, switch-assisted arbitration, and network priority scheduling—and discusses the pros and cons of the work in each category. Finally, the paper analyzes the recent research trends in DCN transport design: receiver-driven proactive congestion control and RDMA (remote direct memory access) transport design.
A Survey of Data Consistency Research for Non-Volatile Memory
Xiao Renzhi, Feng Dan, Hu Yuchong, Zhang Xiaoyi, Cheng Liangfeng
2020, 57(1):  85-101.  doi:10.7544/issn1000-1239.2020.20190062
As DRAM technology faces a bottleneck in density scaling and the problem of high leakage power, novel non-volatile memory (NVM) has drawn extensive attention from academia and industry due to its superiority in non-volatility, high density, byte addressability, and low static power consumption. Novel non-volatile memories such as phase change memory (PCM) are likely to substitute or complement DRAM as system main memory. However, because of the non-volatility of NVM, when a system fails, the data stored in NVM may be left inconsistent by partial updates or by write reordering in the memory controller. To guarantee the consistency of data in NVM, it is essential to ensure the serialization and persistence of NVM write operations. NVM also has inherent drawbacks, such as limited write endurance and high write latency, so reducing the number of writes can prolong the lifetime of NVM and improve the performance of NVM-based systems, as long as data consistency is still guaranteed. This paper focuses on data consistency in NVM, especially for persistent indexes, file systems, and persistent transactions, aiming to provide better solutions or ideas for achieving low data-consistency overhead. Finally, possible research directions for NVM data consistency are pointed out.
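The ordering requirement—persist the intended values before updating in place—can be illustrated with a toy redo-log simulation. This is a sketch of the general principle only, not any surveyed system; real NVM code would additionally need cache-line flushes and memory fences to enforce persist order:

```python
def update_with_redo_log(store, log, updates, crash_after_apply=None):
    """Write-ahead redo logging: persist the new values and a commit mark
    before touching the store, so a crash mid-update never leaves a
    partial state that cannot be repaired."""
    log.clear()
    log.update(updates)          # 1) persist the redo records
    log["committed"] = True      # 2) persist the commit mark
    for n, (k, v) in enumerate(updates.items()):
        if crash_after_apply is not None and n == crash_after_apply:
            return               # simulated power failure mid-apply
        store[k] = v             # 3) apply the updates in place

def recover(store, log):
    """Replay a committed log after a crash; an uncommitted log is discarded."""
    if log.get("committed"):
        for k, v in log.items():
            if k != "committed":
                store[k] = v
```

Without the log, the simulated crash would leave the store permanently half-updated—exactly the partial-update inconsistency described above.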
Reliability in Cloud Computing System: A Review
Duan Wenxue, Hu Ming, Zhou Qiong, Wu Tingming, Zhou Junlong, Liu Xiao, Wei Tongquan, Chen Mingsong
2020, 57(1):  102-123.  doi:10.7544/issn1000-1239.2020.20180675
As a new computing paradigm, cloud computing has attracted extensive attention from both academia and industry. Based on resource virtualization technology, cloud computing provides users with services in the form of infrastructure, platform, and software in a “pay-as-you-go” manner. Meanwhile, since cloud computing provides highly scalable computing resources, more and more enterprises and organizations choose cloud computing platforms to deploy their scientific or commercial applications. However, with the increasing number of cloud users, cloud data centers continuously expand and their architecture becomes increasingly complex, leading to growing runtime failures in cloud computing systems. Therefore, how to ensure system reliability in cloud computing systems of large scale and complex architecture has become a huge challenge. This paper first summarizes various failures in cloud systems, introduces several methods to evaluate the reliability of cloud computing, and describes some key fault management mechanisms. Since fault management techniques inevitably increase the energy consumption of cloud systems, this paper also reviews current research on the trade-off between reliability and energy efficiency in cloud computing. In the end, we discuss some major challenges in current research on cloud computing reliability and conclude the paper.
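For a concrete sense of the reliability evaluation mentioned above, the classic series/parallel reliability formulas (standard textbook results, not a method specific to this survey) can be written directly:

```python
from math import prod

def series_reliability(rs):
    """A chain of components fails if any single component fails."""
    return prod(rs)

def parallel_reliability(rs):
    """Replicated components fail only if every replica fails."""
    return 1 - prod(1 - r for r in rs)
```

For example, two components at 0.9 reliability give 0.81 in series but 0.99 when replicated in parallel—the quantitative reason fault-tolerant replication improves cloud reliability at the cost of extra energy.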
Personalized Recommendation Model Based on Quantifier Induced by Preference
Guo Kaihong, Han Hailong
2020, 57(1):  124-135.  doi:10.7544/issn1000-1239.2020.20190166
A novel model for extracting users' expected values is presented, which is used to establish a user-preference-based personalized quantifier. A sample of multi-attribute alternatives is given first, and the involved user is then asked to provide a ranking of the alternatives in this sample on the basis of his/her personal preference or decision attitude. With this ranking, an extraction model for the user's expected value of the sample information is constructed. The modeling follows the principles of OWA (ordered weighted averaging) aggregation and TOPSIS (technique for order preference by similarity to ideal solution), upon which the developed technique is based. The user's preference or attitude can then be derived from this expected value to help build a personalized quantifier, which is used to aggregate the attribute values of new products with the aim of realizing personalized recommendation. Case studies and experimental results show that the developed model and quantifier can well capture and reflect many varieties of personality characteristics of users with different ability levels and knowledge structures. As such, the developed technique can be considered an effective tool in practical applications for finding the “satisfactory solutions” in accord with some particular attitude, rather than the “optimal solutions” in general terms, with greater applicability and flexibility than similar methods.
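The OWA aggregation underlying such a quantifier can be sketched as follows; the weight vector shown is hypothetical and simply illustrates how quantifier-induced weights aggregate ordered attribute values:

```python
def owa(values, weights):
    """Ordered weighted averaging: weights attach to rank positions
    (largest value first), not to particular attributes."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# An "optimistic" weight vector puts most weight on the best attributes;
# a personalized quantifier would derive these weights from the user's
# expected value instead of fixing them by hand.
```

For instance, scores (0.3, 0.9, 0.6) aggregated with weights (0.5, 0.3, 0.2) yield 0.5·0.9 + 0.3·0.6 + 0.2·0.3 = 0.69.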
Mixture of Variational Autoencoder
Chen Yarui, Jiang Shuoran, Yang Jucheng, Zhao Tingting, Zhang Chuanlei
2020, 57(1):  136-144.  doi:10.7544/issn1000-1239.2020.20190204
Variational autoencoder (VAE) is a generative model with continuous latent variables, where the objective function is constructed by variational approximation and both the generative part and the inference part are based on neural networks. The traditional variational autoencoder assumes that the multi-dimensional latent variables of the inference model are independent, which simplifies the inference process but loosens the lower bound of the objective function and limits the representation ability of the latent space. In this paper, we propose a mixture of variational autoencoders (MVAE), which generates data through variational autoencoder components. For this model, we first take a continuous latent vector with a Gaussian prior as the hidden layer, and a discrete latent vector with a multinomial prior as the indicator vector of the component. Then we use the reparameterization trick and stick-breaking parameterization to rewrite the variational optimization objective, and train the model parameters through stochastic gradient descent. The mixture of variational autoencoders improves inference accuracy with its more expressive model structure, expands the representation space of the latent vector through the mixture of components, and effectively solves the corresponding optimization problems through the reparameterization and stick-breaking parameterization techniques. Finally, we design comparison experiments on datasets to demonstrate the higher inference accuracy and stronger representation ability of the latent variables.
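The two parameterization tricks named in the abstract can be sketched in isolation (toy scalar versions for illustration; the actual model applies them to neural-network outputs):

```python
import math
import random

def reparameterize(mu, log_var, eps=None, rng=random):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    so the randomness is outside the parameters and gradients can flow
    through mu and log_var."""
    if eps is None:
        eps = rng.gauss(0.0, 1.0)
    sigma = math.exp(0.5 * log_var)
    return mu + sigma * eps

def stick_breaking(vs):
    """Mixture weights from stick-breaking fractions v_k in (0, 1):
    each component takes fraction v_k of the stick that remains."""
    weights, remaining = [], 1.0
    for v in vs:
        weights.append(v * remaining)
        remaining *= 1.0 - v
    weights.append(remaining)  # mass left on the final component
    return weights
```

The stick-breaking weights always sum to one, which is what lets them serve as the component indicator distribution in the rewritten objective.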
Action Recognition of Temporal Segment Network Based on Feature Fusion
Li Hongjun, Ding Yupeng, Li Chaobo, Zhang Shibing
2020, 57(1):  145-158.  doi:10.7544/issn1000-1239.2020.20190180
Action recognition is a hot research topic and a challenging task in the field of computer vision nowadays. Action recognition analysis is closely related to the network's input data type, network structure, and feature fusion. At present, the main input data of action recognition networks are RGB images and optical flow images, and the network structures are mainly based on two-stream and three-dimensional convolution. The selection of features directly affects the efficiency of recognition, and there are still many problems to be solved in multi-layer feature fusion. In view of the limitations of the RGB images and optical flow images that serve as input to the popular two-stream convolutional network, sparse features in low-rank space can effectively capture the information characteristics of moving objects in video and supplement the network input data. Meanwhile, to address the lack of information interaction in deep networks, the high-level semantic information and the low-level detailed information are combined to recognize actions together, which makes the performance of the temporal segment network more advantageous. Extensive subjective and objective comparison experiments are performed on UCF101 and HMDB51, and the results show that the proposed algorithm is significantly better than several state-of-the-art algorithms, with average accuracy rates of 97.1% and 76.7% on the two datasets, respectively. The experimental results show that our method can effectively improve the recognition rate of action recognition.
Causal Relation Extraction Based on Graph Attention Networks
Xu Jinghang, Zuo Wanli, Liang Shining, Wang Ying
2020, 57(1):  159-174.  doi:10.7544/issn1000-1239.2020.20190042
Causality represents a kind of correlation between cause and effect, where the occurrence of the cause leads to the occurrence of the effect. As the most important type of relationship between entities, causality plays a vital role in many fields such as automatic reasoning and scenario generation. Therefore, extracting causal relations is a basic task in natural language processing and text mining. Different from traditional text classification or relation extraction methods, this paper proposes a sequence labeling method to extract causal entities in text and identify the direction of causality, without relying on feature engineering or causal background knowledge. The main contributions of this paper can be summarized as follows: 1) we extend the syntactic dependency tree to a syntactic dependency graph, adopt graph attention networks in natural language processing, and introduce the concept of S-GAT (graph attention network based on syntactic dependency graph); 2) a Bi-LSTM+CRF+S-GAT model for causal extraction is proposed, which generates the causal label of each word in a sentence from the input word vectors; 3) the SemEval dataset is modified and extended, and rules are defined to relabel the experimental data with the aim of overcoming the defects of the original labeling method. Extensive experiments conducted on the expanded SemEval dataset show that our model achieves a 0.064 improvement over the state-of-the-art model Bi-LSTM+CRF+self-ATT in terms of prediction accuracy.
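A minimal illustration of the sequence-labeling view of causal extraction: each token receives a cause/effect tag. The BIO-style tag set and the example sentence below are hypothetical, chosen only to show the labeling format, not the paper's exact scheme:

```python
def causal_tags(tokens, cause_span, effect_span):
    """BIO-style per-token tags for one cause span and one effect span
    (span indices are [start, end))."""
    tags = ["O"] * len(tokens)
    for (s, e), kind in ((cause_span, "C"), (effect_span, "E")):
        for i in range(s, e):
            tags[i] = ("B-" if i == s else "I-") + kind
    return tags

# "heavy rain caused severe flooding": rain is the cause, flooding the effect
print(causal_tags(["heavy", "rain", "caused", "severe", "flooding"],
                  (0, 2), (3, 5)))
# → ['B-C', 'I-C', 'O', 'B-E', 'I-E']
```

The model's job is to predict exactly such a tag sequence from word vectors, with the C/E distinction encoding the direction of causality.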
An Approach for Reconciling Inconsistent Pairs Based on Factor Graph
Xu Yaoli, Li Zhanhuai, Chen Qun, Wang Yanyan, Fan Fengfeng
2020, 57(1):  175-187.  doi:10.7544/issn1000-1239.2020.20180691
Entity resolution (ER) is a critical and fundamental problem in data integration and data cleaning systems. Although numerous methods have been proposed for entity resolution, these approaches explicitly or implicitly depend on ad-hoc assumptions or employ different strategies. Given an ER task, many inconsistent pairs arise from the conflicting results resolved by these approaches. Reconciling these pairs without any labeled data poses great challenges: 1) without labeled data, it is impractical to estimate the performance of existing approaches and pick out the best one; 2) although an alternative is to reconcile the conflicting results into a better and consistent labeling solution, an effective reconciliation mechanism combining all hints remains to be investigated. To this end, an approach for reconciling inconsistent pairs based on a factor graph is proposed. It first obtains inconsistent and consistent pairs by running existing entity resolution approaches on the given ER task. Second, features that can indicate the matching status of inconsistent pairs are extracted by leveraging techniques such as kernel density estimation and matching-information transfer. These features are then modeled as factor functions of the factor graph, which represents a joint probability distribution with factor weights. Finally, the weight of each factor is estimated by maximum likelihood estimation, and the inconsistent pairs are reconciled according to the distribution represented by the factor graph. Experimental results on real-world datasets show that our method is effective and can outperform the state-of-the-art approach.
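The kernel density estimation step can be sketched with a plain Gaussian KDE; the score populations below are made up, serving only to show how densities under known match/non-match scores yield a feature for an undecided pair:

```python
import math

def gaussian_kde(samples, x, bandwidth=1.0):
    """Gaussian kernel density estimate at point x."""
    n = len(samples)
    norm = n * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
               for s in samples) / norm

# Similarity scores of pairs on which the base ER methods agree
# (hypothetical values): consistent matches vs consistent non-matches.
match_scores = [0.80, 0.90, 0.85]
nonmatch_scores = [0.10, 0.20, 0.15]
```

An inconsistent pair with score 0.85 has far higher density under the match population than under the non-match one; that density ratio is the kind of evidence the factor graph combines across all hints.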
Association Mining Based Consistent Service Configuration
Wang Tao, Chen Wei, Li Juan, Liu Shaohua, Su Lingang, Zhang Wenbo
2020, 57(1):  188-201.  doi:10.7544/issn1000-1239.2020.20190079
Componentized service-oriented software systems typically consist of loosely coupled heterogeneous service components, each of which contains a large number of configuration items that can be configured with high flexibility. Complex dependencies exist between service components, making their configuration items interrelated, so the operations of deploying, updating, and migrating components are prone to errors. For interrelated configuration items, changing one configuration item requires modifying the other related items; otherwise, constraints may be violated, which can cause system failures. Therefore, analyzing the associations between configuration items is one key to ensuring the reliability of a system. This paper proposes a service configuration approach based on association mining. Our approach crawls configuration files from the Internet, narrows the analysis scope to frequently changed configuration items, generates association coefficients for item pairs according to the similarity of the items' names, values, and types, determines the set of candidate item pairs with rules, and then outputs a list of ordered configuration item pairs for query. We have deployed two typical open-source software systems to validate our approach for mining associations between configuration items with case studies. Experimental results show that our approach can accurately detect most associations between configuration items.
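A hypothetical version of the association coefficient combining name, value, and type similarity—the equal weighting and Jaccard name similarity are assumptions for illustration, not the paper's exact formula:

```python
def name_similarity(a: str, b: str) -> float:
    """Jaccard similarity over name tokens split on '.'/'-' separators."""
    ta = set(a.lower().replace("-", ".").split("."))
    tb = set(b.lower().replace("-", ".").split("."))
    return len(ta & tb) / len(ta | tb)

def association(item_a: dict, item_b: dict) -> float:
    """Combine name, value, and type similarity into one coefficient
    (equal weights assumed for this sketch)."""
    name_s = name_similarity(item_a["name"], item_b["name"])
    value_s = 1.0 if item_a["value"] == item_b["value"] else 0.0
    type_s = 1.0 if item_a["type"] == item_b["type"] else 0.0
    return (name_s + value_s + type_s) / 3
```

Item pairs with a coefficient above a rule-determined threshold would become candidates in the ordered output list.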
Plagiarism Detection of Multi-Threaded Programs by Mining Behavioral Motifs
Tian Zhenzhou, Wang Ningning, Wang Qing, Gao Cong, Liu Ting, Zheng Qinghua
2020, 57(1):  202-213.  doi:10.7544/issn1000-1239.2020.20180871
Thread scheduling nondeterminism in multi-threaded programs severely perturbs dynamic birthmarking techniques, which serve as one of the most promising approaches to obfuscation-resilient software plagiarism detection. In extreme cases, a traditional dynamic birthmarking method even fails to detect plagiarism between a multi-threaded program and itself. To solve this problem, a novel method for plagiarism detection of multi-threaded programs is proposed that abstracts behavioral motifs, which are less susceptible to thread scheduling, from the similar parts of same-input-driven execution traces of a multi-threaded program. Specifically, the method captures execution traces during program runs, from which a series of motifs is inferred by performing trace pruning, gram-based matching, and bidirectional extending and abstraction, to depict the behavior of the multi-threaded program. Plagiarism between two programs is then determined by measuring the similarity of their motifs against a threshold. An empirical study conducted on a public benchmark consisting of 234 different versions of multi-threaded programs shows that the proposed method is resilient against most state-of-the-art code obfuscation techniques and outperforms two existing dynamic birthmarking frameworks, TreSB (thread-related system call birthmark) and TOB (thread-oblivious birthmark), which also target multi-threaded program plagiarism detection. This indicates that the mined behavioral motifs form a new kind of robust thread-oblivious birthmark.
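The gram-based matching step can be approximated with k-gram set similarity over event traces—a simplified stand-in for the paper's motif pipeline, with hypothetical trace events:

```python
def grams(trace, k=2):
    """The set of k-grams (length-k windows) of an execution trace."""
    return {tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)}

def trace_similarity(t1, t2, k=2):
    """Jaccard similarity of k-gram sets: local event order is compared
    instead of full sequences, which tolerates some schedule-induced
    reordering between runs."""
    g1, g2 = grams(t1, k), grams(t2, k)
    return len(g1 & g2) / len(g1 | g2)
```

Two same-input runs whose schedules swap a pair of events still share their stable k-grams; comparing motif similarity against a threshold then decides plagiarism.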
Calculation Principle and Algorithm for the Window of Exact Acceleration in Real-Time Model Checking
Wang Guoqing, Zhuang Lei, He Mengyang, Song Yu, Ma Ling
2020, 57(1):  214-226.  doi:10.7544/issn1000-1239.2020.20190052
When real-time systems are modeled as timed automata, different time scales may lead to severe fragmentation of the symbolic state space. This problem can be addressed by computing zones, which in practice relies on abstraction techniques. State-of-the-art abstraction methods produce coarser abstractions whose approximations are closer to the actually reachable clock valuations. Exact acceleration is a finer abstraction technique that reduces the required storage space and can solve or alleviate the state-space explosion caused by fragmentation. Calculating the window of an acceleratable cycle is the key technique in exact acceleration; the calculation depends on the location invariants, edge constraints, and clock resets. By comprehensively analyzing the exact acceleration principle, an algorithm is proposed to calculate the window. First, all acceleratable cycles in the timed automaton are identified: a node with a clock reset on its incoming edge is chosen as the start, and every node is checked for a clock reset on its outgoing edge. Second, all the recorded nodes are linked into a new cycle in recording order; each node in the new cycle keeps its original location invariant, each edge contains a clock reset, and the edge constraints need to be calculated. Finally, the window of the acceleratable cycle can be denoted as an interval [a,b], where a is the sum of the edge constraints and b is the sum of the location invariants. The time complexity of the algorithm is O(n²). Following this calculation principle, the algorithm obtains the valid data of the window, discards useless data and nodes, compresses the acceleratable cycle, makes the model simpler, and enhances computing efficiency. The algorithm lays the foundation for the research and development of automatic model checking tools.
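The final window computation reduces to two sums, which can be sketched directly (assuming the per-edge lower bounds and per-location invariant upper bounds of the cycle have already been collected by the earlier steps):

```python
def window(edge_lower_bounds, location_invariants):
    """Window [a, b] of an acceleratable cycle: a sums the edge
    constraints (lower bounds along the cycle), b sums the location
    invariants (upper bounds); an empty window is reported as None."""
    a = sum(edge_lower_bounds)
    b = sum(location_invariants)
    return (a, b) if a <= b else None  # bounds crossing => no window
```

For a two-node cycle with edge bounds 1 and 2 and invariants 3 and 4, the window is [3, 7]; if the summed lower bounds exceed the summed invariants, the cycle admits no acceleratable window.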
The Method of the K-Dominant Space Skyline Query in Road Network
Li Song, Dou Yanan, Hao Xiaohong, Zhang Liping, Hao Zhongxiao
2020, 57(1):  227-239.  doi:10.7544/issn1000-1239.2020.20190026
To make up for the shortcomings of existing research on the K-dominant space Skyline query problem in the road network environment, a method for the K-dominant space Skyline query in road networks based on the network Voronoi diagram is proposed. This method applies K-dominance to Skyline queries over road networks to handle multi-attribute data objects, and can be used to solve multi-objective decision problems in practical road network applications. The method mainly includes a data set reduction process and a K-dominance checking process in the road network. First, the Voronoi diagram is constructed based on the spatial data points, and the convex hull of the query points is built. The Voronoi diagram of the data and its positional relationship with the query region are used to prune the data set; thus the data set is optimized and repeated searches of query points are effectively reduced. Then, the refined set is obtained by K-dominance checking on the non-spatial attributes of the candidate set. Finally, the spatial Skyline set is obtained by performing dominance checks on the refined set. Theoretical analysis and experiments show that the proposed method has higher efficiency and handles the K-dominant space Skyline query problem in road networks better.
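A small sketch of the K-dominance test on non-spatial attributes (minimization assumed; this is the generic k-dominance definition, without the road-network and Voronoi pruning described above):

```python
def k_dominates(p, q, k):
    """p k-dominates q (minimization) if p is no worse than q in at
    least k attributes and strictly better in at least one of them."""
    no_worse = [i for i in range(len(p)) if p[i] <= q[i]]
    strictly = any(p[i] < q[i] for i in no_worse)
    return len(no_worse) >= k and strictly

def k_skyline(points, k):
    """Points not k-dominated by any other point."""
    return [p for p in points
            if not any(k_dominates(q, p, k) for q in points if q != p)]
```

Note that for k smaller than the dimensionality, k-dominance is stricter than ordinary Pareto dominance at filtering: points that survive the full-dimensional Skyline can still be k-dominated, which is what makes the query useful for many-attribute road-network objects.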