ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 March 2017, Volume 54 Issue 3
Participatory Sensing: People-Centric Smart Sensing and Computing
Yu Ruiyun, Wang Pengfei, Bai Zhihong, Wang Xingwei
2017, 54(3):  457-473.  doi:10.7544/issn1000-1239.2017.20151021
More and more mobile devices are equipped with various kinds of sensors, and wireless mobile networks such as 4G and Wi-Fi have developed greatly and become widespread in recent years. All of these factors promote the development of participatory sensing, which is also called urban sensing, user-centric sensing, or mobile crowd sensing. Participatory sensing can overcome the weakness of wireless sensor networks, which are expensive and hard to deploy on a large scale. A participatory sensing system utilizes embedded sensors, social networks, users' mobile usage behaviors, and other sources of sensing data generated and recorded on smart mobile devices to sense the physical environment, society, personal information, and so on. The sensing data is collected, transported, and analyzed by the participatory sensing server, and the processed, useful sensing information is sent to data consumers. Participatory sensing is of great significance in realizing the visions of the smart city, ubiquitous computing, and the Internet of Things. The related concepts of participatory sensing and its prototypes are introduced first in this paper. Then the current hot research topics in participatory sensing, such as prototype design, sensing-data-related problems, incentive mechanisms, privacy preservation, malicious behaviors, and frameworks in different wireless networks, are elaborated. Finally, a general method for research on participatory sensing is given.
Survey of Delay-Constrained Data Collection with Mobile Elements in WSNs
Wang Wenhua, Wang Tian, Wu Qun, Wang Guojun, Jia Weijia
2017, 54(3):  474-492.  doi:10.7544/issn1000-1239.2017.20150953
Data collection is one of the hot topics in wireless sensor networks. In traditional wireless sensor networks, the sensor nodes near the sink deplete their energy prematurely because they forward data sensed both by themselves and by other nodes, which creates an energy bottleneck and shortens the lifetime of the whole network. To save the energy of sensors, much research introduces mobile elements to collect data, since their energy can be replenished thanks to their mobility. However, mobile elements move slowly, which may lead to long data collection delays. To address this problem, maximizing the network lifetime while guaranteeing that the data collection delay stays below a certain value has become a hot topic. In this paper, we investigate this kind of delay-constrained data collection method with mobile elements in detail. We first sum up the characteristics of delay-constrained mobile data collection methods via a novel classification. These methods are compared with each other according to a series of key parameters. Moreover, we analyze the advantages, disadvantages, and application scope of these methods, summarize the main problems to be addressed, and point out future research and application directions.
Data Collection Method in Clustering Network Based on Hybrid Compressive Sensing
Li Zhetao, Zang Lang, Tian Shujuan, Li Renfa
2017, 54(3):  493-501.  doi:10.7544/issn1000-1239.2017.20150885
In order to reduce the number of transmissions and balance the network load in wireless sensor networks, this paper presents a data collection method that uses hybrid compressive sensing (CS) in a clustering network. Firstly, we choose nodes close to the temporary cluster centroids as candidate cluster heads (CHs); secondly, we determine the CH nodes in order, based on the distances from the candidate nodes to the already determined CHs. Then the common sensor nodes join their nearest cluster. Lastly, we greedily build a data transmission tree, rooted at the sink node, that connects all CHs. When the number of data transmissions exceeds the threshold, nodes transmit data using CS. In scenarios where the compression ratio equals 10, the simulation results demonstrate that the number of transmissions for the proposed method is 75% and 65% less than that of clustering without CS and SPT without CS, respectively, and 35% and 20% less than that of SPT with hybrid CS and clustering with hybrid CS. The standard deviation of node transmissions for the proposed method is 62% and 81% less than that of clustering without CS and SPT without CS, and 41% and 19% less than that of SPT with hybrid CS and clustering with hybrid CS.
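To make the hybrid-CS threshold rule concrete, here is a minimal sketch (the routing-tree encoding, function name, and recursion are our own illustration, not taken from the paper) of how the number of packets a node forwards is capped once compressed sensing kicks in:

```python
# Hypothetical sketch: a node forwards raw readings while its subtree
# carries fewer than M packets, and switches to exactly M compressed
# measurements once it would otherwise exceed that threshold.
def transmissions(tree, node, M):
    """Return (packets sent by node, total packets sent in its subtree)."""
    load = 1  # the node's own reading
    total = 0
    for child in tree.get(node, []):
        child_load, child_total = transmissions(tree, child, M)
        load += child_load
        total += child_total
    load = min(load, M)  # CS caps outgoing traffic at M measurements
    return load, total + load
```

For example, with tree = {"sink": ["a"], "a": ["b", "c"]} and M = 2, node "a" forwards 2 packets instead of 3.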
A New Algorithm Combining with the Characteristic of the Problem for Model-Based Diagnosis
Ouyang Dantong, Zhou Jianhua, Liu Bowen, Zhang Liming
2017, 54(3):  502-513.  doi:10.7544/issn1000-1239.2017.20150952
Model-based diagnosis is a popular topic in the field of artificial intelligence. In recent years, with the gradual increase in the efficiency of SAT solvers, model-based diagnosis has been converted into a SAT problem. After deeply studying the CSSE-tree algorithm, a method for solving model-based diagnosis, and combining the characteristics of the diagnosis problem with the SAT solving process, we solve the problem by checking the candidate solutions that contain more elements first, thereby reducing the scale of the SAT problem. Building on the minimal-diagnosis and non-minimal-diagnosis pruning methods, we first propose a non-diagnosis theorem and a non-solution-space pruning algorithm, which prunes the non-solution space effectively. We solve the candidate solutions that contain more elements first, and according to the features of the solution and non-solution methods, we construct the LLBRS-tree method based on reverse search. Experimental results show that, compared with the CSSE-tree algorithm, the LLBRS-tree algorithm needs fewer SAT solver calls, works on a smaller problem scale, and is more efficient, especially when solving multiple-diagnosis problems.
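As a hedged illustration of the SAT-based consistency check that such diagnosis algorithms rely on (the clause encoding, variable numbering, and helper name are assumptions of ours, not the paper's LLBRS-tree), one can test whether a candidate component set is a diagnosis with an off-the-shelf solver such as PySAT:

```python
# Sketch: a candidate set of components is a diagnosis iff the system
# description plus observations is satisfiable when exactly those
# components are assumed abnormal (AB) and all others normal.
from pysat.solvers import Glucose3

def is_diagnosis(system_cnf, obs_clauses, ab_vars, candidate):
    solver = Glucose3(bootstrap_with=system_cnf + obs_clauses)
    # assume AB(c) for components in the candidate, not-AB(c) otherwise
    assumptions = [v if v in candidate else -v for v in ab_vars]
    sat = solver.solve(assumptions=assumptions)
    solver.delete()
    return sat
```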
A Constraint Network Model and Parallel Arc Consistency Algorithms Based on GPU
Li Zhe, Li Zhanshan, Li Ying
2017, 54(3):  514-528.  doi:10.7544/issn1000-1239.2017.20150912
The constraint satisfaction problem is a popular paradigm for dealing with combinatorial problems in artificial intelligence. Arc consistency (AC) is one of the basic solution-space compression algorithms for constraint satisfaction problems, and it is also a core component of many excellent advanced algorithms. When constraints are considered independently, AC corresponds to the strongest form of local reasoning. An effective underlying AC implementation can improve the efficiency of reducing the search space. In recent years, GPUs have been used to build many supercomputers that solve many problems in parallel. Based on GPU computation, this paper proposes a constraint network representation model N-E and its parallel generation algorithm BuildNE. Based on the fine-grained arc consistency algorithm AC4, a parallel version, AC4\+GPU, and an improved variant of it are proposed. The two parallel algorithms extend arc consistency to the GPU. Experimental results verify the feasibility of the new algorithms. Compared with AC4, the parallel versions achieve 10% to 50% acceleration on some smaller instances and speedups of 1 to 2 orders of magnitude on some larger instances. They provide a core algorithm for further study of parallel constraint satisfaction problem solving.
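For readers unfamiliar with arc consistency, the following is a minimal sequential sketch of the "revise" step that such algorithms, including the GPU-parallel AC4 variants above, are built around; the data layout and names are illustrative only:

```python
# Sketch: remove every value of x that has no support in y under the
# binary constraint allowed(a, b); return whether anything was removed.
def revise(domains, x, y, allowed):
    removed = False
    for a in list(domains[x]):
        if not any(allowed(a, b) for b in domains[y]):
            domains[x].remove(a)
            removed = True
    return removed
```

AC4 replaces the inner any(...) scan with precomputed support counters, and it is this fine-grained counter structure that makes a GPU parallelization natural.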
The Optimal Individual Convergence Rate for the Projected Subgradient Method with Linear Interpolation Operation
Tao Wei, Pan Zhisong, Zhu Xiaohui, Tao Qing
2017, 54(3):  529-536.  doi:10.7544/issn1000-1239.2017.20160155
The projected subgradient method is one of the simplest algorithms for solving nonsmooth constrained optimization problems. So far, only the optimal convergence rate in terms of the averaged output has been obtained; its individual convergence rate is even regarded as an open problem. Recently, by incorporating a linear interpolation operation into the dual averaging methods, Nesterov and Shikhman achieved a quasi-monotone subgradient method for nonsmooth convex minimization, which is proved to have the optimal individual convergence rate. Unfortunately, their discussion is limited to the dual averaging methods. This paper focuses on the individual convergence rate of projected subgradient methods. By employing the same technique, we present a projected subgradient method with a linear interpolation operation. In contrast to the work of Nesterov and Shikhman, the projected subgradient step itself has to be modified slightly in our method to ensure the individual convergence rate. We prove that the proposed method attains the optimal individual convergence rate for solving nonsmooth convex problems, and the corresponding stochastic method is also proved to attain the optimal individual convergence rate. This can be viewed as an approximate answer to the open problem of the optimal individual convergence of projected subgradient methods. The experiments verify the correctness of our analysis and demonstrate the high performance of the proposed methods in real-time stabilization.
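The following is an illustrative sketch (not the authors' exact scheme; the step-size rule and interpolation weights are assumptions) of a projected subgradient iteration combined with a linear interpolation of the output sequence, which is the kind of operation used to obtain individual convergence guarantees:

```python
import numpy as np

# Sketch: x is updated by a projected subgradient step, and the output
# point is a linear interpolation of the previous output and the new iterate.
def projected_subgradient_interp(subgrad, project, x0, step, T):
    x = x_out = project(np.asarray(x0, dtype=float))
    for t in range(1, T + 1):
        x = project(x - step(t) * subgrad(x))  # projected subgradient step
        c = 2.0 / (t + 2)                      # interpolation weight (assumed)
        x_out = (1 - c) * x_out + c * x        # linear interpolation
    return x_out
```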
Multi-Task Feature Learning Algorithm Based on Preserving Classification Information
Wang Jun, Wei Jinmao, Zhang Lu
2017, 54(3):  537-548.  doi:10.7544/issn1000-1239.2017.20150963
In pattern recognition, feature selection is an effective technique for dimension reduction, and feature evaluation criteria are used to assess the importance of features. However, currently available criteria have several shortcomings. Firstly, they commonly concentrate on class separability, whereas class correlation information is ignored in the selection process. Secondly, they are hardly capable of reducing feature redundancy specific to classification. Thirdly, they are often used as univariate measurements and thus cannot achieve global optimality for the feature subset. In this work, we introduce a novel feature evaluation criterion called CIP (classification information preserving). CIP is based on preserving classification information, and multi-task learning technology is adopted to formulate and realize it. Furthermore, CIP is a feature subset selection method: it employs the Frobenius norm to minimize the difference in classification information between the selected feature subset and the original data, and the l2,1 norm to constrain the number of selected features. The optimal solution of CIP is then obtained under the framework of the proximal alternating direction method. Both theoretical analysis and experimental results demonstrate that the optimal feature subset selected by CIP maximally preserves the original class correlation information, and that feature redundancy for classification is reduced effectively.
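As a schematic rendering of a CIP-style objective, one can write min_W (1/2)||XW - Y||_F^2 + lam * ||W||_{2,1} and rank features by the row norms of W. The sketch below solves it with plain proximal gradient descent rather than the paper's proximal alternating direction method, and all names are illustrative:

```python
import numpy as np

def l21_prox(W, t):
    # row-wise soft-thresholding: proximal operator of t * ||W||_{2,1}
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

def cip_like(X, Y, lam, iters=500):
    W = np.zeros((X.shape[1], Y.shape[1]))
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X.T @ (X @ W - Y)         # gradient of (1/2)||XW - Y||_F^2
        W = l21_prox(W - grad / L, lam / L)
    return np.linalg.norm(W, axis=1)     # row norms score the features
```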
Stable Equilibrium Optimization for 3D Printed Objects
Wu Fenfen, Liu Ligang
2017, 54(3):  549-556.  doi:10.7544/issn1000-1239.2017.20150911
3D printing is increasingly popular, giving the public the ability to create objects. First, a 3D digital model is constructed with 3D modeling software and then manufactured via a 3D printer. A virtual environment without physical rules lets a model stand in diverse poses that fail in the real world. We put forward the stable equilibrium optimization problem for 3D printed objects for the first time. To solve this problem, we propose a stable equilibrium optimization algorithm based on the principle of the tumbler, which makes the object achieve stable equilibrium. Specifically, we voxelize the object and carve it iteratively in the horizontal and vertical directions, voxel by voxel, until the best carving result, measured by an equation, is obtained; for some exceptions, we additionally modify the bottom within an acceptably small range to achieve stable equilibrium. This novel heuristic carving strategy avoids problems such as suspension and self-intersection. Modifying the bottom part only within a small range also ensures that the shape of the object is basically unchanged. The feasibility of the proposed method is demonstrated by several examples of successful optimization. A limitation is that multiple materials are not considered in this paper, owing to the complexity of the operation; this is to be explored in the future.
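A minimal version of the stability test that such a carving loop must evaluate at every step might look like the sketch below (a 2D support check; uniform density and the shapely-based encoding are our assumptions):

```python
from shapely.geometry import MultiPoint, Point

# Sketch: an object stands iff the horizontal projection of its center
# of mass falls inside the convex hull of its bottom contact points.
def is_stable(voxels, bottom_contact_xy):
    # voxels: (N, 3) NumPy array of filled voxel centers
    com = voxels.mean(axis=0)                       # center of mass
    support = MultiPoint(bottom_contact_xy).convex_hull
    return support.contains(Point(com[0], com[1]))
```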
A DSP-Topk Query Optimization Algorithm Supporting Indoor Obstacle Space
Li Bohan, Zhang Chao, Li Dongjing, Xu Jianqiu, Xia Bin, Qin Xiaolin
2017, 54(3):  557-569.  doi:10.7544/issn1000-1239.2017.20150895
Multi-objective optimization is one of the key technologies in moving object data management. In the process of a multi-objective optimization query, the attributes of the target object of interest may depend on the attributes of other moving objects. Therefore, moving objects can affect each other, which leads to uncertainty in an object's properties. Existing multi-objective optimization algorithms need to traverse all the target objects and, furthermore, cannot effectively handle the dynamic change of object attributes. We propose an effective multi-objective optimization algorithm, DSP-Topk (dynamic and support pruning Topk), to solve the query in obstacle space. A visual area model calculates the distance between moving objects in the presence of obstacles, and a viewable-area method that applies the maximum differential angle improves the performance of this distance calculation. We then handle the uncertainty of object attributes with a dynamic adjustment mechanism, and the given pruning strategy with preprocessing improves query efficiency. Experimental results indicate that the DSP-Topk algorithm improves query efficiency significantly compared with the existing Topk and DS-Topk algorithms. Combined with real test data of goods in stores, the rationality and effectiveness of the algorithm are also verified.
TMS: A Novel Algorithm for Top-k Queries with Multi-Dimensional Selections on Massive Data
Han Xixian, Liu Xianmin, Li Jianzhong, Gao Hong
2017, 54(3):  570-585.  doi:10.7544/issn1000-1239.2017.20150615
In many applications, the top-k query is an important operation that returns a set of interesting points from a potentially huge data space, and a multi-dimensional selection is usually specified in the query. It is found that existing algorithms cannot process such queries on massive data efficiently. A sorted-list-based algorithm, TMS (top-k with multi-dimensional selection), is proposed to process top-k queries with selection conditions on massive data efficiently. TMS employs a hierarchical selection attribute lattice (SAL) to distribute tuples and obtain a horizontal partitioning of the table. Each partition is stored in a column-oriented model, and the lists of ranking attributes are arranged in descending order of attribute value. Given a multi-dimensional selection, the relevant lattice cells are determined by SAL, which reduces the number of retrieved tuples significantly. A double-sorting operation is devised to perform progressive selection evaluation, and efficient pruning discards the candidates that do not satisfy the selection condition or the score requirement. Extensive experimental results show that TMS has a significant performance advantage over the existing algorithms.
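To illustrate the early-termination idea behind sorted-list top-k processing (a toy version only; the real TMS works over lattice-partitioned, column-oriented storage), consider scanning a descending ranking list while pruning on the current k-th score:

```python
import heapq

# Sketch: keep a min-heap of the best k qualifying tuples; stop as soon
# as no unseen tuple (whose score is at most the current one) can enter.
def topk_sorted_list(sorted_scores, satisfies, k):
    heap = []  # (score, tuple_id) pairs for the current top-k
    for tid, score in sorted_scores:   # descending by score
        if len(heap) == k and heap[0][0] >= score:
            break                      # pruning: the top-k is final
        if satisfies(tid):             # multi-dimensional selection check
            heapq.heappush(heap, (score, tid))
            if len(heap) > k:
                heapq.heappop(heap)
    return sorted(heap, reverse=True)
```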
Incremental and Interactive Data Integration Approach for Hierarchical Data in Domain of Intelligent Livelihood
Xia Ding, Wang Yasha, Zhao Zipeng, Cui Da
2017, 54(3):  586-596.  doi:10.7544/issn1000-1239.2017.20151048
Intelligent livelihood is an important domain of the smart city. In this domain, many application systems have accumulated a large amount of multi-source hierarchical data. In order to form an overall, unified view of the multi-source data across a whole city, the various data schemas should be integrated. Considering the distinct characteristics of data from the intelligent livelihood domain, such as the lack of support for semantic matching of Chinese labels, the large number of schemas, and missing element labels, existing schema integration approaches are not suitable. Under such circumstances, we propose an incremental and iterative approach that reduces the massive computation workload caused by the large number of schemas. In each iteration, both meta information and instance data are used to create more precise results, and a similarity-entropy-based criterion is introduced to control human intervention. Experiments are conducted on real data about second-hand housing in Beijing, fetched from multiple second-hand housing Web applications. The results show that our approach achieves high matching accuracy with only a little human intervention.
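One plausible reading of the similarity-entropy control (our own sketch; the similarity function and threshold are assumptions) is that a source element whose similarity distribution over candidate targets is near-uniform is ambiguous and should be routed to a human:

```python
import math

# Sketch: high entropy of the normalized similarity scores means no
# candidate stands out, so the match is deferred to human intervention.
def needs_human(sims, threshold):
    total = sum(sims)
    if total == 0:
        return True
    probs = [s / total for s in sims]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy > threshold
```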
Elastic Resource Provisioning Approach for Container in Micro-Service Architecture
Hao Tingyi, Wu Heng, Wu Guoquan, Zhang Wenbo
2017, 54(3):  597-608.  doi:10.7544/issn1000-1239.2017.20151043
As a logical abstraction of physical resources, container-based virtualization has been widely adopted in cloud computing environments for elastic resource provisioning, with lower overhead and potentially better performance. Nowadays, more and more enterprises seek to move large-scale Internet-based applications with micro-service architectures onto container-based infrastructure and focus on efficient resource management. Unfortunately, many existing approaches are restricted to physical machines or virtual environments, where resources are hard to provision elastically and in a timely fashion. Therefore, Internet-based applications may suffer from frequent service-level agreement (SLA) violations under flash-crowd conditions. To address this limitation, this paper proposes a quality of service (QoS) sensitive resource provisioning approach for containers in micro-service architectures based on feed-forward control, employing a performance model based on queuing theory. Firstly, we capture the relationship among workload, resource utilization, and response time. Secondly, we predict the response time with fuzzy federal adaptive Kalman filtering based on feed-forward control; if the prediction violates the pre-defined QoS, an elastic resource scheduling process is triggered. Experimental results based on CloudStone show that the feed-forward algorithm converges quickly, and the predicted response time has a maximum error of only 10%, which is more effective and accurate than existing approaches. Furthermore, our approach can effectively protect resource provisioning under flash-crowd workloads.
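A much-simplified sketch of such a feed-forward loop is shown below, with an M/M/1 approximation standing in for the paper's queuing model and Kalman-filter prediction; all parameter names are assumptions:

```python
# Sketch: predict mean response time from arrival rate, per-container
# service rate, and replica count, then scale out until the SLO holds.
def predicted_response_time(arrival_rate, service_rate, replicas):
    per_replica = arrival_rate / replicas
    if per_replica >= service_rate:
        return float("inf")                    # saturated: must scale
    return 1.0 / (service_rate - per_replica)  # M/M/1 mean response time

def replicas_needed(arrival_rate, service_rate, slo, replicas=1):
    while predicted_response_time(arrival_rate, service_rate, replicas) > slo:
        replicas += 1
    return replicas
```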
Test Suite Generating for Stateful Web Services Using Interface Contract
Li Yin
2017, 54(3):  609-622.  doi:10.7544/issn1000-1239.2017.20151045
Because Web services only provide interface documents, involve complex technical specifications, and change transiently at run time, automatically and effectively generating test data remains a difficult problem. At present, there is little research on testing operation sequences for stateful Web services. Moreover, the existing approaches take insufficient account of service behavior information and the dependencies between operations, and they lack effective means of test automation, which may lead to high cost and test data that are not specific enough. In this paper, a test case generation approach is proposed based on an EFSM model and operation interface contracts. The approach constructs the operation model from the standard WSDL document to describe the interaction relationships between operations, and then adds semantic annotations to them. Based on the EFSM model, the paper proposes an automated operation sequence generation method and finally obtains the test suite using the operation interface contracts. The experiments show that the proposed approach can efficiently generate reasonable test suites for stateful Web services, which enhances fault detection ability and improves the efficiency of test cases compared with existing approaches.
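A small sketch of bounded operation-sequence enumeration over an EFSM-like operation model (the encoding and the contract predicate are assumptions, not the paper's algorithm) might read:

```python
from collections import deque

# Sketch: breadth-first enumeration of operation sequences up to a
# bounded length, keeping only sequences the interface contract allows.
def operation_sequences(transitions, start, max_len, allowed=lambda s: True):
    # transitions: {state: [(operation, next_state), ...]}
    seqs, queue = [], deque([(start, [])])
    while queue:
        state, seq = queue.popleft()
        if seq and allowed(seq):
            seqs.append(seq)
        if len(seq) < max_len:
            for op, nxt in transitions.get(state, []):
                queue.append((nxt, seq + [op]))
    return seqs
```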
Cross-Browser Issues Detection in JavaScript-Based Web Applications Based on Record/Replay
Wu Guoquan, He Meimei, Wei Jun, Zhong Hua, Huang Tao
2017, 54(3):  623-632.  doi:10.7544/issn1000-1239.2017.20151051
With the advent of Web 2.0 applications and the increasing number of browsers and platforms on which Web applications can be executed, XBI (cross-browser incompatibilities) is becoming a serious problem for organizations developing Web-based software with a good user experience. Although some techniques and tools have been proposed to identify XBI, they cannot ensure the same execution when a Web application is rendered on different platforms, as only user interaction events are considered, which may generate both false positives and false negatives. To address this limitation, by leveraging an existing record/replay technique, this paper develops X-CHECK, a novel cross-browser incompatibility testing approach and tool that can faithfully reproduce a previous execution and facilitate XBI detection by directly replaying the captured event trace on different platforms. Obtaining the same execution on different platforms improves the accuracy of XBI detection. By observing DOM mutations during replay, X-CHECK also applies an incremental cross-browser detection algorithm that examines only the mutated content of the Web page, which improves detection performance. The empirical evaluation shows that X-CHECK is effective and efficient, improves on the state of the art, and can provide useful support to developers for the diagnosis and (eventual) elimination of XBI.
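As a rough schematic of the incremental detection idea (the node representation, the render signature function, and the mutation capture are all assumptions on our part), only the nodes reported as mutated during replay are re-compared across browsers:

```python
# Sketch: compare layout/appearance signatures of mutated DOM nodes
# only, instead of diffing the whole page after every event.
def incremental_xbi(mutations_a, mutations_b, render):
    # mutations_*: {xpath: node} maps captured during replay;
    # render(None) is expected to return None for missing nodes.
    issues = []
    for xpath in set(mutations_a) | set(mutations_b):
        if render(mutations_a.get(xpath)) != render(mutations_b.get(xpath)):
            issues.append(xpath)  # candidate cross-browser issue
    return issues
```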
Software Defect Prediction Model Based on the Combination of Machine Learning Algorithms
Fu Yiqi, Dong Wei, Yin Liangze, Du Yuqing
2017, 54(3):  633-641.  doi:10.7544/issn1000-1239.2017.20151052
According to the metric information and the defects already found in a software product, software defect prediction technology can predict, as early as possible, further defects that may also exist, so that testing and validation resources can be allocated appropriately based on the prediction results. Defect prediction based on machine learning techniques can find software defects comprehensively and automatically, and it is becoming one of the main defect prediction technologies. To improve the efficiency and accuracy of prediction, the selection and study of machine learning algorithms is the critical part. In this paper, we comparatively analyze different machine learning defect prediction methods and find that different algorithms have both advantages and disadvantages under different evaluation indexes. To exploit these advantages, we draw on the stacking ensemble learning method and present a combined software defect prediction model: we first make a round of predictions, then add the prediction results of the different methods to the original dataset as new software metrics, and then predict again. Finally, we run experiments on the Eclipse dataset. The experimental results show that this model is technically feasible, reduces time cost, and improves accuracy.
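A compact sketch of this stacking-style combination (using scikit-learn; the choice of base learners is an assumption, not the paper's exact configuration) is:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Sketch: first-level predictions are appended to the original metrics
# as new features, then a second-level model is trained on them.
def stacked_defect_model(X, y):
    bases = [RandomForestClassifier(), LogisticRegression(max_iter=1000)]
    # out-of-fold predictions avoid leaking labels into the new features
    new_feats = [cross_val_predict(m, X, y, method="predict_proba")[:, 1]
                 for m in bases]
    X_aug = np.column_stack([X] + new_feats)
    for m in bases:
        m.fit(X, y)                 # refit base models on the full data
    final = LogisticRegression(max_iter=1000).fit(X_aug, y)
    return bases, final
```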
An Approach to Automatically Build Customizable Reference Process Models
Ling Jimin, Zhang Li
2017, 54(3):  642-653.  doi:10.7544/issn1000-1239.2017.20151047
Process models are becoming more and more widespread in contemporary organizations, but developing individual process models for specific business requirements is complex and costly. The modeling procedure can be accelerated and its cost decreased by using reference process models as a basis for developing individual process models, so reference process models are widely adopted by organizations. Because building reference process models requires a mass of modeling and analysis work by domain experts, a major challenge has emerged: how to automatically and inductively build a preliminary reference model from existing process variants in order to assist domain experts. Existing methods for building reference process models have shortcomings: the output reference models of most methods have high model complexity, and reference models described in traditional process modeling languages cannot fully represent the various recommended practices in a specific domain. To build reference process models with high representativeness and understandability, this paper proposes an approach that automatically builds customizable reference models supporting hierarchical sub-processes based on fragment clustering. The base model, change options, and constraints in the customizable process models are all built automatically by our method. The evaluation results show that the generated reference models achieve good domain representativeness and model complexity.
Objectives Conformity Argument Patterns for Software Testing Process in DO-178C
Yang Yang, Wu Ji, Yuan Chunchun, Liu Chao, Yang Haiyan, Xing Liang
2017, 54(3):  654-668.  doi:10.7544/issn1000-1239.2017.20151055
Safety-critical software has been widely used in many fields. As the specific requirement of safety-critical software is to prevent catastrophes, this kind of software must comply with its relevant safety standards, but there is currently no effective way to construct objectives conformity argument models for such standards. By analyzing the features of the objectives of the software testing process in DO-178C, an objectives conformity argument pattern description framework based on GSN is proposed, in which patterns are described through four fields: the problems to be solved, the specification of the solution, the approach to using the patterns, and the effect after using them. Meanwhile, some extensions to safety case patterns are proposed to describe the objectives conformity argument patterns. On this basis, three objectives conformity argument patterns for the software testing process in DO-178C are proposed: the code-requirement conformity argument pattern, the test coverage of requirements argument pattern, and the test coverage of structure argument pattern. An instantiation method for building the objectives conformity argument structure of a specific program from these patterns is also proposed, so that one can effectively construct the objectives conformity argument structure for the objectives of the software testing process in DO-178C. Finally, a case study on an embedded real-time operating system indicates that the proposed objectives conformity argument patterns are useful and effective.
EFSM Amorphous Slicing Based Test Case Generation
Su Ning, Guo Junxia, Li Zheng, Zhao Ruilian
2017, 54(3):  669-680.  doi:10.7544/issn1000-1239.2017.20151053
Model-based testing is a crucial dimension of software testing. However, with the increase of model scale, model-based test case generation becomes more and more arduous. The extended finite state machine (EFSM), which extends the finite state machine (FSM) and depicts the dynamic behavior of software systems more accurately, has been widely used in industry. EFSM-based test case generation mainly includes two parts: generating test transition paths and generating test data that cover those paths, where search-based techniques are adopted for test data generation. To improve the efficiency of test case generation for large-scale EFSM models, EFSM-slicing-based test case generation and test case compensation are proposed, building on our previous research on EFSM dependence analysis and on slicing for non-terminating EFSM models. Two case studies show that model-slicing-based test case generation is more accurate in feasible path generation and improves test intensity. In this paper, experiments on 7 standard EFSMs are conducted, and the results show that all test cases generated from a slice can be used on the original model, and that in most cases test case generation on the slice is more efficient than on the original model.
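To give a flavor of the search-based test data generation step mentioned above (a toy random-search version; the EFSM encoding and fitness function are assumptions), one can score candidate inputs by how many guards along a transition path they satisfy:

```python
import random

# Sketch: sample inputs, keep the one satisfying the most path guards,
# and stop early once every guard on the path holds.
def generate_data(path_guards, var_range, trials=10000):
    # path_guards: list of predicates over an input dict
    # var_range: {variable_name: (low, high)} sampling ranges
    best, best_score = None, -1
    for _ in range(trials):
        x = {v: random.uniform(*r) for v, r in var_range.items()}
        score = sum(g(x) for g in path_guards)  # fitness: guards satisfied
        if score == len(path_guards):
            return x                            # covers the whole path
        if score > best_score:
            best, best_score = x, score
    return best                                 # best effort otherwise
```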