ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 March 2013, Volume 50 Issue 3
Paper
Joint TOA/AOA Localization Based on UWB for Wireless Sensor Networks
Xiao Zhu, Tan Guanghua, Li Renfa, and Zhang Xiaoming
2013, 50(3):  453-460. 
A novel localization scheme for wireless sensor networks (WSNs) is proposed based on the joint estimation of TOA (time of arrival) and AOA (angle of arrival) under UWB (ultra-wideband) signaling. A single reference node is enough for the proposed method to localize other sensor nodes in two dimensions, and no network synchronization is required, satisfying the low-complexity requirement of WSNs. RTT (round-trip time) is used to estimate TOA, and a multipath detection algorithm for TOA estimation is presented. TDOA is adopted to estimate AOA, avoiding the complicated antenna beamforming that AOA normally requires. In addition, the error model for the localization performance of the proposed scheme is analyzed theoretically and tested under IEEE 802.15.4a channels. The results demonstrate the effectiveness of the proposed scheme.
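A minimal sketch of the two estimators the abstract combines; the reply delay and antenna spacing below are illustrative assumptions, not values from the paper:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def toa_distance(rtt, t_reply):
    """Range from round-trip time: one-way time of flight is
    (RTT - known reply delay) / 2."""
    return C * (rtt - t_reply) / 2.0

def aoa_from_tdoa(tdoa, antenna_spacing):
    """Bearing from the time difference of arrival at two antennas
    separated by `antenna_spacing` (far-field approximation)."""
    s = C * tdoa / antenna_spacing  # sin(theta)
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))

# Node at ~10 m, 30 degrees off broadside of a 0.5 m antenna pair
print(toa_distance(rtt=66.7e-9 + 1e-6, t_reply=1e-6))   # ~10 m
print(aoa_from_tdoa(tdoa=0.5 * math.sin(math.radians(30)) / C,
                    antenna_spacing=0.5))                # ~30 degrees
```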
Extending the Network Lifetime Using Topology Control in Ad Hoc Networks
Li Xiaohong, Wang Wenyan, and Wang Dong
2013, 50(3):  461-471. 
Topology control is an important technique for improving the energy efficiency and prolonging the lifetime of wireless ad hoc networks. Starting from the widely used definition of network lifetime and the energy model for wireless ad hoc networks, this paper analyzes the relationships among the network lifetime, each node's transmission range, electronics energy consumption, and the traffic load between nodes, and finds a definite logical relationship between topology control and network lifetime. Different from earlier work on topology control whose objective is to minimize the total energy consumption in the network, we propose a distributed topology control algorithm that explicitly maximizes network lifetime, referred to as MLTC. After obtaining the information of all its one-hop neighbors by broadcasting messages, each node independently constructs a spanning tree preserving all maximum-lifetime paths to its neighbors. Finally, each node adjusts its transmission range to a proper value covering all of the optimal adjacent nodes it has chosen. The resulting maximum-lifetime topology has several nice properties: connectivity, symmetry, and a smaller per-node transmission range. Specifically, this structure adjusts in response to changes in the reception energy consumption of the wireless network interface. Theoretical analysis and simulation results demonstrate the correctness and effectiveness of the proposed algorithm.
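One plausible reading of "preserving all maximum-lifetime paths" is a widest-path (bottleneck) computation over the locally known topology; a sketch under that assumption, with a data layout of our own choosing:

```python
import heapq

def max_lifetime_paths(nodes, links, src):
    """Widest-path (bottleneck) Dijkstra: a path's lifetime is the
    minimum lifetime of its links, and for every reachable neighbor we
    keep the path maximizing that minimum. `links[(u, v)]` holds the
    lifetime a link can sustain given residual energy and load."""
    best = {n: 0.0 for n in nodes}
    best[src] = float("inf")
    parent = {src: None}
    heap = [(-best[src], src)]
    while heap:
        neg, u = heapq.heappop(heap)
        if -neg < best[u]:
            continue  # stale queue entry
        for v in nodes:
            w = links.get((u, v))
            if w is None:
                continue
            cand = min(best[u], w)  # path lifetime = weakest link
            if cand > best[v]:
                best[v], parent[v] = cand, u
                heapq.heappush(heap, (-cand, v))
    return best, parent

nodes = ["a", "b", "c"]
links = {("a", "b"): 4.0, ("b", "a"): 4.0,
         ("a", "c"): 1.0, ("c", "a"): 1.0,
         ("b", "c"): 3.0, ("c", "b"): 3.0}
best, parent = max_lifetime_paths(nodes, links, "a")
print(best["c"], parent["c"])  # 3.0 via b: the 2-hop path outlives the direct link
```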
An Adaptive Beacon Exchange Algorithm Based on Link Broken Probability
Zhang Hengyang, Zheng Bo, Chen Xiaoping, and Yu Jia
2013, 50(3):  472-480. 
In mobile wireless sensor networks, greedy geographic routing protocols usually adopt a periodic beacon exchange algorithm to build and update neighbor tables, which causes temporary communication blindness in mobility scenarios. To address this problem, we propose a new adaptive beacon exchange algorithm based on link-break probability, which adapts to the mobility of the network at run time. The impact of node mobility on network connectivity is analyzed with a Markov chain model of link status, and the formula for the link-break probability is derived theoretically. A working node then calculates a variable beacon period according to the link-break probability threshold relative to its upstream node, while an idle node calculates its beacon period according to the threshold relative to all of its neighbors. The threshold probability can be adjusted to meet the performance requirements of the network. A forwarding node removes the next hop from its neighbor table if it receives no feedback beacon within two beacon periods. The adaptive beacon exchange algorithm maintains an accurate neighbor table and mitigates temporary communication blindness. Simulations show that the proposed algorithm achieves a high packet delivery ratio with low overhead, so it is scalable and applicable to large-scale mobile wireless sensor networks.
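The paper derives its link-break probability from a Markov model of link status; as a stand-in with the same shape, assume memoryless (exponential) link lifetimes, so P(break within T) = 1 - exp(-T/tau) and the threshold pins down the beacon period:

```python
import math

def beacon_period(expected_link_lifetime, p_threshold):
    """Longest beacon period T such that the probability the link breaks
    within T stays below the threshold, under an exponential
    link-lifetime assumption: 1 - exp(-T / tau) <= p_threshold."""
    return -expected_link_lifetime * math.log(1.0 - p_threshold)

# Faster-moving neighbors => shorter expected lifetime => more frequent beacons
print(beacon_period(expected_link_lifetime=30.0, p_threshold=0.1))  # ~3.16 s
print(beacon_period(expected_link_lifetime=5.0,  p_threshold=0.1))  # ~0.53 s
```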
An Adaptive Dynamic Redundant Reservation Strategy in Grid Computing
Xiao Peng and Hu Zhigang
2013, 50(3):  481-489. 
Redundancy mechanisms have been widely applied as an effective means to enhance the reliability of grid systems. However, they lead to more ineffective resource utilization, which in turn has many negative effects on application performance. To address this problem, an adaptive redundant reservation strategy is proposed with the aim of mitigating the negative effects of conventional strategies. It provides a mechanism that enables grid systems to adjust the redundancy degree at runtime without decreasing reliability. The quantitative relationship between redundancy degree and reliability is presented theoretically, and extensive experiments based on real-world workloads are conducted to compare the performance of the proposed strategy with other redundant strategies. The experimental results show that the proposed strategy outperforms existing strategies in terms of effective resource utilization. It also strikes a tradeoff between reliability and application execution performance, which significantly mitigates the negative effects of conventional redundant strategies.
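The paper presents its own quantitative relationship between redundancy degree and reliability; under the stand-in assumption of independent replica failures, the basic shape is:

```python
import math

def reliability(r, k):
    """Probability that at least one of k independent replicas succeeds,
    given per-replica success probability r: 1 - (1 - r)^k."""
    return 1.0 - (1.0 - r) ** k

def min_redundancy(r, target):
    """Smallest redundancy degree meeting a target reliability."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - r))

print(reliability(0.9, 2))         # 0.99
print(min_redundancy(0.9, 0.999))  # 3
```

The adaptive part of the strategy then amounts to recomputing the smallest sufficient degree as per-replica reliability estimates change at runtime, rather than fixing k in advance.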
Secure and Efficient Top-k Query Processing in Two-Tier Sensor Network
Liao Xiaojing, Li Jianzhong, and Yu Lei
2013, 50(3):  490-497. 
In two-tier sensor networks, resource-rich storage nodes at the upper tier collect sensing data from resource-poor sensor nodes at the lower tier and answer queries from the network owner. Sensor nodes perform the sensing task and submit the sensing data of one time slot to the nearest storage node, while storage nodes process and answer queries from the network owner. In hostile environments, however, the storage nodes that process queries and store sensing data face serious security concerns: they may be compromised and instructed to return fake query results. It is therefore important to verify the correctness of query results. A secure and efficient query processing scheme is proposed to verify the authenticity and completeness of top-k query answers over the sensing data of one time slot, which is the first secure top-k query processing scheme for time-slot sensing data sets. By combining hypothesis testing with computing commitments, we effectively increase the detection rate and reduce the additional communication cost of enabling verifiable top-k queries, forcing compromised storage nodes to return both authentic and complete top-k query results to the network owner. Theoretical analysis and simulation results validate the efficacy and efficiency of the proposed schemes.
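For flavor, here is the commitment side only, sketched with an HMAC; the paper's actual construction (and its hypothesis-testing component) is not specified in the abstract, so keys, fields, and encoding below are our own assumptions:

```python
import hashlib, hmac

KEY = b"per-sensor key shared with the network owner"  # hypothetical

def commit(sensor_id, slot, readings):
    """Sensor-side commitment to the full reading set of one time slot."""
    msg = f"{sensor_id}|{slot}|{sorted(readings)}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def verify(sensor_id, slot, claimed_readings, commitment):
    """Owner-side check that a storage node did not drop or forge data."""
    return hmac.compare_digest(commit(sensor_id, slot, claimed_readings),
                               commitment)

c = commit("s17", 42, [3.1, 7.9, 5.5])
print(verify("s17", 42, [3.1, 7.9, 5.5], c))  # True
print(verify("s17", 42, [3.1, 7.9], c))       # False: incomplete answer
```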
Network Intrusion Detection Model Based on Context Verification
Tian Zhihong, Wang Bailing, Zhang Weizhe, Ye Jianwei, and Zhang Hongli
2013, 50(3):  498-508. 
Network intrusion detection systems (NIDSs) are considered an effective second line of defense against network-based attacks directed at computer systems, and because of the increasing severity and likelihood of such attacks, they are employed in almost all large-scale IT infrastructures. The Achilles' heel of NIDSs lies in the large number of false positives. Today's NIDSs often detect not only successful intrusions but also unsuccessful intrusion attempts, because it can be difficult for an NIDS to determine the result of an intrusion attempt. A popular approach to verifying the result of an intrusion attempt is to make the IDS aware of the environment and configuration of the systems under attack. Based on this idea, and in order to eliminate the negative influence of non-relevant alerts on IDS stability, a network intrusion detection model based on context verification is designed. Combining environment context, weakness context, feedback context, and anomaly context, our model constructs an effective, stable, integrated, and extensible platform for processing non-relevant alerts which focuses on context verification and integrates multiple security techniques. It achieves automatic validation of alerts and automatic judgment of their effectiveness to eliminate non-relevant alerts, and thus establishes a reliable foundation for alert correlation.
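A toy illustration of context verification: an alert is kept only if it is consistent with the attacked host's environment and weakness contexts. All record fields and values below are hypothetical, not the paper's schema:

```python
# Hypothetical context stores, keyed by host address.
environment = {"10.0.0.5": {"os": "Linux", "services": {80: "nginx"}}}
weaknesses  = {("10.0.0.5", "CVE-2017-7269"): False}  # host not vulnerable

def relevant(alert):
    """Drop alerts that cannot apply to the attacked host's actual
    environment or known weaknesses (context verification)."""
    host = environment.get(alert["dst"])
    if host is None or alert["target_os"] != host["os"]:
        return False  # environment context rules the alert out
    return weaknesses.get((alert["dst"], alert["cve"]), True)

alert = {"dst": "10.0.0.5", "target_os": "Windows", "cve": "CVE-2017-7269"}
print(relevant(alert))  # False: an IIS exploit aimed at a Linux/nginx host
```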
TIV and Access Delay in the Internet Delay Space
Wang Zhanfeng, Chen Ming, Xing Changyou, Bai Huali, and Wei Xianglin
2013, 50(3):  509-516. 
Many studies demonstrate that triangle inequality violation (TIV) is a universal phenomenon in the Internet delay space, resulting from routing inefficiency. However, there has been no investigation of where TIVs are located or of their relationship with access delay. Firstly, the Internet is divided into two parts, the access network and the core network, under different definitions, and a delay model of the Internet delay space is proposed to analyze the positions of TIVs. Based on this model, it is found that TIVs appear in the core of the Internet rather than in the access networks, and that access delay reduces the number of TIVs and alleviates their severity. Then, a series of experiments is carried out on PlanetLab to measure the end-to-end delay matrix and its corresponding topology. A TIV searching algorithm, ScoutTIV, is designed to count the TIVs in the measurement dataset under the different definitions of the boundary between the access network and the core network. In the experiments, the dataset is divided into three small datasets based on the country attributes of hosts, plus one random dataset. The experimental results on the different datasets accord with our conclusion, which can help network coordinate systems achieve better prediction accuracy.
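ScoutTIV itself is not described in the abstract; a naive brute-force counter makes precise what is being counted:

```python
from itertools import combinations

def count_tivs(delay):
    """Count triangle inequality violations in a symmetric delay matrix:
    a pair (a, b) is violated if some c gives d(a,b) > d(a,c) + d(c,b)."""
    n = len(delay)
    tivs = 0
    for a, b in combinations(range(n), 2):
        for c in range(n):
            if c not in (a, b) and delay[a][b] > delay[a][c] + delay[c][b]:
                tivs += 1
                break  # count each (a, b) pair once
    return tivs

d = [[0, 50, 10],
     [50, 0, 15],
     [10, 15, 0]]
print(count_tivs(d))  # 1: d(0,1)=50 > d(0,2)+d(2,1)=25
```

The access-delay effect is visible here too: adding per-host access delays a_i to every measured edge raises d(a,b) by a_a + a_b but raises the detour d(a,c) + d(c,b) by a_a + a_b + 2a_c, so larger access delays make violations harder to trigger.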
Centralized-Calculating-Based 2-Disjoint Multipath Routing Algorithm for Wireless Sensor Networks
Yu Leilei, Chen Dongyan, Liu Yuemei, and Huang Xu
2013, 50(3):  517-523. 
Among multipath routing (MPR) algorithms for wireless sensor networks (WSNs), disjoint multipath routing (DMPR) approaches perform better in reliability and fault tolerance. However, DMPR poses significant challenges in optimizing the disjoint paths and transmitting data along them. Considering certain industrial monitoring applications (for example, mine safety monitoring) whose network topologies are relatively stable, we propose a centralized-calculating-based 2-disjoint multipath routing algorithm for WSNs (CCDMPR). Using global information, the algorithm first calculates near-optimal 2-node-disjoint (or 2-link-disjoint) paths from each source to the destination, with hop count and path quality as metrics, and generates a tiny routing table composed merely of the master-parent/secondary-parent node pair and a path bit series. The tiny routing tables are then disseminated to each sensor node along the generated paths. To improve the resilience of route maintenance, the algorithm includes a centralized adaptive path maintenance mechanism. In the data routing phase, packets are forwarded according to the path bit series carried in their headers, without any control overhead. Experimental results show that, compared with existing approaches, CCDMPR shortens the average path length, reduces the total energy consumption, and improves transmission reliability.
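The abstract does not give the path computation itself; a greedy sketch of the 2-link-disjoint case (find one shortest path, delete its links, find another) conveys the idea, though an exact method would use Suurballe's algorithm:

```python
import heapq

def shortest_path(adj, src, dst):
    """Plain Dijkstra returning the hop list, or None if unreachable."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if dst not in seen:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def two_link_disjoint(adj, src, dst):
    """Greedy two-step heuristic: take one path, delete its links,
    find a second path in what remains."""
    p1 = shortest_path(adj, src, dst)
    if p1 is None:
        return None
    pruned = {u: dict(nbrs) for u, nbrs in adj.items()}
    for u, v in zip(p1, p1[1:]):
        pruned[u].pop(v, None)
        pruned.get(v, {}).pop(u, None)
    return p1, shortest_path(pruned, src, dst)

adj = {"s": {"a": 1, "b": 1}, "a": {"t": 1}, "b": {"t": 1}, "t": {}}
print(two_link_disjoint(adj, "s", "t"))  # (['s','a','t'], ['s','b','t'])
```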
A Line of Sight Fingerprint Localization Algorithm Resisting Multipath and Shadow
Chen Yongle, Zhu Hongsong, and Sun Limin
2013, 50(3):  524-531. 
Indoor localization utilizing wireless technology has attracted keen interest from the research community in recent years, since providing location-aware services requires obtaining a user's position accurately. Fingerprint localization based on received signal strength (RSS) builds a radio map of the area by measuring the power of received radio signals, and requires no additional hardware. However, utilizing RSS for localization has a number of limitations; for example, fingerprint localization algorithms generally face multipath and shadowing interference caused by physical blocking in practical applications. This paper analyzes how physical blocking changes the signal strength used by fingerprint algorithms, and finds that using the line-of-sight (LoS) fingerprint instead of the original fingerprint can avoid the emergence of shadowing and reduce the multipath influence. Based on this analysis, we propose a line-of-sight fingerprint-based localization algorithm (LoSF). Since the line-of-sight fingerprint contains abnormal values, we design an error processing algorithm to eliminate errors in the fingerprint data. Compared with existing algorithms, LoSF achieves a median error of 2 m, much less than the 6 m of the RADAR algorithm based on the original fingerprint, and less than half of the 4.2 m of the COMPASS algorithm.
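The LoS-fingerprint extraction and error processing are the paper's contribution and are not detailed in the abstract; as a stand-in, here is the overall pipeline with a crude outlier filter (drop samples far from the per-AP median) feeding a nearest-fingerprint match:

```python
import statistics

def clean(samples):
    """Toy error processing: drop RSS samples more than 2 standard
    deviations from the median (abnormal values), then average."""
    med = statistics.median(samples)
    sd = statistics.pstdev(samples) or 1.0
    kept = [v for v in samples if abs(v - med) <= 2 * sd]
    return sum(kept) / len(kept)

def locate(radio_map, observed):
    """Nearest-fingerprint match in signal space (per-AP mean RSS)."""
    obs = {ap: clean(vals) for ap, vals in observed.items()}
    def dist(pos):
        return sum((radio_map[pos][ap] - rss) ** 2 for ap, rss in obs.items())
    return min(radio_map, key=dist)

radio_map = {"room1": {"ap1": -40, "ap2": -70},
             "room2": {"ap1": -65, "ap2": -45}}
observed = {"ap1": [-42, -41, -90, -43],  # -90: an abnormal shadowed sample
            "ap2": [-68, -72, -71]}
print(locate(radio_map, observed))  # room1
```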
A New Key Hiding Scheme Based on Fingerprint
Li Ximing, Yang Bo, Guo Yubin, and Yao Jintao
2013, 50(3):  532-539. 
In view of the shortcomings of existing fingerprint-based key hiding methods, a new key hiding scheme based on fingerprints is proposed. It makes full use of the minutiae set and the texture information around every minutia, relying on a new fingerprint model, the minutiae texture strings model, and is accordingly called the minutiae texture strings key hiding scheme. In this scheme, the minutiae set is first extracted from the fingerprint, and a Gabor filter is then applied to the area around every minutia to extract texture information; the minutiae set and the per-minutia texture information together constitute the minutiae texture strings model. A secret key generated by a symmetric encryption system or distributed by a PKI is divided into n shares by an (n, k) secret sharing algorithm, and the shares are then hidden secretly by the minutiae texture strings. A query fingerprint can recover the hidden key only when at least k key shares are retrieved. Experiments are conducted on FVC2002 DB1 and DB2, in which one fingerprint is used to hide the key and another fingerprint is used to recover it. The equal error rate (EER) of the scheme is between 1% and 2.2%, which is better than that of the normal fuzzy vault. Security analysis shows that the information of both the key and the fingerprint model is protected effectively, with security higher than that of normal fuzzy vault schemes.
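The (n, k) sharing step could be instantiated with Shamir's classic scheme; a minimal sketch over a prime field (the paper's own sharing algorithm may differ, and the binding of shares to minutiae texture strings is omitted):

```python
import random

P = 2**61 - 1  # prime field large enough for a short key

def split(secret, n, k):
    """Shamir (n, k): random degree-(k-1) polynomial with f(0) = secret;
    share i is the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=8, k=5)       # e.g. hide across 8 minutiae
print(recover(random.sample(shares, 5)))  # any 5 shares suffice: 123456789
```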
An End-to-End Authentication Protocol for Satellite Communication Network
Zhang Xiaoliang, Tu Yongce, Ma Hengtai, Yang Zhian, and Hu Xiaohui
2013, 50(3):  540-547. 
With the development of network technology for satellite communication networks, the security protection technologies of satellite communication networks are receiving more and more attention, and end-to-end authentication is the key among them. Considering that present authentication protocols have low efficiency and long authentication times, this paper, on the basis of the satellite communication network authentication model, designs a new end-to-end authentication protocol, constructs a security authentication simulation system, and simulates voice and video services running over the protocol. The simulation results show that the time spent on security authentication is much shorter than the transmission time, which reflects the high performance of our authentication protocol.
A Membership Degree Refinement-Based Evolutionary Clustering Algorithm
Hou Wei, Dong Hongbin, and Yin Guisheng
2013, 50(3):  548-558. 
Though FCM has been widely used in clustering, its alternating calculation of the membership and prototype matrices imposes a computational burden on large-scale data sets. An efficient algorithm, called accelerated fuzzy C-means (AFCM), is presented for reducing the computation time of FCM and FCM-based clustering algorithms. The proposed algorithm works by sampling-based initialization to generate better initial cluster centers, and, motivated by the observation that large membership degrees tend to increase further at the next iteration, it updates the cluster centers with one step of k-means for the data points with large membership degrees, and at the next iteration updates the memberships only of the data points with small values. To verify the effectiveness of the proposed algorithm and to improve the efficiency of evolutionary algorithms (EAs) for fuzzy clustering, AFCM is also applied to fuzzy clustering algorithms based on EAs, such as differential evolution (DE) and evolutionary programming (EP), to observe their performance. Experiments indicate that, with a small loss of quality, the proposed algorithm can significantly reduce the computation time of clustering.
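AFCM's pruning rule is specific to the paper, but the two updates it accelerates are the classic FCM ones; a compact NumPy sketch of one full iteration:

```python
import numpy as np

def fcm_step(X, centers, m=2.0):
    """One classic FCM iteration: membership update, then prototype
    update. AFCM prunes which memberships get recomputed and moves
    high-membership points with a one-step k-means update; the
    underlying formulas are these."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # u_ic = 1 / sum_k (d_ic / d_ik)^(2/(m-1))
    U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    w = U ** m
    new_centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return U, new_centers

X = np.random.rand(200, 2)
centers = X[np.random.choice(len(X), 3, replace=False)]
for _ in range(20):
    U, centers = fcm_step(X, centers)
```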
Measuring Ontology Inconsistency Based on Dempster-Shafer Theory
Li Dongmei, Lin Youfang, Huang Houkuan, and Tian Xuan
2013, 50(3):  559-567. 
As ontologies grow in size and applications become more complex, it is often difficult to construct an error-free ontology; inconsistencies arise naturally in the design and development of ontologies. Measuring ontology inconsistency is the basis and prerequisite of inconsistency handling, helping us decide how to act on an inconsistency: whether to ignore it or to resolve it. A number of proposals have been made for measuring the degree of inconsistency of an ontology, but most of them address flat ontologies. In numerous applications, however, the items of an ontology are not equally important, and weights are associated with them; measuring ontology inconsistency with weighted formulae is therefore significant but difficult. After analyzing the characteristics of uncertainty reasoning based on evidence theory and of ontology inconsistency measurement, this paper proposes an ontology inconsistency measurement method named ETOICM and proves its correctness. The measurement formula and algorithm are then given, and the features of the new method are illustrated in comparison with related methods. In addition, the measurement results of ETOICM can be used as weights, providing a basis for further resolving inconsistency and for reasoning with inconsistent ontologies.
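ETOICM's formulas are not given in the abstract; the evidence-theoretic core it builds on is Dempster's rule of combination, where the conflict mass K is natural raw material for an inconsistency degree:

```python
def combine(m1, m2):
    """Dempster's rule of combination for mass functions over frozenset
    focal elements; the conflict mass K is renormalized away."""
    joint, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                joint[inter] = joint.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in joint.items()}, conflict

A, B, Theta = frozenset("a"), frozenset("b"), frozenset("ab")
m1 = {A: 0.8, Theta: 0.2}      # evidence for 'a'
m2 = {B: 0.6, Theta: 0.4}      # conflicting evidence for 'b'
combined, K = combine(m1, m2)
print(K)         # 0.48: the conflict mass between the two sources
print(combined)
```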
An In-Building Localization Algorithm Based on Wi-Fi Signal Fingerprint
Niu Jianwei, Liu Yang, Lu Banghui, and Song Wenfang
2013, 50(3):  568-577. 
Since GPS cannot be used in in-building environments and current in-building localization approaches require pre-installed infrastructure, in-building localization is a problem demanding prompt solutions for location-based services. This paper therefore proposes a novel room-level in-building localization algorithm, R-kNN (relativity k-nearest neighbor), which solves the localization problem by leveraging the MAC addresses and RSSI (received signal strength indication) of Wi-Fi access points (APs) deployed in buildings. R-kNN falls into the category of attribute-weighted k-nearest neighbor algorithms: by assigning each AP a weight according to the relativity between AP pairs, it reduces the negative effect of dimension redundancy. Moreover, since it makes no assumption on the physical distribution of rooms and APs, R-kNN works well with existing APs, without deploying any new infrastructure or modifying the existing one. Experimental results demonstrate that when a large number of APs are available, the localization accuracy of R-kNN is higher than those of the original kNN algorithm and the naive Bayes classifier, while its false positive and false negative ratios are lower than theirs in most cases.
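The relativity-based weighting is the paper's contribution and is not spelled out in the abstract; one stand-in that captures the stated goal (discounting redundant APs) is to weight each AP by the inverse of its average correlation with the others:

```python
import numpy as np
from collections import Counter

def ap_weights(train_rss):
    """Down-weight redundant APs: weight = 1 / (mean absolute correlation
    with the other APs). A stand-in for R-kNN's relativity weighting."""
    corr = np.abs(np.corrcoef(train_rss.T))
    np.fill_diagonal(corr, 0.0)
    return 1.0 / (corr.mean(axis=0) + 1e-6)

def r_knn(train_rss, train_room, query, k=5):
    """Weighted kNN vote over the k nearest RSS fingerprints."""
    w = ap_weights(train_rss)
    d = np.sqrt(((train_rss - query) ** 2 * w).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return Counter(train_room[i] for i in nearest).most_common(1)[0][0]

rng = np.random.default_rng(1)
train_rss = np.vstack([rng.normal(-45, 3, size=(20, 6)),   # room r1
                       rng.normal(-75, 3, size=(20, 6))])  # room r2
train_room = np.array(["r1"] * 20 + ["r2"] * 20)
print(r_knn(train_rss, train_room, query=rng.normal(-45, 3, size=6)))  # r1
```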
A Clustering Hybrid Based Algorithm for Privacy Preserving Trajectory Data Publishing
Wu Yingjie, Tang Qingming, Ni Weiwei, Sun Zhihui, and Liao Shangbin
2013, 50(3):  578-593. 
Recently, privacy-preserving trajectory data publishing has become a hot topic in data privacy research. Most previous works on privacy-preserving trajectory data publishing adopt clustering techniques. However, clustering-based algorithms for trajectory data publishing only consider preserving the privacy of each single trajectory, ignoring the protection of the characteristics of the trajectory clustering groups. Trajectory data published by clustering-based methods are therefore vulnerable to re-clustering attacks, which is verified by theoretical analysis and simulation experiments. To defend against re-clustering attacks, a (k,δ,Δ)-anonymity model and a clustering-hybrid-based algorithm CH-TDP for privacy-preserving trajectory data publishing are presented. The key idea of CH-TDP is to first hybridize between the clustering groups generated by the (k,δ)-anonymity model and its related algorithms, and then apply perturbation within each clustering group. The aim of CH-TDP is to withstand re-clustering attacks effectively while ensuring that the quality of the released trajectory data is no less than a threshold Δ. CH-TDP is compared with traditional algorithms, and the experimental results show that it is effective and feasible.
A Column-Store Based Bucket Partition Algorithm for Range Queries
Li Yefeng, Le Jiajin, and Wang Mei
2013, 50(3):  594-601. 
Range queries are important to databases. In a column-store database, range queries on attribute values produce the resulting row-id set, and how efficiently that set is obtained affects the performance of tuple reconstruction. Compared with tree structures, hash tables are more effective for exact-match queries but less effective for range queries. Given this situation, a bucket partition algorithm for range queries is proposed. Firstly, to introduce the algorithm, a hash storage model for range queries (ranged hash, RH) is proposed, along with definitions of the bucket range and of serializability. Then, following the read-optimized nature of column-store databases, an improved bucket partition algorithm for range queries is proposed based on the RH model. The algorithm generates serializable hash functions that partition attribute values into buckets, improving not only the efficiency of range queries but also the storage efficiency. Finally, experimental results demonstrate the efficiency of the algorithm.
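A sketch of the RH idea: because bucket boundaries are sorted (the serializability property), a range predicate touches only a contiguous run of buckets. The boundary values below are illustrative, not the paper's partitioning function:

```python
import bisect

class RangedHash:
    """Order-preserving bucket hash: a range maps to adjacent buckets."""
    def __init__(self, boundaries):
        self.bounds = sorted(boundaries)
        self.buckets = [[] for _ in range(len(self.bounds) + 1)]

    def insert(self, value, row_id):
        b = bisect.bisect_right(self.bounds, value)
        self.buckets[b].append((value, row_id))

    def range_query(self, lo, hi):
        """Scan only the contiguous buckets that can hold [lo, hi]."""
        first = bisect.bisect_right(self.bounds, lo)
        last = bisect.bisect_right(self.bounds, hi)
        return [rid for b in self.buckets[first:last + 1]
                for v, rid in b if lo <= v <= hi]

rh = RangedHash([10, 20, 30])
for rid, v in enumerate([5, 12, 18, 25, 33]):
    rh.insert(v, rid)
print(rh.range_query(11, 26))  # row-ids [1, 2, 3]
```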
Hylomorphisms with Parameters and Its Associated Calculational Laws
Yu Shanshan, Li Shixian, and Su Jindian
2013, 50(3):  602-618. 
Hylomorphisms in functional programming languages are ill-suited to describing recursive functions with parameters. To solve this problem, we use polynomial functors defined on the category of complete partial orders to propose two kinds of hylomorphisms with fixed or accumulating parameters, named phylo-morphisms and ahylo-morphisms respectively, and prove that each is unique under its respective conditions. This extends Pardo's work on recursive functions with parameters, named pfold and afold respectively, to hylomorphisms with parameters. Our work allows hylomorphisms to directly carry extra parameters as inputs to calculations, or to temporarily store the accumulating values produced during calculation. We also point out and analyze the relations of phylo-morphisms and ahylo-morphisms with other recursive and corecursive functions, and present their associated calculational laws in an abstract, categorical sense. Most of our work is implemented in the functional programming language Haskell.
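The paper works in Haskell; for consistency with the other sketches here, a Python transliteration of a list-functor hylomorphism and a fixed-parameter variant (our sketch of the "phylo" idea, not the paper's definition):

```python
def hylo(alg, coalg, seed):
    """Plain hylomorphism over the list functor: unfold with `coalg`,
    fold with `alg`, fused into one recursion."""
    step = coalg(seed)
    if step is None:
        return alg(None)
    head, rest = step
    return alg((head, hylo(alg, coalg, rest)))

def phylo(alg, coalg, param, seed):
    """Hylomorphism with a fixed parameter threaded through both the
    algebra and the coalgebra."""
    step = coalg(param, seed)
    if step is None:
        return alg(param, None)
    head, rest = step
    return alg(param, (head, phylo(alg, coalg, param, rest)))

# Factorial-like use: unfold n..1, fold by multiplying, scaled by p.
coalg = lambda p, n: None if n == 0 else (n, n - 1)
alg   = lambda p, t: 1 if t is None else p * t[0] * t[1]
print(phylo(alg, coalg, 1, 5))  # 120
```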
Application of Artificial Neural Network in Software Multi-Faults Location
He Jialang and Zhang Hong
2013, 50(3):  619-625. 
Because of the complexity of software, no program is bug-free, and it is always challenging for programmers to debug programs and remove bugs effectively and efficiently. Software fault localization is one of the most expensive activities in program debugging, so there is a strong demand for automatic fault localization techniques that can guide programmers to the locations of faults with minimal or no human intervention. Various techniques have been proposed to meet this demand. However, the interactions between multiple faults, which have not been fully considered in previous studies, make fault localization more complicated. To solve this problem, a novel neural-network-based multi-fault localization model is proposed in this paper. Through fault relation analysis, the model calculates the support degree of the input for each fault, and then learns the relationship between the faults and the candidate fault locations using the constructed neural network. By constructing an ideal input and feeding it to the learned neural network, the model calculates the suspiciousness of each candidate fault location, sorts the candidates by suspiciousness, and thus completes the multi-fault localization task. Experimental results show that, compared with traditional methods, the proposed method has a strong ability to distinguish fault locations and can improve the efficiency of software debugging for multiple faults.
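The pipeline in the abstract (train a network on execution data, then probe it with "ideal inputs" to score each candidate location) can be miniaturized with a single sigmoid unit standing in for the paper's unspecified architecture; everything below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spectra: rows = test cases, columns = candidate fault
# locations (1 = statement executed); y = 1 if the test failed.
X = rng.integers(0, 2, size=(200, 10)).astype(float)
w_true = np.zeros(10)
w_true[[2, 7]] = 1.0  # two seeded faults at locations 2 and 7
y = (X @ w_true + 0.1 * rng.standard_normal(200) > 0.5).astype(float)

# One sigmoid unit trained by gradient descent.
w, b = np.zeros(10), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

# "Ideal inputs": one-hot coverage vectors; the network's output is the
# suspiciousness of each candidate location.
suspicious = 1.0 / (1.0 + np.exp(-(np.eye(10) @ w + b)))
print(np.argsort(-suspicious)[:3])  # locations 2 and 7 should rank first
```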
Formal Description of Software Dynamic Correctness
Ma Yanfang, Zhang Min, and Chen Yixiang
2013, 50(3):  626-635. 
Correctness is a key attribute of software trustworthiness. Abstractly, software correctness can be represented as whether the implementation of software satisfies its specification. In the real world, however, it is difficult to achieve this satisfaction absolutely: in the course of developing and designing software, the implementation is repeatedly modified in order to satisfy its specification. This means that the software gets closer and closer to correctness; that is, software correctness is a dynamic process. In order to describe the dynamic correctness of software, this paper proposes an abstract characterization and a limit theory of dynamic correctness based on parameterized bisimulation. Firstly, the infinite evolution mechanism of parameterized bisimulation is established: parameterized limit bisimulation is defined to characterize the relation between a series of software implementations obtained in real design and their specification, and some specific examples are shown. Secondly, the parameterized bisimulation limit is given, together with its recursive characterization. Finally, some algebraic properties are proved, such as the uniqueness of the parameterized bisimulation limit and the consistency between the parameterized bisimulation limit and parameterized bisimulation.
A Data Placement Approach for Workflow in Cloud
Zhang Peng, Wang Guiling, and Xu Xuehui
2013, 50(3):  636-647. 
Cloud-based workflow tools are well suited to supporting the dynamic, cross-organizational business collaborations commonly found in scenarios such as emergency management, supply chain management, and healthcare management. In these scenarios, however, a large number of concurrent workflow instances can involve a great deal of data, and when one task needs data located in different places, especially private data located at the client, the data transfer becomes a challenge. To place these data effectively, the workflow manager must intelligently select where the data, including private data, will reside. In this paper, an approach to data placement for workflows in the cloud is proposed, which places data on the cloud side or the client side according to privacy requirements, and adjusts the placement according to the control flow so as to minimize data transfers. By integrating the approach into a workflow system, we can expect an improvement in data transfer. Experiments show that the proposed approach can effectively reduce data transfers during workflow execution.
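The placement policy in miniature, under our own assumptions about the data and task records: private data is pinned client-side, and each remaining dataset follows the majority of its consumers:

```python
from collections import Counter

def place(datasets, tasks):
    """Greedy sketch: private data stays at the client; every other
    dataset goes to the site (cloud or client) hosting the most tasks
    that read it, minimizing cross-site transfers."""
    placement = {}
    for name, meta in datasets.items():
        if meta["private"]:
            placement[name] = "client"
            continue
        votes = Counter(t["site"] for t in tasks if name in t["reads"])
        placement[name] = votes.most_common(1)[0][0] if votes else "cloud"
    return placement

datasets = {"d1": {"private": True}, "d2": {"private": False}}
tasks = [{"site": "cloud", "reads": {"d2"}},
         {"site": "cloud", "reads": {"d2"}},
         {"site": "client", "reads": {"d1", "d2"}}]
print(place(datasets, tasks))  # {'d1': 'client', 'd2': 'cloud'}
```

Adjusting placement "according to the control flow" then amounts to re-running the vote as the set of upcoming tasks changes during execution.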
Auto-Tuning of SpMV for Diagonal Sparse Matrices
Sun Xiangzheng, Zhang Yunquan, Wang Ting, Li Yan, and Yuan Liang
2013, 50(3):  648-656. 
SpMV (sparse matrix-vector multiplication) is an important computational kernel in scientific applications, and its performance is mainly affected by the nonzero distribution of sparse matrices. In this paper, we propose a new storage format for diagonal sparse matrices, called CRSD (compressed row segmented with diagonal-pattern), and design a diagonal pattern to represent the diagonal distribution. Unlike general-purpose methods, our CRSD-based SpMV implementation is application-specific, since it needs to sample the sparse matrix of the given application. The optimal implementation is selected automatically for a given hardware platform during the software installation phase, and the information collected during the auto-tuning process is also utilized in the parallel implementation to maintain load balance. The implementation is also tuned on the GPU platform. Experimental results demonstrate speedups of up to 2.37X (1.7X on average) over DIA and 4.6X (2.1X on average) over CSR under the same number of threads on Intel and AMD platforms.
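CRSD's layout is not given in the abstract; the DIA baseline it is measured against is easy to sketch, and CRSD refines it by segmenting rows according to a sampled diagonal pattern:

```python
import numpy as np

def spmv_dia(offsets, data, x):
    """Baseline DIA (diagonal) SpMV: data[k, i] holds A[i, i + offsets[k]],
    so each stored diagonal contributes one strided multiply-add."""
    n = len(x)
    y = np.zeros(n)
    for off, diag in zip(offsets, data):
        lo, hi = max(0, -off), min(n, n - off)
        y[lo:hi] += diag[lo:hi] * x[lo + off:hi + off]
    return y

# Tridiagonal test matrix: sub-, main, and super-diagonal
n = 5
offsets = [-1, 0, 1]
data = np.array([[1.0] * n, [2.0] * n, [3.0] * n])
print(spmv_dia(offsets, data, np.ones(n)))  # interior rows: 1 + 2 + 3 = 6
```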
A Polynomial Time Approximation Scheme for the Traveling Salesman Problem in Curved Surfaces
Wang Gang and Luo Zhigang
2013, 50(3):  657-665. 
The polynomial time approximation scheme (PTAS) for the Euclidean traveling salesman problem (TSP) combines recursive partition and dynamic programming, and similar techniques have been used to construct PTASes for other Euclidean combinatorial optimization problems. To extend the applications of this method further, TSP in curved surfaces is studied. Observing that, unlike the plane, the sphere surface admits no recursive regular partition, we handle a spherical TSP instance that can be covered by an open hemisphere, which we call a small-scale instance, by projecting it onto a sphere-inscribed square to get a planar TSP instance. We perturb the new instance, construct its partition grid, and then project the perturbed instance and the grid back onto the sphere surface. The remaining steps, such as dynamic programming and randomization, are the same as in the PTAS for the planar TSP. This result is generalized to non-small-scale spherical TSP and to TSP in a set of other curved surfaces. Because of the irregularity of the projections between the sphere surface and the plane, this method is not a PTAS reduction from the spherical TSP to the planar TSP.
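The hemisphere-to-plane step can be made concrete with a central (gnomonic) projection; the paper projects onto a sphere-inscribed square, whose plane differs from the tangent plane used here only in offset and extent, so treat this as an illustrative stand-in:

```python
import numpy as np

def gnomonic(points, center):
    """Central projection of unit-sphere points lying in the open
    hemisphere around `center` onto the tangent plane at `center`,
    expressed in 2-D tangent-plane coordinates."""
    c = center / np.linalg.norm(center)
    u = np.cross(c, [0.0, 0.0, 1.0])        # tangent-plane basis vector
    if np.linalg.norm(u) < 1e-9:            # c is a pole: pick another axis
        u = np.cross(c, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(c, u)                      # second basis vector
    out = []
    for p in points:
        p = p / np.linalg.norm(p)
        s = p / p.dot(c)                    # scale p out to the plane
        out.append((s.dot(u), s.dot(v)))
    return out

pts = [np.array([0.1, 0.1, 1.0]), np.array([-0.2, 0.1, 1.0])]
print(gnomonic(pts, center=np.array([0.0, 0.0, 1.0])))
```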
An Algorithm in Tile Assembly Model for Maximum Clique Problem
Li Kenli, Luo Xing, Wu Fan, Zhou Xu, and Huang Xin
2013, 50(3):  666-675. 
DNA computing is an efficient method for solving computational problems, especially NP-complete problems that cannot be solved efficiently on the traditional Turing machine. However, DNA computation presently suffers from several challenges, such as relatively high error rates and DNA space explosion, and decreasing the error rates has become an important issue. To solve the maximum clique problem, an NP-complete problem, with low DNA hybridization error rates, this paper introduces a DNA self-assembly model and presents a novel DNA computing algorithm that decreases error rates by reducing the number of experimental operations. Moreover, the DNA structures of the initial, regular, and detection molecules are designed, and the encoding scheme of the DNA molecules is described. The proposed algorithm needs Θ(n+|E|) tile types and Θ(1) experimental operations, where n and |E| are the numbers of vertices and edges of the graph respectively. According to our analysis, compared with other DNA models such as the sticker model and the insert-delete system, our algorithm can not only improve the accuracy of the solutions but also greatly reduce the complexity of the experiment, making it easier to carry out.