ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 May 2012, Volume 49 Issue 5
Secure Information Flow in Java by Optimized Reachability Analysis of Weighted Pushdown System
Sun Cong, Tang Liyong, Chen Zhong, and Ma Jianfeng
2012, 49(5):  901-912. 
Semantic-based approaches are generally considered more precise than type-based approaches to enforcing secure information flow in programs. Noninterference, the standard criterion for formalizing secure information flow, has not previously been analyzed with semantic-based approaches at the bytecode level. We propose a semantic-based approach that model-checks weighted pushdown systems for noninterference. To overcome the limitations imposed by the language features and the application scenario, we extend ordinary self-composition to low-recorded self-composition, in which the meta-level indices of the heap are recorded, and the auxiliary interleaving assignments, as well as the branch condition leading to the illegal-flow state, are modeled to validate the reachability analysis. We prove that unreachability of the illegal-flow state implies the noninterference property of the bytecode program. We also propose three model optimizations: companion-method elimination, parameter reordering, and inner-block optimized abstraction of additional code. The experimental results show the practicality, efficiency, and scalability of our approach, and the effectiveness of the optimizations.
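The noninterference criterion the abstract formalizes can be illustrated with a toy sketch (not the authors' pushdown-system analysis): a program is noninterferent if varying the secret (high) input never changes the public (low) output. The programs and names below are hypothetical.

```python
def leaky(low, high):
    # The low output depends on the secret: violates noninterference.
    return low + (1 if high > 0 else 0)

def safe(low, high):
    # The low output ignores the secret entirely.
    return low * 2

def noninterferent(program, low, highs):
    # Self-composition idea in miniature: run the program on
    # low-equivalent inputs that differ only in the high input,
    # and check that the low outputs agree.
    outputs = {program(low, h) for h in highs}
    return len(outputs) == 1

print(noninterferent(safe, 3, [0, 1, -5]))   # True
print(noninterferent(leaky, 3, [0, 1, -5]))  # False
```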
A QI Weight-Aware Approach to Privacy Preserving Publishing Data Set
Xu Yong, Qin Xiaolin, Yang Yitao, Yang Zhongxue, and Huang Can
2012, 49(5):  913-924. 
In recent years, publishing data about individuals without revealing their identities has become an active research issue, and k-anonymity-based models are effective techniques for preventing linking attacks. Most previous work, however, focuses on the efficiency and the scope of application of these models. The specific quality requirements that different analysis tasks place on the published microdata, and the differing contributions of individual QI (quasi-identifier) attributes to the result, have not been addressed. If the contributions of different generalization paths and orders of QI attributes are not considered, the published microdata may have poor utility in the target application, so taking them into account is worthwhile. By analyzing the differences among several application areas, a scheme is proposed that provides an effective and secure trade-off between privacy and utility. First, the basic ODP is revised to reflect the characteristics of the specific domain. Second, a weight on each quasi-identifier attribute is introduced to reflect its effect on the data-analysis task. Then QI weight-aware k-anonymity (WAK), an algorithm based on attribute weights, is introduced. Theoretical analysis and experimental results show that the scheme is effective and preserves the privacy of sensitive data well while maintaining better data utility.
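The k-anonymity property underlying the WAK algorithm can be checked with a short sketch (a minimal illustration, not the paper's weighted generalization procedure): a table is k-anonymous when every combination of quasi-identifier values occurs in at least k rows.

```python
from collections import Counter

def is_k_anonymous(table, qi_indices, k):
    # Count rows per quasi-identifier (QI) value combination; the table
    # is k-anonymous iff every group has at least k rows.
    groups = Counter(tuple(row[i] for i in qi_indices) for row in table)
    return min(groups.values()) >= k

rows = [
    ("20-30", "F", "flu"),
    ("20-30", "F", "cold"),
    ("30-40", "M", "flu"),
    ("30-40", "M", "asthma"),
]
print(is_k_anonymous(rows, qi_indices=(0, 1), k=2))  # True
print(is_k_anonymous(rows, qi_indices=(0, 1), k=3))  # False
```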
Construction of Transformation Matrix with a Given Period Modulo N
Liu Duo and Dai Yiqi
2012, 49(5):  925-931. 
Scrambling transformations play a key role in achieving security in digital image information hiding and digital image encryption. A transformation matrix with a large period is one of the basic tools of scrambling and is therefore very important in practice. Much work in the literature has investigated transformation matrices; to the best of our knowledge, however, all known work has focused on determining the periods of certain transformation matrices modulo positive integers. Departing from these ideas, we study general methods for generating transformation matrices with a given period and a given modulus. The supremum of the period of a transformation matrix modulo a power of a given prime is analyzed, and necessary and sufficient conditions for the supremum to be reached are presented. A new algorithm, and two improvements on it, for constructing a transformation matrix with the maximum period modulo a prime power are proposed. Furthermore, the upper bound of the period of a transformation matrix modulo a general integer N is investigated, based on which two algorithms for constructing transformation matrices modulo a given N using the Chinese Remainder Theorem are designed. The first yields matrices of maximum period, while the second yields matrices with foreseeable periods.
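The notion of a matrix period modulo N, and the CRT observation that makes the paper's modulus-combining constructions work, can be sketched as follows (a brute-force illustration with the Arnold cat-map matrix, not the paper's construction algorithms):

```python
from math import gcd

def mat_mul(A, B, n):
    # 2x2 matrix product modulo n, on tuples of tuples.
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) % n for j in range(2))
        for i in range(2)
    )

def period(A, n):
    # Smallest t >= 1 with A^t = I (mod n); A must be invertible mod n.
    I = ((1, 0), (0, 1))
    P, t = A, 1
    while P != I:
        P, t = mat_mul(P, A, n), t + 1
    return t

A = ((1, 1), (1, 2))   # Arnold cat-map matrix
print(period(A, 5))    # 10
# CRT observation behind the modulus-N constructions: for coprime
# factors, the period mod their product is the lcm of the periods.
lcm = lambda a, b: a * b // gcd(a, b)
print(period(A, 15) == lcm(period(A, 3), period(A, 5)))  # True
```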
Two Identity-Based Threshold Cryptosystems with Reduced Trust in PKG
Long Yu, Xu Xian, and Chen Kefei
2012, 49(5):  932-938. 
In traditional identity-based cryptosystems, the private key generator (PKG) can compute the private key corresponding to any identity, so malicious activity by the PKG is hard to detect, which restricts the use of identity-based cryptosystems. Multiple PKGs have been employed to solve this problem, but this brings problems of its own. In this paper, two identity-based threshold cryptosystems are proposed that reduce the trust placed in the PKG. They use the traceable identity-based encryption technique suggested by Goyal, which restricts the potential misbehavior of the PKG, together with the idea of publicly verifiable encryption, widely used in threshold cryptosystems, to achieve distributed decryption. The two schemes solve the malicious-PKG tracing problem effectively: if the PKG generates more than one private key for a single user, the misbehavior can be detected with evidence. We define a formal model of identity-based threshold cryptosystems with reduced trust in the PKG, and prove the two schemes secure against the adaptive chosen-ciphertext attack, the find-key attack, and the compute-new-key attack under this model. The results show that their security rests on the truncated augmented bilinear Diffie-Hellman exponent assumption.
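The threshold (distributed-decryption) ingredient of such schemes rests on secret sharing. The sketch below shows plain Shamir (t, n) sharing over a toy prime field; it is only the generic building block, not the paper's pairing-based construction, and the field parameter is illustrative.

```python
import random

P = 2**61 - 1  # a Mersenne prime used as a toy field modulus

def share(secret, t, n):
    # Split `secret` into n shares so that any t of them recover it:
    # evaluate a random degree-(t-1) polynomial at x = 1..n.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456, t=3, n=5)
print(reconstruct(shares[:3]) == 123456)  # True: any 3 shares suffice
```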
Trusted Intra-Domain Fast Authentication Protocol Based on Split Mechanism Network
Zheng Lijuan and Han Zhen
2012, 49(5):  939-948. 
A split mechanism network cleanly separates a host's location from its identity information; it divides the whole Internet into two parts, the core network and the access network, and can address the Internet's scalability and mobility problems. In a split mechanism network, when a terminal hands off within a domain, both the speed and the security of the authentication process must be guaranteed. In this paper, combining trusted computing, an intra-domain fast authentication scheme based on the split mechanism network is proposed. The scheme realizes terminal platform authentication and terminal platform integrity verification as well as user identity authentication. When a handover occurs within the domain, the access switch router uses a token to authenticate the mobile terminal without communicating with the authentication center. Comparison with other intra-domain fast authentication schemes in terms of authentication cost, authentication latency, and security demonstrates that the proposed scheme is more secure and more efficient. It provides identity anonymity and platform anonymity, resists man-in-the-middle and replay attacks, and ensures fairness of key negotiation and one-time-pad freshness. The scheme also reduces the burden on the authentication centers and has clear advantages over current schemes.
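The core idea of token-based local verification (authenticating a handover without contacting the authentication center) can be sketched with a MAC over the terminal identity. This is a hypothetical minimal illustration, not the paper's protocol; the key distribution and identifiers are assumptions.

```python
import hmac, hashlib, os

# Assumed setup: the authentication center and the access switch
# routers of one domain share a symmetric domain key.
DOMAIN_KEY = os.urandom(32)

def issue_token(terminal_id: bytes) -> bytes:
    # Issued by the authentication center at initial authentication.
    return hmac.new(DOMAIN_KEY, terminal_id, hashlib.sha256).digest()

def verify_token(terminal_id: bytes, token: bytes) -> bool:
    # An access switch router verifies the token locally on handover,
    # with no round trip to the authentication center.
    expected = hmac.new(DOMAIN_KEY, terminal_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

tok = issue_token(b"terminal-42")
print(verify_token(b"terminal-42", tok))  # True
print(verify_token(b"terminal-99", tok))  # False
```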
Research on Unknown Malicious Code Automatic Detection Based on Space Relevance Features
Li Peng, Wang Ruchuan, and Wu Ning
2012, 49(5):  949-957. 
An automatic detection scheme for unknown malicious code samples based on spatially correlated features is proposed. According to the quantitative feature vectors of the character space, malicious code samples are divided into spatially correlated blocks by an intelligent region-growing segmentation algorithm. For each block of a malicious code sample, the spatial relations of the character moments, information entropy, and correlation coefficients are calculated, the feature vectors are extracted, and the values are normalized. A reference set of spatial-relation feature vectors is then built through analysis of the general spatial properties of malicious code samples. To match previously unknown malicious code, a similarity-preferred matching algorithm based on a comprehensive analysis of multiple features is adopted; in addition, the spatial-relation distances are weighted and considered jointly to improve matching accuracy. An experimental workflow is designed, the spatial-relation feature vectors of multiple malicious code sample blocks are characterized, and the detection accuracy of single-feature matching is compared with that of comprehensive multi-feature matching. The experimental analysis shows that the proposed scheme can match previously unknown malicious code with high accuracy and can determine the subordinate type of each malicious code sample.
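One of the per-block features named above, information entropy, is straightforward to compute; this sketch shows Shannon entropy over a block of bytes (only one feature of the scheme, shown in isolation):

```python
from collections import Counter
from math import log2

def block_entropy(block: bytes) -> float:
    # Shannon entropy in bits per byte of one code-sample block.
    counts = Counter(block)
    n = len(block)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(block_entropy(b"\x00" * 64))          # 0.0: no variability
print(block_entropy(bytes(range(256))))     # 8.0: maximal variability
```

High-entropy blocks often indicate packed or encrypted code regions, which is why entropy is a common malware feature.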
Security Analysis of an RFID Protocol Based on Insubvertible Encryption
Wei Yongzhuang, Ouyang Ning, and Ma Chunbo
2012, 49(5):  958-961. 
Radio frequency identification (RFID) is an important technique for object identification in modern life (for instance, it is widely used in manufacturing, transportation, and medical treatment). RFID offers many advantages, such as speed, low cost, and accuracy in processing data through unique identification. Insubvertible encryption is a new type of re-encryption that plays an important role in the security design of RFID systems. Recently, Osaka et al. presented an RFID protocol based on insubvertible encryption and a guardian proxy, and claimed that it was secure against tag spoofing and swapping attacks, among others. In this paper, however, we show that there is a differentially invariant relationship between the random numbers of the reader and the guardian proxy in computing the shared key. Based on this observation, we propose an asynchronous attack on the protocol: by forging the two random numbers from the reader and the guardian proxy, we can successfully complete all the authentication steps of the reader and the back-end database server. Moreover, the shared secret between the tag and the server is changed, so that a legitimate tag can no longer pass authentication normally. This shows that the protocol is insecure under the asynchronous attack.
A Group Proxy Signature Scheme Based on Sub-Secret Evolution
Gu Ke, Jia Weijia, and Jiang Chunlin
2012, 49(5):  962-973. 
Few sharing methods for delegation information are used in current group proxy signature schemes. A new secure group proxy signature scheme based on sub-secret evolution is proposed in this paper, built on the CDHP and bilinear pairings. In this scheme, a double sub-secret of the delegation information is constructed from a secret-sharing method and cellular automata theory, so that the shared secrets evolve with the number of proxy signatures produced. The correctness, security, and efficiency of the scheme are analyzed. The scheme is efficient without requiring a trust center; it is proved existentially unforgeable under a provable-security model of group proxy signatures and able to trace proxies that abuse the proxy signature. Compared with other group proxy signature schemes, this scheme is more secure.
Intrusion Detection System Using CVM Algorithm with Extensive Kernel Methods
Wang Qi'an and Chen Bing
2012, 49(5):  974-982. 
An intrusion detection system based on the core vector machine (CVM) with extended kernel methods is presented, removing the restriction on admissible kernels and the cost of solving the underlying quadratic problem. First, the algorithm solves the center-constrained minimum enclosing ball (MEB) of a training data set; the new MEB is obtained by a simple update of the ball's center and radius. Then the optimal separating hyperplane is constructed from the solutions on the core set. The convergence, time complexity, and space complexity are proved theoretically. Finally, according to the distribution of the core set, different intrusion actions can be detected. The experiments indicate that the algorithm is feasible and effective.
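The core-set iteration behind CVM can be sketched in plain Euclidean space with the Badoiu-Clarkson update for the minimum enclosing ball: repeatedly step the center a shrinking fraction toward the farthest point. This is only the geometric kernel of the idea, not the full kernelized CVM classifier.

```python
def meb_center(points, iters=300):
    # Badoiu-Clarkson core-set iteration: after i steps the center is
    # within roughly r*/sqrt(i) of the optimal MEB center.
    c = list(points[0])
    for i in range(1, iters + 1):
        # Farthest point from the current center joins the core set.
        far = max(points,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p, c)))
        c = [a + (b - a) / (i + 1) for a, b in zip(c, far)]
    return c

pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
c = meb_center(pts)
print(c)  # approaches the true MEB center (1.0, 0.0), radius 1
```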
A Method of Disjoint Quantitative Analysis for Dynamic Fault Tree
Zhang Honglin, Zhang Chunyuan, Liu Dong, and Fu Jian
2012, 49(5):  983-995. 
Dynamic fault trees are widely used in the reliability analysis of dynamic systems, where cut sequences describe the failure modes of the system. Although making the cut sequence set disjoint can simplify the computation of the top-event failure probability, no method of disjoint quantitative analysis has so far been available for dynamic fault trees. Such a method is proposed here. The concept of the extended cut sequence is introduced; it incorporates temporal logic over cut sequences and has more representational power than similar existing concepts. The extended cut sequence set is split into a disjoint extended cut sequence set according to the basic-event set and the temporal-constraint set, and each disjoint extended cut sequence is then transformed into a standard extended cut sequence. Conflict detection, reduction of the temporal-restriction set, and topological sorting of the basic-event set of each cut item are carried out to analyze each standard extended cut sequence quantitatively. The algorithms involved, and their complexities, are proved and analyzed in detail. Finally, the method is applied to a case study and compared with the MCS method based on the inclusion-exclusion rule; the results show that its time cost is greatly reduced. The method can obtain the disjoint extended cut sequence set of a dynamic fault tree and decrease the time cost of the solution.
XenRPC: Design and Implementation of Secure VM Remote Procedure Call
Chen Hao, Peng Cuifen, Sun Jianhua, and Shi Lin
2012, 49(5):  996-1004. 
In a virtual machine environment, VMs often need to communicate with each other even though they reside on the same physical machine, and existing remote procedure call mechanisms are not well suited to this setting. In this paper, a Xen-specific remote procedure call mechanism named XenRPC is presented. XenRPC uses the interfaces provided by XenAccess and the event channel mechanism provided by Xen to share memory between the two communicating processes. XenRPC removes marshalling when data packets are sent, triggers an immediate context switch, and notifies events asynchronously through the event channel, which greatly improves communication performance. In addition, to prevent stack overflow attacks, XenRPC protects the shared memory and checks the return address on the shared stack; if the return address has been modified by a malicious program, XenRPC restores it, protecting users from the attack. Performance evaluations show that the throughput, latency, and CPU consumption of XenRPC are better than those of SunRPC and Ice, two well-known remote procedure call mechanisms.
A Division Based Composite Service Selection Approach
Zhang Mingwei, Zhang Bin, Zhang Xizhe, and Zhu Zhiliang
2012, 49(5):  1005-1017. 
Composite service selection is one of the core research issues in service computing. There are complex correlations among the QoS values of the atomic services in a composite service, which make some atomic services more efficient, and others less efficient, when they are used together. Previous service selection algorithms have largely ignored this problem, so the Web service QoS used for selection is inaccurate and the selected composite service is not the optimal one in the real execution environment. To address this problem, a division-based composite service selection approach is proposed in this paper. First, the set of efficient composite-service execution instances is extracted from the log repository, and the sets of concrete services that are frequently used together are mined. On this basis, the composite service is divided into dots, and the corresponding dot patterns of each dot are generated. Finally, service selection is performed taking the dots as selection units and the dot patterns of each dot as its candidate service sets. Because dot patterns are attested by many execution instances and embody selection experience, they are usually more efficient than the results of selecting a service independently for each position in a dot. Experimental results show that the proposed approach effectively improves the quality of the selected composite services.
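The mining step, finding sets of concrete services that frequently occur together in efficient execution instances, can be sketched with simple pair counting. The log format here is a hypothetical simplification (each instance reduced to the set of services it used), not the paper's actual representation.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(executions, min_support):
    # Count how often each pair of services co-occurs in an execution
    # instance, and keep the pairs meeting the support threshold.
    counts = Counter()
    for services in executions:
        counts.update(combinations(sorted(services), 2))
    return {pair for pair, c in counts.items() if c >= min_support}

log = [{"s1", "s2", "s3"}, {"s1", "s2"}, {"s1", "s2", "s4"}, {"s3", "s4"}]
print(frequent_pairs(log, min_support=3))  # {('s1', 's2')}
```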
Energy-Aware Scheduling of Hard Real-Time Tasks in VFD-Based Multi-Core Systems
Wu Xiaodong, Han Jianjun, and Wang Tianjiang
2012, 49(5):  1018-1027. 
With energy consumption continually increasing, energy savings have become one of the most critical issues. In this paper, we propose an energy-efficient approach to scheduling periodic hard real-time tasks in a multi-core context, taking into account the mechanisms of voltage frequency domains (VFD) and globally asynchronous, locally synchronous (GALS) design. First, a simple static voltage/frequency scaling (StaticVS) schedule is introduced. In StaticVS, the real-time tasks are mapped onto the processing cores by the worst-fit decreasing strategy, balancing the utilizations of the cores so that the shared operating frequencies of the VFDs can be reduced when voltage scaling is performed. Within a VFD, StaticVS selects the utilization of the most heavily loaded core as the shared operating frequency of the cores in that VFD. Next, building on the StaticVS schedule, a slack reallocation (SR) policy is proposed to reclaim the remaining slack under the frequency-synchronization constraint. The SR policy redistributes the slack uniformly among the cores of the same VFD, so that the synchronous operating frequency of the VFD can be scaled down further. Experimental results demonstrate that the proposed scheduling algorithm reduces energy consumption effectively.
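The worst-fit decreasing mapping used by StaticVS can be sketched directly: sort tasks by utilization and place each on the currently least-loaded core, which balances per-core load and thus lets the domain's shared frequency scale down.

```python
def worst_fit_decreasing(utilizations, num_cores):
    # Worst-fit decreasing: heaviest task first, each onto the
    # least-loaded core so far.
    loads = [0.0] * num_cores
    assignment = [[] for _ in range(num_cores)]
    for u in sorted(utilizations, reverse=True):
        i = loads.index(min(loads))
        loads[i] += u
        assignment[i].append(u)
    return loads, assignment

loads, asg = worst_fit_decreasing([0.5, 0.4, 0.3, 0.2, 0.1], num_cores=2)
print([round(l, 2) for l in loads])  # [0.8, 0.7]: well balanced
```

Under StaticVS, the shared VFD frequency would then be set by the heavier of the two loads (0.8 here).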
A Dependability Evaluation Model for Internetware Based on Bayesian Network
Si Guannan, Ren Yuhan, Xu Jing, and Yang Jufeng
2012, 49(5):  1028-1038. 
Dependability evaluation is an essential issue for Internetware, which operates in the open, dynamic, and decentralized Internet environment. In many cases, however, dependability is evaluated with simple black-box metrics that ignore system structure. In this paper, a Bayesian-network-based dependability evaluation model for Internetware is proposed. It builds a multi-tier system of dependability evaluation metrics from the structural analysis and structural patterns of Internetware, and uses a bottom-up approach over the Bayesian network to calculate dependability metrics of the Internetware system and its components from many aspects. A single dependability metric is obtained as the result and is then corrected with objective data. Experimental results show that the model can evaluate the dependability of Internetware clearly and objectively, and that it aids the design, development, and deployment of Internetware.
Automated Test Data Generation Using Evolutionary Algorithm Based on Maintaining Population Diversity
Wang Jianmin and Cai Yuan
2012, 49(5):  1039-1048. 
Automatic test data generation aims to find a relatively small set of test data satisfying an adequacy criterion, in order to reduce testing cost and increase testing efficiency. In this paper, an innovative test data generation algorithm based on maintaining population diversity is proposed, which satisfies the condition/decision coverage criterion. The algorithm is based on an extended branch coverage table. Normalized Manhattan distance is employed to calculate the diversity between test data and to eliminate the data with lower diversity, thereby maintaining population diversity. Meanwhile, a new approach is introduced to evaluate the fitness values of test data, and a greedy algorithm is then used to reduce the number of test cases. Finally, the paper presents experiments over a large benchmark of fourteen programs covering fundamental and practical aspects of computer science.
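The diversity measure described above can be sketched concretely: the normalized Manhattan distance scales each component by its input range, and a candidate's diversity is its minimum distance to the current population (the exact elimination policy here is a simplification of the paper's).

```python
def norm_manhattan(a, b, lo, hi):
    # Manhattan distance with each component scaled to its input range
    # [lo[i], hi[i]], averaged over the dimensions.
    return sum(abs(x - y) / (h - l)
               for x, y, l, h in zip(a, b, lo, hi)) / len(a)

def diversity(candidate, population, lo, hi):
    # Diversity = distance to the nearest population member; low-diversity
    # candidates are the ones a diversity-maintaining GA would discard.
    return min(norm_manhattan(candidate, p, lo, hi) for p in population)

pop = [(0, 0), (10, 10)]
print(diversity((1, 1), pop, lo=(0, 0), hi=(10, 10)))  # 0.1: near (0, 0)
print(diversity((5, 5), pop, lo=(0, 0), hi=(10, 10)))  # 0.5: far from both
```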
A Cache Replacement Policy Based on Reuse Distance Prediction and Stream Detection
Lin Junmin, Wang Wei, Qiao Lin, and Tang Zhizhong
2012, 49(5):  1049-1060. 
Traditional cache replacement policies fail to accommodate the streaming access patterns exhibited by many modern applications, leading to decreased cache performance, a phenomenon known as cache thrashing. This paper proposes a new reuse-distance prediction mechanism based on period detection, which captures the periodic regularity in reuse-distance sequences. It then analyzes the regularity of reuse distances and the complexity of the streaming access sequences exposed by applications. Finally, a new replacement algorithm, RDP for short, is presented that accommodates both simple and complex streaming access patterns using the proposed prediction mechanism. The basic idea of RDP is to predict the reuse distance of memory accesses, maintain reuse-distance counters dynamically, and adjust the replacement priority of cache blocks according to the predictions, in contrast to traditional replacement decisions. In addition, it reduces storage overhead by stream sampling. The experiments show that RDP adapts well to diverse streaming access patterns and outperforms both the LRU and DIP policies. On large last-level caches, RDP significantly improves cache performance for programs with strong streaming access patterns; for example, on a 32MB cache, RDP reduces cache misses by 27.5% on average over LRU.
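The reuse distance that RDP predicts is a standard quantity: the number of distinct blocks touched between two accesses to the same block. A small sketch of computing it over an access trace (the measurement only, not RDP's period-detection predictor):

```python
def reuse_distances(trace):
    # LRU-stack reuse distance per access: distinct blocks touched since
    # the previous access to the same block (None = cold first access).
    stack, out = [], []
    for addr in trace:
        if addr in stack:
            d = len(stack) - 1 - stack.index(addr)
            stack.remove(addr)
            out.append(d)
        else:
            out.append(None)
        stack.append(addr)  # move to most-recently-used position
    return out

print(reuse_distances(["a", "b", "c", "a", "a", "b"]))
# [None, None, None, 2, 0, 2]
```

A streaming pattern shows up as a run of large, regular reuse distances, which is exactly the regularity the period detector exploits.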
A Bus Arbitration Scheme for Memory Access Performance Optimization
Liu Dan, Feng Yi, Tong Dong, Cheng Xu, and Wang Keyi
2012, 49(5):  1061-1071. 
Memory access performance depends strongly on the processing order of memory transactions. On a system bus, the outstanding memory transactions issued by one bus device often have consecutive addresses and the same read/write (R/W) type. Under traditional bus arbitration schemes, however, outstanding transactions from different devices are likely to be interleaved with one another, producing non-sequential addresses and mixed R/W types. Because the scheduling capability of the memory controller is limited, such sequences usually prevent it from accessing memory efficiently. In this paper, we propose a novel bus arbitration scheme, CGH, to minimize the number of memory row-addressing and R/W-type context switches. CGH recognizes and grants sequences of outstanding transactions from the same bus device that share a row address and R/W type, and during grant handoff it prioritizes requests whose row address and R/W type match the most recent transaction, for further improvement. Applied to the PKUnity-SK SoC, the proposed arbitration scheme improves memory access performance by 21.37% with only 2.83% area overhead, and reduces memory power consumption by 15.15% owing to fewer row activations.
Anaphoricity Determination for Coreference Resolution in English and Chinese Languages
Kong Fang, Zhu Qiaoming, and Zhou Guodong
2012, 49(5):  1072-1085. 
This paper systematically explores noun-phrase anaphoricity determination for coreference resolution in both English and Chinese. First, a rule-based method detects non-anaphors that are insensitive to context or exhibit obvious patterns. Then, both flat feature-based and structured tree-kernel-based methods determine the non-anaphors that are sensitive to context. Finally, a composite kernel is proposed that combines the flat features with the structured ones to further improve performance. Experimental results on the ACE 2003 English corpus and the ACE 2005 Chinese corpus show that all the proposed methods perform well on anaphoricity determination. In addition, the anaphoricity determination module is applied to coreference resolution, and the results show that proper anaphoricity determination significantly improves coreference resolution performance in both English and Chinese.
Double Center Particle Swarm Optimization Algorithm
Tang Kezong, Liu Bingxiang, Yang Jingyu, and Sun Tingkai
2012, 49(5):  1086-1094. 
Particle swarm optimization (PSO) is a promising swarm-intelligence optimization technique that has been applied extensively to all kinds of complex optimization problems, owing to its simple theory, few parameters, and good performance. Each particle's personal best and the population's global best are the two major factors affecting the algorithm's convergence speed and precision. This paper proposes a double center particle swarm optimization algorithm (DCPSO). We analyze the particles' flight trajectories and introduce a general center particle and a special center particle into DCPSO, which improves the convergence speed and precision of PSO without increasing its computational complexity. Six classical test functions, including Rosenbrock and Rastrigin, are used to verify the proposed algorithm in two ways: fixed iteration count and fixed running time. Experimental results show that the proposed algorithm is feasible and efficient.
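The center-particle flavor of the idea can be sketched on top of standard PSO: besides the personal and global bests, the swarm's mean position also competes for the global best. This is a simplified illustration under assumed parameters, not the paper's exact DCPSO scheme (which uses two distinct center particles).

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def sphere(x):
    # Classical test function: global minimum 0 at the origin.
    return sum(v * v for v in x)

def center_pso(f, dim=5, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [list(x) for x in X]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = list(X[i])
        # Center particle: the mean of all positions also competes
        # for the global best, at no extra fitness-model cost.
        center = [sum(x[d] for x in X) / swarm for d in range(dim)]
        gbest = min(pbest + [center, gbest], key=f)
    return gbest

best = center_pso(sphere)
print(sphere(best) < 1.0)  # the swarm converges toward the origin
```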
Privacy-Preserving Data Publication for Clustering
Ni Weiwei, Chen Geng, Chong Zhihong, and Wu Yingjie
2012, 49(5):  1095-1104. 
Privacy-preserving data publication has attracted sustained attention in recent years; it seeks a trade-off between preserving data privacy and maintaining data utility. Clustering, a crucial step in advanced data analysis, has been widely studied in data mining, but there is an inherent tension between clustering and data obfuscation: clustering depends heavily on the characteristics of individual records to segment the data into clusters, whereas data obfuscation typically suppresses individual characteristics to avoid leaking individual privacy. It is therefore difficult to achieve data privacy and clustering utility of the published data simultaneously, and various distortion and limited-distribution techniques have been applied to this problem. This paper surveys the state of the art in data obfuscation methods for clustering applications and discusses the constraint mechanism among the clustering-feature granularity to be preserved, the maintenance of clustering usability, and the security of data privacy. Furthermore, the principles and merits of prevalent methods such as data anonymization, data randomization, data swapping, and synthetic data substitution are compared from the viewpoint of balancing privacy preservation against clustering usability. Following a comprehensive analysis of the existing techniques, some unaddressed problems and future directions are highlighted.
Frequent Patterns Mining over Uncertain Data Streams Based on Probability Decay Window Model
Liao Guoqiong, Wu Lingqin, and Wan Changxuan
2012, 49(5):  1105-1115. 
In recent years, large amounts of uncertain data have been emerging due to the wide use of new technologies such as wireless sensor networks and radio frequency identification. Considering the uncertainty of uncertain data streams, a new probability frequent pattern tree, the PFP-tree, and a probability frequent pattern mining method, PFP-growth, are proposed in this paper. PFP-growth adopts a transactional uncertain data stream model and a time-based probability decay window model, and finds probability frequent patterns by calculating expected supports. Its main characteristics are: 1) because items arriving at different times within a window may contribute differently to the expected supports, a time-based probability decay window model is used to improve mining precision; 2) an item index table and a transaction index table are designed to speed up retrieval on the PFP-tree; 3) a pruning algorithm deletes nodes that cannot become frequent patterns, greatly reducing both time and space overhead; and 4) a transaction probability list is kept at every node, since an item may have different probabilities in different transactions. Experimental results show that PFP-growth not only achieves higher mining precision but also needs less processing time and storage space than existing methods.
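The decayed expected support at the heart of the method can be sketched directly: each uncertain transaction contributes the product of its items' existence probabilities, weighted by a decay factor raised to the age of its batch. The data layout and decay parameter here are hypothetical, not the paper's exact model.

```python
def expected_support(windows, pattern, decay=0.9):
    # `windows` is a list of batches, oldest first; each transaction is
    # a dict mapping item -> existence probability.
    total = 0.0
    for age, batch in enumerate(reversed(windows)):  # newest batch: age 0
        for probs in batch:
            if all(item in probs for item in pattern):
                contrib = 1.0
                for item in pattern:
                    contrib *= probs[item]
                total += (decay ** age) * contrib
    return total

old = [{"a": 0.5, "b": 0.8}]
new = [{"a": 1.0, "b": 0.5}, {"a": 0.4}]
print(expected_support([old, new], ("a", "b")))  # 0.5 + 0.9 * 0.4 = 0.86
```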
Piecewise Linear Approximation of Rational Triangular Surfaces
Zhou Lian and Wang Guojin
2012, 49(5):  1116-1122. 
Piecewise linear approximation of rational triangular surfaces is useful in surface intersection, surface rendering, and mesh generation. The approximation error bound is usually estimated from bounds on the second-order derivatives of the rational triangular surface, but such derivative bounds are difficult and inefficient to estimate. To solve this problem, using homogeneous coordinates and inequality methods, we present an algorithm that estimates subdivision depths for rational triangular surfaces defined on an arbitrary triangle. The estimation is performed on the polynomial surfaces whose images under the standard perspective projection are the given rational surfaces, which is more efficient than bounding the derivatives of the given surfaces directly. Although the subdivision depth is obtained in advance, it guarantees the required flatness of the surface after subdivision. Moreover, using the Möbius reparameterization technique, the variance of the logarithms of the weights of the rational triangular Bézier surface is minimized, which markedly improves the efficiency of the algorithm; in particular, the optimal reparameterization is solved explicitly, so reparameterization hardly increases the running time. Numerical examples suggest that the algorithm has stronger properties and is more effective than previous methods.
A Data-Clustering Based Robust SIFT Feature Matching Method
Fan Zhiqiang and Zhao Qinping
2012, 49(5):  1123-1129. 
Abstract ( 446 )   HTML ( 1)   PDF (3689KB) ( 420 )  
We present a data-clustering method for robust SIFT matching. Our matching process contains an offline module that clusters features from a group of reference images and an online module that matches them to live images, in order to enhance matching robustness. The main contribution lies in constructing a composite k-d data structure that can be used not only to cluster features but also to perform feature matching. An optimal keyframe selection method is then proposed on top of the composite k-d tree; it not only accelerates the matching process but also enables a cascading feature matching strategy that combines the matching results of the composite k-d tree and the keyframe. Experimental results show that our method dramatically enhances matching robustness.
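The robustness filter that underlies most SIFT matchers, accepting a match only when the nearest descriptor is much closer than the second nearest (Lowe's ratio test), can be sketched as below. The paper's composite k-d tree is replaced here by brute-force nearest-neighbour search for clarity; the function names and the descriptor format are illustrative assumptions.

```python
# Minimal sketch of robust descriptor matching via the distance-ratio test.
# Descriptors are plain tuples of floats; reference plays the role of the
# offline (clustered) feature set and query the live-image features.

def match_descriptors(query, reference, ratio=0.8):
    """Return (query_idx, ref_idx) pairs whose nearest neighbour beats the
    second-nearest by the given ratio (compared on squared distances)."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    matches = []
    for qi, q in enumerate(query):
        ranked = sorted((dist2(q, r), ri) for ri, r in enumerate(reference))
        # Accept only unambiguous matches: d1 < ratio * d2 (squared form).
        if len(ranked) >= 2 and ranked[0][0] < (ratio ** 2) * ranked[1][0]:
            matches.append((qi, ranked[0][1]))
    return matches
```

A cascading strategy of the kind the abstract describes would run such a test first against the clustered reference set and fall back to the selected keyframe when the test is ambiguous.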
A Distributed Parallel Algorithm for SIFT Feature Extraction
Jiang Guiyuan, Zhang Guiling, and Zhang Dakun
2012, 49(5):  1130-1141. 
Abstract ( 608 )   HTML ( 2)   PDF (4197KB) ( 512 )  
SIFT (scale invariant feature transform) has been widely applied to object detection and recognition, image registration and fusion, texture recognition, scene classification, human face detection, image retrieval, 3D reconstruction, digital watermarking, and object tracking. However, it is compute-intensive and time-consuming. A distributed parallel algorithm for extracting SIFT features (DP-SIFT) is proposed, using a data-parallel strategy on PC clusters/COW (cluster of workstations) based on message passing. A data-blocking algorithm with limits on block height and width is designed according to the characteristics of the feature extraction space, and data distribution and feature adjustment methods are also presented. After investigating the effects of data blocking methods and data passing approaches on communication time, a strategy that coordinates data blocking with data passing is proposed to optimize communication in parallel image processing. Experimental results verify that DP-SIFT achieves remarkable speedup and efficiency: on a PC cluster with 32 cores linked by gigabit Ethernet, the speedup and efficiency reach 20 and 0.6 respectively for a 1024×768 input image, and 18 and 0.56 for a 2048×1536 input image.
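The data-blocking step, splitting the image into tiles with capped height and width plus a halo so that descriptors near tile borders still see their surrounding pixels, can be sketched as follows. Tile and halo sizes here are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of data blocking for parallel feature extraction: each tile
# is expanded by `halo` pixels on every side (clipped to the image) so that
# features detected near a tile border have a full neighbourhood available.

def make_tiles(width, height, max_w, max_h, halo):
    """Return (x0, y0, x1, y1) rectangles covering a width x height image,
    each at most max_w x max_h before the halo expansion."""
    tiles = []
    for y in range(0, height, max_h):
        for x in range(0, width, max_w):
            tiles.append((max(0, x - halo), max(0, y - halo),
                          min(width, x + max_w + halo),
                          min(height, y + max_h + halo)))
    return tiles
```

Each tile is then processed by one worker, with feature coordinates shifted back by the tile origin and duplicates in the halo regions removed. Under the usual definitions speedup = T1/Tp and efficiency = speedup/p, so the abstract's speedup of 20 on 32 cores corresponds to an efficiency of 20/32 ≈ 0.62.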
Two Novel Directional Filter Banks with Diamond Structure for Fingerprint Enhancement
Cai Xiumei and Fan Jiulun
2012, 49(5):  1142-1148. 
Abstract ( 560 )   HTML ( 0)   PDF (3773KB) ( 475 )  
Fingerprint enhancement is important for improving the accuracy of minutiae extraction, and indeed for the whole automated fingerprint identification system (AFIS). Directional filter banks (DFBs) are frequently used to enhance fingerprint images. Based on the well-known Fibonacci sequences, two novel DFBs with a diamond structure are proposed. This special structure ensures structural consistency across all filter orientations. The two DFBs not only solve the overflow problem of filter orientations, but also ensure that the coefficients on the rotated masks are distributed more regularly. The performance of the two DFBs is evaluated on typical low-quality images from the FVC database. Experimental results indicate that the proposed DFBs are highly effective on low-quality fingerprint images: compared with the DFB reported in the literature, they reduce noise, connect broken ridges and enhance ridges more efficiently, while introducing fewer spurious minutiae during enhancement.
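The paper's Fibonacci-based diamond DFBs are a specific filter-bank construction; as a minimal, hedged illustration of the underlying principle only (smoothing along the ridge orientation strengthens ridges, while smoothing across them blurs them), here is an oriented mean filter over a grayscale image stored as a list of rows. All names and parameters are illustrative assumptions.

```python
# Oriented mean filtering: average the 2k+1 pixels along a chosen direction
# through (y, x). Filtering along a ridge preserves it; filtering across it
# mixes ridge and valley values -- the basic mechanism a directional filter
# bank exploits per orientation channel.

def oriented_mean(img, y, x, dy, dx, k=2):
    """Mean of up to 2k+1 pixels along direction (dy, dx) through (y, x),
    clipping samples that fall outside the image."""
    h, w = len(img), len(img[0])
    vals = []
    for t in range(-k, k + 1):
        yy, xx = y + t * dy, x + t * dx
        if 0 <= yy < h and 0 <= xx < w:
            vals.append(img[yy][xx])
    return sum(vals) / len(vals)
```

On a vertical ridge (a bright column), averaging along the column returns the full ridge value, while averaging across it dilutes the ridge with its dark neighbours; a DFB applies one such orientation-selective filter per direction channel.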