ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 November 2008, Volume 45 Issue 11
Paper
Study of Generalized Hyper-Sphere Support Vector Machine
Zhang Xinfeng and Liu Yaowei
2008, 45(11):  1807-1816. 
Hyper-sphere support vector machine is an important method for unbalanced classification, an important issue in biomedical engineering such as tongue image classification in traditional Chinese medicine. By introducing a second class of samples, the one-class support vector domain description classifier is extended to a binary hyper-sphere classifier to improve its generalization performance. However, for the methods in the existing literature, this paper proves, from the viewpoint of the optimal solution, that the margin between the two classes of samples is indeterminate when one class contains no support vector. Meanwhile, the margin is proved to be zero when each class contains at least one normal support vector. Generalization performance suffers when the margin is indeterminate or equal to zero. To improve the classification accuracy, a generalized hyper-sphere SVM is proposed by introducing parameters n and b (n > b), so that a margin greater than zero can be obtained; the formulation balances the volume of the hyper-sphere, the margin, and the empirical error. Theoretical analysis and experimental results show that the proposed algorithm has better generalization performance and lower empirical risk.
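The abstract does not state the optimization problem explicitly; a plausible formulation consistent with its description (the center c, radius R, slack variables ξ, ζ, and trade-off constants C₁, C₂ below are illustrative assumptions, not the paper's notation) is:

```latex
\begin{aligned}
\min_{c,\,R,\,\xi,\,\zeta}\quad & R^2 + C_1\sum_i \xi_i + C_2\sum_j \zeta_j\\
\text{s.t.}\quad & \|x_i - c\|^2 \le b\,R^2 + \xi_i \quad (x_i \text{ in the target class}),\\
& \|x_j - c\|^2 \ge n\,R^2 - \zeta_j \quad (x_j \text{ in the other class}),\\
& \xi_i \ge 0,\; \zeta_j \ge 0,
\end{aligned}
```

so that at any feasible point with zero slack the two classes are separated by a spherical shell of squared width (n-b)R², which is strictly positive whenever n > b.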
A Transductive Multi-Label Text Categorization Approach
Jiang Yuan, She Qiaoqiao, Li Ming, and Zhou Zhihua
2008, 45(11):  1817-1823. 
Real-world text documents usually belong to multiple classes simultaneously, so applying multi-label learning techniques to classify text documents is an important research direction. Existing multi-label text categorization approaches usually require a large number of documents with correct class labels to achieve good performance. In real applications, however, often only a small number of labeled documents can be obtained as training samples because of the cost in human and material resources. Since a large number of unlabeled documents can be readily obtained, exploiting the unlabeled documents is the basic motivation of this work. Random walk is a popular technique in semi-supervised learning as well as in transductive learning. In this paper, the authors propose a random-walk-based transductive multi-label text categorization approach, which is able to exploit abundant unlabeled documents to help improve classification performance. In the proposed approach, labels are spread from the labeled documents to the unlabeled documents; thus a small number of labeled documents and a large number of unlabeled documents are utilized simultaneously in the learning process. Experimental results show that, compared with the existing semi-supervised multi-label method CNMF (constrained non-negative matrix factorization), the proposed approach performs better.
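The abstract does not give the propagation rule; the sketch below shows the generic random-walk label-spreading scheme such approaches are built on (the restart weight alpha and the clamping of labeled rows are standard choices, assumed here for illustration, not the authors' exact algorithm):

```python
import numpy as np

def transductive_multilabel(W, Y, labeled_mask, alpha=0.9, n_iter=100):
    """Generic random-walk label spreading over a document affinity graph.

    W            : (n, n) nonnegative affinity matrix between documents
    Y            : (n, k) initial label matrix, one column per class label
    labeled_mask : (n,) boolean, True for labeled documents
    """
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # row-stochastic walk matrix
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (P @ F) + (1 - alpha) * Y       # one walk step + restart
        F[labeled_mask] = Y[labeled_mask]           # clamp the known labels
    return F  # threshold each row to obtain the predicted label set
```

Because every class label is a column of Y, the same diffusion handles the multi-label case directly.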
A Logical Reinforcement Learning Method Based on Heuristic Contour List
Liu Quan, Gao Yang, Chen Daoxu, Sun Jigui, and Yao Wangshu
2008, 45(11):  1824-1830. 
Reinforcement learning obtains an optimal policy through trial-and-error interaction with a dynamic environment. Its self-improving and online learning properties make reinforcement learning one of the most important machine learning methods. To address the "curse of dimensionality" problem that troubles reinforcement learning, a heuristic contour list method is proposed on the basis of relational reinforcement learning. The method represents states, actions, and Q-functions using first-order predicates with contour lists, so the advantages of Prolog lists can be fully exploited. By combining logical predicate rules with reinforcement learning, a new logical reinforcement learning algorithm, CCLORRL, is formed and its convergence is proved. The method uses contour shape predicates to build shape state tables, drastically reducing the state space; heuristic rules guide action selection, reducing blind choices when a sample does not exist in the state space. The CCLORRL algorithm is applied to the Tetris game. Experiments show that the method is highly efficient.
A Self-Organization Based Divide and Conquer Algorithm for Distributed Constraint Optimization Problems
Huang Jing, Liu Dayou, Yang Bo, and Jin Di
2008, 45(11):  1831-1839. 
Distributed constraint optimization problem (DCOP) is a kind of optimization problem oriented to large-scale, open, and dynamic network environments, and it has been widely applied in many fields such as computational grids, multimedia networks, e-business, and enterprise resource planning. Besides features of traditional optimization problems such as non-linearity and constraint satisfaction, DCOP has distinct features including dynamic evolution, regional information, localized control, and asynchronous updating of network states. Finding a large-scale, parallel, and intelligent solution for DCOP has become a challenge for computer scientists. So far, many methods have been proposed for solving this problem. However, most of them are not completely decentralized and require prior knowledge, such as the global structure of the network, as input. Unfortunately, for many applications the assumption that a global view of the network can be obtained beforehand does not hold, due to huge network sizes, geographical distribution, or decentralized control. To solve this problem, a self-organizing-behavior-based divide and conquer algorithm is presented, in which multiple autonomous agents distributed in the network work together to solve the DCOP through a proposed self-organization mechanism. Compared with existing methods, this algorithm is completely decentralized and demonstrates good performance in both efficiency and effectiveness.
Applications of the Heuristics Based on Problem Structure to Disjunctive Temporal Problems
Liu Yuechang, Jiang Yunfei, and Qian Hong
2008, 45(11):  1840-1849. 
Many temporal problems arising in intelligent planning and scheduling can be expressed as disjunctive temporal problems (DTPs). Most DTP solvers in the literature treat DTPs as constraint satisfaction problems (CSPs) or satisfiability problems (SAT) and solve them using standard CSP (SAT) techniques. These techniques have proved very powerful for solving DTPs; unfortunately, however, little work has been done on utilizing the topological information encoded in DTPs to guide the search for solutions. Presented in this paper is a heuristic extracted by analyzing the topological structure of DTPs. The heuristic provides qualitative and quantitative criteria for selecting the "best" values for the current variable, and from it a dynamic variable ordering heuristic is derived for selecting the next "best" variable to try. The proposed strategies are called "topology based value selection" (TVS) for value selection and "topology based variable ordering" (TVO) for variable ordering. The approach is implemented on a graphical model of DTPs defined in this paper, the disjunctive temporal network (DTN). Experimental results show that both the TVS and TVO strategies can effectively reduce the number of nodes visited during DTP solving. Compared with similar techniques in the literature, TVS performs comparably to removal of subsumed variables (RSV), and TVO usually performs better than the minimal remaining values (MRV) heuristic, sometimes by an order of magnitude. Moreover, an efficient DTP solver, DTN-DTP, is obtained by incorporating TVO and TVS with other CSP techniques.
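For readers unfamiliar with the formalism, a DTP is standardly defined over time-point variables $x_1,\dots,x_n$, each constraint being a disjunction of bounded differences:

```latex
c_i \;=\; \bigvee_{j=1}^{k_i} \bigl( a_{ij} \le x_{p_{ij}} - x_{q_{ij}} \le b_{ij} \bigr),
```

so solving a DTP amounts to picking one disjunct per constraint such that the resulting simple temporal problem is consistent; in CSP terms, the constraints are the variables, the disjuncts are their values, and TVS and TVO decide which value and which variable to try next.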
Time Complexity of Evolutionary Programming
Huang Han, Hao Zhifeng, and Qin Yong
2008, 45(11):  1850-1857. 
As a method of evolutionary computation, evolutionary programming (EP) is an evolutionary algorithm for continuous optimization problems. The convergence of EP has been proved, but there are few studies on its time complexity, which is one of the hard problems in evolutionary computation. Because of its comparison-based selection, an EP algorithm can be modeled as an absorbing Markov process. A quantity called the average convergence time is used to propose a method for analyzing the time complexity of EP based on the absorbing Markov process model. The average convergence time can be calculated from the probability that the EP algorithm attains the optimal solution at the next iteration, given that it has not done so at the current iteration. A direct approach to estimating the convergence time is given as a theorem, and its corollaries indicate how to estimate bounds on the convergence time. The convergence time reveals how many iterations the EP algorithm needs to converge to the optimal solution. Therefore, the proposed theorem and corollaries can be considered a foundation of time complexity theory for evolutionary programming. Finally, the average convergence time of Gauss-mutation evolutionary programming is analyzed as an application case study of the proposed theory.
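The abstract's description can be made concrete with a standard absorbing-Markov-chain identity (a textbook derivation, not necessarily the paper's exact notation): let $\lambda_t$ be the probability that the algorithm reaches the optimal set at iteration $t+1$ given that it has not done so by iteration $t$. Then the expected first hitting time $T$ satisfies

```latex
\mathbb{E}[T] \;=\; \sum_{t=0}^{\infty} P(T > t)
         \;=\; \sum_{t=0}^{\infty} \prod_{s=0}^{t-1} \bigl(1 - \lambda_s\bigr),
```

with the empty product equal to 1. In particular, if $\lambda_s \ge \lambda_{\min} > 0$ for all $s$, then $\mathbb{E}[T] \le 1/\lambda_{\min}$, which is the kind of bound the corollaries mentioned above can provide.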
Enhanced One-Class SVM
Feng Aimin, Xue Hui, Liu Xuejun, Chen Songcan, and Yang Ming
2008, 45(11):  1858-1864. 
A one-class classifier (OCC) aims to distinguish a target class from outliers. Existing hyperplane-based OCC algorithms, such as one-class SVM (OCSVM) and Mahalanobis one-class SVM (MOCSVM), solve this problem by finding a hyperplane with maximum distance to the origin. However, since they either neglect the structure of the given data or take the structure into account only at a relatively coarse granularity, only a suboptimal hyperplane may be obtained. To mitigate this problem, a novel OCC named enhanced one-class SVM (EnOCSVM) is proposed. By first obtaining the distribution of the target data with an unsupervised method such as agglomerative hierarchical clustering, and then embedding the cluster information into the original OCSVM framework, EnOCSVM optimizes the tightness of the target data and maximizes the margin from the origin simultaneously. In this way, EnOCSVM not only takes much more prior knowledge into account than the above algorithms, but also provides a general way to extend existing SVM algorithms to consider the intrinsic structure of the data. Moreover, the EnOCSVM optimization can be solved with a standard SVM implementation similar to OCSVM, and all the advantages of SVM are preserved. Experimental results on benchmark data sets show that EnOCSVM has significantly better generalization than OCSVM and MOCSVM.
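The abstract does not give the EnOCSVM objective; one plausible reading (the regularizer shown, with $\Sigma_c$ the covariance of cluster $c$, $p_c$ its sample proportion, and $\lambda$ a trade-off weight, is an illustrative assumption) embeds the clusters found by agglomerative clustering into the standard OCSVM problem:

```latex
\begin{aligned}
\min_{w,\,\rho,\,\xi}\quad & \frac{1}{2}\, w^{\top}\Bigl(I + \lambda \sum_{c=1}^{k} p_c\,\Sigma_c\Bigr) w
    + \frac{1}{\nu l}\sum_{i=1}^{l}\xi_i - \rho\\
\text{s.t.}\quad & \langle w, \phi(x_i)\rangle \ge \rho - \xi_i,\qquad \xi_i \ge 0,
\end{aligned}
```

which reduces to the original OCSVM when $\lambda = 0$ and, after a linear change of variables, remains solvable with standard SVM machinery, consistent with the abstract's claim.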
Biclustering Nonlinearly Correlated Time Series Gene Expression Data
Yan Leiming, Sun Zhihui, Wu Yingjie, and Zhang Baili
2008, 45(11):  1865-1873. 
Biclustering algorithms focus on clustering correlated patterns in subspaces. However, most existing biclustering algorithms address only linearly correlated patterns or certain linearly similar patterns, leaving untouched the nonlinearly correlated patterns often hidden in many real data sets. In this paper, a novel biclustering algorithm called MI-TSB is proposed to find and report all nonlinearly correlated patterns in time series gene expression data. It first derives an efficient formula for quadratic mutual information using matrix theory; then, based on quadratic mutual information and sliding-window technology, a nonlinear similarity model for time series data and a simple variant of the generalized suffix tree are introduced. Using the suffix tree as an index structure, the MI-TSB algorithm enumerates all biclusters effectively and efficiently. Compared with general biclustering algorithms, the ability to discover nonlinearly correlated patterns within a sliding window is one of the most important advantages of the MI-TSB algorithm. Experiments on a real gene expression dataset and a synthetic dataset show that the MI-TSB algorithm successfully discovers nonlinearly correlated patterns which cannot be found by ordinary biclustering algorithms. Besides, gene annotation with gene ontology demonstrates that the MI-TSB algorithm finds biologically meaningful results.
An Advanced Co-Training Algorithm Based on Mutual Independence and Diversity Measures
Tang Huanling, Lin Zhengkui, Lu Mingyu, and Wu Jun
2008, 45(11):  1874-1881. 
Co-training is constrained by the assumption that the features can be split into two subsets which are both compatible and independent. However, this assumption is usually violated to some degree in real-world applications. The authors propose two methods to evaluate mutual independence, utilizing conditional mutual information or conditional CHI statistics, and present a method to construct a mutual independence model (MID-Model) for the initial feature set. Based on MID-Model, two novel feature partition algorithms, PMID-MI and PMID-CHI, are developed. The former uses conditional mutual information to evaluate the mutual independence between two features; the latter uses conditional CHI statistics. As a result, a feature set can be divided into two conditionally independent subsets using PMID-MI or PMID-CHI. Compared with random splitting, both PMID-MI and PMID-CHI achieve better performance. In addition, the conditional independence between the two subsets is verified by several diversity measures, such as the Q statistic, the correlation coefficient ρ, disagreement, double fault, and the integrative measure DM. Then, combining MID-Model and the diversity measures, an improved semi-supervised categorization algorithm named SC-PMID is developed. Two classifiers can be co-trained on a pair of independent subsets; the independence of the two subsets reduces the chance of both classifiers agreeing on an erroneous label for an unlabeled example. Experimental results show that the SC-PMID algorithm can significantly improve semi-supervised categorization precision.
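The conditional mutual information that PMID-MI relies on can be estimated from counts; the sketch below is a straightforward plug-in estimator of $I(X;Y\mid Z)$ for discrete features given the class variable $Z$ (smoothing and the paper's exact estimator may differ):

```python
import math
from collections import Counter

def conditional_mutual_information(x, y, z):
    """Plug-in estimate of I(X; Y | Z) for discrete sequences x, y, z.
    A value near zero suggests features X and Y are conditionally
    independent given the class Z."""
    n = len(z)
    pz = Counter(z)
    pxz = Counter(zip(x, z))
    pyz = Counter(zip(y, z))
    pxyz = Counter(zip(x, y, z))
    mi = 0.0
    for (a, b, c), n_abc in pxyz.items():
        # p(x,y,z) * log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ], all from counts
        mi += (n_abc / n) * math.log(n_abc * pz[c] / (pxz[(a, c)] * pyz[(b, c)]))
    return mi
```

A partition that keeps such cross-subset scores low is what the MID-Model construction aims for.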
A Dynamic Bayesian Network Based Framework for Continuous Speech Recognition and its Token Passing Model
Miao Duoqian, Wang Ruizhi, and Ran Wei
2008, 45(11):  1882-1891. 
Recently, dynamic Bayesian network (DBN) based speech recognition has aroused increasing interest because of its interpretability, factorization, and extensibility, which hidden Markov models (HMMs) lack. Although the introduction of DBNs into speech recognition has been a huge success in many areas, and DBNs have shown promising potential to overcome inherent limitations of HMMs in speech recognition, previous work on DBN based speech recognition mainly focuses on isolated-word recognition, and the frameworks and recognition algorithms for DBN based continuous speech recognition are not as mature and flexible as those for HMM based systems. This paper addresses the problems of flexibility and extensibility in DBN based continuous speech recognition. To this end, the token passing model, which works very well on these problems for HMM based continuous speech recognition, is adapted to DBN based recognition, and a general framework based on it is proposed. This framework combines the advantages of both the token passing model and DBNs. A novel recognition algorithm independent of the upper-layer language model is proposed under this framework, and a toolkit, DTK, for building DBN based speech recognition systems under this framework is developed.
An Agent-Oriented Methodology ODAM for Adaptive MAS
Mao Xinjun, Qu Tingting, and Wang Ji
2008, 45(11):  1892-1901. 
Agent-oriented software engineering (AOSE) is viewed as a novel paradigm for complex systems. However, it is still a challenge to develop with such technology complex MASs that are dynamic, open, and even self-adaptive. To deal with these problems, two aspects should be considered in AOSE research: the potential and flexibility of the agent-orientation paradigm should be extensively exploited, and AOSE should borrow and integrate successful technologies and practices of software engineering. In this paper, an agent-oriented methodology called ODAM for adaptive MAS is presented, which is based on a dynamic binding mechanism, borrows the organization metaphor to model adaptive MAS in an abstract and natural way, and integrates iterative development and the MDA approach to simplify development and adapt to the variety of agent technologies and platforms. The methodology framework and technical details are introduced, including the dynamic binding mechanism, the meta-model and modeling language based on organization abstractions, and the software development process based on iterative development and MDA. A case study is also presented to illustrate the approach.
An Approach to Dynamic Service Composition Based on Context Negotiation
Tang Lei, Huai Xiaoyong, and Li Mingshu
2008, 45(11):  1902-1910. 
Composition of Web services has received more and more interest as a way to support business-to-business and enterprise application integration. In this paper, an approach to service composition based on context negotiation is presented and applied to a distributed document management system for pervasive computing. It can not only adapt to the dynamic environment of pervasive computing, but also satisfy users' preferences. First, the definitions of context and context-aware service are given. Based on these definitions, the constraint conditions of context negotiation are given, and according to the constraint conditions, the service composition algorithm is presented. Furthermore, the algorithm is optimized to improve resource utilization. Finally, a prototype of the service composition approach is implemented and applied to the distributed document management system. Experiments are presented to validate the algorithm's performance and validity, i.e., the time cost as the number of candidate services and contexts increases, the conflicts in candidate service selection as the number of contexts increases, etc. The prototype and experimental data show that the presented approach can be applied to context-sensitive service composition in pervasive computing.
A Novel Method for Behavioral Model Refinement Based on Genetic Programming
Wang Shuaiqiang, Ma Jun, Wang Haiyang, and Wan Jiancheng
2008, 45(11):  1911-1919. 
The refinement of behavioral models is a crucial issue in model driven software development. In this paper, based on formal behavioral models for environment description and the refinement theory of formal methodology, an automatic method for the refinement of behavioral models using genetic programming is proposed. The method regards refinement as the process of assembling basic executable operations. In the beginning, by analyzing the postcondition formula of the abstract behavior, refinement based on logical reduction is carried out, producing loop structures and other simple formulae. The simple formulae are used to construct new abstract behaviors, which are refined in turn by genetic programming until the generated program is finally composed of the basic operations. Since choice structures are difficult to evolve with traditional genetic programming, the authors introduce the concept of a combined termination criterion; by testing this criterion, choice structures are created as well. Finally, sorting is taken as an example to demonstrate the evolutionary process, and the result shows that the proposed method is feasible. In fact, the method can be used in problem-solving areas in which the solution is composed of basic operations.
A Localization Algorithm Based on Dynamic Grid Division for Mobile Wireless Sensor Networks
Wei Yehua, Li Renfa, Luo Juan, and Chen Honglong
2008, 45(11):  1920-1927. 
Localization is extremely critical for many applications in wireless sensor networks; without the locations of sensor nodes, collected information is valueless. Meanwhile, location information is also helpful for many network operations such as clustering, topology control, and geographic routing. Localization is an extensively studied problem in wireless sensor networks, and several localization algorithms for static networks have been proposed. However, little work has been done on localization in mobile wireless sensor networks. A Monte Carlo localization algorithm based on dynamic grid division is presented for wireless sensor networks in which nodes move randomly. In the presented algorithm, when the number of received one-hop anchors is larger than a threshold, a farthest-distance selection algorithm is used; only the selected anchors take part in localization and data transmission, which conserves energy. Then a sampling area is created from the selected (or all received) anchor information, a grid division is performed, and the maximum number of samples is computed from the cell count. Next, sampling is performed in the created area and filtering is done with a mobility model with error compensation, which improves the sampling efficiency and reduces the computing overhead. Simulation demonstrates that the proposed algorithm provides better performance in localization precision, computing overhead, and energy consumption.
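The algorithm builds on the Monte Carlo localization (MCL) family; the sketch below shows the generic predict-and-filter step such algorithms share (the uniform motion model and the one-hop range test are standard MCL choices, assumed here for illustration; the paper's grid division and error-compensation model refine this):

```python
import math
import random

def mcl_step(samples, anchors, r, v_max, n_samples=50):
    """One generic MCL round for a mobile node.

    samples : list of (x, y) position hypotheses from the previous round
    anchors : list of (x, y) positions of one-hop anchors (within range r)
    v_max   : maximum distance the node can move per time step
    """
    new_samples, attempts = [], 0
    while len(new_samples) < n_samples and attempts < 200 * n_samples:
        attempts += 1
        x0, y0 = random.choice(samples)
        # prediction: the node moved at most v_max since the last round
        ang, d = random.uniform(0, 2 * math.pi), random.uniform(0, v_max)
        x, y = x0 + d * math.cos(ang), y0 + d * math.sin(ang)
        # filtering: keep the sample only if every one-hop anchor is in range
        if all(math.hypot(x - ax, y - ay) <= r for ax, ay in anchors):
            new_samples.append((x, y))
    return new_samples  # position estimate = centroid of surviving samples
```

Restricting sampling to a grid-divided area around the anchors, as the paper does, raises the acceptance rate of this loop and thus cuts the computing overhead.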
An Efficient Virtual Co-Allocation Service Model in P2P Networks
Yue Guangxue and Li Renfa
2008, 45(11):  1928-1938. 
In realistic networks, resources are mainly concentrated in a few important nodes, while a large number of nodes act as service requesters. Because a P2P network is a virtual network built on the application layer of the Internet and suffers from free riding, problems such as congestion, node failure, low efficiency, and poor quality of service (QoS) are unavoidable in a wide-area network environment. To solve these problems, a scheme is provided that dynamically constructs a virtual co-allocation service pool, consisting of coordinated service members, to achieve load balance. A stakeholder-centric virtual organization model is put forward to cope with the problem of dynamically selecting autonomous partner services and processes to form a virtual organization satisfying stakeholders' requirements in realistic networks. The scheme constructs a strategy for choosing coordinated service members based on D-S (Dempster-Shafer) evidence reasoning, using the historical trade information of nodes and recommendation certificates to characterize the global trust of the spare service alliance. Also provided are the mathematical model, constraint conditions, construction rules, and correlative analysis of the virtual co-allocation service pool. Simulation results show that the scheme can effectively solve the above problems and greatly improves the QoS of the P2P network.
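The D-S evidence reasoning that the member-selection strategy is built on combines mass functions with Dempster's rule; the sketch below is the standard rule (the trust/distrust frame in the example is illustrative, not the paper's):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# two recommenders' evidence about a node, over the frame {trust, distrust}
m1 = {frozenset({"trust"}): 0.8, frozenset({"trust", "distrust"}): 0.2}
m2 = {frozenset({"trust"}): 0.6, frozenset({"distrust"}): 0.1,
      frozenset({"trust", "distrust"}): 0.3}
print(dempster_combine(m1, m2))
```

Fusing historical trade records and recommendation certificates in this way yields the global trust figure used when admitting members to the service pool.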
Study of Multiuser MIMO Transmission Scheme with Limited Feedback
Yang Yubo, Tian Lin, Zhou Jihua, Zhang Yucheng, and Shi Jinglin
2008, 45(11):  1939-1946. 
In broadband wireless communication systems, the practicability of MIMO techniques relies on solutions for multiuser MIMO transmission schemes and for feedback of channel state information (CSI). Most research on limited feedback focuses on point-to-point transmission systems. In multiuser MIMO systems, reducing the feedback signaling overhead is more important, since the base station relies on the CSI of all receiving terminals. In this paper, a transmitter interference elimination (TIE) scheme based on the inverse of the signature matrix is proposed to improve system performance. A user selection algorithm is proposed to choose the transmitting users that generate the minimum interference to each other, and a power allocation algorithm based on water-filling is proposed to maximize the system throughput. In addition, a limited feedback scheme based on vector quantization is proposed to reduce the feedback signaling overhead. Finally, the performance characteristics of TIE under limited feedback are studied and proved. It is shown that the multiuser interference generated by quantization error is linearly proportional to the transmitting power. It is also shown, both in analysis and in simulation, that the performance of multiuser MIMO systems under limited feedback is upper bounded because of the multiuser interference generated by feedback error. Numerical results show that the proposed scheme achieves significant throughput improvement while reducing the feedback overhead.
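The power allocation step is described as water-filling; the sketch below is the textbook water-filling algorithm over effective channel gains (the paper's variant may add multiuser-interference terms):

```python
import numpy as np

def water_filling(gains, p_total):
    """Classic water-filling: allocate p_i = max(mu - 1/g_i, 0) so that
    sum(p_i) = p_total, maximizing sum(log(1 + g_i * p_i)).

    gains   : array of effective channel gains g_i
    p_total : total transmit power budget
    """
    g = np.asarray(gains, dtype=float)
    inv = np.sort(1.0 / g)                       # strongest channels first
    for k in range(len(g), 0, -1):
        mu = (p_total + inv[:k].sum()) / k       # candidate water level
        if mu >= inv[k - 1]:                     # weakest active channel ok?
            break
    return np.maximum(mu - 1.0 / g, 0.0)

print(water_filling([2.0, 1.0, 0.1], p_total=1.0))
```

Channels whose inverse gain lies above the water level mu receive zero power, which is why weak users may already be excluded by the user selection step before allocation.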
High Performance Architecture for Elliptic Curve Scalar Multiplication Based on FPGA
Chen Jing, Jiang Junjie, Duncan S. Wong, Deng Xiaotie, and Wang Dongsheng
2008, 45(11):  1947-1954. 
Elliptic curve scalar multiplication (ECSM) is the most important operation in elliptic curve cryptography (ECC). The study of its implementation performance and optimization has attracted great interest among researchers due to the rapid development and deployment of ECC. Focusing on designing a high performance hardware architecture for this operation, first a digit-serial/parallel finite field multiplier with word width D is developed, which completes one multiplication over GF(2^m) in ⌈m/D⌉ cycles. Using this multiplier, a hardwired logic design for performing elliptic curve scalar multiplication over GF(2^m) is proposed. This architecture maximizes the parallelism that the projective-coordinates version of the Montgomery scalar multiplication algorithm can achieve, and also shortens the maximum delay path. It completes one scalar multiplication over GF(2^m) in about 5m-6 cycles. When implemented on a Xilinx Virtex4-LX200 FPGA, using multipliers with word width 16, it occupies about 46% of the available computation resources, achieves a frequency of 225MHz, and takes only 36μs to complete one elliptic curve scalar multiplication for arbitrary elliptic curves, points, and integers over GF(2^163). This result outperforms all comparable FPGA-based elliptic curve scalar multipliers reported. On the other hand, the architecture can easily be reconfigured to meet different security levels and performance requirements.
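The Montgomery scalar multiplication algorithm that the architecture parallelizes has a strictly uniform dataflow: one point addition and one point doubling per key bit, independent of the bit's value, which is what makes deep pipelining possible. The sketch below shows that ladder structure in the abstract (the group operations are passed in by the caller; the toy additive group in the check is for illustration only, whereas the FPGA design uses projective-coordinate point operations over GF(2^163)):

```python
def montgomery_ladder(k, P, add, double, identity):
    """Compute the scalar multiple k*P with the Montgomery ladder:
    exactly one add and one double per scalar bit."""
    R0, R1 = identity, P           # invariant maintained: R1 = R0 + P
    for bit in bin(k)[2:]:         # scalar bits, most significant first
        if bit == '0':
            R1 = add(R0, R1)
            R0 = double(R0)
        else:
            R0 = add(R0, R1)
            R1 = double(R1)
    return R0

# sanity check in a toy additive group (integers mod 101), where k*P is
# easy to verify independently
p = 101
assert montgomery_ladder(57, 3,
                         add=lambda a, b: (a + b) % p,
                         double=lambda a: (2 * a) % p,
                         identity=0) == (57 * 3) % p
```

Because the add and the double in each iteration are independent of one another, a hardware design can execute them on parallel multipliers, which is the parallelism the abstract refers to.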
A Two-Phase Unsupervised Sequential Forward Fractal Dimensionality Reduction Algorithm
Yan Guanghui and Li Zhanhuai
2008, 45(11):  1955-1964. 
Both the dimensionality and the amount of data that must be processed are increasing rapidly with advances in data collection and storage capabilities. Accordingly, reducing the dimensionality of attribute vectors to enhance the performance of the underlying techniques is a popular way to tackle the infamous curse of dimensionality. The fractal dimension of a dataset remains stable as the embedding dimension of the dataset varies, and can act as an indicator to guide the process of dimensionality reduction. Therefore, the authors choose the fractal dimension of individual attributes and the change in fractal dimension after an attribute merge operation as the criterion of attribute correlation, and transform the dimensionality reduction problem into an optimization problem: finding the attribute subset with the maximal fractal dimension subject to a restriction on the number of attributes. To solve this optimization problem, a two-phase unsupervised sequential forward fractal dimensionality reduction algorithm is proposed, which integrates a relevance analysis process and a redundancy analysis process based on the fractal dimension of individual attributes and attribute subsets. The basic time and space complexity of the algorithm is presented. Experimental results on synthetic and real-life data sets show that the algorithm obtains a satisfactory subset with a rather low workload of fractal dimension calculation.
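The fractal dimension that drives the selection can be estimated by box counting; the sketch below shows a generic estimator and the greedy forward step built on it (the scales used and this exact counting scheme are illustrative assumptions; the paper's two-phase algorithm additionally interleaves relevance and redundancy analysis):

```python
import numpy as np

def box_counting_dim(X, scales=(2, 4, 8, 16, 32)):
    """Box-counting estimate of the fractal dimension of a point set
    X of shape (n_points, n_attrs)."""
    X = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)  # to the unit cube
    pts = []
    for s in scales:
        occupied = len(np.unique(np.floor(X * s).astype(int), axis=0))
        pts.append((np.log(s), np.log(occupied)))
    a, b = zip(*pts)
    return np.polyfit(a, b, 1)[0]      # slope of log-log fit = dimension

def forward_select(X, k):
    """Greedy sequential forward selection: repeatedly add the attribute
    that raises the subset's fractal dimension the most."""
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < k and remaining:
        best = max(remaining,
                   key=lambda j: box_counting_dim(X[:, chosen + [j]]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

An attribute whose addition leaves the fractal dimension nearly unchanged is redundant with respect to the attributes already chosen, which is exactly the correlation criterion the abstract describes.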
An Evolutionary Tree Reconstruction Method Combining Quartet Puzzling and Neighbor Joining
Li Jianfu, Guo Maozu, and Liu Yang
2008, 45(11):  1965-1973. 
Evolutionary tree reconstruction is a very important topic in biology. A reliable evolutionary tree has many important applications in medical and biological research, such as drug discovery and conservation biology. A rich variety of tree reconstruction methods have been developed, which fall into three categories: 1) maximum parsimony methods, 2) distance based methods, and 3) approaches applying the maximum likelihood principle. Maximum likelihood programs can recover the correct tree more frequently than other methods, but inference of evolutionary trees under the maximum likelihood principle is NP-hard. The reconstruction method quartet puzzling (QP) is currently receiving considerable attention. QP is a divide-and-conquer approach to the maximum likelihood problem: it estimates the set Q of quartet topologies, then combines the information in Q into an evolutionary tree on the whole taxon set by some recombination technique. However, the current performance of QP is far from satisfactory, and how to reconcile the different quartet topologies into one tree is still a major problem for QP. To improve QP, a new method named QPNJ is proposed, which has the same computational complexity as QP. Experiments with simulated datasets show that QPNJ is more accurate than existing methods. Moreover, unlike QP, QPNJ depends less on the shape of the correct tree.
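The neighbor-joining component that QPNJ adds works, in its standard form, by repeatedly joining the pair of taxa $(i, j)$ that minimizes the Q-criterion over the current distance matrix $d$ on $n$ taxa:

```latex
Q(i,j) \;=\; (n-2)\,d(i,j) \;-\; \sum_{k=1}^{n} d(i,k) \;-\; \sum_{k=1}^{n} d(j,k),
```

then replacing $i$ and $j$ by their new parent and recomputing distances. How QPNJ uses this to recombine the quartet topology set produced by the puzzling step is the paper's contribution.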
Singular Point Extraction from Fingerprint Based on Gaussian-Hermite Moment and Improved Poincare Index
Weng Dawei, Yin Yilong, Yang Gongping, and Qi Xiuyan
2008, 45(11):  1974-1984. 
Extracting singular points accurately and reliably is very important for fingerprint classification and matching. To deal with the difficulty of extracting singular points from low-quality fingerprint images, a two-phase algorithm for singular point extraction is presented in this paper. In the first phase, to improve the noise resistance of the traditional Poincare index and reduce false singularities, an improved Poincare index algorithm is proposed and used to extract candidate singularities. In the second phase, the algorithm exploits the fact that the Gaussian-Hermite moment distribution differs between singular and ordinary regions: in an ordinary region it varies only along the direction orthogonal to the local ridge orientation, whereas in a singular region it varies along all directions. The Gaussian-Hermite moments of each candidate in its circular neighborhood are therefore computed to determine whether it is a true singularity. Because this two-phase method effectively combines ridge orientation information with coherence in the singularity's neighborhood, it can extract singularities comparatively accurately and reliably. Experimental results show its effectiveness and robustness: on 500 fingerprint images from the NIST-4 database, the accuracy rate of identifying singular points is 93.05% (96.93% for core points and 86.43% for delta points).
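The Poincare index that the first phase improves on is, in its classic form, the total rotation of the orientation field around a point, divided by 2π; the sketch below computes it on the 8-neighborhood of a block (a textbook baseline, without the paper's noise-resistance improvements):

```python
import numpy as np

def poincare_index(theta, i, j):
    """Classic Poincare index at block (i, j) of an orientation field
    theta with angles in [0, pi). An index near +1/2 marks a core,
    near -1/2 a delta, and near 0 an ordinary point."""
    # the 8 neighbours of (i, j), visited in a fixed cyclic order
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [theta[i + di, j + dj] for di, dj in ring]
    total = 0.0
    for a, b in zip(angles, angles[1:] + angles[:1]):
        d = b - a
        if d > np.pi / 2:       # orientations are defined modulo pi,
            d -= np.pi          # so wrap each difference into
        elif d < -np.pi / 2:    # (-pi/2, pi/2]
            d += np.pi
        total += d
    return total / (2 * np.pi)
```

Because a single noisy block can flip the accumulated rotation, the raw index produces false cores and deltas on low-quality images, which is precisely what the improved index in the first phase and the Gaussian-Hermite verification in the second phase are designed to suppress.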
A Multi-Standard Video Codec Architecture Based on Multi-Core Pipeline
Zhang Peng, Du Jianguo, Xie Xiaodong, and Gao Wen
2008, 45(11):  1985-1993. 
Supporting multiple standards is becoming the trend for video codecs, which brings challenges in both performance and flexibility of system design. The authors introduce the VLSI implementation of a video codec architecture which supports multiple video coding standards, including H.264/AVC, AVS, and VC-1. The algorithmic characteristics of these standards are first analyzed; based on their similarities and differences, several techniques are adopted to optimize the architecture at the system level and the module level. At the system level, a four-stage macroblock-based pipeline is built from five programmable cores, and all modules of the encoder and decoder are carefully mapped onto the pipeline architecture. The pipelined multi-core architecture largely improves system performance by exploiting system-level parallelism while maintaining programmability. At the module level, dedicated datapaths are efficiently embedded in the programmable cores to speed up signal processing, and a detailed module-level HW/SW partition is proposed according to the computation in each module. The implementation results show that the codec can guarantee real-time encoding or decoding of NTSC (30fps)/PAL (25fps) videos in the worst case, using only 961k gates.