ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 September 2007, Volume 44 Issue 9
Papers
A High Performance Total Order Broadcast Algorithm
Li Lei, Wang Huaimin, and Shi Dianxi
2007, 44(9):  1449-1455. 
Total order broadcast is an important group communication primitive for building fault-tolerant distributed applications: it guarantees that all members of a communication group receive messages in the same order even if some members are faulty. Existing total order broadcast algorithms cannot achieve both low latency and high throughput at the same time, and they lack adaptability to the communication patterns of applications, so they are not suitable for high performance computing environments. Analyzed in this paper are the ordering mechanisms of several typical existing algorithms, and the key factors that affect the performance of total order broadcast algorithms are pointed out. A novel algorithm is then proposed, which builds the total order using the leader/followers pattern and is driven by a blocking detection mechanism. It works as follows: each group member can send messages at any time, but only messages from the current leader are delivered; if the leader remains inactive, it issues a special request to transfer the leadership to one of the active follower members. Simulation experiments show that the new algorithm achieves good performance in terms of both latency and throughput, and is much more efficient under bursty message arrival patterns.
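The delivery rule described in the abstract (any member may send at any time, only the leader's messages are delivered, and a blocked leader hands leadership to an active follower) can be sketched as a toy single-process simulation. All class and method names below are hypothetical, and a real protocol must additionally agree on leadership changes fault-tolerantly across processes, which this sketch omits:

```python
class LeaderFollowerOrder:
    """Toy sketch of leader/followers total ordering (names hypothetical)."""

    def __init__(self, members):
        self.members = list(members)
        self.leader = self.members[0]
        self.pending = {m: [] for m in self.members}  # undelivered msgs per sender
        self.delivered = []                           # the agreed total order

    def broadcast(self, sender, msg):
        # Any member may send at any time; only the leader's messages deliver.
        self.pending[sender].append(msg)
        self._drain()

    def _drain(self):
        # Deliver everything the current leader has buffered, in FIFO order.
        while self.pending[self.leader]:
            self.delivered.append(self.pending[self.leader].pop(0))

    def leader_blocked(self):
        # Blocking detected: pass leadership to an active follower, then drain.
        for m in self.members:
            if m != self.leader and self.pending[m]:
                self.leader = m
                break
        self._drain()
```

Followers' messages stay buffered until leadership reaches their sender, which is what keeps every member's delivery sequence identical.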
An Improved PERM Method for Protein Folding Problem of HP Model
Chen Mao, Huang Wenqi, and Lü Zhipeng
2007, 44(9):  1456-1461. 
Predicting the structure of a protein from its amino acid sequence is one of the central problems in computational biology. PERM (pruned-enriched Rosenbluth method) is an efficient algorithm for the protein folding problem based on the HP lattice model. However, the PERM algorithm is not succinct and is not easily understood. After introducing the idea of PERM and the key factors affecting its efficiency, a new version of PERM with two improvements is proposed: one redefines the weight and the predicted value in PERM, and the other unifies the calculation of the weight when choosing possible branches. The proposed formulae are more succinct and uniform, and the algorithmic idea of the improved PERM is much clearer. Computational results show that the improved PERM can find the known lowest energy states for all the test instances. The improved PERM is generally several to hundreds of times faster than PERM, and it is far more efficient than Monte Carlo methods. It is noteworthy that, for one test instance of length 46, the improved PERM finds a lower energy (-35) than the previously best result (-34) in the literature.
A Dynamically Incremental Manifold Learning Algorithm
Zeng Xianhua, and Luo Siwei
2007, 44(9):  1462-1468. 
The main goal of manifold learning is to find a smooth low-dimensional manifold embedded in a high-dimensional data space. Manifold learning has become a hot issue in the fields of machine learning and data mining. In order to extract valuable information from high-dimensional data streams and large-scale data sets, it is urgently necessary to incrementally discover the intrinsic low-dimensional manifold structure of such observation data sets. However, current manifold learning algorithms have no incremental ability and cannot process giant data sets effectively. Aiming at these problems, the concept of incremental manifold learning is first defined systematically in this paper. It helps to interpret the dynamic process of developing a stable perception manifold and to guide research on manifold learning algorithms that fit the incremental learning mechanism of the human brain. According to the guiding principles of incremental manifold learning, a dynamically incremental manifold learning algorithm is then proposed, which can effectively process both growing data sets and giant data sets sampled from the same manifold. The method finds the global low-dimensional manifold by integrating the low-dimensional coordinates of different neighborhood observation data sets. Finally, experimental results on both the synthetic "Swiss-roll" data set and a real face data set show that the algorithm is feasible.
Research on Language Modeling Based Sentiment Classification of Text
Hu Yi, Lu Ruzhan, Li Xuening, Duan Jianyong, and Chen Yuquan
2007, 44(9):  1469-1475. 
Presented in this paper is a language modeling approach to the sentiment classification of text. It provides semantic information beyond topic when characterizing the semantic orientation of texts as "thumbs up" or "thumbs down". The motivation is simple: the "thumbs up" and "thumbs down" language models are likely to be substantially different, because they reflect different language habits, and this divergence can be exploited to classify test documents effectively. The method is therefore deployed in two stages: first, the two sentiment language models are estimated from training data; second, a test document is classified by comparing the Kullback-Leibler divergence between the language model estimated from it and each of the two trained sentiment models. Unigrams and bigrams of words are employed as model parameters, and correspondingly maximum likelihood estimation and smoothing techniques are used to estimate these parameters. Compared with two different classifiers, SVMs and Naive Bayes, on a movie review corpus with limited training data, the language modeling approach performs better than both and shows its robustness in sentiment classification. Future work may focus on finding good ways to estimate better language models, especially higher-order n-gram models and more powerful smoothing methods.
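The two-stage scheme can be illustrated with a minimal unigram version. The add-one smoothing and the shared-vocabulary handling below are illustrative stand-ins for the estimation and smoothing techniques the abstract mentions, not the paper's exact choices:

```python
import math
from collections import Counter

def unigram_lm(docs, vocab):
    """Unigram model with add-one (Laplace) smoothing over a fixed vocabulary."""
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def kl_divergence(p, q, vocab):
    """KL(p || q) over the shared vocabulary; smoothing keeps q positive."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in vocab)

def classify(doc, pos_docs, neg_docs):
    """Stage 1: estimate one model per sentiment class from training data.
    Stage 2: label the test document by the class whose model its own
    document model diverges from least."""
    vocab = {w for d in pos_docs + neg_docs + [doc] for w in d.split()}
    p_doc = unigram_lm([doc], vocab)
    d_pos = kl_divergence(p_doc, unigram_lm(pos_docs, vocab), vocab)
    d_neg = kl_divergence(p_doc, unigram_lm(neg_docs, vocab), vocab)
    return "thumbs up" if d_pos < d_neg else "thumbs down"
```

Extending this to bigrams only changes what is counted; the KL comparison stays the same.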
A Binary Differential Evolution Algorithm with Hybrid Encoding
He Yichao, Wang Xizhao, and Kou Yingzhan
2007, 44(9):  1476-1484. 
Differential evolution (DE) is an evolutionary algorithm based on the idea of individual differential reconstruction. It was proposed by Storn and Price in 1997 and is very suitable for solving optimization problems over continuous spaces. First of all, with the introduction of concepts such as the differential operator (DO), a concise description of DE is given and its main features are analyzed. To solve discrete optimization problems with DE, the concepts of an auxiliary search space and individual hybrid encoding are advanced, based on the idea of mathematical transformation. With the definition of a special mapping and the use of the auxiliary search space, the highly efficient differential evolution search over a continuous space is transformed into a homomorphic evolutionary search over discrete spaces. Thus, the binary differential evolution algorithm with hybrid encoding (HBDE) is proposed for the first time. Subsequently, definitions of the probabilistic convergence and complete convergence of HBDE are given and proved using Markov chain theory. HBDE not only retains the advantages of DE but is also very suitable for solving discrete optimization problems. Computational results on random 3-SAT instances and the 0/1 knapsack problem show that HBDE has better convergence capability and stability, and that it is far superior to both binary particle swarm optimization and the genetic algorithm.
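The hybrid-encoding idea can be sketched under stated assumptions: standard DE/rand/1 with binomial crossover evolves real vectors in an auxiliary continuous space, while a simple sign mapping projects each vector to the bit string that the fitness function actually scores. The sign mapping and all parameters here are illustrative choices, not the paper's exact construction:

```python
import random

def de_binary(fitness, dim, pop_size=20, gens=60, F=0.5, CR=0.9, seed=1):
    """Binary optimization via DE in an auxiliary continuous space."""
    rng = random.Random(seed)
    decode = lambda v: [1 if u > 0 else 0 for u in v]  # sign mapping to bits
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1 mutation + binomial crossover in the continuous space
            trial = [a[k] + F * (b[k] - c[k]) if rng.random() < CR else pop[i][k]
                     for k in range(dim)]
            # Greedy selection is done on the decoded (discrete) fitness
            if fitness(decode(trial)) >= fitness(decode(pop[i])):
                pop[i] = trial
    return decode(max(pop, key=lambda v: fitness(decode(v))))
```

Maximizing OneMax (fitness = number of 1 bits, i.e. `sum`) is enough to see the continuous search drive the discrete objective.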
The Dual Software Architecture Description Framework XYZ/ADL
Zhu Xueyang
2007, 44(9):  1485-1494. 
Software architecture plays a critical role in the process of software development. Graphical languages are widely used in software architectural modeling; they have the advantages of being intuitive and semi-formal. However, their lack of precise semantics makes the models difficult to analyze. In this respect, formal methods can be used complementarily, yet it is not practical to model architectures using only formal languages in engineering development. How to combine the merits of these two kinds of methods and thereby improve software reliability is now a common concern of both industry and academia. In this paper, a dual software architecture description framework, XYZ/ADL, is proposed. It supports the basic concepts of software architecture commonly used in software engineering. The front end of XYZ/ADL is a collection of graphical languages, including the usual "box-and-line" diagrams for structural description, and UML activity diagrams and statecharts for behavioral description. The back end of XYZ/ADL is the linear temporal logic language XYZ/E, which can represent both the dynamic and static semantics of systems and serves as the unified formal semantic backbone. The graphical languages at the front end facilitate communication among software engineers and ease the use of the framework; the formal language at the back end is the basis of formal analysis and verification.
Fault-Tolerant Real-Time Scheduling Algorithm in Software Fault-Tolerant Module
Liu Dong, Zhang Chunyuan, Li Rui, Huang Ying, and Li Yi
2007, 44(9):  1495-1500. 
In fault-tolerant real-time scheduling algorithms with software fault-tolerant modules, each task possesses two versions, namely a primary and an alternate. The prediction of executable primaries is the key issue affecting scheduling performance. In order to improve the prediction precision for primaries, two new algorithms named DPA (deep-prediction based algorithm) and EDPA (EDF-based DPA) are put forward. Both algorithms schedule tasks with the help of a prediction table, which is created for each released primary and contains the information of the tasks between the current time and the notification time. In DPA, the primaries in the prediction table are scheduled according to the temporal order of their corresponding alternates' notification times; EDPA schedules the primaries in the prediction table using the EDF algorithm. If the primaries in the prediction table do not fail, the table is used to schedule tasks at no additional cost. Simulation results show that DPA and EDPA provide more execution time for primaries and waste less CPU time than the best-known algorithms to date. The two algorithms achieve higher scheduling performance with lower cost when task periods are short and the software fault probability is low.
Design Implementation and Performance Analysis of a High Performance Northbridge
Zeng Hongbo, Hu Mingchang, Li Wen, Cai Fei, and Tang Zhimin
2007, 44(9):  1501-1509. 
To improve the performance of an entire computing system, not only must CPU performance be boosted, but high performance chipsets are also needed. Chipsets are responsible for data delivery between the CPU and other devices, commonly with memory controllers embedded as crucial components; this role has grown in significance as memory access latency has become one of the most serious bottlenecks in today's computer systems. Discussed in this paper are methods of designing and implementing a northbridge targeting high performance. The architecture of NB2005, a northbridge for the Godson-2 processor, and the optimization techniques applied to each module are described in detail. A novel dynamic page management strategy in the DDR controller is proposed, which exploits the spatial locality of programs to reduce memory access latency. A new stream buffer mechanism is described, which at runtime jointly considers the memory access behavior and the status of the memory controller. Also presented is a new buffer-swap mechanism implemented in the PCI channel to improve the throughput of the PCI bus. Experiments show that the Godson-2 system augmented with NB2005 outperforms that with the Marvell GT64240 in all aspects tested. Specifically, NB2005 achieves above 40% memory bandwidth enhancement, yields speedups of 12.2% and 2.5% in SPEC INT2000 and SPEC FP2000 respectively, and also improves disk I/O performance by more than 30%.
Dimensional Bubble Flow Control and Adaptive Routing Algorithm in Torus Networks
Xiao Canwen, Zhang Minxuan, and Guo Feng
2007, 44(9):  1510-1517. 
A novel flow control strategy called torus dimensional bubble flow control (TDBFC) is presented, together with a novel adaptive routing algorithm called the torus adaptive dimensional bubble routing algorithm (TADBR). TDBFC is designed for torus networks and is based on bubble flow control and DBFC flow control. Since these two schemes share common features such as virtual cut-through switching and credit-based flow control, TDBFC is realized by integrating them. In torus networks using TDBFC, the TADBR routing algorithm achieves both deadlock freedom and minimal-distance routing even when cyclic channel dependencies exist; a detailed proof of these conclusions is provided by analyzing all possible packet situations. Finally, a 2-D torus simulator called RingNetSim is presented, which implements the TDBFC flow control and the TADBR routing algorithm. The performance of the routing algorithms is evaluated by varying the buffering space, communication models, and arbitration algorithms. Compared with the dimension-order routing algorithm, the results show that TADBR offers preferable performance.
Automatically Constructing Counter-Examples of Security Protocols Based on the Extended Horn Logic Model
Zhou Ti, Li Mengjun, Li Zhoujun, and Chen Huowang
2007, 44(9):  1518-1531. 
It is very important to construct counter-examples for security protocols automatically, but few methods exist for this task. Based on the extended Horn logic model and the corresponding verification method for security protocols, solution strategies are presented that automatically construct intuitive counter-examples for protocols that break their security properties. Several important theorems are proved to ensure the correctness of these solution strategies. Based on this model, a series of algorithms is designed to construct counter-examples of security protocols automatically; optimizations of the main algorithms are given, and their complexities are analyzed in detail. These algorithms are implemented in the verifier SPVT, which is developed in the functional programming language Objective Caml, and their time complexities are proved to be linear theoretically. Finally, some classical security protocols are verified by SPVT, which produces counter-examples automatically; the results are discussed and some of the counter-examples are analyzed. The experimental results also show that these algorithms are linear. As the form of the counter-examples is very similar to Alice-Bob notation, they are very convenient to read. Therefore, this formal method can be used by experts in all fields to check whether a real counter-example exists in a security protocol.
Homomorphic Commitment Schemes Based on Bilinear Groups
Song Yan
2007, 44(9):  1532-1537. 
The commitment scheme is one of the fundamental and useful cryptographic primitives; it has found applications in a wide range of security mechanisms: digital signature schemes, electronic payment systems, zero-knowledge protocols, secure multiparty function evaluation protocols, and so on. Commitment has therefore received extensive study in the literature. From the perspective of design approach, many of the known efficient constructions of commitment schemes fall into the paradigm of q-one-way group homomorphisms. Though effective and fairly general, q-one-wayness is a strong requirement, so when one tries to instantiate it, the choices of algebraic structures turn out to be limited; hence, finding alternative ways to construct commitment schemes with various properties is an important topic. In this paper, using bilinear groups of composite order, a perfectly hiding trapdoor commitment scheme is constructed for the first time, which is provably computationally binding under the subgroup decision assumption. A dual construction of an unconditionally binding commitment scheme is also presented, which is proven to be computationally hiding under the same intractability assumption. These proposals thus give an alternative approach to constructing commitment schemes. Moreover, due to the bilinear maps associated with the bilinear groups, the proposed commitment schemes demonstrate a unique multiplicative homomorphic property.
Anomaly Detection of Program Behaviors Based on System Calls and Homogeneous Markov Chain Models
Tian Xinguang, Gao Lizhi, Sun Chunlai, and Zhang Eryang
2007, 44(9):  1538-1544. 
Anomaly detection is the major research direction in intrusion detection. Presented in this paper is a new method for anomaly detection of program behaviors, applicable to host-based intrusion detection systems that use system calls as audit data. The method constructs a first-order homogeneous Markov chain to represent the normal behavior profile of a privileged program, and associates the states of the Markov chain with the unique system calls in the training data. At the detection stage, the occurrence probabilities of the state sequences of the Markov chain are computed, and two different schemes can be used to determine whether the monitored program's behavior is normal or anomalous, while the particularity of program behaviors is taken into account. The method balances computational efficiency and detection accuracy: it is less computationally expensive than the hidden Markov model method introduced by Warrender et al. and is more applicable to on-line detection, while achieving higher detection accuracy than the system call sequence methods presented by Hofmeyr and Forrest. The study empirically demonstrates the promising performance of the method, which has been successfully applied in practical host-based intrusion detection systems.
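The core of such a detector, a first-order Markov chain over system calls scored by the occurrence probability of short call windows, can be sketched as follows. The probability floor for unseen transitions and the threshold are illustrative choices, not the paper's two decision schemes:

```python
from collections import defaultdict

def train_markov(trace):
    """First-order homogeneous Markov chain over the distinct system calls
    in the training trace: P(s_t = j | s_{t-1} = i) by maximum likelihood."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(trace, trace[1:]):
        counts[prev][cur] += 1
    return {i: {j: c / sum(row.values()) for j, c in row.items()}
            for i, row in counts.items()}

def sequence_probability(model, window, floor=1e-6):
    """Occurrence probability of a short window of calls under the chain.
    Transitions never seen in training get a small floor probability."""
    p = 1.0
    for prev, cur in zip(window, window[1:]):
        p *= model.get(prev, {}).get(cur, floor)
    return p

def is_anomalous(model, window, threshold=1e-4):
    # Flag windows whose probability under the normal profile is too low.
    return sequence_probability(model, window) < threshold
```

A normal window scores near the product of frequent transition probabilities, while a window with unseen transitions collapses toward the floor.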
A Secure Dynamic Threshold Signature Scheme
Li Huixian, Cai Wandong, and Pang Liaojun
2007, 44(9):  1545-1549. 
In most available threshold signature schemes, only one group secret key is shared among a group of signers and the threshold value is fixed. In many situations, however, the required number of signers depends on the significance of the document. The usual solution is to construct a separate threshold signature scheme for each threshold value, which obviously involves a great deal of repeated computation and is very inefficient. Motivated by this concern, and based on the intractability of the discrete logarithm problem, a new dynamic threshold signature scheme that solves the above problem is proposed in this paper. In the proposed scheme, multiple group secret keys are shared among a group of signers, and each group secret key has its own threshold value; different group secret keys can be chosen flexibly to sign documents according to their significance. Each signer needs to keep only one secret shadow and one secret value, and no public shadow is needed. Analyses show that, compared with existing schemes, the proposed scheme can resist various attacks from internal or external attackers and protect the signature from collusive cheating, which indicates that it provides more security and practicability.
A P2P Topology Measuring Method Based on the Positive Feedback Strategy
Wang Yong, Yun Xiaochun, Li Yifei, and Wang Xiaofeng
2007, 44(9):  1550-1556. 
Measuring and analyzing the topological properties of P2P networks provides guidelines for their further optimization and development. However, it is infeasible to capture a complete and precise snapshot of all P2P networks, owing to the variety of their protocols and the dynamic characteristics of the networks. An alternative is to measure a specific P2P network instance by studying its protocol details and constructing a high-speed topology crawling system. In this paper, the two most difficult problems confronting such crawling systems are discussed, and a P2P network topology measuring framework using a positive feedback strategy, together with its key algorithms, is presented. Taking the Gnutella network as an example, a distributed topology crawling system (called D-crawler) is implemented based on this framework, and evaluation methods for the framework are defined. The performance of various topology crawling systems is compared, and the accuracy and completeness of the D-crawler system are analyzed in detail. The results show that D-crawler can collect more accurate, complete, and stable topology data of the Gnutella network with a relatively shorter crawling period and lower hardware requirements. The measuring framework and algorithms can be applied to other P2P networks with slight modifications of the D-crawler system.
Topology-Aware Node Rendezvous Algorithm Based on DHT
Duan Hancong, Lu Xianliang, Tang Hui, Zhou Xu, and Zhao Zhijun
2007, 44(9):  1557-1565. 
In peer-to-peer application models such as file sharing, media streaming, and cooperative computing, nodes communicate by unicast and construct overlay networks. As overlay networks are usually built on top of an existing substrate network, the random joining of nodes leads to topology mismatch, which not only increases the communication delay between end nodes but also puts heavy bandwidth pressure on the substrate network. Current topology-matching algorithms suffer from low scalability and long clustering delays. Proposed in this paper is a novel distributed topology-aware node rendezvous algorithm (TANRA) based on network coordinates and a distributed hash table (DHT). In TANRA, the two-dimensional network coordinate space is first partitioned into sectors of equal area by means of a cluster of concentric circles. Second, every sector is mapped to a unique region in the multi-layer name space of the DHT based on its geographic information in the coordinate space. Finally, since the proximity between nodes is preserved by the mapping, each node can be clustered with its near neighbors in the substrate network simply by calling the two DHT primitives, Put()/Get(). The simulation results show that TANRA can efficiently provide topology matching for upper-layer applications with low joining delay under a large number of end nodes.
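The first step, mapping a 2-D network coordinate to a ring/sector cell, can be sketched as below. Equal-area sectors as in TANRA would use a different sector count per ring; a fixed count per ring is an illustrative simplification, and the function name is hypothetical:

```python
import math

def sector_id(x, y, ring_width, n_sectors):
    """Map a 2-D network coordinate to a (ring, sector) cell: concentric
    circles of width ring_width define the rings, and each ring is cut
    into n_sectors equal angular sectors."""
    ring = int(math.hypot(x, y) // ring_width)        # which concentric band
    theta = math.atan2(y, x) % (2 * math.pi)          # angle in [0, 2*pi)
    return ring, int(theta / (2 * math.pi) * n_sectors)
```

Nodes whose coordinates fall in the same cell are then keyed to the same DHT region, so a Put()/Get() pair suffices to rendezvous with nearby nodes.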
Research on Searching in Unstructured P2P Network Based on Power-Law Distribution and Small World Character
Tang Daquan, He Mingke, and Meng Qingsong
2007, 44(9):  1566-1571. 
Unstructured peer-to-peer (P2P) systems are widely used in the Internet. The common search method is flooding-based broadcasting, which usually causes serious communication overhead. Observation and analysis of social communication networks show that message transfer in them is optimized: in message spreading, redundant communication is naturally avoided, and the rumor spreading mechanism inherently exploits the clustering characteristic of social communication networks. The mechanisms responsible for the emergence of power-law networks are growth and preferential attachment. In this paper, based on the power-law distribution and small-world character of unstructured P2P networks, a search method is presented that combines a preferential linking mechanism, which generates the power-law character, with the interest decline mechanism of rumor spreading, which suits clustered networks. Mathematical analysis shows that this approach can sharply reduce the communication cost in P2P systems. To evaluate the effectiveness of the algorithm, the topology generation tool BRITE is used to generate a simulation network based on the GLP (generalized linear preference) model. Preliminary simulation results show that the communication cost of this algorithm is less than half that of the flooding algorithm, while its coverage remains quite high.
An Object-Adjustable Heuristic Scheduling Strategy in Grid Environments
Ding Ding, Luo Siwei, and Gao Zhan
2007, 44(9):  1572-1578. 
Task scheduling in grid environments is especially challenging because a grid is a distributed, heterogeneous, and dynamic system. Focusing on the fact that the tasks in such environments may have quite different execution times depending on their types, the concept of sufferage based on mean execution time, which takes QoS requirements into account, is introduced to serve as a new heuristic for task scheduling. Besides, the notion of service ratio is defined to measure this kind of QoS quantitatively. Furthermore, by incorporating the QoS with the makespan of tasks, an adjustable local objective function is proposed, and a corresponding heuristic scheduling strategy based on this function is presented to satisfy different task scheduling demands. Simulation results confirm that this objective-adjustable scheduling algorithm can improve the QoS of tasks by giving higher priority to tasks whose waiting time is large relative to their execution time, and can trade off the two objectives, makespan and QoS, by adjusting the preference factor in the local objective function. It is therefore more flexible than most existing task scheduling algorithms, which are fixed-objective, and is more suitable for complex grid environments.
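For reference, the classic Sufferage heuristic that the abstract builds on can be sketched as below; the paper's variant based on mean execution times and the adjustable preference factor is not reproduced here, and the names are illustrative:

```python
def sufferage_schedule(etc):
    """Classic Sufferage list scheduling. etc[t][m] is the estimated time
    to compute task t on machine m. In each round, the task that would
    'suffer' most if denied its best machine (largest gap between its
    second-best and best completion times) is assigned first."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # time each machine becomes free
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        best_task, best_machine, best_suff = None, None, -1.0
        for t in unscheduled:
            completion = sorted((ready[m] + etc[t][m], m)
                                for m in range(n_machines))
            suff = (completion[1][0] - completion[0][0]) if n_machines > 1 else 0.0
            if suff > best_suff:
                best_task, best_machine, best_suff = t, completion[0][1], suff
        assignment[best_task] = best_machine
        ready[best_machine] += etc[best_task][best_machine]
        unscheduled.remove(best_task)
    return assignment, max(ready)       # mapping and resulting makespan
```

The paper's objective-adjustable variant would replace the pure completion-time criterion with a weighted combination of makespan and the QoS term.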
A Survey of Interactive Rendering of Large-Scale and Complex Scenes
Wu Lingda, Gao Yu, and Wei Yingmei
2007, 44(9):  1579-1587. 
Fast rendering of large-scale and complex scenes is the basis of many important applications such as virtual reality, real-time simulation, and 3D interactive design, and it is a basic problem that much research tries to resolve. With recent advances in 3D scanning and modeling techniques, there has been a rapid increase in the availability and size of geometric datasets. Large-scale and complex scenes consisting of hundreds of millions of primitives are becoming increasingly common in many graphics applications. Such large data sets exceed the capability of standard graphics hardware for interactive rendering, and it is a challenge for researchers to develop new rendering algorithms and data structures that handle these large complex scenes more efficiently. In this paper, a detailed survey of current techniques for interactive rendering of large-scale and complex scenes is given. First, the state of the art of the research is presented. Second, the related key technologies are summarized in detail. Third, some typical interactive rendering systems for large-scale and complex scenes are compared, and the necessary components and general framework of a potential interactive rendering system are described. Finally, some key issues that need further research are pointed out.
An Adaptive LIC Rendering Method for Fluid Artistic Style
Qian Xiaoyan, Xiao Liang, and Wu Huizhong
2007, 44(9):  1588-1594. 
Fluid artistic style is an important part of the art field; it expresses artists' inner feelings through exaggerated fluid lines. Many algorithms can already simulate pen-and-ink drawing, oil painting, color painting, and so on very well, but there is little research on fluid artistic style in non-photorealistic rendering (NPR). Applying the LIC algorithm to non-photorealistic rendering, an adaptive LIC rendering method for fluid artistic style is presented. First, a curve vector field is calculated from the gradient field of the source image's luminance, and the structured vector field needed by LIC rendering is computed by enhancing and smoothing this curve vector field; a texture image is produced by randomly perturbing the source image. Then the algorithm adaptively convolves the texture image with the structured vector field. To form natural and smooth rendering results, variable LIC convolution step lengths and step numbers are produced according to the local characteristics of the structured vector field and the texture image, respectively. Finally, the algorithm transfers the color characteristics of a reference image to the rendering result using a color transfer algorithm, which makes the result full of rich color like an oil painting. Experimental results confirm that this method can create vivid and flexible fluid artistic styles such as van Gogh's style.
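The convolution at the heart of the method can be illustrated for a single output pixel: walk along the vector field and box-average the noise texture samples. The adaptive method above varies the step length and step number per pixel; a fixed step count and nearest-cell sampling are simplifications made here for brevity:

```python
import math

def lic_pixel(texture, field, x, y, half_steps=3):
    """One output pixel of a minimal line integral convolution (LIC):
    average the noise texture along the local streamline of the vector
    field, tracing half_steps unit steps forward and backward."""
    h, w = len(texture), len(texture[0])
    total, count = float(texture[y][x]), 1          # include the centre sample
    for sign in (1, -1):                            # forward, then backward
        px, py = float(x), float(y)
        for _ in range(half_steps):
            vx, vy = field[math.floor(py)][math.floor(px)]
            norm = math.hypot(vx, vy) or 1.0        # guard zero vectors
            px += sign * vx / norm                  # one unit step along the field
            py += sign * vy / norm
            ix, iy = math.floor(px), math.floor(py)
            if not (0 <= ix < w and 0 <= iy < h):   # streamline left the image
                break
            total += texture[iy][ix]
            count += 1
    return total / count
```

Averaging along streamlines is what smears the perturbed texture into the flowing strokes the method exploits.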
Brain MRI Segmentation Using the Active Contours Based on Gaussian Mixture Models
Chen Yunjie, Zhang Jianwei, Wei Zhihui, Xia Desheng, and Heng Pheng Ann
2007, 44(9):  1595-1603. 
Many neuroanatomy studies rely on brain tissue segmentation of magnetic resonance images, and many active contour methods have been presented for segmenting such images. However, the traditional active contour method uses only edge information, so when it segments magnetic resonance images with strong noise or weak edges, which are common in medical imaging, it has difficulty finding the true edge. In this paper the Gaussian mixture model is used to build a new constraint; with this constraint the model can reduce the effect of noise and prevent the curve from crossing the edge. The expectation-maximization (EM) method is the popular way to fit a Gaussian mixture model, but it is a local optimizer and is sensitive to the initial value. The global optimization characteristic of the particle swarm optimizer, which is based on a metaphor of social interaction, is used to solve this problem. Since the classical particle swarm optimizer is itself sensitive to the initial location, the Powell method and a new perturbation method are used to adapt it, and with the adapted particle swarm optimizer the Gaussian mixture model can attain globally optimal results. Experiments on the segmentation of brain magnetic resonance images show that the proposed model achieves better segmentation results.
3D Segmentation Based on Cylindrical B-Spline Active Surface Model
Tang Min, Tang Yang, Xu Lizhong, Pheng Ann Heng, and Xia Deshen
2007, 44(9):  1604-1611. 
Reconstructing the surface of the left ventricle (LV) is one of the challenging problems in analyzing the LV's shape and motion. In this paper, a cylindrical B-spline active surface (CBAS) model is put forward. The model segments LA (long-axis) and SA (short-axis) tagged MR images to obtain the 3D surface of the LV, and can be viewed as a grid composed of B-snake contours. Using the spatial relation between the LA and SA image planes, each node of the grid simultaneously finds edge points in one SA image and one LA image, while the sample points between two nodes along the same direction compute their image energies from only one SA or LA image. With the cylindrical model, the deformation of the model on the SA or LA planes can be realized by changing only one variable, which decreases the complexity of the model. First, the fuzzy Hough transform is used to find the approximate position of the LV's contour; second, the approximate contours are used to construct the initial surface of the CBAS model; then, the model is used to segment the tagged MR image series and obtain accurate LV surfaces. Comparison with manually outlined contours shows that the method produces accurate 3D surfaces.
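Since the CBAS grid is built from B-snake contours, the basic operation is evaluating points on a uniform cubic B-spline from its control nodes. The sketch below shows only that standard evaluation; the cylindrical parameterization, image energies, and deformation scheme of the paper are not reproduced, and the function name is an assumption.

```python
def cubic_bspline_point(ctrl, i, t):
    """Evaluate one segment of a uniform cubic B-spline curve.

    ctrl : list of (x, y) control points; segment i blends ctrl[i..i+3]
    t    : parameter in [0, 1] along the segment
    The four basis weights come from the standard uniform cubic
    B-spline basis and always sum to 1 (affine invariance).
    """
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    p0, p1, p2, p3 = ctrl[i], ctrl[i + 1], ctrl[i + 2], ctrl[i + 3]
    x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0]
    y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]
    return x, y
```

Because each segment depends on only four control nodes, moving one grid node deforms the surface locally, which is what makes such active-surface models cheap to update during segmentation.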
An Efficient Density-Based Clustering Algorithm for Vertically Partitioned Distributed Datasets
Ni Weiwei, Chen Geng, and Sun Zhihui
2007, 44(9):  1612-1617. 
Clustering is an important research topic in data mining. Clustering massive datasets is especially challenging because of their scale and their many noise points, and distributed clustering is an effective way to address these problems. Most existing distributed clustering research, however, targets horizontally partitioned datasets. In this paper, for vertically partitioned distributed datasets, an efficient global noise filter is derived from an analysis of the relations between the local noise sets and the corresponding global one; it eliminates the negative effect of noise points and reduces the volume of data the central node must process. Furthermore, an effective storage structure, CTL (closed triangle list), is designed to store the intermediate clustering results of each node. CTL reduces communication costs among the distributed nodes during clustering and makes it convenient to generate the global clustering model, with high space utilization and complete clustering information. On this basis, a distributed density-based clustering algorithm, DDBSCAN, is proposed. Theoretical analysis and experimental results show that DDBSCAN effectively solves the problem of clustering massive vertically partitioned datasets, and that the algorithm is both effective and efficient.
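For reference, the density-based core that DDBSCAN distributes is the classic DBSCAN procedure. The sketch below is a minimal centralized DBSCAN in plain Python; the vertical partitioning, noise filtering, and CTL exchange described above are the paper's contributions and are not shown.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN; returns one cluster label per point (-1 = noise)."""
    def neighbors(i):
        # All points within eps of point i (including i itself)
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps * eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                   # provisionally noise
            continue
        cluster += 1                         # i is a core point: new cluster
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster          # border point reclaimed from noise
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:           # j is core: keep expanding
                queue.extend(nb)
    return labels
```

The quadratic neighbor search is the communication and computation bottleneck that structures such as CTL aim to avoid repeating across nodes.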
Clustering Algorithm of High-Dimensional Data Based on Units
Xie Kunwu, Bi Xiaoling, and Ye Bin
2007, 44(9):  1618-1623. 
Clustering is a data mining problem that has received significant attention from the database community. Dataset size, dimensionality and sparsity have been identified as aspects that make clustering more difficult. Clustering in high-dimensional spaces is a hard problem that recurs in many domains, for example image analysis: as the dimensionality grows, the data points become sparsely distributed and their average density drops, so clusters are quite difficult to discover. The bottleneck of distance-based methods for clustering high-dimensional data is computing the distances between data points. Current research mainly concentrates on grid-based density methods and feature-based methods, and usually focuses on improving the performance of the clustering process, for example locating cluster centers accurately and removing noise. Instead of distance calculations, CAHD (clustering algorithm for high-dimensional data) searches for the dense units in the n-dimensional space and its subspaces from both the bottom-up and top-down directions simultaneously, and then clusters these dense units using the bitwise AND operation. The search strategy prunes the search space, and performing the clustering with only bitwise AND and bit-shift machine instructions makes the algorithm more efficient still. Experiments on the data sets indicate that the algorithm has very good validity.
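The bitwise idea can be illustrated as follows: if, for each dimension, the points falling in dense grid cells are recorded as a bitmask, then a bitwise AND of the per-dimension masks yields the points that are dense in every dimension of a subspace, with no distance computation. This is a hypothetical, simplified sketch of that one step; the function names, the uniform cell width, and the bidirectional search omitted here are assumptions, not CAHD's exact procedure.

```python
def dense_unit_bitmap(values, cell, threshold):
    """Bit i of the result is set iff point i lies in a 1D grid cell
    that holds at least `threshold` points."""
    cells = {}
    for i, v in enumerate(values):
        cells.setdefault(int(v // cell), []).append(i)
    mask = 0
    for members in cells.values():
        if len(members) >= threshold:        # this unit is dense
            for i in members:
                mask |= 1 << i               # mark each member point
    return mask

def dense_in_subspace(columns, cell, threshold):
    """Points dense in every dimension of the subspace: a bitwise AND
    of the per-dimension masks replaces all pairwise distances."""
    mask = ~0
    for col in columns:
        mask &= dense_unit_bitmap(col, cell, threshold)
    return mask
```

Because the combination step is a single AND per dimension over machine words, testing a candidate subspace costs O(d) word operations rather than O(n²) distance evaluations.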
A Study on the Emotional Prosody Model Building Based on Small-Scale Emotional Data and Large-Scale Neutral Data
Shao Yanqiu, Sui Zhifang, Han Jiqing, and Wang Zhiwei
2007, 44(9):  1624-1631. 
Emotional prosody modeling is very important for emotional speech synthesis, but a serious practical problem is that far less emotional data is available than neutral data. A corpus covering three emotions, i.e. happiness, anger and sadness, is built in this paper. The parameters that affect emotional prosody are analyzed and an emotional prosody model based on a neural network is built. Because the emotional corpus is small, training the prosody model directly leads to over-fitting caused by data sparsity. In order to exploit the large-scale neutral data to improve the emotional prosody model, three methods are proposed: mixing the corpora, data fusion based on a least-squares algorithm, and a multistage network. All three methods amplify the impact of the emotional corpus, so the prediction of the emotional parameters improves to some extent in each case. In particular the multistage network, which uses the output of the neutral model as one input of the network, effectively enlarges the feature space and strengthens the role of the emotional input features. The results show that the multistage network is the best of the three methods.
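One plausible form of the least-squares fusion mentioned above is a single linear map fitted from the neutral model's predictions to the sparse emotional observations. The sketch below shows only that generic ordinary-least-squares fit; the paper's actual fusion formulation, features, and parameterization are not specified in the abstract, so everything here is an illustrative assumption.

```python
def lsq_fuse(neutral_pred, emotional_obs):
    """Fit emotional ≈ a * neutral + b by ordinary least squares.

    neutral_pred  : prosody values predicted by the neutral model
    emotional_obs : the (scarce) observed emotional prosody values
    Returns the slope a and intercept b of the fitted map.
    """
    n = len(neutral_pred)
    mx = sum(neutral_pred) / n
    my = sum(emotional_obs) / n
    sxx = sum((x - mx) ** 2 for x in neutral_pred)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(neutral_pred, emotional_obs))
    a = sxy / sxx                  # slope minimizing squared error
    b = my - a * mx                # intercept through the means
    return a, b
```

With only two fitted parameters, such a map can be estimated reliably from a small emotional corpus while the heavy lifting stays in the neutral model, which matches the abstract's motivation for leveraging large-scale neutral data.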