ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 November 2010, Volume 47 Issue 11
Paper
Generation of Fire Animation Based on Level-Set
Hong Yi, Wang Zhaoqi, Zhu Dengming, and Qiu Xianjie
2010, 47(11):  1849-1856. 
Fire animation is very attractive to artists and is now widely used in many fields, such as film making, cartoons, computer games, fire prevention, and virtual reality. In the field of computer graphics, however, it has been a difficult issue to realistically simulate various flame effects with rich details. This paper presents an evolution model based on Level-Set metamorphosis and uses the combination of target shape, path constraint, and combustion propagation to evolve the blue core of the fire, so as to simulate path-based flame spreading, mobile burning objects, etc. In addition, to address the lack of flame details in current fire simulation, we use the modified MacCormack method, a high-order advection solver, to reduce numerical dissipation and enhance the solution precision of our fire model. Moreover, to further improve the realism of the fire animation, a curve of smoke density evolution is adopted to model the soot production during the burning process. In this way, we can render flame and soot together to obtain a more realistic visual effect. Experimental results show that the proposed method is able to produce realistic simulations of path-based flame propagation, mobile burning objects, and various other fire animations with rich details.
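To make the advection step concrete, here is a minimal, hedged 1D sketch (in Python/NumPy, not the authors' code) of the modified MacCormack idea the abstract refers to: a semi-Lagrangian predictor, a backward step, and an error-compensation correction that lowers numerical dissipation. The grid setup, the simplified global clamp, and all names are illustrative assumptions.

```python
import numpy as np

def semi_lagrangian(q, u, dt, dx):
    """Advect field q by velocity u (one value per cell) over dt, linear interpolation."""
    n = q.size
    x = np.arange(n) * dx
    x_back = x - u * dt                          # trace backwards along the velocity
    return np.interp(x_back, x, q)               # sample q at the departure points

def maccormack_advect(q, u, dt, dx):
    """Modified MacCormack: forward step, backward step, then error compensation."""
    q_fwd = semi_lagrangian(q, u, dt, dx)        # predictor
    q_bwd = semi_lagrangian(q_fwd, -u, dt, dx)   # advect the prediction back in time
    q_new = q_fwd + 0.5 * (q - q_bwd)            # compensate the estimated advection error
    # simplified global clamp for stability; the full method clamps to the values
    # around each departure point rather than the global extrema
    return np.clip(q_new, q_fwd.min(), q_fwd.max())

# toy usage: advect a smooth bump to the right
q = np.exp(-((np.arange(128) - 32.0) ** 2) / 50.0)
u = np.full(128, 1.0)
q_next = maccormack_advect(q, u, dt=0.5, dx=1.0)
```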
An Algorithm of Physically-based Scalar-fields Guided Deformation on GPU
Wu Xiaoxiao, Liang Xiaohui, Xu Qidi, and Zhao Qinping
2010, 47(11):  1857-1864. 
Scalar-field guided deformation is one of the hot research issues in computer graphics. However, the problem of the time efficiency of SFD has yet to be solved. In this paper, a GPU-based shape deformation algorithm is proposed, integrating the advantages of both the representation of ADFs (adaptively sampled distance fields) and physically-based modeling techniques. The octree-based ADFs are constructed and represented on the GPU. Then the deformation of the ADFs is governed by the principles of physical dynamics and achieved by manipulating the scalar fields directly. The algorithm constructs the ADFs and represents the octree structure on the GPU, which processes all the octree nodes at the same depth in parallel and provides fast access to the ADF nodes with lookup tables. A dynamic adaptive resampling strategy for the ADFs is employed during the deformation. Meanwhile, the physical properties are integrated into the irregular fields for deformation, and the stiffness of the non-uniform springs in the system is adapted to the spatial resolution of the ADFs to avoid physical non-homogeneity caused by the irregular structure. The results show that the time and space efficiency of our physically-based deformation algorithm based on the ADF representation is relatively high, over one order of magnitude faster than the CPU algorithm. This also implies that the algorithm has potential applications in physically-based interactive sculpting.
Fast Simulation of Immiscible Liquids Interaction
Sun Hongquan and Han Jiqing
2010, 47(11):  1865-1870. 
The interactions between immiscible liquids, such as water and oil, are common in everyday life. In this paper, a particle-based method is proposed to simulate the phenomena emerging from the interaction of multiple immiscible liquids. By establishing a simple and effective interfacial tension model for multiple liquids and using the essential method of smoothed particle hydrodynamics, realistic and fast interaction simulation of multiple immiscible liquids can be achieved. In the experiments, the interaction processes of one kind and two kinds of liquid droplets dropping into a tank of liquid are simulated respectively, and the proposed method is proven to be effective. In terms of CPU time and simulation effects, the proposed method is compared with the regular SPH method and the method of Müller et al., and it can be concluded that the proposed method is more suitable for immiscible liquid simulation. The experimental results show that the overhead owing to the interfacial tension is trivial, and the proposed method is suitable for achieving realistic and fast simulation of interacting immiscible liquids and can be applied to a variety of graphics applications. Further, the proposed interfacial tension model can be integrated easily into existing fluid solvers; thus the cost of implementation is very low.
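As background for the particle-based approach, the following hedged Python/NumPy sketch shows a standard SPH density estimate (poly6 kernel) together with a toy pairwise repulsion between particles of different phases. The latter is only a simplified stand-in for the paper's interfacial tension model, and all names and constants are assumptions.

```python
import numpy as np

def poly6(r, h):
    """Standard poly6 SPH smoothing kernel (3D), evaluated on an array of distances."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = 315.0 / (64.0 * np.pi * h**9) * (h**2 - r[mask]**2) ** 3
    return w

def density(pos, mass, h):
    """SPH density estimate for every particle (O(n^2) for clarity)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (mass[None, :] * poly6(d, h)).sum(axis=1)

def interfacial_force(pos, phase, h, kappa=0.5):
    """Toy pairwise interfacial force: particles of different phases repel slightly
    inside the kernel radius, which keeps the liquids separated.  A simplified
    stand-in for the paper's interfacial tension model, not a reproduction of it."""
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    near_other = (dist < h) & (phase[:, None] != phase[None, :])
    mag = kappa * (1.0 - dist / h) * near_other        # falls off linearly with distance
    return (mag[:, :, None] * diff / dist[:, :, None]).sum(axis=1)
```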
Video Copy Detection Based on Spatio-Temporal Trajectory Behavior Feature
Wu Xiao, Li Jintao, Tang Sheng, and Guo Junbo
2010, 47(11):  1871-1877. 
Large-scale video copy detection is to detect copied segments of provided video content from large video databases. This application requires compact features which are insensitive to various visual copy changes. However, traditional image features are sensitive to spatial changes such as color and texture transformations. The reason is that the copied versions have large image transformations, including global quality decrease and local visual distortions, which dramatically change the distribution of visual features. Consequently, previous methods based on histogram features and ordinal measures fail in copy detection. To solve this problem, this paper proposes the use of invariant visual features based on keypoint trajectory behavior. Instead of using spatial cues, the proposed approach models the temporal information as robust features. Temporal cues are quantified based on keypoint trajectories, which are insensitive to strong changes. Then videos are represented by spatio-temporal features which are more robust to copy changes. The bag of trajectories (BoT) technique is adopted for fast pattern matching in a large database. The experimental results show that spatio-temporal trajectory features are robust to various visual changes, including image blur, scale and ratio changes, and minor frame rate changes. Compared with the state-of-the-art scheme using ordinal measures, the proposed algorithm achieves better accuracy at lower cost.
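The bag-of-trajectories idea can be illustrated with a short hedged sketch (not the authors' implementation): each keypoint trajectory is reduced to a fixed-length descriptor of displacement vectors, quantized against a codebook, and a video segment becomes a codeword histogram that is cheap to match. The descriptor choice, codebook, and distance below are illustrative assumptions.

```python
import numpy as np

def trajectory_descriptor(track, length=8):
    """Resample a trajectory of (x, y) points into a fixed number of displacement
    vectors, so trajectories of different durations become comparable."""
    track = np.asarray(track, dtype=float)
    idx = np.linspace(0, len(track) - 1, length + 1)
    pts = np.array([track[int(round(i))] for i in idx])
    disp = np.diff(pts, axis=0)                   # temporal behavior, not appearance
    return (disp / (np.linalg.norm(disp) + 1e-9)).ravel()   # scale-normalized

def bag_of_trajectories(tracks, codebook):
    """Quantize every trajectory descriptor to its nearest codeword and build a
    normalized histogram ('bag of trajectories') for the video segment."""
    hist = np.zeros(len(codebook))
    for t in tracks:
        d = trajectory_descriptor(t)
        hist[np.argmin(np.linalg.norm(codebook - d, axis=1))] += 1
    return hist / max(hist.sum(), 1)

def histogram_distance(h1, h2):
    """Smaller distance between segment histograms => more likely a copy."""
    return np.abs(h1 - h2).sum()
```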
Brain MR Image Segmentation Based on Anisotropic Wells Model
Chen Yunjie, Zhang Jianwei, Wang Shunfeng, and Zhan Tianming
2010, 47(11):  1878-1885. 
Magnetic resonance (MR) image analysis has become a major means of auxiliary medical services. However, intensity inhomogeneity, which is usually called bias field, causes considerable difficulty in the quantitative analysis of MR images. Thus bias field estimation is a necessary pre-processing step before quantitative analysis of MR data. The Wells model, one of the widely used methods, uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. However, the classical Wells model only uses the intensity information and takes no spatial information into account, so it is sensitive to noise. In order to overcome this limitation, the Gibbs theory and the image structure information are used to construct an anisotropic Gibbs random field. The traditional Gibbs theory usually loses the information of the beam structure regions and the corner regions. With the spatial information, the anisotropic Gibbs random field can reduce the effect of noise and preserve the information of the beam structure regions and the corner regions. The anisotropic Gibbs random field is incorporated into the Wells model. Experiments on segmenting brain magnetic resonance images show that the proposed method can obtain better and more accurate results.
A Discrete Particle Swarm Optimization-based Algorithm for Polygonal Approximation of Digital Curves
Wang Bin
2010, 47(11):  1886-1892. 
The shape contour of an object can be regarded as a closed digital curve. How to find the optimal polygonal approximation of the digital curve is a very difficult problem. In recent years, many bio-inspired algorithms, including genetic algorithms (GA), ant colony optimization (ACO) and particle swarm optimization (PSO), have been applied to solve the polygonal approximation problem. The existing PSO-based methods adopt binary particle swarm optimization (BPSO), which requires developing a constraint function to deal with velocity and position updates. However, it is very difficult to develop a proper constraint function. In addition, the computational cost of the constraint function is expensive. To overcome these problems, an integer-coding particle swarm optimization (IPSO) is proposed for polygonal approximation. As a modified version of standard particle swarm optimization, IPSO can search for optimal solutions in the discrete solution space by redefining the addition, multiplication and subtraction in the updating formulas of the velocity and position. A position-vector repairing scheme is developed to maintain the feasibility of the solutions, and a local optimizer is embedded into IPSO to improve the quality of the solutions. The experimental results show that IPSO outperforms GA and binary-coding PSO in both the quality of solutions and computational speed.
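A hedged sketch of what one integer-coding PSO step for polygonal approximation might look like is given below; it is illustrative only, not the paper's IPSO. The error measure, the repair rule, and the parameter values are assumptions.

```python
import numpy as np

def approx_error(points, vertices):
    """Sum of squared distances from curve points to the chord of their segment."""
    err, n = 0.0, len(points)
    v = sorted(vertices)
    for a, b in zip(v, v[1:] + [v[0] + n]):
        seg = [points[i % n] for i in range(a, b + 1)]
        p0, p1 = np.array(seg[0]), np.array(seg[-1])
        d = p1 - p0
        L = np.linalg.norm(d) + 1e-12
        for p in seg[1:-1]:
            # perpendicular distance from p to the chord p0-p1
            cross = d[0] * (p[1] - p0[1]) - d[1] * (p[0] - p0[0])
            err += (cross / L) ** 2
    return err

def repair(position, n, m):
    """Keep a particle feasible: m distinct vertex indices in [0, n), sorted."""
    pos = sorted(set(int(round(x)) % n for x in position))
    while len(pos) < m:                           # refill if duplicates collapsed
        pos = sorted(set(pos + [np.random.randint(n)]))
    return np.array(pos[:m])

def ipso_step(position, velocity, pbest, gbest, n, m, w=0.7, c1=1.5, c2=1.5):
    """One integer-coding PSO update: the usual formula, with the result rounded
    and repaired back into the discrete feasible space."""
    r1, r2 = np.random.rand(m), np.random.rand(m)
    velocity = w * velocity + c1 * r1 * (pbest - position) + c2 * r2 * (gbest - position)
    return repair(position + velocity, n, m), velocity
```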
Clustering Time Synchronization Algorithm for Periodic Sleep MAC Protocol
Su Jinzhao, Liu Liyan, and Wu Wei
2010, 47(11):  1893-1902. 
The MAC protocol of a wireless sensor network is responsible for allocating wireless channels. Due to the limited energy of wireless sensor nodes, they often work in a periodic sleep mode. However, periodic sleep increases transmission delay. The realization of periodic sleep depends on time synchronization methods between nodes. A typical representative of contention-based periodic sleep MAC protocols for wireless sensor networks is S-MAC. On the basis of the time synchronization algorithm of S-MAC, we propose a clustering time synchronization algorithm by introducing cluster and border node control methods, redesign the MAC frame format, present the procedure of building clusters, and study the producing, controlling and replacing strategies for border nodes. This algorithm is suitable for periodic sleep MAC protocols. Results of simulation and practical experiments show that, compared with S-MAC, this algorithm can significantly control the numbers of clusters and border nodes, and decrease the overhead of time synchronization and the end-to-end transmission delay, so as to save node energy and extend the lifecycle of the whole network. In a competitive unicast situation, the network throughput reaches 208.66 Bps, which can satisfy the data transfer requirement of a general sensor network. The cost of time synchronization control can be reduced by 35%, and the multi-hop transmission delay decreased by 4%.
A Parallel Packet Classification Algorithm with Real-Time Incremental Updates
Zhang Shuzhuang, Luo Hao, and Fang Binxing
2010, 47(11):  1903-1910. 
The UTM (unified threat management) technique requires that packet classification algorithms support incremental updates. However, current approaches mainly focus on speeding up classification and rarely consider the requirement of incremental updates, which hinders their practical application in UTMs. In this paper, a parallel classification algorithm is proposed to improve the performance of incremental updates. Firstly, a two-dimensional hybrid hierarchical trie is proposed to organize the classification rule-set. Owing to the characteristics of the trie structure, the prefix pairs in the rules can be formed into groups by mapping them into the trie, and then the whole rule-set can be divided into a number of subsets. The processing procedure of each packet is decomposed into several independent sub-tasks, each of which deals with one subset. Since the hybrid hierarchical trie maintains the independence of each rule, each rule can be added to or deleted from the trie incrementally. The experimental results show that the new algorithm can improve the speed of incremental updates to the same order of magnitude as classification. Additionally, using the parallel method in classification significantly reduces the algorithm's sensitivity to the scale and type of rule-sets; therefore the algorithm is more adaptive and scalable.
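To illustrate why a trie-organized rule-set supports cheap incremental updates, here is a hedged Python sketch of a binary prefix trie over one dimension, with per-rule insert and delete and a lookup that collects all matching prefixes. It is a simplification for illustration, not the paper's two-dimensional hybrid hierarchical trie.

```python
class TrieNode:
    __slots__ = ("children", "rules")
    def __init__(self):
        self.children = {}        # bit ('0'/'1') -> TrieNode
        self.rules = set()        # ids of rules whose prefix ends at this node

class PrefixTrie:
    """Binary trie over one prefix dimension; rules are added and removed one at
    a time, which is what makes incremental updates cheap."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix, rule_id):
        node = self.root
        for bit in prefix:                        # prefix is a bit string, e.g. '1100'
            node = node.children.setdefault(bit, TrieNode())
        node.rules.add(rule_id)

    def delete(self, prefix, rule_id):
        node = self.root
        for bit in prefix:
            node = node.children.get(bit)
            if node is None:
                return
        node.rules.discard(rule_id)

    def lookup(self, addr_bits):
        """Collect the rules on every matching prefix along the path."""
        matches, node = set(), self.root
        for bit in addr_bits:
            matches |= node.rules
            node = node.children.get(bit)
            if node is None:
                break
        else:
            matches |= node.rules
        return matches

# a packet matches a rule only if both dimensions match; the two tries can be
# searched in parallel and the candidate sets intersected
src_trie, dst_trie = PrefixTrie(), PrefixTrie()
src_trie.insert("1100", rule_id=1); dst_trie.insert("10", rule_id=1)
candidates = src_trie.lookup("11001010") & dst_trie.lookup("10110001")
```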
A Bouncing-Track Based Data Storage and Discovery Scheme in Large-Scale Sensor Networks
Li Zhigang, Xiao Nong, and Chu Fuyong
2010, 47(11):  1911-1918. 
In a large-scale wireless sensor network, all nodes form a peer-to-peer network when no sink node exists. Any node may become a data consumer node or a data producer node. A sensor network is a kind of ad hoc network; meanwhile, the energy and computation capability of an individual sensor node are limited, so complicated protocol designs cannot be supported. How to make every randomly generated consumer node and producer node discover each other and find the useful data is a big research challenge in sensor networks. In this paper we propose a bouncing-track based data storage and discovery scheme to solve this problem. It is a data-centric storage approach, which requires the consumer nodes and producer nodes to disseminate their queries or data along their respective bouncing tracks. This scheme does not require any individual node to keep global information about the whole network. Each node determines how to forward its data using only local information and a predefined reflection angle value. In theory, any two bouncing tracks intersect with each other, which guarantees successful data retrieval. This scheme also bounds the search cost in hop distance when a consumer queries data, and it guarantees data load balance.
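The geometry of a bouncing track can be illustrated with a short hedged sketch: starting from a node's position and a predefined reflection angle, the track is a straight path that reflects off the field boundary, and data (or queries) are disseminated to the nodes nearest to it. The square field, the number of bounces, and all names are assumptions for illustration only.

```python
import math

def bouncing_track(start, angle_deg, field=1.0, bounces=6):
    """Trace a straight path inside a square field of side `field`, reflecting off
    the boundary; returns the list of turning points.  Data (or queries) would be
    stored on the sensor nodes closest to this track."""
    x, y = start
    dx, dy = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    points = [(x, y)]
    for _ in range(bounces):
        # travel time to hit each wall along the current direction
        tx = (field - x) / dx if dx > 0 else (0 - x) / dx if dx < 0 else math.inf
        ty = (field - y) / dy if dy > 0 else (0 - y) / dy if dy < 0 else math.inf
        t = min(tx, ty)
        x, y = x + t * dx, y + t * dy
        points.append((x, y))
        if tx <= ty:
            dx = -dx          # hit a vertical wall: reflect horizontally
        if ty <= tx:
            dy = -dy          # hit a horizontal wall: reflect vertically
        # (hitting a corner flips both components)
    return points

producer_track = bouncing_track((0.2, 0.3), 40)    # where the data is replicated
consumer_track = bouncing_track((0.8, 0.1), 130)   # where the query travels
```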
Heuristic Traversal Path Algorithm Based on Linear Aggregation in Wireless Sensor Networks
Luo Qing and Lin Yaping,
2010, 47(11):  1919-1927. 
Most wireless sensor networks (WSN) are deployed to monitor a region of interest for any intruding target. Conversely, an intelligent target looks for the optimal path to traverse the enemy sensor network while avoiding detection. However, the target may be subject to a traversal time constraint. It is difficult for most existing traversal algorithms based on breadth-first search to satisfy a pre-determined time constraint. Motivated by this, a traversal model is constructed and an approximate optimization algorithm is proposed in this paper. The algorithm discretizes the continuous path problem domain using a Voronoi diagram. Using exposure and traversal time as the path performance metrics, the algorithm combines the linear aggregated routing mechanism to find the optimal path. It enables the intelligent target to traverse the enemy sensor network field while keeping the traversal time from the source to the destination below a given threshold. Theoretically and experimentally, it is concluded that the proposed algorithm is able to solve the traversal path problem with a time constraint. The k shortest path heuristic traversal path algorithm based on linear aggregation (kSP-LAHTP) is able to find a traversal path closer to the optimum as the parameter k increases.
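The linear aggregation idea can be sketched as a single-objective search once exposure and time are combined into one edge weight. The hedged Python fragment below runs Dijkstra over a small graph (standing in for the Voronoi-edge graph) and prunes paths that exceed the time limit; it illustrates the aggregation only, not the kSP-LAHTP algorithm, and the weights alpha and beta are assumptions.

```python
import heapq

def aggregated_cost(exposure, time, alpha=1.0, beta=0.2):
    """Linear aggregation of the two path metrics into a single edge weight."""
    return alpha * exposure + beta * time

def best_path(graph, src, dst, time_limit, alpha=1.0, beta=0.2):
    """Dijkstra over the (Voronoi-edge) graph using the aggregated weight;
    partial paths whose accumulated travel time exceeds time_limit are pruned.
    graph: {node: [(neighbor, exposure, time), ...]}"""
    heap = [(0.0, 0.0, src, [src])]               # (agg_cost, elapsed_time, node, path)
    best = {}
    while heap:
        cost, elapsed, node, path = heapq.heappop(heap)
        if node == dst:
            return path, cost, elapsed
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt, exposure, time in graph.get(node, []):
            if elapsed + time > time_limit:       # respect the traversal time constraint
                continue
            heapq.heappush(heap, (cost + aggregated_cost(exposure, time, alpha, beta),
                                  elapsed + time, nxt, path + [nxt]))
    return None, float("inf"), float("inf")

graph = {"s": [("a", 0.3, 2), ("b", 0.9, 1)],
         "a": [("d", 0.2, 2)], "b": [("d", 0.1, 3)], "d": []}
print(best_path(graph, "s", "d", time_limit=5))
```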
Dynamic Data-Partitioned Online Aggregation
An Mingyuan, Sun Xiuming, and Sun Ninghui
2010, 47(11):  1928-1935. 
To avoid the performance degradation due to random I/O, traditional online aggregation algorithms assume that the source data are already randomized in the data file, so that sequential access approximately equals random sampling over the data. But this assumption doesn't hold in many real scenarios, which leads to obvious errors when running the algorithms. The authors propose a new method: dynamic data-partitioned online aggregation (DDPOA). DDPOA logically splits the data into disjoint partitions, each of which consists of consecutive data items in the data file, computes estimates based on individual partitions, and then uses a specific linear combination of these values to estimate the final result. DDPOA weakens the randomization requirement over the whole dataset and makes the estimates more accurate. Accessing partitioned data could cause lower performance due to random disk I/O. To handle the I/O performance issue, DDPOA dynamically adjusts the partitions during execution. Adjacent partitions that are judged similar enough are merged into one, which improves the I/O performance without losing accuracy. Experiments on a real dataset from the network security monitoring system DBroker show that DDPOA is much better than traditional algorithms in terms of accuracy, with little performance overhead. When it comes to datasets satisfying the randomization assumption, DDPOA is as good as the traditional algorithms.
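The partition-and-combine estimate at the core of this approach can be sketched as follows (a hedged illustration, not the paper's estimator): each partition keeps running statistics over the rows read so far, the overall answer is a linear combination weighted by known partition sizes, and adjacent partitions with similar statistics may be merged to keep I/O sequential. The similarity test and tolerance are assumptions.

```python
class Partition:
    """Running statistics for one contiguous block of the data file."""
    def __init__(self, total_rows):
        self.total_rows = total_rows      # block size, known from file metadata
        self.seen = 0
        self.sum = 0.0

    def add(self, value):
        self.seen += 1
        self.sum += value

    def mean(self):
        return self.sum / self.seen if self.seen else 0.0

def combined_estimate(partitions):
    """Estimate the overall SUM as a linear combination of per-partition means,
    each weighted by that partition's (known) row count."""
    return sum(p.total_rows * p.mean() for p in partitions)

def maybe_merge(p1, p2, tol=0.05):
    """Merge adjacent partitions whose current means are close enough, so later
    reads can stay sequential (better disk I/O) without hurting accuracy much."""
    if p1.seen and p2.seen and abs(p1.mean() - p2.mean()) <= tol * (abs(p1.mean()) + 1e-9):
        merged = Partition(p1.total_rows + p2.total_rows)
        merged.seen, merged.sum = p1.seen + p2.seen, p1.sum + p2.sum
        return merged
    return None
```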
XML Approximate Query Approach Based on Attribute Units Extension
Meng Xiangfu, Yan Li, Zhang Wengbo, and Ma Zongmin
2010, 47(11):  1936-1946. 
To deal with the problem of approximate queries against XML documents, the authors propose a novel XML approximate query approach based on the extension of document attribute units, which can provide query results relevant to the user's original query. The leaf nodes and attribute nodes of XML documents are treated as attribute units. Then, based on the concept of the agree set, the maximum sets are derived and the minimum nontrivial functional dependency sets are generated consequently. Thus the approximate dependency relations can be found. By using the approximate dependency relations, the approximate candidate keys and approximate keywords are found. After that, this approach ranks the attribute units according to their support degree and relaxes the original query according to the importance ordering of the attribute units. The first attribute unit to be relaxed must be the least important attribute unit and has the maximum relaxation degree. The relaxed query is used to query the XML documents, and query results relevant to the original query are obtained. The experimental results and analysis demonstrate that the presented XML approximate query approach can efficiently meet the user's query intentions and has high performance as well.
An Efficient Method for Processing Skyline Queries
Huang Zhenhua, Xiang Yang, Xue Yongsheng, and Liu Xiaoling
2010, 47(11):  1947-1953. 
Skyline query processing has recently received a lot of attention in the database community. This is mainly due to the importance of skyline results in many applications, such as multi-criteria decision making, data mining, and user-preference queries. Given a set of k-dimensional objects, the skyline query finds the objects that are not dominated by others. When users issue multiple skyline queries over different dimensional subspaces simultaneously, all the existing works obtain the results of these skyline queries from the original relational table from scratch. Clearly, the existing approaches are extremely inefficient as the cardinality of the original relational table and the number of skyline queries increase. Motivated by the above fact, an efficient method, called EAPSQ (efficient algorithm for processing skyline queries), is proposed to answer m issued skyline queries {SQ_1, …, SQ_m} over different dimensional subspaces using n prestored skyline sets {PR_1, …, PR_n}. The EAPSQ algorithm adequately considers the characteristics of the storage mechanism of the prestored skyline sets, and adopts the concept of contribution margin from economics. Thus it can efficiently achieve the optimal state for the distribution of the m skyline queries among the n prestored skyline sets, which markedly improves the performance of skyline query processing. Moreover, detailed theoretical analyses and extensive experiments demonstrate that our algorithm is both efficient and effective.
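For reference, the dominance relation and a naive subspace skyline computation look as follows. This is a straightforward O(n^2) illustration, not the EAPSQ algorithm; "smaller is better" is an assumption of the example.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly better in at
    least one (assuming smaller values are preferred)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points, dims=None):
    """Naive skyline over a chosen dimensional subspace: keep the points that are
    not dominated by any other point."""
    proj = [tuple(p[d] for d in dims) if dims is not None else tuple(p) for p in points]
    result = []
    for i, p in enumerate(proj):
        if not any(dominates(q, p) for j, q in enumerate(proj) if j != i):
            result.append(points[i])
    return result

# hotels as (price, distance); different queries may use different subspaces
hotels = [(50, 8), (80, 2), (60, 6), (90, 1), (55, 9)]
print(skyline(hotels))                 # full-space skyline
print(skyline(hotels, dims=[0]))       # subspace skyline on price only
```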
An Approach Combining Incremental Search and Heuristic Search for Solving Multiobjective Problems
Wei Wei, Ouyang Dantong, Lü Shuai, and Yin Minghao
2010, 47(11):  1954-1961. 
Multiobjective problems are much more difficult to solve than single-objective problems because of their multiple conflicting objectives. Heuristic search is an efficient approach for solving multiobjective shortest path problems. However, many real problems change their state spaces in different situations. Incremental search, which reuses information from previous searches, can be used to find solutions in dynamic situations. In this paper, an approach combining incremental search and heuristic search for solving multiobjective problems is proposed, and a multiobjective incremental heuristic search system based on path expansion in the search process is designed and implemented. When the edge costs of a graph change or nodes are added or deleted in the state space, the system does not solve the new problem from scratch, but immediately updates the scene of the search, reuses part of the information from the previous search, and then starts a new search from the updated scene, thus improving the efficiency of replanning. The gridworld benchmark problem is used for testing, and the experimental results show that the approach combining incremental search and heuristic search can solve a series of similar multiobjective shortest path problems very efficiently when the state space changes continuously.
Simplification of SMO Algorithm and Its Application in Solving ε-SVR with Non-Positive Kernels
Zhou Xiaojian, Ma Yizhong, and Zhu Jiagang
2010, 47(11):  1962-1969. 
The sequential minimal optimization (SMO) algorithm is an effective method for solving large-scale support vector machines (SVM). The existing algorithms need to judge which quadrant the four Lagrange multipliers lie in, which complicates their implementation. In addition, the existing algorithms all assume that the kernel functions are positive definite or positive semi-definite, limiting their applications. Having considered these deficiencies of the traditional algorithms, a simplified SMO algorithm for SVR is proposed and further applied to solving ε-SVR with non-positive kernels. Different from the existing algorithms, the proposed algorithm considers only two Lagrange multipliers in its implementation, by expanding the original dual programming of ε-SVR and solving its KKT conditions, and thus it is easily applied to solving ε-SVR with non-positive kernels. The presented algorithm is evaluated on a benchmark problem. Compared with the existing algorithms, the simplified one is much easier to implement without sacrificing space or time efficiency, and can achieve ideal regression accuracy under the premise of ensuring convergence. Therefore it has certain theoretical and practical significance. Furthermore, the proposed algorithm is helpful for developing a general-purpose SMO algorithm for SVR with all types of loss functions. Additionally, the proposed method, which deals with ε-SVR, is also applicable to other SVR variants with non-positive kernels.
Multi-fault Mode Diagnosing with Value Propagation and Its Completeness
Zhang Xuenong, Jiang Yunfei, and Zhang Licheng
2010, 47(11):  1970-1977. 
Model-based diagnosis is an active field of artificial intelligence research. A diagnostic problem is characterized by a set of observations, and the task is to account for the differences between the system's actual behaviors and its expected behaviors. The classical method was built on the well-known consistency-based theory and described the system's structure and behaviors usually in a first-order language. But it is not efficient and therefore is difficult to apply to actual systems. Different from the classical method, diagnosing with value propagation is an effective procedure-oriented method. For some special systems, the algorithm terminates in polynomial time. But it is not complete, and the system model can only describe normal system behavior. This paper presents an expanded system model based on value propagation, which can describe multi-fault modes of the system behavior, redefines the system diagnosis, and discusses the relation between consistency-based (abductive) diagnosis and value propagation-based diagnosis. Furthermore, the relation between value propagation and the actual diagnosis is analyzed, which is useful in diagnosis testing. Finally, a sufficient condition for the algorithm's completeness is given, which is valuable for promoting the application of value propagation-based diagnosis.
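A hedged, minimal illustration of diagnosis by value propagation: known input values are propagated forward through component models, a component assumed faulty propagates an unknown value, and any health assignment whose predicted values do not contradict the observations is a diagnosis candidate. The tiny boolean circuit and its component names are assumptions made for this example only.

```python
def propagate(inputs, health):
    """Forward value propagation: each healthy gate computes its output;
    a gate assumed faulty propagates an unknown value (None)."""
    a, b, c = inputs["a"], inputs["b"], inputs["c"]
    x = (a and b) if health["and1"] else None
    y = (b or c) if health["or1"] else None
    out = (x or y) if health["or2"] and x is not None and y is not None else None
    return {"x": x, "y": y, "out": out}

def consistent(inputs, observations, health):
    """A health assignment is a diagnosis candidate if the propagated values
    do not contradict the observations."""
    values = propagate(inputs, health)
    return all(values[k] is None or values[k] == v for k, v in observations.items())

inputs = {"a": True, "b": True, "c": False}
observations = {"out": False}                   # expected True if everything were OK
for h_and1 in (True, False):
    for h_or1 in (True, False):
        for h_or2 in (True, False):
            health = {"and1": h_and1, "or1": h_or1, "or2": h_or2}
            if consistent(inputs, observations, health):
                faulty = [c for c, ok in health.items() if not ok]
                print("candidate diagnosis:", faulty or ["(none)"])
```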
Estimation of Distribution Algorithm Based on Sequential Importance Sampling Particle Filters and Cholesky Decomposition
Zhang Jianhua and Zeng Jianchao
2010, 47(11):  1978-1985. 
Estimation of distribution algorithms are a new class of intelligent stochastic optimization algorithms, effective in solving non-linear optimization problems with coupled variables. Estimation of distribution algorithms in continuous domains are generally based on the assumptions that the variables follow a Gaussian distribution and that the probability model is single-peaked, which cannot describe the solution distribution effectively for complex optimization problems. Aiming to overcome such drawbacks, an estimation of distribution algorithm based on sequential importance sampling particle filters is presented. In this algorithm, the variables are not required to follow a Gaussian distribution. Instead, the distribution of samples is represented by weighted particles obtained through particle filter iteration on the selection set of the population. The probability model of this algorithm is multi-peaked, and the relations among the dimensions are handled using the covariance matrix. In sampling, the covariance matrix is shrunk for each particle and the shrunken covariance matrix is decomposed using Cholesky decomposition, and a method is presented for handling the situation in which the covariance matrix is a zero matrix. The next generation of the population is produced from the decomposition result. Finally, the experimental results on several benchmark functions for complex optimization problems indicate the validity of the algorithm.
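The sampling step can be sketched as follows (hedged, Python/NumPy, not the authors' implementation): each offspring is drawn around a particle chosen by weight, using a shrunk covariance factorized by Cholesky decomposition, with a diagonal fallback when the matrix is (near) zero or not positive definite. The shrink factor, fallback rule, and names are assumptions.

```python
import numpy as np

def sample_offspring(particles, weights, cov, shrink=0.5, n_offspring=100, eps=1e-10):
    """Draw new candidate solutions around weighted particles.  The covariance is
    shrunk, factorized with a Cholesky decomposition, and a diagonal fallback
    handles (near-)zero or non-positive-definite matrices.  A sketch of the
    sampling step only, not the full algorithm."""
    dim = particles.shape[1]
    shrunk = shrink * cov
    try:
        L = np.linalg.cholesky(shrunk + eps * np.eye(dim))
    except np.linalg.LinAlgError:
        # degenerate covariance (e.g. a zero matrix): fall back to a small
        # diagonal perturbation so sampling can still proceed
        L = np.sqrt(np.maximum(np.diag(shrunk), eps))[:, None] * np.eye(dim)
    idx = np.random.choice(len(particles), size=n_offspring, p=weights)
    noise = np.random.randn(n_offspring, dim) @ L.T     # N(0, L L^T) samples
    return particles[idx] + noise

# toy usage on a 2-D problem
particles = np.random.rand(20, 2)
weights = np.ones(20) / 20
offspring = sample_offspring(particles, weights, np.cov(particles.T))
```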
Neuro-Fuzzy System Modeling with Density-Based Clustering
Pan Weimin and He Jun
2010, 47(11):  1986-1992. 
Neuro-fuzzy systems are widely used for nonlinear system modeling. How to partition the input space optimally is the core issue in fuzzy system modeling. Previous approaches suffer from two main drawbacks, the difficulty of determining the number of partitions and rule redundancy, which hinder the application of fuzzy systems. The authors present a new approach to neuro-fuzzy system modeling based on DENCLUE using a dynamic threshold and similar rules merging (DDTSRM). They first introduce DDT, which uses a dynamic threshold rather than a global threshold when merging density-attractors in DENCLUE. DDTSRM is good at determining the number of rules because DDT does not depend on input parameters. Additionally, the modeling performance is improved because DDT can find clusters of arbitrary shape and density. Rule redundancy is caused by similar fuzzy sets in the input and output data space. After structure identification, similar rules are merged by considering similarity measures between fuzzy sets. This is also effective in preventing the model from overfitting the sample data. Finally, the BP method is used to finely adjust the parameters of the fuzzy model. DDTSRM is applied to a nonlinear function and the Box-Jenkins system. Experimental results show that DDTSRM overcomes the drawbacks with good performance.
Review of Programming Models for Data-Intensive Computing
Wang Peng, Meng Dan, Zhan Jianfeng, and Tu Bibo
2010, 47(11):  1993-2002. 
Advances in communication, computation, and storage have created large amounts of data. The ability to collect, organize, and analyze massive amounts of data could lead to breakthroughs in business, science, and society. As a new computing paradigm, cloud computing focuses on Internet services, and Internet service providers have an increasing need to store and analyze massive data sets. In order to perform Web-scale analysis in a cost-effective manner, several Internet companies have recently developed distributed programming systems on large-scale clusters composed of shared-nothing commodity servers, which we call cloud platforms. It is a great challenge to design a programming model and system that enables developers to easily write reliable programs that can efficiently utilize cluster-wide resources and achieve the maximum degree of parallelism on the cloud platform. Many challenging and exciting research problems arise when trying to scale up the systems and computations to handle terabyte-scale datasets. Recent advances in programming models for massive data processing are reviewed in this paper. Firstly, the unique characteristics of data-intensive computing are presented, and the fundamental issues of programming models for massive data processing are pointed out. Secondly, several state-of-the-art programming systems for data-intensive computing are described in detail. Thirdly, the pros and cons of the classic programming models are compared and discussed. Finally, the open issues and future work in this field are explored.
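As a concrete reminder of the programming-model style such surveys discuss, here is a toy single-process word count written against user-supplied map and reduce functions; real systems distribute the map, shuffle, and reduce phases across a cluster, which this hedged sketch does not attempt.

```python
from collections import defaultdict
from itertools import chain

def map_fn(document):
    """map: document -> list of (key, value) pairs."""
    return [(word.lower(), 1) for word in document.split()]

def reduce_fn(key, values):
    """reduce: (key, all values for that key) -> aggregated result."""
    return key, sum(values)

def run_mapreduce(documents):
    # 'map' phase (would run in parallel across cluster nodes)
    intermediate = chain.from_iterable(map_fn(doc) for doc in documents)
    # 'shuffle' phase: group intermediate values by key
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)
    # 'reduce' phase (also parallel, one key group per reducer)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

print(run_mapreduce(["the quick brown fox", "the lazy dog", "the fox"]))
```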
Real-Time Fault-Tolerant Scheduling for Distributed Systems Based on Improving Priority of Passive Backup
Zhu Ping, Yang Fumin, and Tu Gang
2010, 47(11):  2003-2010. 
The fault-tolerant rate-monotonic first-fit (FTRMFF) algorithm is a hard real-time scheduling algorithm broadly used in distributed systems. This scheduling algorithm has the advantages of easy operation and low cost. However, the strategy of priority inheritance for the backup copy may make it hard to fully use the slack available on the existing processors. To solve this problem, and on the basis of analyzing the worst-case response time of all types of task copies, the "improving priority for passive backup based scheduling" (IPPBS) algorithm is proposed. When a passive backup copy cannot be assigned to the existing processors and needs a new processor, the IPPBS algorithm improves its priority to a reasonable level to shorten its response time. Because its worst-case response time becomes shorter, the passive backup copy is likely to be scheduled onto an existing processor without breaking the schedulability of other tasks with higher priority on the same processor. Moreover, the algorithm for searching the priority-improving factor is presented in detail. Finally, simulation experiments show that the IPPBS algorithm is feasible and effective. Compared with the classic FTRMFF algorithm, it can save up to 13% of the processors.
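For background, the classical fixed-priority worst-case response time iteration that FTRMFF-style analyses build on can be sketched as below; the paper's copy-specific response-time formulas are more involved, and the task set here is an assumption for illustration.

```python
import math

def worst_case_response_time(task, higher_priority, deadline):
    """Classic fixed-priority response-time iteration:
    R = C + sum over higher-priority tasks of ceil(R / T_hp) * C_hp.
    Returns the converged response time, or None if the deadline is exceeded.
    Shown here as background; the paper's per-copy formulas are more involved."""
    C = task["C"]
    R = C
    while True:
        interference = sum(math.ceil(R / hp["T"]) * hp["C"] for hp in higher_priority)
        R_next = C + interference
        if R_next > deadline:
            return None               # not schedulable on this processor
        if R_next == R:
            return R                  # fixed point reached
        R = R_next

# example: can this task fit on a processor with two higher-priority tasks?
hp = [{"C": 1, "T": 4}, {"C": 2, "T": 10}]
print(worst_case_response_time({"C": 3, "T": 20}, hp, deadline=20))
```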
A Faster Algorithm for Sorting Genomes by Reciprocal Translocation, Insertion and Deletion
Hao Fanchang, Luan Junfeng, Zhu Daming, Zhang Peng, and Li Ming
2010, 47(11):  2011-2023. 
The genome sorting problem considers how species are related to each other by computing rearrangement distances between their genomes. The rearrangement distance between two genomes is the minimum number of genetic mutations needed to transform one into the other. Multi-chromosomal genomes frequently evolve by a rearrangement event called translocation, which has been well studied. Much research has been done for the case where only translocations occur. However, the more common situation in computational biology is that some genes appear only in the source or only in the target genome. A method for handling this involves the evolution events called insertion and deletion. There has been an approximation algorithm for sorting genomes by translocation, insertion and deletion (SG-TID). In this paper, we analyze the relationship between the three operations and then define the invertibility among the operations, through which SG-TID is transformed into a simpler problem called sorting genomes by translocations and deletions (SG-TD). Additionally, in the algorithm solving the SG-TD problem, we use O(n) memory space to record some important parameters, which helps reduce the time complexity considerably. We first give formulas for computing the rearrangement distance for SG-TID, and then present a simpler algorithm to compute the rearrangement distance and find the corresponding transformation sequence in O(n) space and O(n^2) time, which is faster and more practical than the previous TID algorithm.