ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 April 2011, Volume 48 Issue 4
Paper
Nondominated Individual Selection Strategy Based on Adaptive Partition for Evolutionary Multi-Objective Optimization
Gong Maoguo, Cheng Gang, Jiao Licheng, and Liu Chao
2011, 48(4):  545-557. 
Many real-world problems involve the simultaneous optimization of several, often conflicting objectives; such problems are known as multi-objective optimization problems. Evolutionary multi-objective optimization, which tackles these problems with evolutionary computation techniques, has become a hot topic in the evolutionary computation community. Solution diversity in multi-objective optimization has two main aspects: breadth and uniformity. After analyzing the traditional methods used to maintain the diversity of individuals in multi-objective evolutionary algorithms, a novel nondominated individual selection strategy based on adaptive partition is proposed. The new strategy partitions the current trade-off front adaptively according to the similarity of individuals, and one representative individual is then selected in each partitioned region for pruning the nondominated set. To maintain solution diversity, the adaptive partition selection strategy can be incorporated into multi-objective evolutionary algorithms without any parameter setting, and can be applied in either the parameter or the objective domain depending on the nature of the problem involved. To evaluate the validity of the new strategy, we apply it to two state-of-the-art multi-objective evolutionary algorithms. Experimental results on thirteen benchmark problems show that the new strategy clearly improves performance in terms of both breadth and uniformity of the nondominated solutions.
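A minimal Python sketch of the pruning idea, assuming a greedy centroid-based grouping: similar objective vectors are merged into partitions and one representative is kept per partition. The merging rule and function names are illustrative stand-ins, not the paper's adaptive partitioning strategy.

```python
import numpy as np


def prune_front(front, n_keep):
    """Reduce `front` (an m x k matrix of objective vectors) to n_keep points."""
    front = np.asarray(front, dtype=float)
    partitions = [[i] for i in range(len(front))]
    # Greedily merge the two partitions with the closest centroids until
    # only n_keep partitions remain.
    while len(partitions) > n_keep:
        cents = [front[p].mean(axis=0) for p in partitions]
        best, pair = np.inf, (0, 1)
        for a in range(len(cents)):
            for b in range(a + 1, len(cents)):
                d = np.linalg.norm(cents[a] - cents[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        partitions[a] += partitions.pop(b)
    # Keep, from each partition, the member closest to its centroid.
    keep = []
    for p in partitions:
        c = front[p].mean(axis=0)
        keep.append(p[int(np.argmin([np.linalg.norm(front[i] - c) for i in p]))])
    return sorted(keep)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(prune_front(rng.random((20, 2)), n_keep=5))
```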
A Multiple Query Dependent Ranking SVM Aggregation Algorithm
Wang Yang, Huang Yalou, Xie Maoqiang, Liu Jie, Lu Min, and Liao Zhen
2011, 48(4):  558-566. 
Supervised ranking approaches have become increasingly important in information retrieval and machine learning. In ranking for document retrieval, queries often vary greatly from one to another, and only the documents retrieved for the same query are ranked against each other. However, most existing approaches define the losses from different queries identically. Taking the significant diversity among queries into consideration, a rank aggregation framework for multiple dependent queries is proposed. The framework contains two steps: training of base rankers and query-level aggregation. The first step trains a number of query-dependent base rankers on each query and its relevant documents, and then turns the outputs of the base rankers into feature vectors. The second step uses a supervised approach to learn query-dependent weights for combining these base rankers. As a case study, an SVM-based model, referred to as Q.D.RSVM, is employed to aggregate the base rankers, and it is proved that Q.D.RSVM can set up query-dependent weights for the different base rankers. Q.D.RSVM is applied to document retrieval and Web retrieval tasks. Experimental results on benchmark datasets show that Q.D.RSVM outperforms conventional ranking approaches.
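To make the aggregation step concrete, here is a hypothetical sketch in which each document's feature vector is its base-ranker scores and weights are learned from pairwise preferences with a hinge-style update. The paper's actual model is an SVM (Q.D.RSVM), so the simple online update below is only a stand-in.

```python
import numpy as np


def train_weights(base_scores, prefs, epochs=50, lr=0.1):
    """base_scores: (n_docs, n_rankers) scores from the base rankers.
    prefs: list of (i, j) pairs meaning document i should outrank j."""
    w = np.zeros(base_scores.shape[1])
    for _ in range(epochs):
        for i, j in prefs:
            diff = base_scores[i] - base_scores[j]
            # Hinge-style update: nudge w whenever the margin is violated.
            if w @ diff < 1.0:
                w += lr * diff
    return w


if __name__ == "__main__":
    scores = np.array([[0.9, 0.2], [0.4, 0.8], [0.1, 0.1]])
    w = train_weights(scores, prefs=[(0, 1), (1, 2)])
    print("weights:", w, "ranking scores:", scores @ w)
```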
A Fuzzy Associative Classification Method Based on Multi-Objective Evolutionary Algorithm
Huo Weigang, and Shao Xiuli
2011, 48(4):  567-575. 
Accuracy and interpretability are the two optimization objectives of fuzzy associative classification models; they complement and restrict each other. Prior research either takes only the accuracy of the classification model into account or transforms the two-objective problem into a single-objective optimization problem, and the treatment of interpretability has been overly simple. Among classification models based on multi-objective optimization and fuzzy rules, most methods generate fuzzy rules from the permutations and combinations of the fuzzy items corresponding to the quantitative attributes of the sample dataset; when the dataset has many quantitative attributes, the evolutionary search space becomes very large. Therefore a fuzzy associative classification model based on a variant of Apriori and the multi-objective evolutionary algorithm NSGA-II (MOEA-FACM) is proposed. MOEA-FACM adopts a fuzzy confirmation measure based on probabilistic dependence to assess fuzzy associative rules and thus generate a rule set of good quality. A small number of fuzzy associative rules are then selected from the prescreened candidate rule set using NSGA-II, with maximization of classification accuracy, minimization of the number of selected rules, and minimization of the total number of fuzzy items in rule antecedents as the optimization objectives. Using the Pittsburgh coding approach and a biased mutation operator, a number of non-dominated rule sets, i.e. fuzzy associative classification models, are built with respect to these three objectives, yielding an interpretability-accuracy tradeoff. Experimental results on benchmark datasets show that, compared with homogeneous classification models, the proposed model achieves high accuracy, better generalization ability, fewer fuzzy associative rules and total fuzzy items, and better interpretability.
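The three optimization objectives can be made concrete with a small sketch. The rule representation (crisp antecedent items) and the first-match classifier below are illustrative assumptions; the paper works with fuzzy items and membership degrees.

```python
def classify(rule_set, x):
    """Fire the first rule whose antecedent items all appear in x."""
    for antecedent, label in rule_set:
        if set(antecedent) <= set(x):
            return label
    return None


def objectives(rule_set, examples):
    """Return (negated accuracy, rule count, total antecedent items);
    all three are minimized, as NSGA-II conventionally assumes."""
    correct = sum(1 for x, y in examples if classify(rule_set, x) == y)
    accuracy = correct / len(examples)
    n_rules = len(rule_set)
    total_items = sum(len(ante) for ante, _ in rule_set)
    return (-accuracy, n_rules, total_items)


if __name__ == "__main__":
    rules = [(("temp=high",), "yes"), (("temp=low", "wind=strong"), "no")]
    data = [(("temp=high", "wind=weak"), "yes"),
            (("temp=low", "wind=strong"), "no")]
    print(objectives(rules, data))  # -> (-1.0, 2, 3)
```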
Adaptive Neighborhood Selection Based on Local Linearity for Manifold Learning
Zhan Yubin, Yin Jianping, Liu Xinwang, and Zhang Guomin
2011, 48(4):  576-583. 
Recently, manifold learning has attracted extensive interest from researchers in machine learning, pattern recognition, computer vision and related communities. Constructing a reasonable neighborhood for each point is a key issue in manifold learning. Conventional neighborhood selection algorithms, k nearest neighbors (k-NN) and ε-neighborhood, need the parameter to be specified manually beforehand, and most manifold learning algorithms are sensitive to it. In this paper, an adaptive neighborhood selection algorithm based on local linearity, called ANSLL, is presented. First, two principles of neighborhood construction underlying most existing manifold learning algorithms are summarized: 1) points in the same neighborhood should approximately lie on a d-dimensional linear subspace (d is the dimensionality of the data manifold), and 2) each neighborhood should be as large as possible. Based on these two principles, ANSLL uses principal component analysis (PCA) to measure the linearity of a finite point set and constructs the neighborhood of each point via neighborhood contraction or expansion. In addition, an improved method of constructing the neighborhood graph is proposed, which improves the accuracy of geodesic distance estimation for isometric embedding. Finally, extensive and systematic experiments on both synthetic and real-world datasets demonstrate that ANSLL can adaptively tune the neighborhood size according to the local curvature of the data manifold and thereby improve the performance of most existing manifold learning algorithms, such as Isomap and LLE.
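A minimal sketch of the PCA-based linearity test and the neighborhood expansion it drives, assuming a fixed variance-ratio threshold `eta` and a simple grow-only rule; the names and parameters are hypothetical, and ANSLL's actual contraction/expansion procedure is more refined.

```python
import numpy as np


def local_linearity(points, d):
    """Fraction of variance captured by the best d-dimensional subspace."""
    centered = points - points.mean(axis=0)
    svals = np.linalg.svd(centered, compute_uv=False)
    var = svals ** 2
    return var[:d].sum() / var.sum()


def adaptive_neighborhood(X, i, d, k0=5, eta=0.95):
    """Grow point i's neighborhood while it stays approximately linear."""
    dists = np.linalg.norm(X - X[i], axis=1)
    order = np.argsort(dists)  # order[0] is the point itself
    k = k0
    while k + 1 < len(X) and local_linearity(X[order[: k + 2]], d) >= eta:
        k += 1
    return order[1 : k + 1]


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = rng.uniform(0, 3, 100)
    X = np.c_[t, np.sin(t)] + rng.normal(0, 0.01, (100, 2))  # noisy 1-D curve
    print(len(adaptive_neighborhood(X, 0, d=1)))
```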
A Hybrid Approximate Inference Algorithm for Multi-Agent Dynamic Influence Diagrams
Yao Hongliang, Wang Xiufang, Hu Dawei, Wang Hao, and Mao Meiqin
2011, 48(4):  584-591. 
Multi-agent dynamic influence diagrams (MADIDs) are suitable for modeling multi-agent systems in dynamic environments, where the structural relations among agents are represented in a local factorial probability form. A major problem in probabilistic graphical model inference is the difficulty of achieving a tradeoff between the accuracy and the computational complexity of approximate inference: approximation methods improve computational efficiency but induce a loss of accuracy. The BK algorithm and particle filtering (PF) are two important approximate algorithms for dynamic probability models. The BK algorithm has considerable computational efficiency but introduces inference errors, whereas PF can approximate arbitrary distributions but suffers from high-dimensional computation. A hybrid approximate inference (HAI) algorithm for MADIDs is presented that combines the advantages of BK and particle filtering, converting the distribution of a MADID into a local factorial form. Based on the decomposability of probabilistic graphical models, HAI decomposes a MADID into a prototype junction tree; particle filtering is executed on clusters of smaller scale to achieve locally optimal estimation, while BK is applied to the other clusters, and separator clusters are introduced to decrease inference errors. Simulation results show that the proposed algorithm improves computational precision notably compared with the BK and PF algorithms, and that it is an efficient approximate inference method for MADIDs that establishes a tradeoff between the accuracy and the complexity of the approximate distribution.
IKnnM-DHecoc: A Method for Handling the Problem of Concept Drift
Xin Yi, Guo Gongde, Chen Lifei, and Bi Yaxin
2011, 48(4):  592-601. 
With the extensive application of data stream mining, the classification of concept-drifting data streams has become more and more important and challenging. Owing to the characteristics of concept-drifting data streams, an effective learner should be able to track such changes and adapt to them quickly. A method named IKnnM-DHecoc, a dynamic hierarchical ECOC algorithm based on the incremental KnnModel, is proposed for handling the problem of concept drift. It divides a given data stream into several data blocks and learns from each block using the incremental KnnModel algorithm. Based on the outcome of this pre-learning, a hierarchical tree and a hierarchical coding matrix are built and updated, from which a chosen incremental learning method is trained to build a set of classifiers and a set of classifier candidates. Moreover, a pruning strategy for the generated nodes of the hierarchical tree is proposed to reduce the computational cost by taking each node's activity into account. In the testing phase, a combination scheme that takes advantage of both IKnnModel and DHecoc is used for prediction. Experimental results show that the proposed IKnnM-DHecoc algorithm not only improves the dynamics of learning and the classification performance, but can also adapt quickly to concept drift.
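The block-wise streaming protocol described above can be sketched as a driver loop: each block is first used to test the current model (prequential evaluation) and then to update it incrementally. The majority-class stub below is a hypothetical placeholder for the IKnnModel/DHecoc machinery.

```python
from collections import Counter


class MajorityStub:
    """Trivial incremental model: always predicts the majority class."""
    def __init__(self):
        self.counts = Counter()

    def predict(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else None

    def partial_fit(self, X, y):
        self.counts.update(y)


def run_stream(blocks, model):
    accs = []
    for X, y in blocks:  # each block: (feature rows, labels)
        preds = [model.predict(x) for x in X]
        accs.append(sum(p == t for p, t in zip(preds, y)) / len(y))
        model.partial_fit(X, y)  # only then learn from the block
    return accs


if __name__ == "__main__":
    stream = [([[0]] * 10, ["a"] * 10)] * 2 + [([[1]] * 10, ["b"] * 10)]
    print(run_stream(stream, MajorityStub()))  # drift a->b hurts the last block
```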
Properties and Application of Coalition Structure Graph
Liu Jinglei, Zhang Wei, Liu Zhaowei, and Sun Xuejiao
2011, 48(4):  602-609. 
Forming effective coalitions is a major research challenge in the field of multi-agent systems. The coalition structure generation problem is extremely challenging due to the exponential number of partitions that need to be examined, yet the structure of this search space, particularly its graph properties, has received little attention. The optimal coalition structure generation problem is discussed here from the viewpoint of graph theory. First, the space of coalition structures is modeled as a coalition structure graph, in which each vertex stands for a coalition structure and each edge stands for a splitting of a coalition structure. Two graph properties, optimal substructure and overlapping substructure, are then summarized; the property of the key searching set is extended; and a new graph property, connectivity of the graph with few redundant paths, is given. To understand and apply these properties, we devise an effective dynamic programming (EDP) algorithm to search for the optimal coalition structure. In addition, we prove that the lower bound of EDP is Ω(2^n) and the upper bound is O(3^n). Finally, empirical evaluation shows that with 21 agents the number of nodes searched by the EDP algorithm is 42% lower than that of DP.
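For reference, the dynamic-programming search that EDP improves upon can be written compactly: for each subset of agents (a bitmask), either keep the coalition whole or split it into two parts solved optimally. Enumerating all submasks of all masks gives the O(3^n) bound cited above. The characteristic function in the demo is hypothetical.

```python
from itertools import combinations


def best_structure(n, value):
    """value: dict mapping frozenset of agents -> coalition value."""
    full = (1 << n) - 1
    opt, split = {0: 0.0}, {}
    for mask in range(1, full + 1):
        agents = frozenset(i for i in range(n) if mask >> i & 1)
        best, how = value[agents], None  # option 1: keep the coalition whole
        sub = (mask - 1) & mask
        while sub:  # option 2: split into two optimally solved parts
            if opt[sub] + opt[mask ^ sub] > best:
                best, how = opt[sub] + opt[mask ^ sub], sub
            sub = (sub - 1) & mask
        opt[mask], split[mask] = best, how  # split lets the partition be recovered
    return opt[full]


if __name__ == "__main__":
    agents = [0, 1, 2]
    # Hypothetical characteristic function: value grows with coalition size...
    v = {frozenset(c): len(c) ** 2 for k in range(1, 4)
         for c in combinations(agents, k)}
    v[frozenset(agents)] = 5  # ...but the grand coalition is capped.
    print(best_structure(3, v))  # pair (4) + singleton (1) = 5
```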
Graph-Based Automatic Acquisition of Semantic Classes
Wu Yunfang, Shi Jing, and Jin Peng
2011, 48(4):  610-616. 
A semantic class is a collection of terms that share similar meanings. Knowing the semantic classes of words can be extremely valuable for many natural language processing tasks. This paper investigates the use of linguistic knowledge in the graph-based acquisition of Chinese semantic classes, and demonstrates that linguistic knowledge can indeed improve the graph-based method. The corpus used is the Xinhua News portion of the LDC Chinese Gigaword. A graph is built by extracting word pairs in coordination structures from the corpus, with the co-occurring words as nodes and the co-occurrence frequencies as edge weights. The Newman algorithm is then applied to cluster words in the graph. The paper focuses on transforming the edge weights, motivated by the properties of coordinate structures and of the Chinese language. We present six methods: dividing the whole corpus into small parts, cutting low-frequency edges, enlarging the weight of bidirectional edges, enlarging the weight of edges within cliques, enlarging the weight of edges whose two nodes share the same last character, and reducing the weight of edges whose two nodes have different numbers of characters. The experiments with these six methods yield a promising precision of 53.12%, which outperforms the baseline Newman algorithm by 29.84%.
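Two of the six reweighting heuristics can be sketched directly on a pair-count dictionary; the boost factors below are illustrative assumptions rather than the paper's tuned values.

```python
from collections import defaultdict


def reweight(pair_counts, bi_boost=2.0, char_boost=1.5):
    """pair_counts: dict (w1, w2) -> frequency of the coordinate pattern."""
    weights = defaultdict(float)
    for (a, b), n in pair_counts.items():
        w = float(n)
        if (b, a) in pair_counts:   # pair observed in both orders
            w *= bi_boost
        if a[-1] == b[-1]:          # shared last character (head morpheme cue)
            w *= char_boost
        weights[frozenset((a, b))] += w  # accumulate onto the undirected edge
    return dict(weights)


if __name__ == "__main__":
    counts = {("苹果", "香蕉"): 3, ("香蕉", "苹果"): 2, ("银行", "商行"): 4}
    for edge, w in reweight(counts).items():
        print(sorted(edge), w)
```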
An Approach for Constraint-Based Test Data Generation in Mutation Testing
Liu Xinzhong, Xu Gaochao, Hu Liang, Fu Xiaodong, and Dong Yushuang
2011, 48(4):  617-626. 
As a testing strategy for evaluating the completeness of test cases, mutation testing has been identified as a "fault-oriented" technique for unit testing, mainly used to generate complete test cases. By applying mutation operators to simulate software defects, mutation testing generates mutants and constructs a test suite to kill them. Test data generators for constructing the test suite include random, path-wise, and goal-oriented approaches. Among them, the path-wise technique is highly efficient: it generates test data by building and solving constraint systems. However, most path-wise generation techniques only take the control dependence among statements into consideration; that is, they build the constraint system by analyzing the control flow graph while neglecting the data dependence among statements. Considering both, a new domain reduction method named domain reduction approach with data dependence (DRD) is proposed to improve domain-reduction-based test data generation. Using paths with data dependence, DRD combines constraint-based test data generation with the chaining approach to build the constraint system. As an automated technique for test data generation, DRD solves the constraint system by domain reduction and verifies the test data with back substitution. Experimental results show that this method improves the success rate and execution efficiency of test data generation in mutation testing to a large extent.
A Service Composition Method for Tradeoff Between Satisfactions of Multiple Requirements
Wang Xianzhi, Wang Zhongjie, Xu Xiaofei, and Liu Ying
2011, 48(4):  627-637. 
Service composition is effective for rapidly constructing value-added services in service-oriented applications. Existing selection models for composite services rely heavily on the assumption that each customer requirement is raised alone, while in practical applications service requirements can be numerous. When a small time slice is considered, multiple requirements can be seen as concurrent, and the service sets involved in the sub-solutions corresponding to the individual requirements may intersect, resulting in competition for, or sharing of, certain services between requirements. Therefore, current single-requirement-oriented methods cannot deal with the situation in which multiple service requirements arrive concurrently and compete for services. This paper presents a service composition model and algorithm oriented to multiple service requirements. Given that a service can be either exclusive or sharable and that decisive priority relations exist among the assessment factors, an assessment method based on conflict-avoiding scheduling and graded weighting priorities is put forward. On that basis, tradeoff strategies for a genetic algorithm are proposed, and a service composition method is developed that trades off the satisfaction of multiple service requirements. Experimental results show that the method ensures proportionality across all sub-solutions and that sub-optimal solutions can be obtained efficiently by improving the coding scheme. Compared with other possible strategies, it shows superior applicability across different circumstances of quantity and quality of available services.
An Extendable Control Flow Checking Method Based on Formatted Signatures
Xu Jianjun, Tan Qingping, Li Jianli, and Li Jianming
2011, 48(4):  638-646. 
Hardware transient faults are one of the top challenges for space computers, which run in a space environment with various radiation phenomena. Furthermore, with the continuously increasing performance enabled by the scaling of VLSI technologies, modern microprocessors are becoming more susceptible to transient faults. For system reliability, a major effect of these transient faults is control flow errors, e.g. modification of the target address of a jump instruction. Using the control flow graph of the program, basic blocks are first categorized by a graph coloring algorithm. An effective control flow checking method, named ECCFS, is then presented based on formatted signatures of the basic blocks, and extended solutions are proposed for intra-block and inter-procedure control flow checking. ECCFS can be extended flexibly by the user by configuring the signature format according to the required detection rate and performance. The analytical results on checking capability and the experimental results of fault injection indicate that ECCFS can detect most control flow errors, except for dummy branches and some checking defects. Compared with two typical control flow checking methods, ECCFS has advantages in error detection rate and performance overhead.
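For intuition, a toy monitor in the spirit of classic XOR signature schemes (such as CFCSS) is sketched below: a runtime signature register is updated at each block entry and checked against the block's static signature. ECCFS's formatted, user-configurable signatures generalize this kind of check; the scheme here is only a familiar baseline, restricted to single-predecessor blocks.

```python
class SignatureMonitor:
    def __init__(self, static_sigs, preds):
        self.s = static_sigs  # block -> compile-time signature word
        # d[b] = s[predecessor] XOR s[b], so a legal transition restores s[b].
        self.d = {b: self.s[p] ^ self.s[b] for b, p in preds.items()}
        self.G = None  # runtime signature register

    def enter(self, block, entry=False):
        self.G = self.s[block] if entry else self.G ^ self.d[block]
        if self.G != self.s[block]:
            raise RuntimeError(f"control flow error entering {block}")


if __name__ == "__main__":
    sigs = {"A": 0b0001, "B": 0b0010, "C": 0b0100}
    m = SignatureMonitor(sigs, preds={"B": "A", "C": "B"})
    m.enter("A", entry=True)
    m.enter("B")  # legal A -> B
    m.enter("C")  # legal B -> C
    try:
        m.enter("B")  # illegal C -> B transition is detected
    except RuntimeError as e:
        print(e)
```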
A Requirement-Driven Software Trustworthiness Evaluation and Evolution Model
Ding Shuai, Lu Fujun, Yang Shanlin, and Xia Chengyi
2011, 48(4):  647-655. 
A software trustworthiness evaluation model is built upon accurately eliciting trustworthy requirements and reasonably establishing an indicator system for the specific application domain. For software with a large architecture and complex non-functional demands, trustworthy requirements change as the software's operational state transitions, and the stability of the trustworthiness evaluation indicator system is affected by this dynamic evolution of requirements. The problem of software trustworthiness evaluation and evolution is attracting wide attention in the field of trustworthy software. In this paper, a novel requirement-driven software trustworthiness evaluation and evolution model is designed. First, several key technologies adopted in the process of software trustworthiness evaluation are analyzed and summarized, such as requirements analysis and indicator extraction, and trustworthy evidence acquisition and conversion; the problem of adapting the trustworthiness evaluation as requirements evolve is also discussed. Second, an incidence matrix is used to perform correlation analysis between trustworthy attributes, revealing the variation rule of the relative weights. On this basis, an adaptive reconstruction device based on the incidence matrix is designed, which can analyze and solve the self-reconfiguration of the software trustworthiness evaluation indicator system. Finally, a complete framework of the trustworthiness evaluation and evolution model is proposed. Experimental results show the rationality and validity of the model.
An Analysis Method for Off-Line TTP Fair Non-Repudiation Protocols
Liu Dongmei, Qing Sihan, Ma Hengtai, and Li Shuren
2011, 48(4):  656-665. 
Off-line TTP fair non-repudiation protocols have been studied widely, but compared with on-line TTP protocols they are rarely analyzed. Off-line TTP fair non-repudiation protocols are often composed of several subprotocols, which are defined here as a protocol cluster. In this paper, an analysis method for off-line TTP fair non-repudiation protocols is proposed, with three main points. First, according to the cluster property of off-line TTP fair non-repudiation protocols, the protocols are instantiated; through instantiation, the non-repudiation and effectiveness of a protocol can be analyzed within each single instance. Second, since communication is asynchronous, sending and receiving actions cannot exactly reflect the true events of the protocol; by refining the actions of the participants, a protocol can be represented as a sequence of participant actions, which can then be used to analyze violations during execution. Third, a time determiner is introduced to express and verify the timeliness property of a protocol. Finally, two off-line TTP fair non-repudiation protocols are analyzed: the ZG off-line TTP protocol, composed of two subprotocols, and the CCD off-line TTP protocol, composed of three subprotocols. The analysis shows that the ZG protocol does not satisfy timeliness and that the CCD protocol does not satisfy fairness.
The Topology Voronoi Graph for Visualizing Local Vector Fields
Xu Huaxun, Ma Qianli, Cai Xun, and Li Sikun
2011, 48(4):  666-674. 
Topology visualization methods are very important for discovering the topological structure of fluid fields. Among them, the topology graph is a primary means, which clearly describes the topological relations among the critical points. However, it is incapable of depicting the scope of a topological feature in the fluid field. As a basic property of a topological feature, scope is important for investigating the fluid field and its time-dependent transformation. This paper presents a new method, the topology Voronoi graph, which can define the scope and describe the trend of a topological feature in a time-dependent vector field. First, we introduce the streamline distance to describe the mutual influence between any two points in the vector field. We then compute, for each point, the streamline distance with respect to the critical point its streamline crosses, to define the feature region of each critical point in the vector field. Finally, we design an efficient generation algorithm for the topology Voronoi graph. Experiments on wind field and synthetic function data show that this method enhances the structural visualization of the fluid field.
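The region-assignment idea can be sketched as follows, assuming a toy vector field and forward-Euler streamline tracing: each sample point is labeled by the critical point its streamline approaches, yielding Voronoi-like feature regions. The field, step size, and nearest-endpoint rule are illustrative simplifications of the streamline distance.

```python
import numpy as np


def trace(v, p, steps=200, h=0.05):
    """Integrate the streamline of field v from p with forward Euler."""
    for _ in range(steps):
        p = p + h * v(p)
    return p


def assign_regions(v, samples, critical_points):
    labels = []
    for p in samples:
        end = trace(v, np.asarray(p, dtype=float))
        d = [np.linalg.norm(end - np.asarray(c)) for c in critical_points]
        labels.append(int(np.argmin(d)))  # label by nearest critical point
    return labels


if __name__ == "__main__":
    # Toy field with two attracting critical points at (-1, 0) and (1, 0).
    def field(p):
        target = np.array([-1.0, 0.0]) if p[0] < 0 else np.array([1.0, 0.0])
        return target - p

    pts = [(-0.5, 0.3), (0.7, -0.2), (-1.2, 0.1)]
    print(assign_regions(field, pts, [(-1, 0), (1, 0)]))  # -> [0, 1, 0]
```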
Bézier Approximate Merging by Interval Curves
Zhi Dejia and Wang Guojin
2011, 48(4):  675-682. 
This paper deals with the approximate merging of multiple adjacent Bézier curves of different degrees from a new perspective: a single interval Bézier curve. This is a frequently encountered problem in modeling, geometric approximation and data transformation. First, two methods are given as theoretical models. One obtains the interval Bézier curve by one-sided approximation, which directly yields the values of the Bézier control points; the other uses a unified matrix representation for precise merging, in which the approximate merging center curve is derived by matrix operations and the error curve is given by both constant and non-constant intervals. Several examples are then provided to demonstrate the algorithms, showing that both methods achieve satisfying merging results and have good prospects in geometric approximation and data transformation; the two methods suit different results and purposes. Moreover, these methods can also be applied to the merging of multiple adjacent rational Bézier curves, the merging of multiple adjacent 3D Bézier curves, approximate merging by a disk Bézier curve, and the approximate merging of multiple adjacent Bézier surfaces.
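As a rough stand-in for the merging setup, the sketch below samples two adjacent Bézier curves and fits a single higher-degree curve by least squares in the Bernstein basis. The paper's one-sided and interval constructions additionally bound the error; this sketch only conveys the reparameterize-and-fit structure, and the degree and sample counts are arbitrary choices.

```python
import numpy as np
from math import comb


def bernstein_matrix(n, ts):
    """Row r holds the Bernstein basis values B_{i,n}(ts[r])."""
    return np.array([[comb(n, i) * t**i * (1 - t) ** (n - i)
                      for i in range(n + 1)] for t in ts])


def sample_bezier(ctrl, ts):
    """Evaluate a Bezier curve with control points `ctrl` at parameters ts."""
    ctrl = np.asarray(ctrl, dtype=float)
    return bernstein_matrix(len(ctrl) - 1, ts) @ ctrl


def merge(ctrl_a, ctrl_b, degree=5, samples=50):
    """Least-squares fit of one degree-`degree` curve to two adjacent curves."""
    ts = np.linspace(0.0, 1.0, samples)
    pts = np.vstack([sample_bezier(ctrl_a, ts), sample_bezier(ctrl_b, ts)])
    # Reparameterize: curve a covers t in [0, 0.5], curve b covers [0.5, 1].
    tt = np.concatenate([0.5 * ts, 0.5 + 0.5 * ts])
    ctrl, *_ = np.linalg.lstsq(bernstein_matrix(degree, tt), pts, rcond=None)
    return ctrl


if __name__ == "__main__":
    a = [(0, 0), (1, 2), (2, 0)]            # quadratic curve
    b = [(2, 0), (3, -2), (4, 0), (5, 1)]   # adjacent cubic curve
    print(merge(a, b).round(3))
```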
AFMC: A Novel Automated Flow for Asynchronous Circuit Design
Shi Wei, Ren Hongguang, Wang Zhiying, Lu Hongyi, Wang Yourui, and Su Bo
2011, 48(4):  683-690. 
As CMOS technology enters the deep submicron design era, the richness of computational resources brings many problems, such as complex clock distribution, large clock skew and high power dissipation. The asynchronous circuit style is an efficient approach to these problems and is becoming significantly attractive to designers. The macrocell-based asynchronous circuit design flow can convert a synchronous circuit into an asynchronous counterpart efficiently using current EDA tools and industrial libraries for synchronous design. In this paper, a fully-automated macrocell-based asynchronous circuit design flow is presented and compared with the fully-automated desynchronization flow: the desynchronization flow generates asynchronous circuits from the gate-level netlist, whereas our flow works from the register transfer level specification. The proposed flow is used to implement a simple DLX RISC microprocessor in a UMC 0.18μm industrial library. The experiments show that the fully-automated flow can accelerate asynchronous circuit design and that the logic delay of the datapath in the macrocell-based asynchronous circuit can be significantly optimized. Furthermore, the newly proposed flow achieves an average speedup of 6% over the desynchronized DLX microprocessor on a subset of the MiBench benchmark suite.
A Fault-Tolerant Scheduling Algorithm with Software Fault Tolerance in Hard Real-Time Systems
Ding Wanfu, Guo Ruifeng, Qin Chenggang, and Guo Fengzhao
2011, 48(4):  691-698. 
Hard real-time systems are specified in terms of stringent reliability and strong timing constraints, since any failure to produce correct outputs in a timely manner may result in disaster. They are often involved in critical activities where human lives may be at stake. These characteristics emphasize the need to make the services provided by such systems fault-tolerant. The software fault-tolerant model is a cost-effective means that trades the quality of computation results for promptness in order to tolerate software faults. A new fault-tolerant scheduling algorithm based on the software fault-tolerant model is proposed to improve system fault resilience while reducing preemptions. To achieve the optimal configuration, a maximal dual-priority configuration search algorithm (MDPCSA), based on worst-case response time schedulability analysis, is presented. Together with MDPCSA, an optimal dual-priority configuration search algorithm (ODPCSA) is proposed. We show that ODPCSA is optimal in the sense that the fault resilience of task sets is maximized and preemptions are minimized under the proposed analysis. Compared with related algorithms, the proposed approach is shown by simulation to be more effective.
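The worst-case response-time analysis that such schedulability tests build on follows the standard fixed-point recurrence R_i = C_i + Σ_{j∈hp(i)} ⌈R_i/T_j⌉·C_j; a self-contained sketch is given below. The dual-priority configuration search itself layers priority-band choices on top of a test like this.

```python
import math


def response_time(task, higher, deadline):
    """task and entries of higher are (C, T) pairs; returns the worst-case
    response time R, or None if the fixed point exceeds the deadline."""
    r = task[0]
    while True:
        nxt = task[0] + sum(math.ceil(r / t) * c for c, t in higher)
        if nxt > deadline:
            return None  # unschedulable at this priority level
        if nxt == r:
            return r     # fixed point reached
        r = nxt


if __name__ == "__main__":
    tasks = [(1, 4), (2, 6), (3, 12)]  # (C, T) with deadline = period
    for i, t in enumerate(tasks):      # tasks listed in priority order
        print(t, "->", response_time(t, tasks[:i], t[1]))
```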
A Boundary-Table-Based Algorithm for Reconfigurable Resource Management and Hardware Task Scheduling
Yu Guoliang, Wu Weiguo, Yang Zhihua, and Qian Depei
2011, 48(4):  699-708. 
Reconfigurable computing (RC) is a computing paradigm that combines hardware efficiency with software flexibility. The management of reconfigurable resources and the scheduling of hardware tasks are two critical factors closely tied to the performance of RC. Focusing on the scheduling of hardware tasks on linearly dimensioned reconfigurable devices, a reconfigurable resource management method based on a boundary table (BT) is proposed, which uses the BT data structure to record the regional boundaries and their location relations in R-T coordinates. On the basis of this method, a new algorithm, BT-P (boundary table placement), is proposed to schedule and place hardware tasks. By using the weighted overlapping boundary length as the evaluation function and combining it with the resource management method, the proposed scheduling algorithm achieves optimization with small runtime overhead. Simulation results show that, compared with the stuffing algorithm, the proposed algorithm effectively increases chip utilization by 5% to 11% as the load rate changes, and lowers the task rejection rate by 9% to 11% as the load rate and relaxation factor change. The average scheduling and placement overhead per task is between 2 and 4 microseconds.
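A hypothetical one-dimensional sketch of boundary-table-style bookkeeping: free column intervals are kept sorted, split on placement, and merged on release. The real BT additionally tracks the time axis (the R-T plane) and scores candidate placements by weighted overlapping boundary length; first-fit below is only a placeholder policy.

```python
class BoundaryTable:
    def __init__(self, width):
        self.free = [(0, width)]  # sorted, disjoint [start, end) intervals

    def place(self, w):
        """First-fit: return the start column for a width-w task, or None."""
        for idx, (s, e) in enumerate(self.free):
            if e - s >= w:
                self.free[idx] = (s + w, e)
                if self.free[idx][0] == self.free[idx][1]:
                    self.free.pop(idx)  # interval fully consumed
                return s
        return None

    def release(self, s, w):
        """Return [s, s+w) to the free list, merging adjacent intervals."""
        self.free.append((s, s + w))
        self.free.sort()
        merged = [self.free[0]]
        for a, b in self.free[1:]:
            if a <= merged[-1][1]:
                merged[-1] = (merged[-1][0], max(merged[-1][1], b))
            else:
                merged.append((a, b))
        self.free = merged


if __name__ == "__main__":
    bt = BoundaryTable(100)
    p1, p2 = bt.place(30), bt.place(50)
    bt.release(p1, 30)
    print(bt.free)  # [(0, 30), (80, 100)]
```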
Fine-Grained Parallel Zuker Algorithm Accelerator with Storage Optimization on FPGA
Xia Fei, Dou Yong, Xu Jiaqing, and Zhang Yang
2011, 48(4):  709-719. 
In the field of RNA secondary structure prediction, the Zuker algorithm is one of the most popular methods using free energy minimization models. However, general-purpose computers, including parallel and multi-core machines, exhibit poor efficiency of no more than 50% on it. FPGA chips provide a new approach to accelerating the Zuker algorithm through fine-grained custom design. The Zuker algorithm shows complicated data dependence, in which the dependence distance is variable and the dependence direction crosses two dimensions. We propose a systolic-like array comprising one master PE and multiple slave PEs for fine-grained hardware implementation on FPGA. We partition tasks by columns and assign them to PEs for load balance, and we exploit data reuse schemes to reduce the loading of the matrix from external memory, using a sliding triangular window cache and transferring local elements to adjoining PEs. We also propose several methods, fitting curves with linear functions, replacing scattered points with register constants, compressing the address space, and shortening data lengths, which together reduce the energy parameter tables by more than 85%. The experimental results show a speedup factor of 18x over the ViennaRNA-1.6.5 software for a 2981-residue RNA sequence running on a PC platform with an AMD Phenom 9650 quad-core CPU, while the power consumption of our FPGA accelerator is only about 20% of the latter's.
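The Zuker recursion itself is energy-based and involved; as a structural stand-in, here is the classic Nussinov base-pair-maximization DP, which has the same triangular data-dependence pattern (variable dependence distance across two dimensions) that the column-partitioned systolic array exploits.

```python
def nussinov(seq, min_loop=3):
    """Maximum number of nested base pairs with hairpin loops >= min_loop."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    M = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):       # fill by anti-diagonals
        for i in range(n - span):
            j = i + span
            best = M[i][j - 1]                # case 1: j is unpaired
            for k in range(i, j - min_loop):  # case 2: j pairs with k
                if (seq[k], seq[j]) in pairs:
                    left = M[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + M[k + 1][j - 1])
            M[i][j] = best
    return M[0][n - 1]


if __name__ == "__main__":
    print(nussinov("GGGAAAUCC"))  # -> 3 base pairs
```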