ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 February 2011, Volume 48 Issue 2
Paper
A Parallel Coordinates-Based Measure Model and Its Applications
Hu Jun, Huang Houkuan, and Gao Fang
2011, 48(2):  177-185. 
Applying visualization to data mining, or establishing visual data mining methods, is a cross-disciplinary research subject spanning visualization and data mining. Such research must be built on a sound knowledge foundation: on one hand, the theoretical and technical basis of the method must be analyzed; on the other hand, the visual characteristics of the data mining objects and the observer's awareness of those characteristics must be considered. Two aspects mainly need to be considered when applying visualization to data mining. One is the separability of the mining algorithm, that is, whether the algorithm's process can be split without affecting the mining result. The other is to determine the key factors and measurement standards in the mining algorithm and to find out how they influence the mining result. In this paper, the characteristics and methods of applying visualization techniques are analyzed, and a method for determining the visualization data objects and the decomposition of data mining algorithms is proposed. The paper proposes a parallel coordinates-based measurement index system and a measure model, proves some related properties and conclusions, and finally gives an application case. The results show that the methods are simple and valid for visualizing data in data mining. In analyzing and processing visual objects, visualization techniques based on measurement indices can draw on appropriate mathematical methods for modeling and evaluation, contributing to the research and application of visual data mining.
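As a toy illustration of pairing a parallel-coordinates view with a computable measure, the sketch below normalizes records onto common axes and counts polyline crossings between adjacent axes. The crossing-count index and the normalization are our assumptions for illustration, not the paper's actual index system.

```python
# Toy parallel-coordinates measure: count polyline crossings between
# adjacent axes (an illustrative index, not the paper's measure model).
import numpy as np

def normalize(data):
    """Scale each column into [0, 1] so every axis is comparable."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / np.where(hi > lo, hi - lo, 1.0)

def crossing_index(data):
    """Crossings between adjacent axes; fewer crossings = a more 'ordered' plot."""
    x = normalize(np.asarray(data, dtype=float))
    n, d = x.shape
    crossings = 0
    for a in range(d - 1):
        for i in range(n):
            for j in range(i + 1, n):
                # Records i and j cross between axis a and a+1 iff their
                # vertical ordering flips from one axis to the next.
                if (x[i, a] - x[j, a]) * (x[i, a + 1] - x[j, a + 1]) < 0:
                    crossings += 1
    return crossings

rng = np.random.default_rng(0)
print(crossing_index(rng.random((20, 4))))
```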
Research on Decomposition Strategy for Knowledge Tree of Characteristic Predicate
Wu Xiangjun, Bian Rui, Ling Yingbiao, and Jiang Yunfei
2011, 48(2):  186-194. 
Search space reduction is an essential issue in AI planning. The knowledge tree of a predicate is a special tree structure that shows all actions achieving the same predicate in a planning domain. The planning tree of a predicate is constructed recursively from knowledge trees of predicates, and it reflects how difficult a predicate is to achieve at the current planning state. The size of the knowledge tree directly affects the efficiency of planning tree generation, and thus the planning efficiency. However, not all preconditions of the actions in a knowledge tree can usually be satisfied at the same time during planning. Therefore, this paper proposes a principle of knowledge tree decomposition, a strategy for decomposing knowledge trees based on characteristic preconditions, and the corresponding decomposition algorithm. For any domain, the algorithm decomposes the knowledge tree of a predicate into a number of smaller knowledge subtrees that are more targeted to specific states. Using knowledge subtrees during planning avoids some unnecessary searching of actions and improves planning efficiency. Finally, experimental results show that the decomposition algorithm is efficient.
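A minimal sketch of the decomposition idea: group the actions that achieve one predicate by a "characteristic" precondition, so that at a given state only the matching subtree needs to be searched. The names and the grouping criterion below are illustrative assumptions, not the paper's exact algorithm.

```python
# Group actions achieving one predicate into subtrees keyed by a
# characteristic precondition; prune subtrees that cannot fire in a state.
from collections import defaultdict

def decompose(actions, characteristic):
    """actions: list of (name, preconditions) pairs achieving one predicate.
    characteristic: function picking the characteristic precondition of an action."""
    subtrees = defaultdict(list)
    for name, pre in actions:
        subtrees[characteristic(pre)].append((name, pre))
    return dict(subtrees)

def relevant_subtrees(subtrees, state):
    """Keep only subtrees whose characteristic precondition holds in `state`,
    skipping actions that cannot fire anyway."""
    return {p: a for p, a in subtrees.items() if p in state}

# Toy usage: two actions achieve `have(key)` but need incompatible locations.
acts = [("pick_up", {"at(room1)", "key_in(room1)"}),
        ("steal",   {"at(room2)", "guard_asleep"})]
trees = decompose(acts, characteristic=lambda pre: sorted(pre)[0])
print(relevant_subtrees(trees, state={"at(room1)", "key_in(room1)"}))
```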
Simulation and Implementation of Gene Expression Regulation in E-Cell Model Analog-Cell
Han Xiaosong, Pei Zhili, Lu Xinhua, Ji Zhaohua, and Liang Yanchun
2011, 48(2):  195-202. 
As one of the most important research fields of artificial life (ALife), the electronic cell (E-cell) model explores new biological phenomena and rules in silico. The Analog-Cell, the first E-cell model of its kind in China, graphically simulates the gene expression processes of a eukaryotic cell at the molecular level, including transcription, translation, and gene mutation. To simulate gene expression more vividly and accurately, many important regulatory factors and enzymes for gene expression regulation are added to the extended Analog-Cell. Algorithms for the reaction mechanisms and processes in the phases of transcription, mRNA processing, and translation, such as DNA unwinding, mRNA capping, aminoacylation, translation initiation, and translation termination, are designed and implemented with a state-controlling mechanism. Under this mechanism, each reactant in the Analog-Cell takes different state values in different states, and reactions are controlled by shifting the state values of the relevant reactants, which keeps material and energy metabolism strictly ordered. Simulated results for several instances are shown graphically, and they indicate that the designed algorithms are consistent with biological rules and that the Analog-Cell can simulate gene expression regulation graphically and effectively. The advantages of the Analog-Cell and future prospects are also discussed.
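A minimal sketch of the state-controlling idea described above: each reactant carries a state value, and a reaction step fires only when its inputs are in the required states, then shifts them to new states. The state names and the capping example are illustrative assumptions, not the Analog-Cell's actual encoding.

```python
# State-controlled reaction step: fire only when every reactant is in the
# required state, then shift states (toy model of the mechanism above).
def step(reactants, required, shifted):
    """reactants: dict name -> state value; required/shifted: dicts over names."""
    if all(reactants.get(n) == s for n, s in required.items()):
        reactants.update(shifted)
        return True
    return False

cell = {"mRNA": "nascent", "capping_enzyme": "free"}
# mRNA capping proceeds only on a nascent transcript with a free enzyme.
fired = step(cell,
             required={"mRNA": "nascent", "capping_enzyme": "free"},
             shifted={"mRNA": "capped", "capping_enzyme": "free"})
print(fired, cell)  # True {'mRNA': 'capped', 'capping_enzyme': 'free'}
```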
Flocking Movement of Delayed Multi-Agent Systems with Leader-Following
Yang Hongyong, Cao Kecai, and Zhang Siying
2011, 48(2):  203-208. 
With the development of computer, network, and communication technology, the modeling and application of multi-agent systems has advanced greatly, including alignment control of unmanned air vehicles (UAVs), distribution control of sensor networks, and attitude control of satellites. Recently, more and more researchers have turned to the dynamical alignment control of multi-agent systems. In this paper, the flocking movement of mobile multi-agent systems with heterogeneous communication delays and heterogeneous input delays is studied. The system is supposed to consist of n agents and a leader, with a static directed interconnection graph in which the leader is a globally reachable node. By applying the generalized Nyquist criterion in the frequency domain, a consensus algorithm with heterogeneous communication delays and heterogeneous input delays is analyzed. By utilizing Gershgorin's disc theorem and curvature theory, the flocking motion of the delayed multi-agent algorithm with leader-following is studied, and a decentralized consensus convergence condition is obtained that ensures the flocking movement of the multi-agent systems. This consensus condition uses only the local information of each agent, and it depends on the input delays but is independent of the communication delays. Finally, computer simulations show the validity of the results.
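For orientation, a standard leader-following consensus protocol with heterogeneous delays takes the generic form below; the notation is ours, given as a textbook-style reference rather than the paper's exact protocol.

```latex
% Generic leader-following consensus dynamics with input delay T_i and
% communication delays tau_ij (standard form, not copied from the paper):
\dot{x}_i(t) = -\sum_{j \in N_i} a_{ij}\,
    \bigl(x_i(t - T_i) - x_j(t - \tau_{ij} - T_i)\bigr)
  \; - \; b_i \bigl(x_i(t - T_i) - x_0(t - \tau_{i0} - T_i)\bigr)
```

Here $x_0$ is the leader's state, $a_{ij}$ are adjacency weights, and $b_i > 0$ only for agents that observe the leader directly; the convergence condition described in the abstract constrains such local gains using the input delays $T_i$ alone.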
Computing the Minimal Hitting Sets Based on Dynamic Maximum Degree
Zhang Liming, Ouyang Dantong, and Zeng Hailin
2011, 48(2):  209-215. 
Many theoretical and practical problems can be reduced, at least in part, to an instance of the minimal hitting set (MHS) problem or one of its relatives, such as the minimum set cover problem, model-based diagnosis, and the teachers-and-courses problem. Generating the MHS cluster of a collection of sets is NP-hard, which necessitates heuristic approaches for large problems. Several exhaustive algorithms have been presented to solve the MHS problem. In this paper, we formalize the computation of hitting sets by introducing the SE-Tree, which produces all resolutions gradually. Because closed nodes are added to the SE-Tree, non-minimal hitting sets are never produced, and no true resolutions are missed by pruning. We then introduce the concepts of un-extended element degree and node degree, and extend the nodes of the SE-Tree in descending order of un-extended element degree. With this extension order, hitting sets are generated earlier and the number of generated nodes is reduced. Moreover, the node degree can directly determine whether the set of a node is a hitting set, avoiding a separate hitting-set computation. Results show that the corresponding algorithm is easy to implement and substantially more efficient.
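The sketch below illustrates the ordering idea only: enumerate candidates from high-degree elements first and prune supersets of already-found hitting sets so only minimal ones survive. It is a brute-force stand-in for the SE-Tree procedure, meant to show the concepts, not the paper's algorithm or its efficiency.

```python
# Brute-force minimal hitting sets with degree-ordered elements and
# superset pruning (exponential; for illustration only).
from itertools import combinations

def minimal_hitting_sets(sets):
    # Order elements by descending degree: how many input sets they hit.
    universe = sorted({e for s in sets for e in s},
                      key=lambda e: -sum(e in s for s in sets))
    found = []
    for size in range(1, len(universe) + 1):
        for cand in combinations(universe, size):
            c = set(cand)
            if any(h <= c for h in found):
                continue  # a found hitting set is a subset: c is non-minimal
            if all(c & s for s in sets):
                found.append(c)
    return found

print(minimal_hitting_sets([{1, 2}, {2, 3}, {1, 3}]))
# Three size-2 minimal hitting sets; no size-1 set hits all three inputs.
```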
Why-Questions Answering for Reading Comprehension Based on Topic and Rhetorical Identification
Zhang Zhichang, Zhang Yu, Liu Ting, and Li Sheng
2011, 48(2):  216-223. 
As an important branch of question answering, an automatic reading comprehension (RC) system reads a short passage of text and answers a series of questions about it. Among the question types studied in RC (who, what, when, where, why), answer extraction for why-questions must use the discourse structure of the text, and the answer is not a named entity. Considering these differences between why-questions and other types, this paper presents an answer sentence extraction approach for why-questions in reading comprehension based on question topic and causal rhetorical relation identification. It uses a machine learning model to rank sentences in the text according to their probability of being an answer sentence. The model uses two kinds of features, for identifying the text sentence corresponding to the question topic and for identifying a causal rhetorical relation between the question topic and the sentence context, respectively. The idf and semantic-role similarity features are used to identify the sentence corresponding to the question topic; the other features, including cue phrases, special semantic roles, causal relation entailment probabilities between words mined from large-scale document collections, and the position and expression format of the sentence context, are used to identify the causal rhetorical relation. Experimental results on the Remedia corpus show that the method significantly improves the performance of why-question answering in reading comprehension.
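To make the ranking setup concrete, here is a deliberately tiny scorer with just two features, a topic-overlap proxy and a causal cue-phrase flag, standing in for the paper's much richer feature set (idf, semantic roles, mined entailment probabilities, position, format). The cue list, weights, and naive tokenization are illustrative assumptions.

```python
# Toy why-question answer-sentence scorer: weighted sum of a topic-overlap
# feature and a causal cue-phrase feature (stand-in for a learned ranker).
CUES = {"because", "since", "due to", "so that", "therefore"}

def score(question, sentence, w_topic=1.0, w_cue=0.5):
    q = set(question.lower().split())          # naive tokenization
    s = sentence.lower()
    overlap = len(q & set(s.split())) / max(len(q), 1)  # topic-match proxy
    cue = any(c in s for c in CUES)                     # causal-rhetoric proxy
    return w_topic * overlap + w_cue * cue

passage = ["The picnic was cancelled because it rained all day.",
           "Everyone went home early."]
q = "Why was the picnic cancelled?"
print(max(passage, key=lambda s: score(q, s)))  # picks the causal sentence
```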
An Estimation of Query Language Model Based on Statistical Semantic Clustering
Pu Qiang, He Daqing, and Yang Guowei
2011, 48(2):  224-231. 
How to effectively generate clusters and use the information within them is an important research direction in information retrieval. Assuming that a document contains a set of independent hidden topics, a document is viewed as an interaction of independent hidden topics with some noise. A novel semantic clustering technique using independent component analysis (ICA) is proposed based on this assumption. The topic separation capability of ICA groups a set of documents into different semantic clusters according to the hidden independent components in the semantic space. Within the language modeling framework, a particular semantic cluster is activated by the user's initial query. A new query language model is then estimated from the user's initial query model and a feedback semantic topic model, which is estimated from the cluster information in the activated semantic cluster. The estimated query model is evaluated in experiments on five TREC data sets. The experimental results show that the semantic-cluster-based query model significantly improves retrieval performance over traditional query models and other cluster-based language models. The main contribution to the improved performance comes from estimating the query model on the semantic cluster that is most similar to the user's query.
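A minimal sketch of the activation step, assuming toy data: recover hidden components from a term-document matrix with FastICA, then pick the component most aligned with a query vector. Treating ICA components as semantic clusters follows the idea above; the random matrix and cosine scoring are our simplifications, not the paper's estimation procedure.

```python
# Sketch: ICA "topics" from a toy term-document matrix, then find the
# component a query activates (its top-loading documents would feed the
# feedback topic model).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
docs = rng.random((30, 50))                 # 30 documents x 50 terms (toy)
ica = FastICA(n_components=5, random_state=0, max_iter=1000)
doc_topics = ica.fit_transform(docs)        # document loadings on 5 topics
topics = ica.components_                    # topic x term matrix

query = rng.random(50)                      # toy query in term space
sims = topics @ query / (np.linalg.norm(topics, axis=1)
                         * np.linalg.norm(query))
active = int(np.argmax(np.abs(sims)))       # the activated semantic cluster
print(active, np.argsort(-np.abs(doc_topics[:, active]))[:5])
```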
Software Reliability Modeling with Logistic Test Coverage Function
Li Haifeng, Li Qiuying, and Lu Minyan
2011, 48(2):  232-240. 
Test coverage is a good indicator of testing completeness and effectiveness, and it affects software reliability and defect coverage; incorporating test coverage information can further improve the evaluation results of software reliability models. Although traditional software reliability growth models have been widely applied to reliability prediction, their prediction accuracy can likely be improved by adding other important factors affecting final software reliability, and test coverage is believed to be one such factor. Due to the combined effects of software structure and the learning factor on testing, the test coverage growth rate may first increase and then decrease; in other words, the test coverage function may be an S-shaped curve, which in many situations is well and flexibly described by the Logistic function. Hence, this paper uses the Logistic function to describe test coverage growth behavior. Based on this Logistic test coverage function, a defect prediction model relating test coverage to fault detection is presented. Furthermore, by combining the NHPP modeling framework with the Logistic test coverage function, a new software reliability model considering test coverage is proposed. Several case studies are presented and analyzed. The results indicate that, compared with several existing models, the new defect prediction model and the reliability model based on the Logistic test coverage function provide a substantial improvement in goodness of fit and applicability.
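For reference, the S-shaped Logistic growth form mentioned above can be written generically as follows; the symbols a, b, k are generic positive parameters, not the paper's notation.

```latex
% Logistic test coverage growth (generic form):
c(t) = \frac{a}{1 + b\, e^{-k t}}, \qquad a, b, k > 0
```

Its growth rate $c'(t) = a b k\, e^{-kt} / (1 + b e^{-kt})^2$ peaks at $t = \ln(b)/k$ (for $b > 1$), so the coverage rate first increases and then decreases, matching the trend described in the abstract.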
A Method for Generating Software Architecture Models from Process Algebra Specifications
Zhu Yi, Huang Zhiqiu, Zhou Hang, and Liu Linyuan
2011, 48(2):  241-250. 
The transformation from requirements specifications to software architecture is a hot topic in software engineering. UML-RT (unified modeling language for real-time) is widely used in modeling the software architectures of real-time systems, but UML-RT models are often inaccurate or ambiguous because they are created from natural language specifications, so they need to be given formal semantics. Process algebra is a formal method for the communication problems of concurrent systems, with precise syntax and semantics that facilitate automatic verification and validation. TCSP is a real-time extension of the process algebra CSP (communicating sequential processes), well suited to specifying the timing-constraint behaviors of real-time systems. This paper proposes a method for generating software architecture models from process algebra specifications. First, a transformation framework from natural language specifications to software architecture models is defined. Second, TCSP is used as the requirements specification of real-time systems, and software architecture models are generated from process algebra specifications through a transformation mechanism established between TCSP and UML-RT. Finally, an example validates the effectiveness of this method for modeling real-time software. Experimental results show that the UML-RT models generated by this method can increase the reliability of software architecture design for real-time systems.
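To suggest the flavor of such a transformation mechanism, the snippet below lists some common informal correspondences between process-algebra constructs and UML-RT concepts; these pairings are our assumptions for illustration and are not the paper's verified mapping rules.

```python
# Toy rule table: process-algebra constructs -> UML-RT concepts
# (informal correspondences, not the paper's mapping).
RULES = {
    "process":           "capsule",
    "channel":           "port + connector",
    "parallel (P || Q)": "capsule composition",
    "prefix (a -> P)":   "transition in the capsule's state machine",
    "timeout (WAIT t)":  "timer-triggered transition",
}
for src, dst in RULES.items():
    print(f"TCSP {src:<20} => UML-RT {dst}")
```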
Software Superfamilies Based on Sub-Graph Significance Profile
Zhang Lin and Zhang Li
2011, 48(2):  251-258. 
The significance of triads appearing in open source software is studied. It is found that local structure tends to become more networked as software scale increases: tree-style subgraphs tend to decrease, while most closed-style subgraphs tend to increase. After comparing triad significance profiles and correlations between open source software networks, we find that software networks can be divided into three clusters, consistent with three of the four well-known superfamilies that contain networks from various domains. Most software networks have local structure similar to that of biological networks. Software scale appears to be one cause of differing subgraph significance profiles: as scale increases, the triad significance profiles become more similar. The experimental results show that software network topology is very similar to that of biological networks, so some of the research methods, results, and evolution mechanisms of biological networks can be very helpful for software network research. The results also show that when analyzing software networks, particularly small-scale ones, local structural features should be analyzed first; depending on those local characteristics, different research approaches and strategies may be needed rather than treating all networks equally.
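A sketch of computing a triad significance profile, under simplifying assumptions: z-score each directed triad count against same-size random graphs and normalize the vector to unit length. Using G(n, m) random graphs as the null model is our shortcut; motif studies typically use degree-preserving randomization.

```python
# Triad significance profile sketch: z-scores of triad counts vs. a
# random-graph null model, normalized to unit length.
import numpy as np
import networkx as nx

def triad_sp(G, samples=20, seed=0):
    census = nx.triadic_census(G)
    keys = sorted(census)
    real = np.array([census[k] for k in keys], float)
    rng = np.random.default_rng(seed)
    rand = np.array([[nx.triadic_census(
        nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(),
                            seed=int(rng.integers(10**9)), directed=True))[k]
        for k in keys] for _ in range(samples)], float)
    z = (real - rand.mean(0)) / (rand.std(0) + 1e-9)
    return dict(zip(keys, z / (np.linalg.norm(z) + 1e-9)))

G = nx.gnp_random_graph(60, 0.08, seed=1, directed=True)  # stand-in network
print(triad_sp(G))
```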
Image Enhancement in the Compressed Domain Based on Retinex Theory
Wang Ronggui, Zhang Xinlong, Zhang Xuan, and Fang Shuai
2011, 48(2):  259-270. 
Existing image enhancement algorithms in the compressed domain cannot preserve detail and color information when enhancing image contrast: they mostly treat the DCT coefficients uniformly with the same strategy, or fail to suppress artifacts when treating coefficients differently. A new image enhancement algorithm in the DCT compressed domain based on Retinex theory is proposed. Following Retinex theory, the algorithm divides the DCT coefficients into an illumination component (the DC coefficients) and a reflection component (the AC coefficients). The DC coefficients are adjusted to compress the image dynamic range by mapping the illumination component to an ideal range, using two simple but powerful functions. The AC coefficients, representing the reflection component, are adjusted to enhance local detail using a new definition of the spectral content measure. This definition is motivated by the fact that human perception of image detail is based on the ratio of high to low frequency in the visual system. Block artifacts are suppressed by a block decomposing strategy whose threshold is set automatically according to the characteristics of the image to be enhanced. Experimental results show that, compared with traditional Retinex and DCT enhancement algorithms, the proposed algorithm is effective at detail enhancement and color preservation while also suppressing artifacts.
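A simplified sketch of the DC/AC split described above: treat each 8x8 block's DC coefficient as illumination and the AC coefficients as reflection, compress the illumination's dynamic range, and mildly boost the detail. The gamma-style mapping and fixed gain are our illustrative stand-ins for the paper's mapping functions and spectral content measure.

```python
# Retinex-style adjustment of one 8x8 DCT block: remap DC (illumination),
# boost AC (reflection/detail). Illustrative functions, not the paper's.
import numpy as np
from scipy.fft import dctn, idctn

def enhance_block(block, gamma=0.7, ac_gain=1.2):
    c = dctn(block.astype(float), norm="ortho")
    dc = c[0, 0] / 8.0                      # orthonormal DCT: DC = 8 * mean
    c[0, 0] = 8.0 * 255.0 * (max(dc, 0.0) / 255.0) ** gamma  # brighten darks
    c[1:, :] *= ac_gain                     # boost all AC coefficients ...
    c[0, 1:] *= ac_gain                     # ... including the first row
    return np.clip(idctn(c, norm="ortho"), 0, 255)

block = np.full((8, 8), 40.0)
block[2:6, 2:6] = 60.0                      # a dark, low-contrast block
print(enhance_block(block).round(1))        # brighter, with detail amplified
```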
Detection for Flood Change with SAR Images Based on Fusion and T-Distribution
Li Jinji, Jiao Licheng, Zhang Xiangrong, and Yang Dongdong
2011, 48(2):  271-280. 
Based on fusion and a T-distribution model, a new approach for detecting flood changes with multi-temporal SAR images is presented. Firstly, by combining the advantages of image differencing and the log-ratio operator, a novel fusion strategy based on experience is introduced; a final difference image with better visual quality is obtained by fusing the difference image and the log-ratio image. From the histogram of the fused difference image, the ranges of the definitely changed and definitely unchanged classes are obtained, along with the fuzzy range between them, in which pixels cannot be identified as changed or unchanged. Under a T-distribution assumption for the fuzzy range of the histogram, a thresholding approach based on the Kittler-Illingworth (KI) threshold selection criterion (TM_KI) is proposed. Finally, the change-detection map is produced by applying the proposed thresholding procedure to the fused difference image. Experimental comparisons confirm that the proposed method not only reduces the effect of speckle noise and enhances the subtle changed areas caused by flooding, but also effectively detects small changed areas, thus improving change detection performance.
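The sketch below shows the fusion step only: combine the absolute difference and the log-ratio of two co-registered SAR intensity images into one difference image and apply a crude global threshold. The equal weights and the mean-plus-two-sigma threshold are our placeholders for the paper's empirical fusion rule and the TM_KI criterion.

```python
# Fuse difference and log-ratio images, then threshold (toy stand-in for
# the paper's fusion rule and KI-based thresholding).
import numpy as np

def fused_difference(x1, x2, w=0.5, eps=1e-6):
    d = np.abs(x2 - x1)
    lr = np.abs(np.log((x2 + eps) / (x1 + eps)))
    norm = lambda a: (a - a.min()) / (a.max() - a.min() + eps)
    return w * norm(d) + (1 - w) * norm(lr)

rng = np.random.default_rng(0)
before = rng.gamma(4, 25, (64, 64))              # speckled "pre-flood" image
after = before.copy()
after[20:40, 20:40] *= 0.2                       # a flooded (darkened) patch
D = fused_difference(before, after)
changed = D > D.mean() + 2 * D.std()             # crude global threshold
print(changed.sum(), "pixels flagged as changed")
```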
Image Guided Computer Assisted Microwave Ablation for Liver Cancer
Zhai Weiming, Sheng Lin, Song Yixu, Zhao Yangnan, Wang Hong, and Jia Peifa
2011, 48(2):  281-288. 
A novel computer-assisted, image-guided surgery method for liver cancer microwave ablation is proposed. GPU-accelerated visualization is used to reconstruct the 3D patient anatomy from preoperative CT images in real time. Thermal field simulation for percutaneous liver cancer microwave ablation is introduced, including calculation of the specific absorption rate of microwave energy, calculation of the tissue temperature field using the Pennes bioheat transfer equation, and calculation of the tissue necrosis zone using the Arrhenius equation. Preoperative surgical planning is performed interactively by the surgeon until the optimal surgery plan is achieved, including the pose of the microwave probe and the microwave working time and power. Intraoperative sonography guides the placement of the probe in the patient, and the postoperative CT is visualized in 3D to evaluate the accuracy of the thermal field simulation and the placement of the microwave probe. Extensive experiments have been performed on both porcine liver and patients with liver cancer, and the actual necrosis zones were measured in the patients' postoperative CT images. The experimental results show that this method is accurate for surgery simulation and could be used in image-guided microwave ablation surgery.
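For reference, the two equations named above have the following standard forms; the notation here is the usual textbook one, not copied from the paper.

```latex
% Pennes bioheat transfer, with microwave power deposition Q_mw
% (Q_mw = rho * SAR) and metabolic heat Q_m:
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot (k\, \nabla T)
  + \omega_b \rho_b c_b\, (T_a - T) + Q_m + Q_{mw}

% Arrhenius thermal damage integral; tissue is commonly taken as
% necrotic where Omega >= 1:
\Omega(t) = \int_0^{t} A\, e^{-E_a / (R\, T(\tau))}\, d\tau
```

Here $\omega_b$ is the blood perfusion rate, $T_a$ the arterial temperature, $A$ and $E_a$ the tissue's Arrhenius frequency factor and activation energy, and $R$ the universal gas constant.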
Constructing Technology for Hypervideo Based on Sketch Interface
Yang Haiyan, Chen Jia, Ma Cuixia, He Lili, Teng Dongxing, Dai Guozhong, and Wang Hongan
2011, 48(2):  289-295. 
Due to videos temporally varying nature, it is difficult for user to capture video main content at a glance. Constructing a hypervideo always depends on complex computer vision techniques. It leads to high cognitive load and complicated operations to users. Traditional operations based on time bar or video frame sequence provide ways to interact with video, but it still inconvenient for users to edit or navigate videos according to video semantic flexibly. In order to solve these problems, a novel method based on sketch is proposed with the purpose of providing natural and efficient interaction for hypervideo construction. Due to the properties of abstract and vague, sketch is very suitable to describe and enrich video content, and can bridge the gap between low level features and high semantics. In this paper, we analyze complex semantic relation between different video resources and divide sketch into two types according to different level of video semantic to assist hypervideo construction. Two key problems about sketch creation are resolved. One is sketch similarity matching, the other is fusion of sketch and video. Finally, an example of application is given to illustrate how to realize efficient hypervideo construction by sketch based video semantic representation.
Adaptive Controlling Mechanism for Data Duplicates Based on Prediction in WSN
Zheng Mingcai, Zhang Dafang, Luo Jian, and Li Wenwei
2011, 48(2):  296-305. 
Minimum hop routing (MHR) wireless sensor networks have potential advantages such as the shortest communication paths, the least delay in data sinking, and the lowest energy dissipation, but these merits cannot be brought into full play in original MHR wireless sensor networks. To exploit them fully, an adaptive controlling mechanism for data packet duplicates based on prediction is presented, forming the MHR-DC data sinking model. In MHR-DC wireless sensor networks, the number of hops over which a data packet is forwarded to the sink node stays nearly the same as in MHR networks; the traffic load generated by every source node can be transported to the sink node with high reliability with the help of an agent node in the same gradient level as the source node; the energy efficiency of data sinking is improved by reducing data duplicates during hop-by-hop packet transport; the traffic load is balanced by steering around regions in which nodes carry heavy forwarding load; the low-delay merit is enhanced by reducing the collision probability of data packets in the shared wireless channel; and the overall performance is optimized by improving reliability and decreasing energy dissipation. Theoretical analysis and simulation results validate the effectiveness of the mechanism.
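A toy model of the duplicate-control idea: a node forwards a packet only while the number of copies it has seen is below a predicted target, suppressing redundant transmissions. The fixed copy threshold below is an illustrative stand-in for the paper's adaptive, prediction-based mechanism.

```python
# Threshold-based duplicate suppression at one forwarding node
# (toy stand-in for adaptive, prediction-based duplicate control).
from collections import Counter

class Node:
    def __init__(self, expected_copies=2):
        self.seen = Counter()
        self.expected = expected_copies  # predicted copies needed for reliability

    def on_packet(self, pkt_id):
        """Return True if this node should forward (another copy is still useful)."""
        self.seen[pkt_id] += 1
        return self.seen[pkt_id] <= self.expected

n = Node()
print([n.on_packet("p1") for _ in range(4)])  # [True, True, False, False]
```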
Research on Active Defense Technology in Network Security Based on Non-Cooperative Dynamic Game Theory
Lin Wangqun, Wang Hui, Liu Jiahong, Deng Lei, Li Aiping, Wu Quanyuan, and Jia Yan
2011, 48(2):  306-316. 
Game theory, an important part of artificial intelligence, has been applied to network defense with good results. Most previous studies use static models, but such models cannot follow the evolution of attackers' strategies. In this paper, an active defense model based on non-cooperative, complete-information dynamic game theory is given: an attack-defense game tree is generated by adding virtual nodes to the original attack-defense graph. Based on the attack-defense game tree, the best defense strategies for the current network environment are obtained by solving the Nash equilibrium in different situations. For the complete-information and incomplete-information scenarios, two algorithms are proposed respectively. The analysis and experimental results show that the complexity of the algorithms is no worse than that of other similar work, and that sensible results are found not only in the complete-information scenario but also in the incomplete-information case. Compared with the mixed-strategy Nash equilibrium produced by a static game, which is described in probabilistic form, the results given by the subgame perfect Nash equilibrium are easier to understand and to act on. Network defense research based on game theory should find good application in future network security products.
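Subgame perfect equilibria of a finite game tree can be found by backward induction, as the sketch below shows on a tiny made-up attack-defense tree; the structure and the (defender, attacker) payoffs are toy values, not the paper's model or algorithms.

```python
# Backward induction on a toy attack-defense game tree: each player picks
# the child maximizing their own payoff, yielding the subgame perfect path.
def solve(node):
    """node: ('leaf', (u_def, u_att)) or (player, {action: child, ...})."""
    kind, body = node
    if kind == "leaf":
        return body, []
    idx = 0 if kind == "defender" else 1
    action, (payoff, plan) = max(
        ((a, solve(c)) for a, c in body.items()),
        key=lambda item: item[1][0][idx])
    return payoff, [f"{kind}:{action}"] + plan

tree = ("defender", {
    "ignore": ("leaf", (-5, 3)),                 # attack succeeds
    "patch": ("attacker", {
        "attack":  ("leaf", (-1, -2)),           # patched: attack fails
        "give_up": ("leaf", (-1, 0)),
    }),
})
print(solve(tree))  # ((-1, 0), ['defender:patch', 'attacker:give_up'])
```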
Integrity Measurement Model According to Needs Under Open Network Environment
Liu Changping, Fan Mingyu, and Wang Guangwei
2011, 48(2):  317-326. 
Integrity measurement is a hot research topic in trusted computing. The potential defects of existing integrity measurement models in open network environments are analyzed, and, aiming at these defects, an on-demand model for integrity measurement is proposed. In this model, interrogators define their integrity measurement policy according to their own security needs; a measurement policy is made up of code and data-flow integrity measurement policies. The interrogatee measures only the integrity of the relevant components defined in the measurement policy, not all possible components. An interrogatee may be queried by many interrogators simultaneously and constructs a separate integrity measurement instance for each interrogator. Compared with existing models, the notable feature of this model is the self-defined measurement policy, which conveniently meets each interrogator's needs. Meanwhile, the model supports integrity measurement of both code and data flow, overcoming the one-sidedness of measuring only part of the object. A remote attestation and prototype system based on this model is built in a streaming media service network, in which servers measure the integrity of the client's media player before service and measure the integrity of the media data during service. Experiments in this network indicate that the model solves the identified problems and can be adapted to open network environments with acceptable performance cost.
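A minimal sketch of "measure according to the interrogator's policy": hash only the components the policy names, rather than everything on the interrogatee. The component table and policy format are illustrative assumptions, not the paper's attestation protocol.

```python
# Policy-driven integrity measurement: each interrogator's policy selects
# which components get hashed (toy model of the on-demand idea).
import hashlib

COMPONENTS = {
    "player.bin": b"\x7fELF...player",   # code to measure before service
    "codec.so":   b"\x7fELF...codec",
    "stream.dat": b"frame0 frame1",      # data flow to measure during service
}

def measure(policy):
    """policy: iterable of component names chosen by one interrogator."""
    return {name: hashlib.sha256(COMPONENTS[name]).hexdigest()
            for name in policy if name in COMPONENTS}

# One interrogator cares only about code; another also about the data flow.
print(measure(["player.bin"]))
print(measure(["player.bin", "stream.dat"]))
```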
Energy-Efficient Real-Time Scheduling Algorithm with Accrual Utility under Energy Bounds
Han Jianjun, Wu Xiaodong, and Li Qinghua
2011, 48(2):  327-337. 
This paper presents an energy-efficient scheduling algorithm with accrual utility for battery-powered embedded real-time systems. The real-time tasks considered here synchronize to access shared resources in a mutually exclusive manner. Under these constraints, the goal of a scheduling algorithm is to yield more utility within a limited energy supply while satisfying timeliness and task synchronization requirements. We propose a two-step energy-efficient algorithm (TSEEA) consisting of two phases: a static algorithm that runs offline and determines the static execution speeds of all tasks under conservative conditions, and a dynamic algorithm that releases and reclaims slack time and effectively tunes the execution speeds of candidate tasks in a timely manner by combining the static information with the dynamic performance demands observed at runtime. Compared with other algorithms, our approach guarantees that every task meets its deadline constraint if the system energy supply is sufficient. Consequently, our algorithm fully exploits the limited energy supply while yielding high utility to the system, and it also strives to reduce time complexity. The experiments validate our analytical results and demonstrate that the proposed algorithm outperforms other existing algorithms in terms of accrual utility.
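To give a feel for the two phases, the sketch below uses the textbook dynamic-voltage-scaling rules for independent periodic tasks under EDF: the lowest constant speed is the total utilization, and slack from early completions can slow later jobs. These simple rules stand in for TSEEA's own static and dynamic algorithms, which additionally handle synchronization and energy bounds.

```python
# Textbook DVS stand-in for a two-phase (static + slack-reclaiming) scheme.
def static_speed(tasks):
    """tasks: (worst_case_cycles, period) pairs. Under EDF, feasibility
    requires sum(c_i / (s * p_i)) <= 1, so the minimum speed is the sum."""
    return sum(c / p for c, p in tasks)

def reclaimed_speed(base, wcet_time, actual_time, remaining_window):
    """Spend the slack from an early completion on slowing the next job."""
    slack = wcet_time - actual_time
    return base * remaining_window / (remaining_window + slack)

tasks = [(2e6, 10e-3), (3e6, 20e-3)]       # cycles, seconds
s = static_speed(tasks)                     # 3.5e8 cycles/s
print(s, reclaimed_speed(s, 2e6 / s, 1e6 / s, 5e-3))
```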
A Measurement Method of Interactive Performance for Desktop Computing System Using Restricted Behavior Policy
Niu Yan, Yang Chun, Xia Yubin, and Cheng Xu
2011, 48(2):  338-345. 
Measuring interactive performance is an important basis for analyzing and improving desktop computing systems. The difficulty lies in obtaining exact measurement results conveniently: exact measurement requires establishing the correspondence between output events and input events. This paper analyzes the restrictions necessary for finding the output events corresponding to input events, and then proposes an interactive performance measurement method using a restricted behavior policy. The method consists of a session recorder assistant and a user behavior validator: the session recorder assistant records user events and transforms unqualified events on the fly, and the user behavior validator checks whether all recorded events are valid. Through these two steps, all events are ascertained, yielding exact measurement results. We implemented an interactive performance measurement tool named VNC-IPA based on this method and conducted experiments comparing the accuracy of VNC-IPA and VNCplay. Treating the slow-motion result as the true value, the average relative error of VNC-IPA is 35.4% lower than that of VNCplay. The experimental results show that our method can conveniently obtain exact measurement results for interactive events. Moreover, the tool is independent of the operating system and can be used on both Windows and Linux.
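The core measurement step can be pictured as pairing each input event with the first subsequent output (screen-update) event, so the interval is an interaction latency; the pairing rule below is a toy simplification of the correspondence the paper establishes through its restrictions.

```python
# Pair each input event with the first later output event and report the
# latency (toy model of input/output event correspondence).
def latencies(inputs, outputs):
    """inputs/outputs: sorted timestamp lists; returns per-input latency."""
    res, j = [], 0
    for t in inputs:
        while j < len(outputs) and outputs[j] < t:
            j += 1                      # skip outputs that precede this input
        if j < len(outputs):
            res.append(outputs[j] - t)  # first response after the input
            j += 1
    return res

print(latencies([0.0, 1.0, 2.5], [0.3, 1.4, 2.9]))  # [0.3, 0.4, 0.4]
```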
A Multi-Variable φ-Function Placement Algorithm Based on Dominance Frontier Inverse
Ma Hongtu, Hu Shian, Su Yanbing, Li Xun, and Zhao Rongcai
2011, 48(2):  346-352. 
Based on the work of Cytron and Cooper, we employ the concept of dominance frontier inverse (DFI) to present an iterative algorithm that simultaneously places φ-nodes for all variables of a function. The set DFI(x) is the set of nodes whose dominance frontier contains x, and it has one important property: the level of every node in DFI(x) is no smaller than the level of x in the dominator tree. For each node y in DFI(x), the variables defined at node y should have φ-nodes placed at node x, and if y contains only φ-nodes, the variables of those φ-nodes should also be placed at x. In fact, the DFI sets need not be computed explicitly when placing φ-nodes in our algorithm, because a variable set is used to hold the φ-nodes. The algorithm first visits the dominator tree in a bottom-up traversal by levels, and then iteratively computes a fixed point for the φ-node sets on the join nodes at the same level. The innovation is that the algorithm works directly on the dominator tree, merging the former two-step algorithm into one. Experimental results on the C programs of SPECint2000 show that this algorithm is much faster than Cytron's original algorithm, and is also comparable with Cytron's φ-placement algorithm based on Cooper's DF algorithm.
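For reference, here is the classical Cytron-style placement that this paper accelerates: iterate each variable's definition sites through dominance frontiers until a fixed point. The paper's contribution, placing all variables at once over DFI directly on the dominator tree, is not shown; this is only the baseline the abstract compares against.

```python
# Classical per-variable phi placement via dominance frontiers (baseline).
def place_phi(defsites, DF):
    """defsites: var -> set of defining blocks; DF: block -> dominance frontier."""
    phis = {v: set() for v in defsites}
    for v, sites in defsites.items():
        work = list(sites)
        while work:
            b = work.pop()
            for y in DF.get(b, ()):   # every frontier block may need a phi for v
                if y not in phis[v]:
                    phis[v].add(y)
                    work.append(y)    # a phi is itself a definition of v
    return phis

# Diamond CFG: entry -> {then, else} -> join; DF(then) = DF(else) = {join}.
DF = {"then": {"join"}, "else": {"join"}}
print(place_phi({"x": {"then", "else"}}, DF))  # {'x': {'join'}}
```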