ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 January 2006, Volume 43 Issue 1
A Survey of Intrusion-Detection Alert Aggregation and Correlation Techniques
Mu Chengpo, Huang Houkuan, and Tian Shengfeng
2006, 43(1):  1-8. 
This paper comprehensively surveys the significance and goals of alert aggregation and correlation techniques. Aggregation and correlation algorithms and their features are discussed in detail. The ideas for choosing algorithms when developing an intrusion detection alert management system (IDAMS) are summarized. The architectures of the existing aggregation and correlation systems are presented, with emphasis on a brief introduction of the role of the intrusion detection message exchange format (IDMEF) in alert aggregation and correlation. Finally, future directions of this research domain are presented.
Paper
An Automatically Optimized Distributed Intrusion Detection System Using Mobile Agent
Wang Jin, Li Dequan, and Feng Dengguo
2006, 43(1):  9-14. 
Traditional distributed intrusion detection systems face many limits because of the technical difficulties encountered in keeping pace with increasing network speed and the communication complexity between applications. AODIDS is proposed, which optimizes itself by means of a mobile agent named the Improver Agent. The Improver Agent roves through the system and evaluates the performance of each Analyzer Agent. According to this evaluation, the Improver Agent makes an optimization plan to make the most of the capacity of the system.
A Typed Low-Level Language Used in Java Virtual Machine
Chen Hui, Chen Yiyun, Wu Ping, and Xiang Sen
2006, 43(1):  15-22. 
In the past ten years, there has been a trend in the field of trustworthy computing: building high-assurance software systems based on programming languages and compilers. The most obvious advantage of these techniques is that they reduce the trusted computing base of a software system. Moreover, language-based techniques are suitable for describing and verifying fine-grained safety policies. Inspired by this research, TLL is designed. It is intended to be a type-safe intermediate language used in the just-in-time compiler of a Java virtual machine. The work described in this paper is based on Intel ORP and aims at building a smaller trusted computing base. Compared with JVML, TLL is closer to assembly language, and hence convenient for encoding high-level primitives efficiently. The TLL type system is derived from a polymorphic typed lambda calculus, which is expressive and general enough to encode various high-level language features. As a case study, the self-application semantics, one of the most important safety properties of object-oriented languages, is expressed and assured. A prototype using TLL as the intermediate language in the just-in-time compiler can be regarded as a starting point for building a Java virtual machine with a tiny trusted computing base.
Verifiable Secret Redistribution Protocol Based on Additive Sharing
Yu Jia, Li Daxing, and Fan Yuling
2006, 43(1):  23-27. 
A non-interactive verifiable secret redistribution protocol based on additive sharing is put forward, which also has the threshold property. It can be applied to arbitrary sets of shareholders and can alter the access structure, so the set of new shareholders need not intersect with the set of old shareholders. The protocol adopts additive sharing and share back-up techniques, so it can not only verify the correctness of secret shares and subshares, but also recover corrupted secret shares. In addition, it solves the hard problem of how to identify the set of dishonest shareholders. Thanks to additive sharing, it can conveniently be transformed into a redistribution protocol for proactive RSA. The protocol is correct, robust and secure, and performs well in many aspects.
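As background for the additive-sharing building block the protocol relies on, the following is a minimal sketch assuming a toy prime modulus Q; it shows only n-out-of-n additive sharing and recombination, not the paper's verification, back-up or redistribution machinery.

import secrets

Q = 2**127 - 1  # illustrative modulus; the real protocol fixes its own group

def additive_share(secret, n, q=Q):
    # n-out-of-n additive sharing: the first n-1 shares are uniformly random,
    # the last is chosen so that all shares sum to the secret mod q.
    shares = [secrets.randbelow(q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % q)
    return shares

def additive_recover(shares, q=Q):
    # Recombination is just summation modulo q.
    return sum(shares) % q

shares = additive_share(123456789, 5)
assert additive_recover(shares) == 123456789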
A Practical Electronic Auction with Bids Confidential
Qin Bo, Qin Hui, Wang Shangping, and Wang Yumin
2006, 43(1):  28-32. 
In most existing cryptographic auctions, the bidders' bids no longer remain confidential if the third parties collude. However, keeping the bids secret in all cases is vital to the bidders, because the bids are their true valuations of the commodities and these valuations may be critical secrets. A practical sealed electronic auction scheme that keeps the bidders' bids confidential is proposed. No bid other than the selling price is revealed to anyone, even in the presence of serious collusion. Anyone can verify that the bidders follow the protocol to cast their bids and that the winning price is correctly resolved according to the opening rules. Also, the scheme is much more efficient than the recent publicly verifiable auctions by Brandt. In its typical implementation, the scheme requires at most O(log_2 v) rounds of communication, where v is the size of the bidding space. Non-forgery, replay-attack resistance and non-repudiation are achieved by signatures under temporary public keys obtained by the bidders at registration. The robustness of the system is based on zero-knowledge proofs.
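To see why such an auction can settle in O(log_2 v) rounds, the sketch below illustrates the generic idea of a binary search over the bidding space, where each round only needs one collective yes/no answer about a threshold price. This is only an illustration of the round complexity; anyone_bids_at_least is a hypothetical stand-in for a privacy-preserving threshold test, not the paper's actual protocol.

def settle_price(bids, v):
    # Binary search over the bidding space [0, v); each loop iteration
    # corresponds to one communication round.
    def anyone_bids_at_least(p):
        # Hypothetical oracle replacing the cryptographic threshold test.
        return any(b >= p for b in bids)

    lo, hi = 0, v - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if anyone_bids_at_least(mid):
            lo = mid        # the selling price is at least mid
        else:
            hi = mid - 1    # no bid reaches mid
    return lo

# settle_price([40, 73, 18], 128) == 73 after about log2(128) = 7 rounds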
A Multi-Secret Sharing Scheme Based on the General Access Structure
Pang Liaojun, Jiang Zhengtao, and Wang Yumin
2006, 43(1):  33-38. 
Based on Shamir's threshold scheme and the RSA cryptosystem, a new secret sharing scheme for general access structures is proposed in this paper. In this scheme, each participant's secret shadow is selected by the participant himself, and the dealer need not deliver any secret information to the participants, so a secure channel between them is unnecessary. The shadows do not need to be changed when the shared secret is renewed, the access structure is altered, or old participants are deleted and new participants are added. All shadows are no longer than the shared secret. Each participant shares many secrets with the other participants by holding only one shadow, and in the recovery phase each participant can check whether another participant provides true information or not. The security of this scheme is the same as that of Shamir's threshold scheme and the RSA cryptosystem. Analyses show that the scheme is computationally secure and efficient.
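For reference, a minimal sketch of the Shamir (t, n) threshold sharing the scheme builds on; this is only the textbook construction (polynomial evaluation and Lagrange interpolation over an illustrative prime field P), not the paper's shadow selection or RSA machinery.

import secrets

P = 2**61 - 1  # illustrative prime field

def shamir_split(secret, t, n):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def shamir_recover(shares):
    # Lagrange interpolation at x = 0 recovers the secret from any t shares.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % P
                den = (den * (xj - xm)) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = shamir_split(31415926, t=3, n=5)
assert shamir_recover(shares[:3]) == 31415926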
Privacy Preserving Classification Mining
Ge Weiping, Wang Wei, Zhou Haofeng, and Shi Baile
2006, 43(1):  39-45. 
Privacy-preserving classification mining is one of the fast-growing sub-areas of data mining. How to perturb the original data and then build a decision tree from the perturbed data is the key research challenge. By applying a transition probability matrix, a novel privacy-preserving classification mining algorithm is proposed, which suits non-character data (Boolean, categorical, and numeric types) and non-uniform probability distributions of the original data, and can also perturb the label attribute. Experimental results demonstrate that the decision tree built by this algorithm on perturbed data has classification accuracy comparable to that of the decision tree built by a non-privacy-preserving algorithm on the original data.
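As a rough illustration of the transition-probability-matrix idea (not the paper's specific matrix, data types or reconstruction procedure), the sketch below perturbs a categorical attribute by resampling each value according to its row of an assumed stochastic matrix T.

import random

# Illustrative 3x3 transition probability matrix: row i gives the probabilities
# of the reported category when the true category is i (each row sums to 1).
T = [
    [0.80, 0.10, 0.10],
    [0.10, 0.80, 0.10],
    [0.10, 0.10, 0.80],
]

def perturb(value, matrix=T):
    # Replace the true category by one drawn from its matrix row.
    return random.choices(range(len(matrix)), weights=matrix[value], k=1)[0]

original = [0, 1, 2, 2, 1, 0, 0]
perturbed = [perturb(v) for v in original]
# The miner sees only `perturbed`; knowing T, it can still estimate the original
# distribution of each attribute and grow a decision tree on those estimates.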
UMLTGF: A Tool for Generating Test Cases from UML Activity Diagrams Based on Grey-Box Method
Yuan Jiesong, Wang Linzhang, Li Xuandong, and Zheng Guoliang
2006, 43(1):  46-53. 
Nowadays UML has become the de facto standard software modeling language, which brings a new challenge to software testing: how to test software based on UML design models. To generate test cases directly from UML activity diagrams, a formal definition of UML activity diagrams and a grey-box test method are proposed. The method first extracts each possible execution path, called a test scenario, from a UML activity diagram; it then obtains the input/output variables and the related constraints from the activity states and transitions in each test scenario to generate test cases. A test case generation tool based on this method, named UMLTGF, is also implemented; it can extract the information of UML activity diagrams from Rational Rose specification files and automatically generate test cases. The tool improves the process of test case generation and reduces the cost of test model creation.
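A minimal sketch of the path-extraction step under simplifying assumptions: the activity diagram is reduced to a plain directed graph (concurrency, fork/join and loop handling are not shown), and every simple path from the initial to the final node is enumerated as one test scenario; the graph and node names are illustrative only.

# Illustrative activity graph: node -> list of successor nodes.
graph = {
    "start": ["check"],
    "check": ["approve", "reject"],
    "approve": ["end"],
    "reject": ["end"],
    "end": [],
}

def test_scenarios(g, start="start", final="end"):
    # Depth-first enumeration of simple paths; each complete path is a scenario.
    scenarios, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == final:
            scenarios.append(path)
            continue
        for nxt in g[node]:
            if nxt not in path:   # avoid revisiting nodes (cycles)
                stack.append((nxt, path + [nxt]))
    return scenarios

# test_scenarios(graph) ->
# [['start', 'check', 'reject', 'end'], ['start', 'check', 'approve', 'end']]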
A Distributed Software Architecture Description Language Based on Attributed Grammar
Jia Xiaolin, Qin Zheng, He Jian, and Yu Fan
2006, 43(1):  54-60. 
Most of the existing software architecture description languages (ADLs) are based on finite state machine models, and when they are used to describe large-scale systems, the problem of state explosion is difficult to overcome. In a distributed software system, many components communicate with each other under complex restrictions, so the specifications of the interactions among components need to be described by an efficient method. A model for describing distributed system specifications based on attribute grammar (AG) is described in this paper. First, the AG is extended to capture the characteristics of distributed software, such as parallelism, synchronization and timing, and a distributed software architecture description language (DSADL) is introduced. Then a prototype of an integrated environment for software architecture design is proposed based on an AG analyzer and an AG attribute calculator, which not only supports the construction of distributed software architectures through a GUI and the automatic generation of ADL descriptions, but also provides verification and analysis of the system. An initial application shows that DSADL can help programmers analyze and design distributed software effectively, so the efficiency of development can be improved greatly.
A Novel Hidden Markov Model-Based Hierarchical Time-Series Clustering Algorithm
Duan Jiangjiao, Xue Yongsheng, Lin Ziyu, Wang Wei, and Shi Baile
2006, 43(1):  61-67. 
In this paper, a novel hidden Markov model (HMM)-based hierarchical time-series clustering algorithm, HBHCTS, is proposed to address the shortcomings of traditional HMM-based clustering algorithms for time-series. The main purpose is to improve clustering quality while keeping the clusters easy to represent. In HBHCTS, HMMs are built from the time-series, initial models are obtained by grouping the most similar ones, and the process of merging and updating the models is iterated until the final result is obtained. In the experiments, the relation between the correctness rate and the sequence length, and the relation between the correctness rate and the model distance, are investigated. The results show that HBHCTS achieves a better correctness rate than the traditional HMM-based clustering algorithm.
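The abstract does not specify which model distance is used; as background only (an assumption, not necessarily the paper's choice), a commonly used distance between two trained HMMs lambda_i and lambda_j is the Juang-Rabiner log-likelihood distance, computed from an observation sequence O^(j) of length T_j associated with lambda_j and then symmetrized:

d(\lambda_i, \lambda_j) = \frac{1}{T_j}\Big[\log P\big(O^{(j)} \mid \lambda_j\big) - \log P\big(O^{(j)} \mid \lambda_i\big)\Big],
\qquad
d_s(\lambda_i, \lambda_j) = \tfrac{1}{2}\big[d(\lambda_i, \lambda_j) + d(\lambda_j, \lambda_i)\big].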
A Sequential Pattern Mining Algorithm Based on Large-Itemset Reuse
Song Shijie, Hu Huaping, Zhou Jiawei, and Jin Shiyao
2006, 43(1):  68-74. 
A sequential pattern mining algorithm, HVSM, that scans the database first horizontally and then vertically and is based on large-itemset reuse is presented in this paper. The algorithm redefines the length of a sequential pattern, which increases the granularity of sequential pattern mining. Treating the database as a vertical bitmap, the algorithm first extends itemsets horizontally and digs out all the large itemsets, called one-large-sequence itemsets. It then extends the sequences vertically, taking each one-large-sequence itemset as a "container" for mining k-large-sequences, generating candidate large sequences by taking sibling nodes as child nodes, and counting the support by recording the first TID. The experiments show that HVSM finds frequent sequences faster than the SPAM algorithm on medium-sized and large transaction databases.
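A minimal sketch of the vertical-bitmap representation that this family of algorithms (SPAM, and HVSM as described above) relies on: each item keeps one bit per transaction, and the support of an itemset is the popcount of the bitwise AND of its items' bitmaps. The toy data are assumptions for the example; the sequence-extension and first-TID bookkeeping are not shown.

# Toy horizontal database: transaction id -> set of items.
transactions = {0: {"a", "b"}, 1: {"a", "c"}, 2: {"a", "b", "c"}, 3: {"b"}}
n = len(transactions)

# Vertical bitmap: item -> integer whose bit t is set iff the item occurs in transaction t.
bitmap = {}
for tid, items in transactions.items():
    for item in items:
        bitmap[item] = bitmap.get(item, 0) | (1 << tid)

def support(itemset):
    # Number of transactions containing every item = popcount of the AND.
    bits = (1 << n) - 1
    for item in itemset:
        bits &= bitmap.get(item, 0)
    return bin(bits).count("1")

# support({"a", "b"}) == 2   (transactions 0 and 2)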
An Efficient Indexing Scheme for Range Aggregate Queries in Spatial Data Warehouse
Chen Xiqian, Wang Zhanchang, Cao Xiukun, and Chi Zhongxian
2006, 43(1):  75-80. 
A spatial data warehouse provides an efficient analysis environment for both spatial and non-spatial data, which satisfies the urgent need for embedding spatial data into decision support systems. The range aggregate query over both non-spatial and spatial dimensions is a very important operation for supporting spatial on-line analytical processing (OLAP). To optimize this operation, an indexing scheme named aCR-tree and its corresponding algorithms, together with an asymptotic performance analysis, are proposed based on the aggregate cubetree and the aR-tree. Using both synthetic and real enterprise data, experiments are conducted to evaluate the storage overhead and the range aggregate query performance of the indexing scheme. The analytical and experimental results show that the range aggregate query cost and the storage space of the aCR-tree are superior to those of traditional storage structures.
CRGA—A Genetic Algorithm Based on Preserving Global Commonality Schemata and Restricting Local of Crossover
Yao Wangshu, Chen Zhaoqian, and Chen Shifu
2006, 43(1):  81-88. 
A new genetic algorithm based on preserving global commonality schemata and restricting the locality of crossover is proposed. The algorithm addresses the two disadvantages of the standard crossover operator: the disruption of high-order, long, good schemata, and the inefficiency of crossover between similar individuals. It protects high-order, long, good schemata by estimating the probability that parent alleles are preserved in the offspring, based on statistics over all individuals whose fitness is better than the population average, and it ensures that new individuals are produced by restricting the locality of crossover. Experimental results show that the convergence precision and convergence speed of this algorithm are evidently better than those of a genetic algorithm based on standard crossover.
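A rough sketch, under stated assumptions, of the general flavor of such an operator: positions where the two parents agree (the shared schema) are copied unchanged, and exchange is confined to one short local window of the remaining positions. It only illustrates commonality preservation plus locality restriction; the paper's statistical estimation of allele preservation is not reproduced here.

import random

def commonality_preserving_crossover(p1, p2, window=3):
    # Equal-length bit-string parents; agreeing positions are kept as-is.
    assert len(p1) == len(p2)
    child = list(p1)
    differing = [i for i in range(len(p1)) if p1[i] != p2[i]]
    if differing:
        # Exchange only within one short local window of differing positions.
        start = random.randrange(len(differing))
        for i in differing[start:start + window]:
            child[i] = p2[i]
    return "".join(child)

# commonality_preserving_crossover("1100110011", "1100001111") keeps every bit
# the parents share and swaps at most `window` of the positions where they differ.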
Swarm-Core Evolutionary Particle Swarm Optimization in Dynamic Optimization Environments
Dou Quansheng, Zhou Chunguang, Xu Zhongyu, and Pan Guanyu
2006, 43(1):  89-95. 
The particle swarm optimization (PSO) method was originally designed by Kennedy and Eberhart in 1995 and has been applied successfully to various optimization problems. The PSO idea is inspired by phenomena such as fish schooling, bird flocking and human social relations. The concept of a "swarm core" is defined in this paper, and based on this concept an improved PSO, swarm-core evolutionary particle swarm optimization (SCEPSO), is proposed. To enhance the optimization power of the swarm, the particle swarm is divided into three sub-swarms, each with a different job in SCEPSO. The effectiveness of SCEPSO in tracking changing extrema is also investigated; experiments on three types of dynamic optimization models indicate that SCEPSO can track a continuously changing solution more reliably and accurately than PSO.
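For readers unfamiliar with PSO, a minimal sketch of the canonical velocity/position update that SCEPSO builds on; the swarm-core concept and the three sub-swarms are not shown, and the inertia and acceleration constants are typical illustrative values.

import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # Canonical PSO update for one particle: the velocity blends inertia,
    # attraction to the particle's own best and attraction to the swarm best.
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vi = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        new_v.append(vi)
        new_x.append(xi + vi)
    return new_x, new_v

# One iteration for particle i:  x[i], v[i] = pso_step(x[i], v[i], pbest[i], gbest)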
A New Unified Model of Particle Swarm Optimization and Its Theoretical Analysis
Zeng Jianchao and Cui Zhihua
2006, 43(1):  96-100. 
Through a mechanism analysis of several modified particle swarm optimizations (PSO), a new unified model of PSO is described, and its convergence is analysed with linear control theory. To improve computational efficiency, two self-adaptive PSOs with enhanced global search capability, a one-population self-adaptive PSO and a two-population self-adaptive PSO, are proposed. The one-population self-adaptive PSO uses diverse coefficients in the first evolutionary strategy. The two-population self-adaptive PSO uses two different populations, one with global search capability and the other with local search capability, and improves the efficiency of the algorithm by exchanging information between them. The simulation results show the correctness and efficiency of the proposed methods.
Study on Domain Ontology Representation, Reasoning and Integration for the Semantic Web
Zhang Weiming and Song Junfeng
2006, 43(1):  101-108. 
One foundation of the semantic Web is ontology: to make Web contents understandable to machines and suitable for inference, ontologies need to be established, and the Web contents should be annotated with metadata defined as terms in the ontologies. The relation between ontology and domain ontology is explained, and an overview of ontology languages for the semantic Web is given. Based on a trade-off between the expressivity of knowledge and the complexity of reasoning problems, OWL Lite is selected as the ontology language. Then DORRSW (domain ontology representation and reasoning for the semantic Web) and MDOISW (multiple domain ontology integration for the semantic Web) are proposed, and finally DORRSW and MDOISW are applied to solve an example problem. These discussions clarify the basic situation of domain ontology representation, reasoning and integration for the semantic Web, and thus provide basic knowledge for developing ontologies for the semantic Web.
MPEG Video Transcoding with Adaptive Drifting Error Control
Yuan Lujun, Gao Wen, Wu Feng, and Li Shipeng
2006, 43(1):  109-114. 
A fast video transcoder with adaptive drifting error control is proposed in this paper. Firstly, like the traditional closed-loop transcoder, the proposed transcoder still accumulates the re-quantization errors of each I or P picture. However, the accumulated errors are not always fed back into the transcoding loop for every block. A triple-threshold algorithm is proposed to adaptively utilize the accumulated errors at the block level so as to keep the drifting error below an acceptable level. Secondly, the re-quantization process can be implemented simply by table lookup, which significantly reduces its complexity. The experimental results show that the picture quality obtained from the proposed transcoder is close to that of the traditional closed-loop one, while about 50% of the blocks are not updated with the accumulated errors. In this case, the proposed transcoder is much faster than the traditional closed-loop transcoder, and even faster than the traditional open-loop one. Furthermore, the proposed transcoder provides the desired flexibility between picture quality and transcoding complexity in CPU-aware applications.
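A minimal sketch of the table-lookup re-quantization idea (not the paper's exact tables or its triple-threshold logic): for a fixed pair of quantization steps, every possible quantized input level maps to a fixed output level, so the mapping can be precomputed once; the rounding rule and level range here are illustrative assumptions.

def build_requant_lut(q1, q2, max_level=2048):
    # Precompute the output level for every possible input level: dequantize
    # with step q1, then re-quantize with the coarser step q2.
    return {level: int(round(level * q1 / q2))
            for level in range(-max_level, max_level + 1)}

lut = build_requant_lut(q1=8, q2=12)
requantized = [lut[l] for l in (-5, 0, 3, 40)]  # per-coefficient work is one lookup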
Implementation of Digital Ridgelet Transform and a New Method
Jia Jian, and Jiao Licheng
2006, 43(1):  115-119. 
Although the ridgelet transform was introduced as a new multiscale representation for functions on continuous spaces, discrete versions of the ridgelet transform that lead to algorithmic implementations remain an open problem. In this paper, an approximate digital implementation based on a new method is described. The Bresenham algorithm is used as an important tool to offer exact reconstruction. Compared with the nearest-neighbor interpolation method, the new method has better properties such as stability against perturbations, low computational complexity and easy implementation. Compared with the ridgelet transform based on the Z_p^2 method, numerical results show that the new transform is more effective in reconstructing, compressing and denoising images with straight edges, which lays a solid foundation for further research.
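For reference, a minimal sketch of the Bresenham line algorithm mentioned above, in its standard integer-arithmetic form for arbitrary endpoints; how the resulting digital lines are used inside the ridgelet transform is not shown here.

def bresenham(x0, y0, x1, y1):
    # Classic integer-only Bresenham: grid points of the digital line
    # from (x0, y0) to (x1, y1).
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

# bresenham(0, 0, 5, 2) -> [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]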
An Image Segmentation Model Based on Dual Level Sets
Zhang Jianwei, and Xia Deshen
2006, 43(1):  120-125. 
It is hard for the level set method to segment an image with subtle topological structure. By analysing this phenomenon, a new method based on dual level sets is introduced, with which objects can be segmented quickly. Meanwhile, the typical level set method cannot obtain the true segmentation when it segments an image with strong noise or weak boundaries, because it only uses edge information to construct the speed function; therefore region information is also employed to construct the speed function. Better results are achieved when the method is applied to the segmentation of synthetic and cardiac magnetic resonance images.
Iris Recognition Study Based on Texture Feature Fusion and Selection
Gu Hongying and Pan Yunhe
2006, 43(1):  126-131. 
Iris recognition is a promising biometric method, but some technical difficulties still exist. To obtain more representative iris features, features from the spatial and frequency domains are extracted at the same time: both variation fractal dimension and wavelet features are extracted to form the feature sequence. A multi-objective genetic algorithm is employed to optimize the features. Finally, the selected features of different iris patterns are used to train iris classifiers. Furthermore, the traditional SVM is modified into a non-symmetrical support vector machine to satisfy the varying security requirements of real iris recognition applications. Experimental data show that the new feature sequence properly represents the variation details in the iris patterns, and therefore the recognition accuracy is improved.
Kernel Uncorrelated Discriminant Analysis and Its Application to Handwritten Character Recognition
Liang Zhizhen and Shi Pengfei
2006, 43(1):  132-137. 
Based on uncorrelated discriminant analysis, kernel uncorrelated discriminant analysis is developed. However, computing kernel uncorrelated vectors is computationally expensive due to the use of kernel functions. To overcome this problem, an effective method for solving kernel uncorrelated discriminant analysis is proposed in this paper. The proposed algorithm first exploits a decomposition of the matrices involved and then carries out the generalized singular value decomposition on the resulting matrix pair; several related theorems are also given. Most importantly, the proposed method overcomes the singularity problem of the matrices in kernel uncorrelated discriminant analysis. In a sense, the proposed method extends existing methods from linear problems to non-linear problems. Finally, experimental results on handwritten numeral characters show that the proposed method is effective and feasible.
On-Line Handwriting Cursive Recognition
Zou Mingfu, Niu Xingyu, Liu Changping, and Bai Hongliang
2006, 43(1):  138-144. 
On-line cursive handwriting recognition is both a pattern recognition and a search problem, and its computational complexity is very high. Some new methods are proposed to address these problems. First, prior knowledge, such as the middle of a primitive and delayed strokes, is used to adjust the reference line. Then, based on the special primitive attributes of cursive handwriting, a fast and valid decoding algorithm, the primitive level building beam Viterbi (PLBBV), is presented. Finally, a classifier combination algorithm that uses all the classifiers' measurements to make the decision is presented. Tests on the Unipen training set and a laboratory set show the feasibility of the proposed methods.
A Review of Text-to-Visual Speech Synthesis
Wang Zhiming, and Tao Jianhua
2006, 43(1):  145-152. 
Visual information is important to the understanding of speech. Not only hearing-impaired people but also people with normal hearing make use of the visual information that accompanies speech, especially when the acoustic speech is degraded in a noisy environment. As text-to-speech (TTS) synthesis makes the computer speak like a human, text-to-visual speech (TTVS) synthesis by computer face animation can incorporate the bimodality of speech into the human-computer interaction interface to make it friendlier. The state of the art of text-to-visual speech synthesis research is reviewed. Two classes of approaches have been developed in visual speech synthesis: the parameter control approach and the data-driven approach. For the parameter control approach, three key problems are discussed: face model construction, definition of animation control parameters, and the dynamic properties of the control parameters. For the data-driven approach, three main methods are introduced: video slice concatenation, key frame morphing, and face component combination. Finally, the advantages and disadvantages of each approach are discussed.
English Text Chunking Based on Headword Extending and the Evaluation of Relative-Degree
Liang Yinghong, Zhao Tiejun, Liu Bo, and Yang Muyun
2006, 43(1):  153-158. 
The traditional approach to English text chunking reduces chunking to part-of-speech information. It is shown that this cannot take into account the relationship between neighboring parts of speech or the cohesion of all the parts of speech within one phrase. In this paper, a headword-extending and relative-degree-evaluation strategy is proposed and applied to English text chunking. Its main features are: 1) regarding each phrase as a cluster whose kernel is the headword, which makes full use of the regularities of phrase composition; 2) dynamically evaluating the chunking result using doubt-degree and reliability. Tests on a public corpus show that this method is faster than others, and its F score reaches 94.05%, which is at the state of the art.
A Query Description Model of Soccer Video Based on BSU Composite Petri-Net
Lao Songyang, Huang Guanglian, Alan F. Smeaton, Gareth J. F. Jones, and Hyowon Lee
2006, 43(1):  159-168. 
Soccer is one of the most cosmopolitan sports, with fans all over the world, so soccer video appeals to a great mass of spectators. In this paper, based on an analysis of the characteristics of soccer video, a query description model based on basic semantic unit composite Petri nets (BSUCPN) is proposed for soccer video. Firstly, a BSU set for soccer video, analogous to the word set of a text, is defined; secondly, a description model for soccer semantic queries is designed based on the Petri net model; and then query descriptions in BSUCPN for events such as goals, attacks, corner kicks, fouls and substitutions are given. The experimental results indicate that this model is robust and generic, and that it can be extended to other ball games and sports video.
A Decision-Making Method on D-S Evidence Fusion Information Based on Distance Measure
Lin Zhigui, Xu Lizhong, Yan Xijun, Huang Fengchen, and Liu Yingping
2006, 43(1):  169-175. 
To solve the decision-making problem based on the results of multi-source information fusion with the D-S evidence structure, a decision-making method based on a distance measure is proposed. The method combines the attributes of the basic decision-making elements with those of the non-basic decision-making elements, and divides the problem on D-S evidence into two levels, the attribute level and the evidence level. On the attribute level, a method for obtaining the support degrees of candidate decision-making focal elements from the evidence focal elements is introduced. On the evidence level, a space based on the power-set elements of the frame of discernment is constructed, ideal state vectors of the candidate decision-making focal elements are introduced, a distance measure is defined, and a decision-making model based on this measure is built. Finally, the fusion results of multi-source water quality monitoring data are analyzed with different methods. The results indicate that the decision-making method based on the distance measure is valid and has advantages in processing both conflicting and non-conflicting evidence.
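For context, the fusion step that precedes this decision stage is typically Dempster's rule of combination, stated here only as background (the paper's contribution is the distance-based decision layer applied to such fused evidence). For two basic probability assignments m_1 and m_2 over the frame of discernment Theta and any non-empty subset A of Theta:

(m_1 \oplus m_2)(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C),
\qquad
K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C),
\qquad (m_1 \oplus m_2)(\emptyset) = 0.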
High-Level Low-Power Synthesis of Real-Time Systems Using Time Petri Nets
Hu Xiao, Li Xi, and Gong Yuchang
2006, 43(1):  176-184. 
In this paper, time Petri nets (TPNs) are used as specification models, and system functionality and timeliness are first verified. Then a heuristic algorithm is proposed to optimize the system energy; it is driven by the subtasks' energy gradients and can be further simplified when the TPNs are composable. The experimental results show that the proposed methods support the high-level low-power synthesis of real-time systems, with results close to the optimum but with very low time complexity.