ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 February 2010, Volume 47 Issue 2
Paper
Mapping Analysis Between Viterbi and DTW Algorithms—Application to the Identification of Signer Independent Sign Language
Ni Xunbo, Zhao Debin, Jiang Feng, and Cheng Dansong
2010, 47(2):  . 
Abstract ( 753 )   PDF (1813KB) ( 547 )
In classical pattern classification theory, the Viterbi algorithm is a representative statistical (probability-based) pattern matching algorithm, whereas the DTW algorithm is a representative template matching algorithm. Whether there is any relationship between the two has not been clearly established. Aiming at this problem, the authors set up a relationship between the Viterbi and DTW algorithms by applying fuzzy mathematics, under the premise that "the category of fuzzy-mathematics membership is general probability". First, making use of the closeness degree of fuzzy mathematics, they propose a common closeness-degree expression that transfers the "distance" of the DTW algorithm into the "probability" of the Viterbi algorithm, and prove this expression theoretically. Second, the HMM parameters are re-estimated with the common closeness degree of DTW to set up a fuzzy closeness-degree relationship between the DTW and Viterbi algorithms, for which the δ-ε algorithm is presented to obtain a parameter re-estimation form, similar to that of the HMM, based on data frames; to ensure the correctness of this fuzzy closeness relationship, a corresponding proof is given as a theorem. Third, during the HMM parameter re-estimation with the chosen DTW closeness-degree expression, it is found that a fuzzy relationship exists between the DTW closeness-degree re-estimated parameters and the HMM re-estimated parameters, and this is also proved as a theorem. Finally, the authors propose DTW-Viterbi I, II and III based on the above theorems, prove their correctness, and apply them to signer-independent sign language recognition. Experimental results show that introducing the path-searching strategy of the DTW algorithm into the Viterbi algorithm in probabilistic form reduces the candidate vocabulary and thereby part of the recognition failures, improving both the recognition rate and the speed of signer-independent sign language recognition over large vocabularies.
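The pivotal step described above is turning a DTW template distance into a probability-like closeness score that a Viterbi decoder can consume, for instance to prune the candidate vocabulary. The paper derives its own common closeness-degree expression and δ-ε re-estimation; the sketch below is only a minimal stand-in that uses an exponential closeness exp(-d/σ) on made-up frame data to show the shape of such a mapping.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two feature sequences."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

def closeness(dist, sigma=10.0):
    """Map a template distance onto (0, 1] so it can act as a probability-like
    score; exp(-d/sigma) is an illustrative choice, not the paper's expression."""
    return float(np.exp(-dist / sigma))

# Hypothetical usage: score templates and keep only strong candidates, which is
# the role the DTW path-searching strategy plays for the Viterbi decoder here.
rng = np.random.default_rng(0)
query = rng.random((20, 12))                        # 20 frames of 12-dim features
templates = {"hello": rng.random((22, 12)), "thanks": rng.random((18, 12))}
scores = {w: closeness(dtw_distance(query, t)) for w, t in templates.items()}
candidates = [w for w, s in scores.items() if s > 0.01]   # reduced vocabulary
print(scores, candidates)
```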
Secure Multiparty Computation of Statistical Distribution
Wang Ke and Dai Yiqi
2010, 47(2):  201-206. 
Abstract ( 602 )   HTML ( 2)   PDF (706KB) ( 468 )
Secure multi-party computation is now a crucial privacy-preserving technology in the network computing environment and plays an important role in cryptography. It is a basic building block for constructing other cryptographic protocols and has recently become a research focus in the international cryptographic community. Researchers at home and abroad have studied secure multi-party computation extensively, and many theoretical and practical results have been obtained, yet many practical problems still need further study. The authors first briefly introduce the state of the art of secure multi-party computation and some problems that need further study. Secondly, they study the security problems encountered in statistics. Aiming at the privacy-preserving computation of statistical distributions, which is frequently encountered in statistics, three solutions are proposed based on the intractability of computing discrete logarithms and using rigorous logic. The privacy-preserving property of these solutions is proved by the simulation paradigm. To the authors' knowledge, this problem has not previously been addressed in the literature. These solutions are of great importance in practical privacy-preserving statistical computation: they can be used to protect the privacy of informants, so that informants need not worry about privacy leakage, which makes the statistical results more reliable and of greater reference value.
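The abstract does not reproduce the three discrete-logarithm-based protocols, so the sketch below only illustrates the overall goal, privacy-preserving computation of a statistical distribution, using a different and deliberately simple technique: each informant splits its one-hot category vector into additive shares modulo a public prime, so that any single aggregation server sees only random-looking shares while the combined sums reveal just the histogram. This is a generic secure-aggregation illustration, not the authors' construction.

```python
import random

P = 2_147_483_647          # public prime modulus (illustrative)
BINS = 4                   # number of categories in the distribution

def share(value, n_servers):
    """Split a value into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def secure_histogram(private_values, n_servers=3):
    # Each informant secret-shares a one-hot vector over the bins.
    server_sums = [[0] * BINS for _ in range(n_servers)]
    for v in private_values:
        one_hot = [1 if b == v else 0 for b in range(BINS)]
        for b in range(BINS):
            for s, sh in enumerate(share(one_hot[b], n_servers)):
                server_sums[s][b] = (server_sums[s][b] + sh) % P
    # Only the combined per-bin sums reveal anything: the histogram itself.
    return [sum(server_sums[s][b] for s in range(n_servers)) % P
            for b in range(BINS)]

print(secure_histogram([0, 1, 1, 3, 2, 1]))   # -> [1, 3, 1, 1]
```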
General Results on Secret Sharing Based on General Access Structure
Zhang Haibo, Wang Xiaofei, and Huang Youpeng
2010, 47(2):  207-215. 
Abstract ( 477 )   HTML ( 0)   PDF (891KB) ( 413 )
For secret sharing, current research mainly focuses on perfect access structures with a very limited number of access subsets, where each subset is either a qualified set or a forbidden set and no semi-access subset exists, as well as on share bounds under a uniform distribution, where the number of bits required by a share is used as the measure of the bounds. The research results are therefore inevitably limited to some extent. Based on general access structures, some generalized information-theoretic results are presented in this paper that are suitable for both perfect and non-perfect access structures with an unlimited number of access subsets identified as qualified, forbidden or semi-access. These results are general conclusions of many current related works and can serve as the basis for further research. Meanwhile, using the information entropy of a share as the measure of the bounds, some generalized bounds suitable for all shares and bounds suitable only for particular shares are given as well. These bounds are also generalizations of many current related results under arbitrary probability distributions, and some of them are tighter than well-known ones. Additionally, with the help of the above generalized results, some potential results can be easily deduced and the proofs of many well-known results become easier and more concise.
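For orientation, the standard entropy-based characterization that these generalized results extend can be written as follows, with S the secret, S_A the collection of shares held by subset A, and S_i the share of participant i; the semi-access case is exactly the intermediate one the abstract admits, and the last line is the classical share-size bound for perfect schemes.

```latex
\begin{align*}
H(S \mid S_A) &= 0        && \text{($A$ qualified)}\\
H(S \mid S_A) &= H(S)     && \text{($A$ forbidden)}\\
0 < H(S \mid S_A) &< H(S) && \text{($A$ semi-access)}\\
H(S_i) &\ge H(S)          && \text{(classical bound for perfect schemes)}
\end{align*}
```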
Audio Watermarking Approach Based on Audio Features in Multiwavelet Domain
Peng Hong, Wang Xun, Wang Weixing, Wang Jun, and Hu Deyu
2010, 47(2):  216-222. 
Abstract ( 554 )   HTML ( 2)   PDF (827KB) ( 637 )
Based on an analysis of audio features, a new audio watermarking algorithm using the discrete multiwavelet transform is proposed. Combined with the time-frequency masking property of the human auditory system, the proposed algorithm analyzes the zero-crossing ratio and the short-time energy of each audio frame to choose the frames in which to embed the watermark. Using sub-sampling and the advantages of multiwavelets in signal processing, each frame chosen for embedding is sub-sampled into two sub-audio frames, and these sub-frames are each decomposed into the multiwavelet domain. According to the energies of the two sub-frames in the multiwavelet domain, the capacity of the watermark that can be embedded in the audio signal is estimated, and watermark embedding is then accomplished based on the energy relationship between the two sub-frames. The retrieval of the embedded watermark can be treated as a two-class classification problem and solved by support vector machines. Experimental results show that the proposed algorithm can find suitable audio frames for embedding according to the features of the audio signal and can adjust the embedding strength dynamically, improving the robustness of the watermarking system without degrading auditory quality.
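The embedding rule sketched in the abstract compares the energies of the two sub-sampled halves of a frame. The toy code below embeds one bit per frame by enforcing an energy ordering between the even- and odd-sample sub-frames; it works on time-domain samples rather than in the multiwavelet domain, the margin parameter is an assumption, and a plain energy comparison stands in for the SVM-based extraction used in the paper.

```python
import numpy as np

def embed_bit(frame, bit, margin=1.1):
    """Embed one watermark bit by forcing an energy relationship between the
    two sub-sampled halves of the frame (illustrative time-domain version)."""
    f = frame.astype(float).copy()
    even, odd = f[0::2], f[1::2]
    e0, e1 = np.sum(even ** 2) + 1e-12, np.sum(odd ** 2) + 1e-12
    if bit == 1 and e0 < margin * e1:         # want E_even >= margin * E_odd
        even *= np.sqrt(margin * e1 / e0)
    elif bit == 0 and e1 < margin * e0:       # want E_odd >= margin * E_even
        odd *= np.sqrt(margin * e0 / e1)
    return f

def extract_bit(frame):
    even, odd = frame[0::2], frame[1::2]
    return 1 if np.sum(even ** 2) >= np.sum(odd ** 2) else 0

audio_frame = np.random.randn(1024)
marked = embed_bit(audio_frame, 1)
print(extract_bit(marked))    # -> 1
```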
A Digital Rights Management Mechanism and Implementation Based on Logic Framework
Zhong Yong, Zhang Hong, Liu Fengyu, and Qin Xiaolin
2010, 47(2):  223-230. 
Abstract ( 461 )   HTML ( 0)   PDF (942KB) ( 472 )
DRM technology is becoming one of the critical techniques for protecting digital rights on the Internet. Rights expression languages (RELs) in DRM systems are languages devised to construct licenses and describe the usage rights of digital content. Most current rights expression languages are based on XML; they lack a formal semantics, and when usage conditions become complex their syntax becomes complicated and obscure. To address these problems, the rights expression language LucScript is presented, a language based on a logical framework that offers advantages in expressiveness, flexibility and semantic integrity over XML-based languages. The logical semantics, syntax and triggering mechanism of the language framework are analyzed and explained, and the implementation mechanism of the language is discussed. Finally, the application of the language to usage control is illustrated by example. The language is based on the integral semantics of Active-U-Datalog, which has a unique stable model; this gives the language more expressiveness and flexibility than other logic-based rights expression languages. A comparative analysis between LucScript and LicenseScript, currently the most expressive such language, is also given at the end of the paper. LucScript can effectively improve the usage flexibility and real-time control capability for digital content in DRM systems.
Efficient Certificateless Signature and Group Signature Schemes
Chen Hu, Zhu Changjie, and Song Rushun
2010, 47(2):  231-237. 
Abstract ( 744 )   HTML ( 3)   PDF (768KB) ( 1677 )
Group signatures are studied in the newly proposed certificateless public key setting, and a security model for certificateless group signature schemes is put forward. Using bilinear pairings, a certificateless signature scheme is presented and formally proven secure in the random oracle model. At the same time, a group signature scheme is constructed based on this certificateless signature scheme. The former is computationally efficient in that it needs only one pairing operation in its signing and verification phases, and the latter needs only two pairing operations, so both have a performance advantage. The security of these schemes rests on the hardness of the computational Diffie-Hellman problem. The group signature scheme not only satisfies all the security requirements of a group signature, but is also dynamic, allowing users to join or leave the group freely without updating the group public key or the other group members' private signing keys. Besides, the length of the signature is independent of the number of group members. Due to its security, high efficiency and freedom from certificate management, the group signature scheme may have practical applications in electronic commerce, electronic voting, etc.
Alternating Combination Trilateration Based on Circle-Selection
Cai Shaobin, Li Xi, Tian Ying, Gao Zhenguo, and Yao Nianmin
2010, 47(2):  238-244. 
Abstract ( 548 )   HTML ( 2)   PDF (945KB) ( 509 )
A WSN (wireless sensor network) is formed by a large number of cheap sensor nodes that communicate over an ad hoc wireless network, and is deployed in a given area to collect information about sensed objects. In most WSN applications, the acquired information is useful only when the locations of sensors and objects are known, so localization is one of the most important technologies of WSNs. However, the characteristics of WSNs, such as limited wireless bandwidth and the limited power and processing ability of sensor nodes, make localization in sensor networks challenging. In ACT (alternating combination trilateration), in order to achieve higher localization precision, all weighted triangles formed by any three beacons are used to localize a sensor node, but its calculation overhead is very high. Therefore IACT (improved alternating combination trilateration) was proposed to improve ACT with a new high-weight triangle selection method, achieving lower calculation overhead without affecting localization precision. Studies show, however, that the calculation overhead can be reduced further. Therefore a new high-weight triangle selection method based on a circle is proposed, and IACT is further improved into ACTBCS (alternating combination trilateration based on circle-selection) using this circle-based selection method. Compared with IACT, ACTBCS has lower calculation overhead and similar localization precision.
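For context, a single trilateration step over one triangle of beacons can be written as a small least-squares problem; the triangle-weight function shown (triangle area, so nearly collinear beacon triples score low) is only an illustrative stand-in for the weighting and circle-based selection used by ACT, IACT and ACTBCS.

```python
import numpy as np

def trilaterate(beacons, dists):
    """Least-squares position estimate from >= 3 beacons and range measurements."""
    b = np.asarray(beacons, float)
    d = np.asarray(dists, float)
    A = 2.0 * (b[1:] - b[0])
    rhs = (d[0] ** 2 - d[1:] ** 2
           + np.sum(b[1:] ** 2, axis=1) - np.sum(b[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

def triangle_weight(beacons):
    """Illustrative 'goodness' of a beacon triangle: its area (nearly collinear
    triangles get weight close to 0 and should be avoided)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - b) for b in beacons]
print(triangle_weight(beacons), trilaterate(beacons, dists))   # ~50.0, ~[3, 4]
```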
A TDMA Scheduling Algorithm to Balance Energy Consumption in WSNs
Liu Anfeng, Xu Juan, and Chen Zhigang
2010, 47(2):  245-254. 
Abstract ( 593 )   HTML ( 0)   PDF (1565KB) ( 507 )
Sensor nodes in wireless sensor networks are constrained by battery power. The nodes sense a specific phenomenon in the environment and route the sensed data to a relatively small number of central data processing nodes, called sinks, so energy consumption is inherently imbalanced. In this paper the authors are interested not only in determining a TDMA schedule that minimizes the total time required to complete the convergecast, but also in a TDMA scheduling algorithm that balances the load so as to prolong network lifetime. They consider a simple version of the problem in which every node generates exactly one packet and each node has multiple transmission power levels that can vary with its transmission distance. The energy consumption formulas are analyzed theoretically for the general k-hop network under typical network parameters, and it is proved that there exists a best k that maximizes the network lifetime. A TDMA scheduling algorithm is proposed for the general k-hop network, and an upper bound on the number of time slots it requires is given. Based on this analysis, the scheduling strategy for the entire general k-hop network can be obtained. Theoretical analysis and numerical simulation results confirm the accuracy and effectiveness of the algorithms.
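To make the convergecast scheduling problem concrete, here is a greedy, collision-free TDMA scheduler on a routing tree in which every node starts with exactly one packet and no node may transmit and receive in the same slot; this is a generic baseline under those assumptions, not the load-balancing algorithm analyzed in the paper.

```python
def depth(v, parent):
    d = 0
    while parent[v] is not None:
        v, d = parent[v], d + 1
    return d

def convergecast_schedule(parent):
    """Greedy TDMA schedule: parent[v] is the next hop toward the sink
    (parent[sink] is None); every non-sink node starts with one packet."""
    queue = {v: 1 for v in parent if parent[v] is not None}   # packets waiting
    total, delivered, slots = len(queue), 0, []
    while delivered < total:
        busy, slot = set(), []
        # deeper nodes first so packets keep flowing toward the sink
        for v in sorted(queue, key=lambda v: -depth(v, parent)):
            p = parent[v]
            if queue[v] > 0 and v not in busy and p not in busy:
                slot.append((v, p))          # v transmits to p in this slot
                busy.update({v, p})          # neither may be reused this slot
        for v, p in slot:
            queue[v] -= 1
            if parent[p] is not None:
                queue[p] = queue.get(p, 0) + 1
            else:
                delivered += 1               # packet reached the sink
        slots.append(slot)
    return slots

# a small 2-hop example: sink 0, relays 1-2, leaves 3-5
tree = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
print(len(convergecast_schedule(tree)), "slots")
```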
Searching Semantic Web Documents Based on RDF Sentences
Wu Honghan, Qu Yuzhong, and Li Huiying
2010, 47(2):  255-263. 
Abstract ( 623 )   HTML ( 2)   PDF (1264KB) ( 476 )
Keyword-based semantic Web document search is one of the most efficient approaches to finding semantic Web data. Most existing approaches are based on traditional IR technologies, in which documents are modeled as bags of words. The authors identify the difficulties of these technologies in processing RDF documents, namely preserving data structure, processing linked data and generating snippets. An approach is proposed that models a semantic Web document from its abstract syntax, the RDF graph: a document is modeled as a set of RDF sentences, and the RDF-sentence-based structure is preserved during document analysis and indexing. Authoritative descriptions of named resources are also introduced, which makes linked data across document boundaries searchable. Furthermore, to help users quickly determine whether a result is relevant, the traditional inverted index structure is extended to enable more understandable snippet extraction from matched documents. Experiments on real-world data show that this approach can significantly improve the precision and recall of semantic Web document search: the precision of the top result is improved by up to 19%, with a steady improvement of nearly 10% observed, and over 50 random queries the recall increases by up to 60% on average. Remarkable improvements in system usability are also obtained.
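In this line of work an RDF sentence is commonly taken to be a maximal group of triples connected through shared blank nodes, with each ground triple forming a sentence on its own; the sketch below groups plain (s, p, o) tuples into such sentences with a small union-find, purely to illustrate the document model, not the authors' indexing code.

```python
def rdf_sentences(triples, is_blank=lambda t: isinstance(t, str) and t.startswith("_:")):
    """Group triples into RDF sentences: triples sharing a blank node end up in
    the same sentence; ground triples become singleton sentences."""
    parent = list(range(len(triples)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    by_blank = {}
    for idx, (s, p, o) in enumerate(triples):
        for term in (s, o):
            if is_blank(term):
                if term in by_blank:
                    union(idx, by_blank[term])
                else:
                    by_blank[term] = idx

    sentences = {}
    for idx, t in enumerate(triples):
        sentences.setdefault(find(idx), []).append(t)
    return list(sentences.values())

doc = [("ex:Tim", "ex:knows", "_:b1"),
       ("_:b1", "ex:name", "Dan"),
       ("ex:Tim", "ex:homepage", "http://example.org/tim")]
print(rdf_sentences(doc))   # two sentences: the _:b1 pair and the ground triple
```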
A Survey of the Research on Similarity Query Technique of Sequence Data
Zhu Yangyong, Dai Dongbo, and Xiong Yun
2010, 47(2):  264-276. 
Abstract ( 646 )   HTML ( 2)   PDF (1484KB) ( 681 )
Sequence data is ubiquitous in many domains such as text, Web access logs and biological databases, and similarity query over sequence data is a very important means of extracting useful information. Recently, with the development of scientific computing and the generation of large-scale sequence data, similarity query on sequence data has become a hot research topic. Important issues related to it include: the similarity metrics used in different application fields and the connections between them; statistical information on distance distributions over random sequence collections and its use in analyzing the performance of query algorithms; and the key techniques for efficiently answering similarity queries over large-scale datasets, together with comparisons of their merits and demerits. In this survey, the classification and characteristics of sequence data are summarized. Several kinds of similarity metrics and statistical information about distances between random sequences are presented, and the relationships among these similarity metrics are analyzed. Then several types of similarity query and the key issues involved are introduced. On this foundation, the paper focuses on the classification and evaluation of key techniques for sequence similarity search. Finally, some challenges in similarity query over sequence data are discussed and future research trends are summarized.
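As one concrete instance of the similarity metrics the survey covers, the classic edit (Levenshtein) distance between two sequences is computed with the usual dynamic program below; DTW, Hamming distance and the other metrics discussed follow the same recurrence style with different local costs.

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of insertions, deletions and
    substitutions turning sequence a into sequence b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # delete x
                           cur[j - 1] + 1,            # insert y
                           prev[j - 1] + (x != y)))   # match / substitute
        prev = cur
    return prev[-1]

print(edit_distance("ACGTTG", "ACTTAG"))   # -> 2
```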
Research on a New Concise Representation of Frequent Itemset and Its Mining Algorithm
Song Wei, Li Jinhong, Xu Zhangyan, and Yang Bingru
2010, 47(2):  277-285. 
Abstract ( 462 )   HTML ( 0)   PDF (985KB) ( 616 )
Frequent itemset mining has become an important data mining task and a focused theme in data mining research. Its bottlenecks are as follows: on the one hand, the number of all frequent itemsets is usually extremely large; on the other hand, there is often severe redundancy in the resulting itemsets. To overcome these problems, several proposals have recently been made to mine a concise representation of the frequent itemsets instead of all of them, the aim being that the resulting subset either satisfies the requirements of the application or can be used to derive all the other frequent itemsets. Maximal itemsets and closed itemsets are the two most typical representative subsets of all frequent itemsets. Based on them, a new concise representation of frequent itemsets, namely the meta itemset, is proposed. It is proved that both the maximal itemset and the closed itemset are special cases of the meta itemset, and that the cardinality of the meta itemset representation lies between those of the maximal and closed itemsets. Properties of the meta itemset are then discussed. Finally, by introducing pruning strategies into DCI-closed-index, a closed itemset mining algorithm, an algorithm for mining meta itemsets is proposed. Experimental results show that the proposed algorithm is effective and efficient.
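The two baseline representations the meta itemset generalizes can be stated directly: a frequent itemset is closed if no proper superset has the same support, and maximal if no proper superset is frequent at all. The brute-force sketch below (exponential, for tiny examples only) makes the containment maximal ⊆ closed ⊆ frequent easy to verify; the meta itemset itself is the paper's new notion and is not reproduced here.

```python
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """Support of every frequent itemset, by brute-force candidate enumeration."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            s = sum(1 for t in transactions if set(cand) <= t)
            if s >= minsup:
                support[frozenset(cand)] = s
    return support

def closed_and_maximal(support):
    closed, maximal = set(), set()
    for x, sx in support.items():
        supersets = [y for y in support if x < y]      # frequent proper supersets
        if all(support[y] < sx for y in supersets):
            closed.add(x)                              # no superset keeps the support
        if not supersets:
            maximal.add(x)                             # no frequent superset at all
    return closed, maximal

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b"}, {"b", "c"}]
sup = frequent_itemsets(db, minsup=2)
closed, maximal = closed_and_maximal(sup)
print(len(sup), len(closed), len(maximal))   # frequent >= closed >= maximal: 5 3 2
```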
SVM Based fMRI Data Classification: An Approach to Decode Mental State
Xiang Jie and Chen Junjie
2010, 47(2):  286-291. 
Abstract ( 755 )   HTML ( 2)   PDF (1406KB) ( 617 )
Recently, a growing number of studies have shown that machine learning technologies can be used to decode mental state from functional magnetic resonance imaging (fMRI) data. Two feature extraction methods are compared in this paper: one is based on the cumulative change of the blood oxygen level dependent (BOLD) signal in activated brain areas, and the other is based on the values at each time point of the BOLD signal time course of each trial. The authors collected fMRI data while participants were solving simplified 4×4 Sudoku problems, and predicted the complexity (easy vs. complex) or the number of steps (1-step vs. 2-step) of each problem from the fMRI data using these two feature extraction methods. Both methods produce quite high accuracy, and the latter approach performs better than the former. The results indicate that SVMs can be used to predict high-level cognitive states from fMRI data, and that feature extraction based on the serial signal change of the BOLD effect predicts cognitive state better because it uses the abundant and typical information kept in the BOLD time course. By ranking the accuracy of every single-voxel classifier, it is interesting that the voxels with higher accuracy are anatomically located in PPC, PFC, ACC and other brain regions closely related to problem solving, which is consistent with previous studies in cognitive neuroscience. The methods might also shed light on brain-computer interfaces (BCI).
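A minimal version of the second feature scheme, using the whole BOLD time course of each trial as the feature vector for an SVM with cross-validation, might look like the following; the data here is a synthetic stand-in, not fMRI, and the voxel and time-point counts are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels, n_timepoints = 80, 30, 12

# Feature scheme 2 from the abstract: concatenate the BOLD value of every
# selected voxel at every time point of the trial (random stand-in data here).
X = rng.normal(size=(n_trials, n_voxels * n_timepoints))
y = rng.integers(0, 2, size=n_trials)          # easy vs. complex condition
X[y == 1] += 0.3                               # inject a weak class signal

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("mean accuracy: %.2f" % scores.mean())
```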
Nave Gene Expression Programming Based on Genetic Neutrality
Zhu Mingfang, Tang Changjie, Dai Shucheng, Chen Yu, Qiao Shaojie, and Xiang Yong
2010, 47(2):  292-299. 
Abstract ( 672 )   HTML ( 0)   PDF (904KB) ( 550 )
The neutral theory of molecular evolution suggests that the accumulation of neutral mutations in the genome plays a vital role in evolution. The genetic representation of gene expression programming (GEP), an artificial genotype-phenotype system, permits non-coding regions in the genome where neutral mutations can accumulate. The authors introduce a concept named naïve gene expression programming (NGEP) and analyze its effect in terms of neutral regions. NGEP uses a complete-tree decoding method that yields more neutral regions than GEP. In order to explore the role of genetic neutrality in NGEP, this paper makes the following contributions: 1) it refines the concept of naïve gene expression programming, whose decoding method is based on the complete tree; 2) it analyzes the characteristics of neutral regions in GEP and NGEP, and points out that NGEP has more free neutral regions; 3) it studies and compares the specific role of genetic neutrality in both GEP and NGEP by controlling and adjusting the length and number of genes and non-coding regions, and tests the efficiency of NGEP; and 4) extensive experiments and comparisons show that NGEP is more efficient than traditional GEP under similar gene redundancy; in particular, the success rate of NGEP does not change drastically as genetic neutrality grows.
Context-Sensitive Query Expansion
Li Weijiang, Zhao Tiejun, and Wang Xiangang
2010, 47(2):  300-304. 
Abstract ( 763 )   HTML ( 1)   PDF (605KB) ( 520 )
The effectiveness of information retrieval (IR) systems is influenced by the degree of term overlap between user queries and relevant documents. Query-document term mismatch, whether partial or total, is a fact that IR systems must deal with, and query expansion (QE) is one method for doing so. Classical query expansion techniques such as local context analysis use term co-occurrence statistics to incorporate additional contextual terms for enhancing passage retrieval. However, relevant contextual terms do not always co-occur frequently with the query terms, and vice versa, so such methods often bring in noise, which reduces precision. Based on an analysis of the query production process, the authors propose a new query expansion method that uses both context and global information. Expansion terms are selected according to their relation to the whole query, and positional information between terms is also considered. Experimental results on TREC data collections show that the proposed method outperforms a language model without expansion by 5% to 19%. Compared with pseudo-relevance feedback, a popular query expansion approach, the method achieves competitive average precision.
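The selection criterion emphasized above, scoring a candidate expansion term against the whole query rather than against individual query terms, can be illustrated with plain windowed co-occurrence counts; the min-over-query-terms score below and the tiny corpus and stopword list are simplifications invented for this example, not the paper's model.

```python
from collections import Counter

def cooccurrence(docs, window=5):
    """Count how often two terms appear within `window` words of each other."""
    co = Counter()
    for doc in docs:
        words = doc.lower().split()
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:
                co[frozenset((w, v))] += 1
    return co

def expand(query, docs, k=3, window=5):
    """Rank candidate terms by their weakest link to the query, so a term has
    to relate to *every* query word (relation to the whole query)."""
    stop = {"the", "a", "of", "for"}
    cooc = cooccurrence(docs, window)
    vocab = sorted({w for d in docs for w in d.lower().split()} - set(query) - stop)
    score = lambda term: min(cooc[frozenset((term, q))] for q in query)
    return sorted(vocab, key=score, reverse=True)[:k]

docs = ["the hubble space telescope orbits the earth",
        "the telescope captured images of a distant galaxy",
        "a space probe left earth orbit for mars"]
print(expand(["space", "telescope"], docs))
```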
A Stroke-Segment-Mesh (SSM) Glyph Description Method of Chinese Characters
Lin Min and Song Rou
2010, 47(2):  318-327. 
Abstract ( 554 )   HTML ( 5)   PDF (4520KB) ( 811 )
In the field of Chinese character information processing, there are drawbacks in the current methods for describing computerized Chinese character glyphs, especially in the selection and computation of features for comparing glyphs; these cause problems with the input of wrongly written characters, variant character forms in ancient literature, combined characters, and automatic glyph comparison. This research offers an application-oriented description method for Chinese character glyphs with suitable granularity, freedom from ambiguity and standardized basic units, which can describe any possible glyph skeleton (including wrongly written characters, variant character forms and combined characters). It also provides classifications of simple and compound strokes, together with algorithms for automatic stroke extraction and structural relation computation based on the stroke-segment-mesh of glyphs. Experiments indicate that the method supports the descriptive input of any glyph and the comparison of whole or partial glyphs. It can therefore support Chinese character input by descriptive drawing, the description of wrongly written characters and the quantitative analysis of misused characters in the teaching and research of Chinese characters, the description and analysis of variant character forms in ancient literature, and the retrieval of rare character glyphs in electronic books.
Ensemble Classification of Microarray Data Based on Correlation Analysis
Yu Hualong, Gu Guochang, Liu Haibo, Shen Jing, and Zhao Jing
2010, 47(2):  328-335. 
Abstract ( 398 )   HTML ( 0)   PDF (940KB) ( 465 )
Tumor diagnosis based on microarray data is expected to develop into a fast and effective molecular-level diagnosis method applied in the clinic in the near future. However, it is a challenging task for traditional classification approaches because of the high dimensionality and small sample size of microarray data, so ensemble classification algorithms with better performance have attracted increasing attention. A novel ensemble classification algorithm for microarray data based on correlation analysis is proposed in this paper to address the low classification accuracy and excessive computation of current ensemble classification algorithms. The proposed algorithm extracts training subsets that differ from each other as much as possible by computing their correlation, and can thereby effectively improve diversity among the base classifiers. The support vector machine is selected as the base classifier, and experimental results on the leukemia and colon tumor datasets show the effectiveness and feasibility of the proposed algorithm. Meanwhile, the performance of the proposed algorithm under different parameters is tested, and the results are helpful for selecting appropriate parameters.
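The key step, building base classifiers on training subsets chosen to be as different from each other as possible, can be sketched as follows; measuring subset "correlation" by sample overlap is an assumption made for this example, the greedy selection is illustrative, and a linear SVM serves as the base classifier.

```python
import numpy as np
from sklearn.svm import SVC

def pick_diverse_subsets(n_samples, n_subsets, subset_size, n_candidates=50, seed=0):
    """Greedily keep candidate subsets with the smallest sample overlap
    (a simple stand-in for 'correlation') with the subsets already chosen."""
    rng = np.random.default_rng(seed)
    candidates = [rng.choice(n_samples, subset_size, replace=False)
                  for _ in range(n_candidates)]
    chosen = [candidates.pop(0)]
    while len(chosen) < n_subsets:
        overlaps = [max(len(np.intersect1d(c, s)) for s in chosen)
                    for c in candidates]
        chosen.append(candidates.pop(int(np.argmin(overlaps))))
    return chosen

def ensemble_predict(X, y, X_test, n_subsets=5):
    votes = np.zeros(len(X_test))
    for idx in pick_diverse_subsets(len(X), n_subsets, subset_size=len(X) // 2):
        clf = SVC(kernel="linear").fit(X[idx], y[idx])   # base classifier: SVM
        votes += clf.predict(X_test)
    return (votes > n_subsets / 2).astype(int)           # majority vote, labels in {0, 1}

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))          # toy "microarray": 60 samples, 200 genes
y = (rng.random(60) > 0.5).astype(int)
X[:, :5] += y[:, None]                  # a few informative genes
print(ensemble_predict(X, y, X[:10]), y[:10])
```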
Automatic Labeling of Chinese Functional Chunks Based on Conditional Random Fields Model
Li Guochen, Wang Ruibo, and Li Jihong
2010, 47(2):  336-343. 
Abstract ( 571 )   HTML ( 3)   PDF (1124KB) ( 582 )
In the Chinese chunking scheme, words are first combined into base chunks, the base chunks are then combined into functional chunks, and the result is finally formalized into a hierarchical syntactic structure. In this paper, the automatic labeling of Chinese functional chunks is modeled as a sequential labeling task, with words and base chunks taken as the labeling units of the functional chunk labeling models. For each labeling model a series of new base-chunk-level features is constructed, and the conditional random fields model is employed. The data set used in the experiments is selected from the Tsinghua Chinese Treebank (TCT) corpus and split into training and test sets in the proportion 8:2. The experimental results show that, compared with a model using only word-level features, adding base-chunk features significantly improves the performance of functional chunk labeling. The proposed functional chunk labeling method based on human-corrected base chunks achieves a precision of 88.47%, a recall of 89.93% and an F-measure of 89.19%; with automatically parsed base chunks, it achieves a precision of 84.27%, a recall of 85.57% and an F-measure of 84.92%.
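A minimal illustration of the feature design: each word-level labeling unit gets word and POS features plus features describing the base chunk it belongs to, producing the per-token feature dictionaries that linear-chain CRF toolkits typically consume. The feature names and the toy sentence are invented for this example; they are not the paper's feature templates.

```python
def token_features(sent, i):
    """sent: list of dicts with 'word', 'pos' and 'bchunk' (base-chunk tag, e.g. B-np/I-np)."""
    t = sent[i]
    feats = {
        "bias": 1.0,
        "word": t["word"],
        "pos": t["pos"],
        "bchunk": t["bchunk"],                         # base-chunk-level feature
        "pos+bchunk": t["pos"] + "|" + t["bchunk"],    # combined feature
    }
    if i > 0:
        feats["prev_pos"] = sent[i - 1]["pos"]
        feats["prev_bchunk"] = sent[i - 1]["bchunk"]
    else:
        feats["BOS"] = 1.0
    if i < len(sent) - 1:
        feats["next_pos"] = sent[i + 1]["pos"]
    return feats

sent = [{"word": "他", "pos": "r", "bchunk": "B-np"},
        {"word": "读", "pos": "v", "bchunk": "B-vp"},
        {"word": "一", "pos": "m", "bchunk": "B-mp"},
        {"word": "本", "pos": "q", "bchunk": "I-mp"},
        {"word": "书", "pos": "n", "bchunk": "B-np"}]
X = [token_features(sent, i) for i in range(len(sent))]
print(X[1])
```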
Free Form Deformation and Its Application
Xu Gang, Wang Guozhao, and Chen Xiaodiao
2010, 47(2):  344-352. 
Abstract ( 574 )   HTML ( 4)   PDF (952KB) ( 819 )
Object deformation has long been a hot topic in computer graphics and computer-aided design. Free form deformation, one of the important object deformation technologies, has been successfully incorporated into popular modeling and animation software. This paper presents a comprehensive survey of research on free form deformation over the past twenty years. First, the authors classify the existing technologies into accurate and non-accurate free form deformation; then, according to the deformation tools used in the different methods, the non-accurate methods are classified into four classes: volume-based, surface-based, curve-based and point-based methods. The existing non-accurate free form deformation methods are also compared, with their advantages and disadvantages listed in terms of convenience of deformation tool creation, efficiency of parameterization, user interactivity, versatility, and application in popular modeling software, and the intrinsic relations between them are presented. Finally, their applications in several practical fields and future research directions are introduced. This survey is valuable not only for researchers in this field but also for practical users of 3D modeling and 3D animation software.
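The classical accurate formulation underlying this family of methods is the Sederberg-Parry free form deformation with a Bézier control volume: a point is first expressed in the local lattice coordinates (s, t, u), and its deformed position is the Bernstein-weighted combination of the displaced control points P_{ijk}:

```latex
\mathbf{X}(s,t,u) \;=\; \sum_{i=0}^{l}\sum_{j=0}^{m}\sum_{k=0}^{n}
  B_i^{l}(s)\,B_j^{m}(t)\,B_k^{n}(u)\,\mathbf{P}_{ijk},
\qquad
B_i^{l}(s) = \binom{l}{i}\, s^{i}(1-s)^{l-i},
\qquad 0 \le s,t,u \le 1 .
```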
Multicast Algorithm Based on the Adaptive Dimensional Bubble Routing on 2-D Torus Network
Xiao Canwen, Zhang Minxuan, and Guo Feng
2010, 47(2):  353-360. 
Abstract ( 581 )   HTML ( 0)   PDF (1905KB) ( 498 )
A novel multicast algorithm called 2-D torus dimensional bubble multicast routing (2DTDBMR) is presented in this paper. Following the idea that unicast and multicast communication operations should be supported by the same underlying routing strategy, 2DTDBMR is built on the torus adaptive dimensional bubble routing (TADBR) algorithm and realized on the 2-D torus network. The algorithm supports multi-destination routing of multicast packets and packet replication by the router. Furthermore, 2DTDBMR is deadlock-free: by analyzing all possible routing situations in the 2-D torus network, it is shown that all kinds of packets can reach their destinations under 2DTDBMR, and detailed proofs of these conclusions are provided in the paper. Lastly, a 2-D torus simulator called RingNetSim is adapted, in which 2DTDBMR is implemented together with traditional multicast routing algorithms such as the BRCP-HL, Hamilton and Umesh algorithms. The performance of these multicast routing algorithms is evaluated on RingNetSim under different buffer sizes, communication models and arbitration algorithms, and the results show that 2DTDBMR performs favorably.
Fast Memory Size Estimation of Application Programs for System-on-Chip Signal-to-Memory Mapping
Zhao Peng and Li Sikun
2010, 47(2):  361-369. 
Abstract ( 602 )   HTML ( 0)   PDF (1224KB) ( 421 )
System-on-chip (SoC) designs are widely applied in multimedia information processing. The application programs of multimedia information processing abound in nested loops and multi-dimensional signals, which greatly affect the efficiency of data transfer and storage. Therefore SoC signal-to-memory mapping, which pays attention to the memory size of application programs and optimizes the efficiency of the memory system, tends to yield better SoC performance and lower power consumption. Concentrating on the nested loops and multi-dimensional signals of multimedia processing programs, a fast memory size estimation approach is presented for SoC signal-to-memory mapping. Based on the polyhedral model and the linear bounded lattice, orthogonal linear bounded lattices are introduced to partition the data domain of multi-dimensional signals, and the memory requirement is then computed according to the data dependencies among the orthogonal linear bounded lattices. Orthogonal linear bounded lattices are the minimum processing unit in dependency analysis and memory size computation, which greatly reduces the analysis time while maintaining estimation accuracy. The memory requirement is taken as heuristic information and as a mapping criterion by the SoC signal-to-memory mapping algorithm, which helps to explore the signal-to-memory mapping space for efficient data transfer and storage.
A Fault-Tolerant Scheduling Method Based on Predictable Deadline Miss Ratio in High Utilization
Wu Wei, Ni Shaojie, and Wang Feixue
2010, 47(2):  370-376. 
Abstract ( 380 )   HTML ( 0)   PDF (1048KB) ( 460 )
Modern real-time systems, such as navigation and communication systems, emphasize high reliability and validity, which requires 100% error detection and fault recovery. Meanwhile, these systems also face the requirement of sophisticated real-time digital signal processing and information interchange, so real-time processing runs at high utilization, leaving little timing resource for error detection and fault recovery; this requires systems to have predictable performance. Traditional time redundancy methods cannot cope with high utilization, since they must guarantee fault recovery while no deadline is missed, which severely restricts their use in high-utilization systems; research shows that traditional time-redundancy fault tolerance in high-utilization real-time systems may result in disaster. A new fault-tolerant method is presented to deal with this problem. The main contribution of this paper is a fault-tolerant method with predictable DMR (deadline miss ratio) under high utilization: the number of missed deadlines is not greater than the number of error occurrences, which eliminates the domino effect of consecutive deadline misses. A further improved approach is presented in which the DMR is effectively reduced by adopting time redundancy techniques based on off-line checkpointing analysis. Simulation experiments show that the proposed method achieves a lower, predictable DMR than well-known existing algorithms.