ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 October 2008, Volume 45 Issue 10
Paper
Confusion Detection Based on Petri-Net
Zhao Mingfeng, Song Wen, and Yang Yixian
2008, 45(10):  1631-1637. 
Both concurrence and conflict are very important concepts in Petri nets. The mixture of concurrence and conflict in a Petri net system is called confusion. Confusion is not a desirable property of a model, because it is difficult to obtain a “correct” implementation when confusion emerges in a Petri net system, and for systems exhibiting confusion it is also difficult to analyze certain properties. Net theory suggests that it is not the combination of conflict and concurrence itself that causes difficulties in a Petri net system; rather, it is those combinations of conflict and concurrence resulting in confusion that cause trouble. It is therefore necessary to detect confusion and to study the sub-net structures that may give rise to it. Based on the definition of confusion in elementary Petri net systems, three kinds of basic structural confusion are defined, which constitute a minimum entirety set of structural confusion. It is proved that every structural confusion can be reduced to one of the basic structural confusions, and an algorithm for detecting structural confusion and a method for detecting the three kinds of confusion are presented. The paper also illustrates the difference and the relationship between structural confusion and confusion in WF-nets, and shows with examples that the soundness of a WF-net is affected by its structural confusion.
Counterexample Generation for Probabilistic Timed Automata Model Checking
Zhang Junhua, Huang Zhiqiu, and Cao Zining
2008, 45(10):  1638-1645. 
Counterexample generation is a typical topic in model checking. Model checking of probabilistic systems has been studied extensively in recent years, but counterexample generation for probabilistic model checking has only recently begun to draw attention, and current work focuses mainly on Markov chains. Probabilistic timed automata (PTA) extend Markov chains with non-determinism and system clocks, and have been used widely in network protocol modeling and verification. The focus of this paper is counterexample generation when model checking PTA. First, the k most probable paths whose probability sum just exceeds λ are searched for; in this situation the PTA can be regarded as a discrete-time Markov chain (DTMC). The sub-graph of the PTA constructed from these paths and the initial PTA is a counterexample that can be obtained quickly with a small number of testimonies. When the maximal probability is calculated in a PTA, the contribution to the probability comes not only from the contained paths, but also from the intersections of symbolic states that originate from the existence of system clocks. So refinement can be performed as a further step: by adding the above paths one by one in decreasing order of probability, and calculating the precise maximal probability on the sub-graph of the PTA constructed from the added paths and the initial PTA, a counterexample containing fewer testimonies can be obtained. The refinement process is accomplished by an executable and converging algorithm with high efficiency.
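For the first step, once the PTA is treated as a DTMC, the k most probable paths to the target states can be enumerated by a best-first search over path probabilities, since extending a path never increases its probability. The following Python sketch illustrates only this generic enumeration; the data layout, state names and function name are assumptions, not details from the paper.

import heapq

def k_most_probable_paths(transitions, init, targets, k):
    # transitions: dict mapping a state to a list of (successor, probability) pairs.
    # Best-first enumeration: extending a path multiplies its probability by
    # some p <= 1, so paths reaching a target are popped in non-increasing
    # order of probability.
    heap = [(-1.0, [init])]            # (negated probability, path)
    found = []
    while heap and len(found) < k:
        neg_p, path = heapq.heappop(heap)
        state = path[-1]
        if state in targets:
            found.append((-neg_p, path))
            continue
        for succ, p in transitions.get(state, []):
            heapq.heappush(heap, (neg_p * p, path + [succ]))
    return found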
A Scheme of Test Data Compression Based on Sharing-Run-Length Code
Zhan Wenfa, Liang Huaguo, Shi Feng, Huang Zhengfeng, and Ouyang Yiming
2008, 45(10):  1646-1653. 
One of the major challenges in testing integrated circuits is dealing with the large test data size. To reduce the volume of test data, several test data compression schemes have been presented, but none of these schemes explores the relationship between consecutive runs. So a new test data compression/decompression scheme, namely the sharing-run-length code scheme (SRLCS), is presented, which is based on run-length coding. It further explores the relationship between consecutive runs, building on the traditional run-length coding characteristic of using a shorter codeword to represent a longer run. Thus, in this scheme, when two consecutive runs have the same length, only one bit is needed to represent the whole second run. ATPG tools generate test patterns with many don't-care bits, which account for 95% to 99% of the bits in the test data of large industrial circuits, so filling the don't-care bits appropriately can increase the probability that consecutive runs have the same length. A strategy for filling don't-care bits is also proposed for this scheme. Compared with other schemes, this scheme offers a high compression ratio and is easy to control and implement. Theoretical analysis and experimental results for the Mintest test sets of the ISCAS-89 benchmark circuits show that the proposed scheme is a very efficient compression method.
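The core sharing idea can be illustrated as follows: after splitting the test vector into maximal runs, a run whose length equals that of the immediately preceding run is replaced by a single one-bit flag. This Python sketch is only a toy illustration of that idea, assuming already-filled test data (no don't-care bits); the actual SRLCS codeword tables are defined in the paper.

def runs(bits):
    # Split a 0/1 string into maximal runs, returned as (symbol, length) pairs.
    out, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        out.append((bits[i], j - i))
        i = j
    return out

def sharing_run_length_encode(bits):
    # A run whose length equals the previous run's length is emitted as the
    # single flag 'S' (one bit) instead of a full run-length codeword.
    code, prev_len = [], None
    for sym, length in runs(bits):
        if length == prev_len:
            code.append('S')
        else:
            code.append((sym, length))
        prev_len = length
    return code

For example, sharing_run_length_encode("0001100") yields [('0', 3), ('1', 2), 'S'], since the last run of zeros has the same length as the preceding run of ones.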
An Avatar Migration Mechanism Based on Cell Buffer
Zhou Zhong, Wang Shaofeng, and Wu Wei
2008, 45(10):  1654-1661. 
Avatar migration, which changes the resident server of an avatar, is an important issue in distributed virtual environments (DVE). Existing methods for multi-server DVE systems mainly conduct avatar migration according to the relationship between an avatar's movement and a server's region buffer, and the region buffers are fixed. However, in the latest multi-server DVE technologies, regions are often adjusted to balance the load, and the affected avatars then have to migrate during the period of region adjustment. Avatar migration with dynamic region buffers therefore needs further study. An avatar migration mechanism based on cell buffers is proposed in this paper, where each region buffer is composed of a group of cells. Both cells and avatars are assigned statuses. Avatar migration is induced in two ways: by avatar movement or by region adjustment. The mechanism abstracts both into a process governed by four critical conditions of status migration, and migration operations are devised according to the status of an avatar and the cell it resides in. In this way, many avatars can migrate concurrently and efficiently. An avatar's attribute update requests are handled only by the server in charge of its region, which guarantees status consistency. Experiments show that the mechanism can support concurrent migration efficiently while adding only a low communication cost.
A Clustering Control Algorithm of Wireless Sensor Networks in Low Probability Event Scenario
Liu Linfeng and Jin Shan
2008, 45(10):  1662-1668. 
In order to prolong network lifetime, the primary objective in operating a wireless sensor network is to consume the battery energy efficiently. The network topology, which is the important foundation of upper-layer protocols, serves as the supporting groundwork for this goal. A significant feature of WSNs is application diversity; therefore the topology control techniques for different event scenarios should differ markedly. In search of a topology control scheme suited to the low-probability event scenario, a theoretical model of sensor networks is constructed and analyzed. Because the listening cost is the dominant power cost in the low-probability event scenario, it turns out that there is a close relationship between network lifetime and the k-center problem, which are dual to each other in the theoretical sense. A periodical clustering algorithm (PCA) based on the k-center problem is consequently introduced. PCA is composed of three phases: a neighbor discovery phase, a head decision phase and a node attachment phase. The PCA algorithm reflects the idea of load balancing while minimizing the number of cluster heads. The performance of the PCA algorithm is analyzed through the theoretical model and simulation experiments, which indicate that the PCA algorithm can be deployed quickly, and that a well-constructed topology and an effectively prolonged network lifetime can be obtained.
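For reference, a classic heuristic for the k-center problem that the network lifetime analysis is related to is Gonzalez's farthest-point algorithm, a well-known 2-approximation. The sketch below shows only this standard heuristic on node coordinates; it is not the head-decision phase of PCA itself.

import math

def farthest_point_k_center(points, k):
    # points: list of (x, y) node coordinates; returns k centers such that
    # the maximum node-to-nearest-center distance is within twice the optimum.
    centers = [points[0]]
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: dist[j])   # farthest node
        centers.append(points[i])
        dist = [min(d, math.dist(p, points[i])) for p, d in zip(points, dist)]
    return centers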
Study of Grid Locality and Its Optimization
Ai Lihua and Luo Siwei
2008, 45(10):  1669-1675. 
Data grids are playing an important role in scientific research. Essentially, a data grid is an infrastructure that manages large-scale data sets and provides computational resources across widely distributed communities. Improving the performance of the data grid platforms used to handle and manage large distributed data files has become a research hotspot. Data file replacement under limited local storage and grid job assignment are key elements of an efficient data grid platform. Presented here is the concept of grid locality, which involves job locality and file locality, and the impact of grid locality on data grid performance is analyzed. Further, the performance improvement of the data grid platform is studied from the perspective of grid locality. To enhance grid locality, a composite policy covering file replacement and job assignment is put forward, consisting of jump-DRP (jump-diffusion replacement policy) and JARIP (job assignment referencing insect pheromone). Jump-DRP is based on jump-diffusion features and JARIP on insect pheromone characteristics. Application simulations are carried out with file access patterns of sequential, unitary random walk and Gaussian random walk, while job submission follows a Gaussian distribution. Experimental results show that the integration of jump-DRP and JARIP is robust for both various applications and diverse jobs on the data grid platform.
TXACML—An Access Control Policy Framework Based on Trusted Platform
Nie Xiaowei and Feng Dengguo
2008, 45(10):  1676-1686. 
Trusted platforms based on trusted hardware have been widely used for access control in content distribution environments. To express trusted platform policies in a standard language, XACML is introduced to the trusted platform. But XACML does not provide an attribute evaluation function for the platform, and thus cannot meet the access control requirements of the trusted platform. In this paper, the access control requirement is divided into data distribution and data access on the trusted platform. Based on this analysis, a classification rule for trusted platform attributes and the corresponding attribute evaluation functions are proposed, which can be used for the safe distribution of and access to trusted platform data. XACML combining algorithms always need an administrative center, which does not conform to the policy combining requirement of the trusted platform. A policy combining algorithm based on the trusted degree of the platform is presented to combine the policies of independent parties, which makes policy preference conform to the trusted degree of the platform. Furthermore, based on the evaluation functions and the combining algorithm, XACML is extended into a policy language framework based on the trusted platform, TXACML. The policy description and combining process for an instance are given subsequently, demonstrating the validity of TXACML.
An Efficient and Provably Secure IBE Scheme Without Bilinear Map
Xu Peng, Cui Guohua, and Lei Fengyu
2008, 45(10):  1687-1695. 
According to the MOV reduction theory, an identity-based encryption (IBE) scheme based on a bilinear map loses the high efficiency of elliptic curves. For this reason, a provably secure identity-based encryption scheme without a bilinear map is proposed, which is based on the combined public-key scheme. Furthermore, by applying the pair public-key technique introduced by Katz and Wang, the security proof of the proposed scheme achieves a “tight” reduction in the random oracle model. To show the efficiency of the proposed scheme, the degree of reduction in the security proof and the time and space complexity are analyzed and compared with those of other identity-based encryption schemes without bilinear maps. Finally, to resist the collusion attack on the combined public-key scheme, the number of users in the proposed scheme is limited, which makes the scheme inefficient when the number of users is very large. So, to keep the proposed scheme efficient in systems with a large number of users, the multi-realm model of the Kerberos protocol is drawn upon, and a basic multi-realm model based on the proposed identity-based encryption scheme is proposed.
Steganalysis on Synonym Substitution Steganography
Luo Gang, Sun Xingming, Xiang Lingyun, Liu Yuling, and Gan Can
2008, 45(10):  1696-1703. 
As text steganography has become a new research hotspot in secure communication, text steganalysis, whose purpose is to detect the presence of secret messages in text, has attracted more and more attention. At present, existing text steganography methods can be roughly divided into three categories: those based on invisible characters, those based on format, and those based on natural language processing. The major technique shared by most natural-language-processing-based steganographic methods is synonym substitution, which embeds secret information by substituting synonyms selectively. Since it has the advantages of good imperceptibility and robustness, it is much more difficult for steganalysis researchers to detect the existence of hidden information embedded with this type of approach. Nevertheless, it is found that synonym-substitution-based steganography leads to an obvious increase in the probability of synonym pairs in the carrier text. In light of this observation, a steganalysis algorithm is proposed that uses the number of synonym pairs to decide whether hidden information exists in a text. Experimental results show that the proposed algorithm can efficiently break text steganography relying on synonym substitution, with a false negative rate of approximately 4% and a false positive rate of approximately 9.8%.
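The detection statistic can be sketched as counting, for a given synonym dictionary, how many pairs of words in the text fall into the same synonym set. The Python snippet below is a simplified illustration of that statistic; the synonym dictionary format and the decision threshold are assumptions, not details from the paper.

from collections import Counter

def count_synonym_pairs(words, synset_of):
    # synset_of: dict mapping a word to the id of its synonym set.
    # A synonym set contributing n words to the text yields n*(n-1)/2 pairs.
    counts = Counter(synset_of[w] for w in words if w in synset_of)
    return sum(n * (n - 1) // 2 for n in counts.values())

def looks_stego(words, synset_of, threshold):
    # Flag the text as stego-bearing if the pair count per word exceeds a
    # threshold; in practice the threshold must be learned from clean corpora.
    return count_synonym_pairs(words, synset_of) / max(len(words), 1) > threshold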
Research on Trust Model Based on Game Theory in Mobile Ad-Hoc Networks
Luo Junhai and Fan Mingyu
2008, 45(10):  1704-1710. 
A mobile ad-hoc network (MANET) is a multi-hop temporary communication network of mobile nodes equipped with wireless transmitters and receivers, formed without the aid of any existing network infrastructure. MANETs have fundamental characteristics such as open medium, dynamic topology, distributed cooperation, and constrained capability. In a MANET environment, nodes depend on each other for routing and forwarding packets, so cooperation among nodes is a key issue. However, some nodes in a MANET may behave selfishly and refuse to forward packets in order to save battery and other resources. Since nodes in a MANET communicate with each other without any central authority (which could monitor selfish behavior), a centralized solution to stimulate cooperation is not suitable. In this paper, game theory is used to study node behavior when nodes receive service based on their reputation. Reputation is employed as a mechanism to encourage nodes to share resources and forward packets for other nodes. A trust model based on game theory is proposed to encourage packet forwarding and to discipline selfish behavior in MANETs. Simulation results show that the proposed trust model can successfully identify selfish nodes and build trust among trustworthy nodes, improving the efficiency of MANETs.
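The kind of reputation-conditioned service such models build on can be sketched generically: a node forwards packets only for neighbors whose observed forwarding ratio stays above a threshold. This is a simplified illustration, not the game-theoretic trust model of the paper; the update rule and the threshold value are invented for the example.

class ReputationTable:
    # Toy per-node bookkeeping of neighbors' forwarding behavior.
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.asked = {}       # neighbor -> packets we asked it to forward
        self.served = {}      # neighbor -> packets it actually forwarded

    def observe(self, neighbor, did_forward):
        self.asked[neighbor] = self.asked.get(neighbor, 0) + 1
        if did_forward:
            self.served[neighbor] = self.served.get(neighbor, 0) + 1

    def reputation(self, neighbor):
        asked = self.asked.get(neighbor, 0)
        return 1.0 if asked == 0 else self.served.get(neighbor, 0) / asked

    def will_forward_for(self, neighbor):
        # Selfish neighbors see their reputation drop and stop receiving service.
        return self.reputation(neighbor) >= self.threshold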
A Security Proof Method for Multilevel Security Models
Si Tiange, Tan Zhiyong, and Dai Yiqi
2008, 45(10):  1711-1717. 
Some variants of the BLP model cannot prove whether their security policies meet multilevel security requirements, due to the limitation of the security proof method of the BLP model. It is pointed out that the basic security theorem can only show whether a model matches its own security properties. So it is necessary to find a new way to formally analyze improved BLP models, instead of using the basic security theorem as the original BLP does, especially when the security of their improved rules is too complicated to judge directly, or when the definition of the security properties, which forms the reasoning basis of the basic security theorem, has been modified. To this end, a security proof method is developed that takes advantage of a new noninterference model and provides a way to prove the security of multilevel security models from the point of view of information flow. The new noninterference model redefines the transitive noninterference relationship between system actions, presents a new unwinding theorem, and offers great help in expressing the dynamic policies of multilevel security models. To show the practicability of the new noninterference model and the formal method, the security properties of the ABLP and DBLP models are examined as examples.
A Risk Detection and Fault Analysis Method for the Strategic Internet
Li Qianmu and Liu Fengyu
2008, 45(10):  1718-1723. 
With the development of computer science and communication networks, the scale of the strategic Internet is growing larger as more network applications emerge. Owing to their simple functions, complex operation and low efficiency, old network troubleshooting systems can no longer satisfy the demands of carrier development. Put forward in this paper is the RSNN algorithm, a fault diagnosis method that tightly combines neural networks and rough sets. A reduced information table can be obtained through the rough set approach, which means that the number of evaluation criteria is reduced with no loss of information. This reduced information is then used to develop classification rules and to train a neural network to infer appropriate parameters. The rules developed by the rough-set neural network analysis show the best prediction accuracy when a case matches one of the rules. The method is capable of overcoming several shortcomings of existing diagnosis systems, such as the dilemma between stability and redundancy. Since the essence of fault diagnosis is a kind of mapping, an artificial neural network model is adopted to deal with the mapping relation and categorize network faults. An experimental system implemented with this method shows good diagnostic ability.
Combination of Multiple Features for Object/Background Segmentation Using Graph Cut
Deng Yu and Li Hua
2008, 45(10):  1724-1730. 
Moving object segmentation, which aims at detecting the regions corresponding to moving objects such as vehicles and people in natural scenes, is a fundamental problem in many computer vision applications, and is known to be significant and difficult. Changes in illumination and shadow make segmentation hard to perform quickly and reliably. In this paper, a novel method is proposed to detect moving objects based on the linear combination of multiple features. First, color, gradient and texture features are used together to construct the background model, and the statistics of the background can be updated dynamically during processing. Because gradient and texture features are insensitive to illumination changes, the method improves the accuracy of object segmentation. Second, a graph cut algorithm is employed to compute the object segmentation. Researchers have traditionally used combinations of morphological operations to remove the noise inherent in the result; such techniques can effectively isolate foreground objects, but tend to lose fidelity around the borders of the segmentation, especially for noisy input. The graph cut algorithm yields qualitatively and quantitatively cleaner segmentations, and the results can be temporally stabilized from frame to frame. Experimental results on different real scenes show that the proposed method is effective and practical.
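For context, object/background segmentation by graph cut is commonly posed as minimizing a labeling energy of the standard form below; the notation is the usual one from the graph cut literature and is not taken verbatim from the paper:

E(L) = \sum_{p \in \mathcal{P}} D_p(L_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V_{p,q}(L_p, L_q),

where L_p \in \{\mathrm{fg}, \mathrm{bg}\} is the label of pixel p, the data term D_p measures how poorly a label fits the combined color/gradient/texture background model at p, and the smoothness term V_{p,q} penalizes different labels on neighboring pixels. For two labels, the global minimum of such an energy is found exactly by a single min-cut/max-flow computation.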
A Point Based LargeScale Crowds RealTime 3D Visualization Method
Shu Bo, Mao Tianlu, Xu Wenbin, and Wang Zhaoqi
2008, 45(10):  1731-1738. 
3D visualization of large-scale virtual crowds is a very important and interesting problem in the field of virtual reality. For reasons of efficiency and visual realism, it is very difficult to populate, animate and render large-scale virtual crowds with hundreds of thousands of individually animated virtual characters in real-time applications, especially for scenes with more than ten thousand individuals with different shapes and motions. In this paper, an efficient method to visualize large-scale virtual crowds is presented. First, using a model variation technique, many different models can be derived from a small number of model templates. Second, the crowds can be animated individually by deforming a small number of elements in a motion database. Third, using a purpose-built point-sample rendering algorithm, large-scale crowds can be displayed in real time. This method can be used to visualize different dynamic crowds that require both real-time efficiency and support for large numbers of virtual individuals. Based on this work, an efficient and readily usable 3D visualization system is presented, which provides high visual realism for large crowd visualization at interactive frame rates on a regular PC. The real-time 3D visualization of 60,000 people evacuating from a building has also been realized with this system.
Extracting Bottom-Up Attention Information Based on Local Complexity and Early Visual Features
Tian Mei, Luo Siwei, Huang Yaping, and Zhao Jiali
2008, 45(10):  1739-1746. 
During the last few years, extraordinary progress has been made in understanding the basic principles of how information is processed by the visual cortex. This has brought more and more attention to bottom-up attention at home and abroad, and a large number of models based on it have been built. But difficulties remain in extracting efficient bottom-up attention information that is invariant to image scale, rotation and translation. Inspired by research on visual attention in psychology, a novel algorithm for extracting bottom-up attention information (integration of local complexity and early visual features, LOCEV) is proposed in this paper. Bottom-up attention information is composed of the saliency of a region corresponding to each point in the image, where the scale of the region varies adaptively with the complexity of local features. A new saliency metric is defined as the product of three terms: local complexity, statistical dissimilarity and early visual features. The saliency of the regions corresponding to all points in the image is defined as the bottom-up attention information. Salient regions are salient both in feature space and over scale. The extracted bottom-up attention information is invariant to image scale, rotation and translation, and is shown to be robust to noise. An attention model is developed based on this algorithm. Experimental results with natural images demonstrate its effectiveness.
Effectiveness Analysis of AdaBoost
Fu Zhongliang
2008, 45(10):  1747-1755. 
The weak learning theorem in machine learning shows that if a weak learning algorithm slightly better than random guessing can be found, then a strong learning algorithm of arbitrary precision can be constructed. AdaBoost and Bagging are the most widely used methods based on this theorem, but many problems about them have not been well solved: the error analyses of AdaBoost and Bagging are not unified; the training errors used in AdaBoost are not the real training errors but errors based on sample weights, and whether they can represent the real training errors needs explanation; and the conditions ensuring the effectiveness of the final strong learning algorithm also need to be explained. After adjusting the error rate of Bagging and adopting a weighted voting method, the algorithm flows and error analyses of AdaBoost and Bagging are unified. By direct graph analysis, it is explained how a weak learning algorithm is promoted to a strong learning algorithm. Based on this explanation and a proof of the weak learning theorem via the law of large numbers, the effectiveness of AdaBoost is analyzed. The sample weight adjustment strategy of AdaBoost ensures the uniform distribution of correctly classified samples, so the probabilities of its training errors are equal in probability to those of the real training errors. Rules for training the weak learning algorithm are proposed to ensure the effectiveness of AdaBoost. The effectiveness of AdaBoost is thus explained, and methods for constructing new ensemble learning algorithms are given. Some suggestions about the training set selection strategy of Bagging are also given by reference to AdaBoost.
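For reference, the textbook discrete AdaBoost weight-update rule whose training-error behavior the paper analyzes can be sketched as follows; this is the standard algorithm, not the unified or modified variants developed in the paper, and train_weak is a placeholder for any weak learner.

import math

def adaboost(train_weak, X, y, rounds):
    # Discrete AdaBoost for labels y in {-1, +1}.
    # train_weak(X, y, w) must return a classifier h with h(x) in {-1, +1}.
    n = len(X)
    w = [1.0 / n] * n                       # sample weights (a distribution)
    ensemble = []
    for _ in range(rounds):
        h = train_weak(X, y, w)
        err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        if err == 0 or err >= 0.5:          # perfect, or no better than chance
            break
        alpha = 0.5 * math.log((1 - err) / err)
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]            # renormalize to a distribution
        ensemble.append((alpha, h))
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

The weight update increases the weight of misclassified samples, which is exactly the mechanism the abstract refers to when relating the weighted training error to the real training error.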
An Anytime Coalition Structure Generation Algorithm Based on Cardinality Structure
Su Shexiong, Hu Shanli, Zheng Shengfu, Lin Chaofeng, and Luo Jianbin
2008, 45(10):  1756. 
Coalition formation is a key topic in multi-agent systems. One may prefer a coalition structure that maximizes the sum of the values of the coalitions, but the number of coalition structures is often too large to allow exhaustive search for the optimal one. Furthermore, finding the optimal coalition structure is NP-hard, and even finding a sub-optimal solution requires searching an exponential number of solutions. Can the coalition structure found via a partial search then be guaranteed to be within a bound of the optimum? Sandholm et al. show that it suffices to search the lowest two levels of the coalition structure graph in order to establish a worst-case bound K(n). Dang et al. present an algorithm that is clearly faster than that of Sandholm et al., which is the best result known so far. Against this background, the relations among coalition structures are analyzed in depth and a novel anytime algorithm based on cardinality structure is presented, which only has to take a further step and search those coalition structures whose cardinality structure is in CCS(n, b). Consequently, the reported algorithm is clearly better than those of Sandholm et al. (up to 10^35 times faster when n=100, K=2) and Dang et al. (up to 10^18 times faster when n=100, K=3).
A Policy- and Value-Iteration Algorithm for POMDP
Sun Yong, Wu Bo, and Feng Yanpeng
2008, 45(10):  1763-1768. 
Partially observable Markov decision processes (POMDP) make a non-Markovian process Markovian over the belief state space. POMDP has become an important branch of stochastic decision processes because of its power to describe the real world. Traditional algorithms for solving POMDPs are the value iteration algorithm and the policy iteration algorithm; however, exact solution algorithms for POMDPs are typically computationally intractable for all but the smallest problems. The authors first describe the principles and decision processes of POMDP, and then present a policy- and value-iteration algorithm (PVIA) for partially observable Markov decision processes. This algorithm exploits the advantages of both policy iteration and value iteration: it uses policy iteration when programming and value iteration when computing. Using linear programming and dynamic programming, the algorithm alleviates the curse of dimensionality when the belief space is large, and obtains an approximately optimal value. A key contribution of this paper is showing how the basic operations of both algorithms can be performed efficiently together. The algorithm was implemented in the SZPT_Roc team, which took second place in the simulation league of the RoboCup 2006 Chinese Open Championship. Finally, compared with some typical algorithms, experimental results show that the algorithm is practical and feasible.
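The operation that makes the process Markovian over belief states is the standard Bayesian belief update shown below; this is textbook POMDP machinery, not the PVIA algorithm itself, and the function signatures are illustrative.

def belief_update(b, a, o, states, T, O):
    # b: dict state -> probability; a: action taken; o: observation received.
    # T(s, a, s2) is the transition probability and O(s2, a, o) the observation
    # probability.  Computes b'(s2) proportional to O(s2, a, o) * sum_s T(s, a, s2) * b(s).
    new_b = {s2: O(s2, a, o) * sum(T(s, a, s2) * b.get(s, 0.0) for s in states)
             for s2 in states}
    total = sum(new_b.values())
    if total == 0.0:
        raise ValueError("observation has zero probability under this action")
    return {s2: p / total for s2, p in new_b.items()}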
A Fast On-Line Index Construction Method Based on Dynamic Balancing Tree
Guo Ruijie, Cheng Xueqi, Xu Hongbo, Wang Bin, and Ding Guodong
2008, 45(10):  1769-1775. 
Examined in this paper is the issue of on-line index maintenance for growing text collections. The approach keeps an inverted index up to date with changes in the collection it describes, supporting efficient document insertions and deletions. A fast on-line index construction method based on a dynamic balancing tree is proposed. The method takes a merge-based approach and aims to keep good performance during both indexing and query processing by always merging indices of similar sizes. This is achieved by using a tree in which each node corresponds to one sub-index; the placement of a sub-index in the tree depends on its size, so that each merge takes place among similarly sized sub-indexes. The method is well suited not only to growing text collections but also to dynamic text collections, and forms a uniform framework for merge-based index maintenance strategies. It is more general, faster and scales better than previous methods, and can be adapted to suit different balances of insertion and querying operations. Experiments on large-scale Web data demonstrate the efficiency, flexibility and scalability of the method in practice, showing that on-line index construction for growing text collections can be performed efficiently and almost as fast as for static text collections.
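The merge-by-similar-size idea can be illustrated with the well-known logarithmic merging scheme below, where sub-indexes live in levels of roughly geometric sizes and a merge only ever combines sub-indexes of the same level; the paper's dynamic balancing tree generalizes this kind of size-based placement, and the code here is only a simplified stand-in for it.

def add_subindex(levels, new_index, merge):
    # levels: dict level -> sub-index or None; a flushed in-memory index
    # enters at level 0.  Two sub-indexes of similar size (same level) are
    # merged and the result is promoted one level up, cascading if needed.
    levels = dict(levels)
    level = 0
    while levels.get(level) is not None:
        new_index = merge(levels[level], new_index)
        levels[level] = None
        level += 1
    levels[level] = new_index
    return levels

Queries then consult every non-empty level; because each merge combines similarly sized inputs, the total merging cost stays close to that of building a static index.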
An Approximate Matching Algorithm for Large Scale Lexicons
Gong Caichun, Huang Yulan, Xu Hongbo, and Bai Shuo
2008, 45(10):  1776-1781. 
Approximate lexicon matching is widely used for spelling correction in editors, query suggestion in search engines, post-processing of optical character recognizers, and other applications. It is quite difficult for traditional single-index schemes to obtain high recall and high performance at the same time, and the situation becomes worse as the background lexicon grows large. A multiple-index scheme is presented to handle this problem. The background dictionary is partitioned into several sub-dictionaries, each of which contains words of the same length, and unigram, bigram, trigram and quadgram indices are constructed for each sub-dictionary as needed. For an input pattern P, appropriate matching policies are deployed to obtain the set C of candidate matches, the edit distance between P and each element W in C is then computed, and the final approximate matches are generated. When a longer pattern P is queried, only indices of longer n-grams are used to generate the candidate matches. Moreover, for each query pattern P, only a very small proportion of the lexicon is checked, so this approximate matching scheme is considerably more efficient than traditional single-index schemes. Experiments show that the number of candidate matches is much smaller, so the matching speed is much more promising.
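The candidate-then-verify flow can be sketched as follows: an n-gram index proposes candidate words sharing at least one n-gram with the pattern, and the edit distance filters them. This Python sketch shows only that generic flow with a single n-gram index; the length-partitioned sub-dictionaries and the policy for choosing n are described in the paper.

from collections import defaultdict

def char_ngrams(word, n):
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def build_ngram_index(lexicon, n):
    # Map each character n-gram to the set of lexicon words containing it.
    index = defaultdict(set)
    for w in lexicon:
        for g in char_ngrams(w, n):
            index[g].add(w)
    return index

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def approximate_matches(pattern, index, n, max_dist):
    # Candidates are words sharing at least one n-gram with the pattern;
    # the final answer is filtered by edit distance.
    candidates = set().union(*(index.get(g, set()) for g in char_ngrams(pattern, n)))
    return [w for w in candidates if edit_distance(pattern, w) <= max_dist]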
An O(1.414^n) Volume Molecular Solution for the Exact Cover Problem on DNA-Based Supercomputing
Li Kenli, Liu Jie, Yang Lei, and Liu Wenbin
2008, 45(10):  1782-1788. 
The scalability problem of DNA computers has been one of the important research areas in DNA computing. According to the requirements of DNA parallel computing for the exact cover problem, a DNA model with good scalability is proposed, which is based on the biological operations of the Adleman-Lipton model and the solution space of stickers in the sticker-based model, and which simultaneously incorporates the method of fluorescence labeling and the technique of gel electrophoresis. Based on this model, a DNA-based algorithm for the exact cover problem, using a divide-and-conquer strategy, is also proposed; it consists of three sub-algorithms: Init(), IllegalRemove(), and ParallelSearcher(). Compared with the best known molecular algorithm for the exact cover problem with n variables and m sets, which uses O(2^n) DNA strands, this algorithm can solve the same problem using only O(1.414^n) DNA strands without increasing the time complexity. Therefore, the size of the exact cover problem that can theoretically be solved in a test tube may be enlarged from 60 to 120, and the proposed algorithm is thus an improvement over past research.
Using Graph Match Method to Resolve Multi-Way Branch in Binary Translation
Chen Long, Wu Chenggang, Xie Haibin, Cui Huimin, and Zhang Zhaoqing
2008, 45(10):  1789-1798. 
Binary translation has become an important way to migrate software. For binary translation systems, how to discover binary code and translate it effectively is an important problem. However, indirect jumps in binary code do not give the jump target explicitly, which narrows the translation coverage and negatively affects translation performance. In regular binaries, indirect jumps are often used for multi-way branches, with the candidate targets stored in a jump table. A technique to resolve these multi-way branches is presented, which can find the jump table so that the indirect jump targets become visible, helping to translate this code effectively and finally improving the performance of the binary translation system. The technique extracts the semantics of the examined code and recovers the code that accords with the expected semantics. In this technique, the semantics of correlated code is expressed using a graph, called a semantic graph here, which can efficiently express pre-defined semantic information and can easily be constructed from binary code. The technique tolerates different code generated from different compiler options, and also effectively sieves out the instructions irrelevant to the expected semantics. Experimental results show that, for gcc, perlbmk and crafty in the SPEC CINT2000 benchmark, translation coverage can be improved by 9.85% to 22.13%, and the performance speedup (with the ref input size) can be improved by 8.30% to 17.71%, while the time complexity of the algorithm used is O(1).
An Approach for Implementing Service-Oriented and Event-Driven Information Integration Platform
Yang Zhiyi, Yang Gang, and Zhang Haihui
2008, 45(10):  1799-1806. 
Research on information system integration is of academic and practical value for enterprise application integration. A service-oriented, event-driven architecture (SOEDA) is proposed for constructing a flexible, distributed information integration platform. Multiple adapters, taking the form of service units (SU), are designed to encapsulate the various heterogeneous information systems within an enterprise into the information integration system. An SU is a service-oriented component with a standard service interface; it wraps the details of information processing on a legacy system and behaves as a uniform information resource. A distributed normalized message router (DNMR) is presented as the basic platform for SU registration, discovery and communication. A hierarchical architecture and module pattern are used to improve the agility, interoperability and integration ability of application systems and to support efficient operation on massive information. Independent running of SUs enhances the stability and extensibility of the information integration system. Technologies such as an event-driven model and a dynamic thread pool are introduced into the SU to ensure high performance and robustness of the system under heavy load. Taking into consideration the external load on the system, thread cost, and resource conflicts, the efficiency of SOEDA is evaluated based on classical queueing theory. An example demonstrates that SynchroESB, built with SOEDA, enables flexible information integration with legacy enterprise systems and provides services with high efficiency and reliability.