ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 May 2011, Volume 48 Issue 5
Paper
Path Nodes-Driven Least-Cost Shortest Path Tree Algorithm
Zhou Ling and Wang Jianxin
2011, 48(5):  721-728. 
The Dijkstra algorithm is a classic shortest path algorithm: it computes a shortest path between two given nodes and can also produce a shortest path tree (SPT) from a source to a set of destination nodes, and it has been widely applied in network computing. In order to optimize the cost performance of the SPT, a path-nodes-driven idea, derived from the destination-nodes-driven idea, is introduced. Using this idea, an algorithm called least-cost shortest path tree (LCSPT) is designed, and its main steps and pseudocode are presented in detail. With the LCSPT algorithm, a node being added to the tree can share links with path nodes that already belong to the existing shortest path tree, so a high-performance least-cost SPT can be constructed. The correctness of LCSPT is proved by mathematical induction, and its cost performance is analyzed theoretically. Moreover, the time and space complexities are computed and compared with those of other SPT algorithms. Finally, three simulation experiments show that the LCSPT algorithm not only produces an SPT correctly but also has the best cost performance among the compared shortest path tree algorithms.
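Since LCSPT extends Dijkstra's SPT construction, a baseline may help fix ideas. The following is a minimal Dijkstra shortest-path-tree sketch in Python (the plain algorithm the paper builds on, not the path-nodes-driven LCSPT variant; the adjacency-list graph encoding is assumed):

```python
import heapq

def shortest_path_tree(graph, source):
    """Dijkstra's algorithm returning distances and the SPT as parent pointers.

    graph: {node: [(neighbor, weight), ...]} with non-negative weights.
    """
    dist = {source: 0}
    parent = {source: None}
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue  # stale heap entry
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                parent[v] = u  # tree edge of the SPT
                heapq.heappush(heap, (nd, v))
    return dist, parent
```

The `parent` map is exactly the SPT that LCSPT then optimizes for total link cost.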
Altruism Driven Application-Layer Multicast
Wang Miao, Peng Ge, Zhang Yujun, and Li Guojie
2011, 48(5):  729-735. 
The selfishness issue is one of the big challenges facing current application-layer multicast techniques: selfish participants may stop forwarding data accidentally or deliberately, which degrades the overall streaming quality. To address this issue, an altruism-driven application-layer multicast (ADALM) is presented. ADALM defines for each node an altruism value associated with its contributions to the system, and constructs the multicast tree so that nodes with greater altruism values are placed at higher layers of the tree. Compared with other studies in this area, ADALM has advantages in both altruism value computation and multicast tree construction. First, a node's altruism value is generated from the feedback of its parent and children, which enables the system to detect selfish nodes effectively; peers do not need extra probe messages to measure the QoS of their neighbors. Second, during tree construction and maintenance, only O(lg N) nodes need to be adjusted. Finally, the altruism value calculation and multicast tree construction are realized in a decentralized manner without any single point of failure. Simulation results show that even with a significant portion of nodes being selfish, ADALM is able to build a dissemination tree that provides high overall streaming quality with low control overhead.
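The layering idea, placing higher-altruism nodes nearer the root, can be sketched in a few lines (a hypothetical complete fanout-ary placement, not ADALM's actual decentralized maintenance protocol; node IDs and the altruism map are illustrative):

```python
def build_altruism_tree(altruism, fanout=2):
    """Place nodes with greater altruism value at higher layers of the tree.

    altruism: {node_id: altruism_value}; returns {child: parent} links for a
    complete fanout-ary tree filled in descending altruism order.
    """
    order = sorted(altruism, key=altruism.get, reverse=True)
    parent = {order[0]: None}  # the most altruistic node becomes the root
    for i, node in enumerate(order[1:], start=1):
        # node i's parent is the (i-1)//fanout-th node in altruism order
        parent[node] = order[(i - 1) // fanout]
    return parent
```

In the real protocol this placement is maintained incrementally as altruism values change, rather than rebuilt from a global sort.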
SOSC: A Self-Organizing Semantic Cluster Based P2P Query Routing Algorithm
Zhu Guiming, Jin Shiyao, Guo Deke, and Wei Hailiang
2011, 48(5):  736-745. 
Resource location in unstructured P2P networks is usually inefficient, and there is no assurance that a routing message travels in the right direction; without additional supporting mechanisms it is therefore hard to achieve low latency and a high query hit rate at low cost. In this paper, we present SOSC, a self-organizing semantic-cluster-based P2P query routing algorithm. SOSC first categorizes the resources shared by each node and expresses the semantics of each category by a keyword frequency vector; all of a node's categories together express the semantics of its shared resources. Each node tries to establish links with the nodes whose category semantics are most similar to its own, so SOSC clusters nodes according to the semantics of their shared resources. Moreover, SOSC propagates a node's semantics through the P2P network by exponentially decaying the keyword frequency vectors of all of its categories. In this way, SOSC solves the problem of expressing and transmitting cluster semantics in a fully distributed environment, and lets a node perceive the semantic hierarchy of its nearby nodes. Analysis and experimental results show that SOSC achieves a high query hit rate with small routing latency and query cost.
Distributed and Dynamic Spectrum Allocation in Unlicensed Band Networks
Hu Yun, Wang Dapeng, and Yang Shoubao
2011, 48(5):  746-755. 
Wireless devices using different communication protocols share the same unlicensed frequency band; since the communication frequencies of different transmission pairs overlap, the pairs interfere with each other. To achieve better communication efficiency and network capacity, this paper studies the coexistence of multiple devices using different protocols. All communication pairs should cooperate when developing the medium access control strategy. For unlicensed band environments, we propose a distributed and dynamic spectrum allocation algorithm. The algorithm randomly allocates spectrum to the wireless devices with different protocols, and the communication pairs update their operating channels in each time round, eventually distributing the different devices uniformly across the unlicensed spectrum. By fully utilizing the wireless resources, the capacity of the unlicensed band network improves accordingly. In addition, a "scanning factor" is introduced, which lowers the communication overhead of the algorithm; the reduced number of control frames also decreases interference between neighboring transmission pairs. Finally, simulation results show that the algorithm dramatically improves overall network capacity and, to a certain extent, guarantees end-to-end fairness between different transmission pairs.
libpcap-MT: A General Purpose Packet Capture Library with Multi-Thread
Wen Shuguang and Xie Gaogang
2011, 48(5):  756-764. 
libpcap is a packet capture library that provides upper-layer APIs for packet capture, filtering and other functionality, and is widely used in network protocol analysis, intrusion detection and other packet processing systems. High-speed packet processing is feasible on general-purpose multi-core and multi-CPU platforms, but applications based on libpcap have difficulty taking full advantage of multiple cores and CPUs because of its single-thread model. In this paper, we design and implement a multi-thread packet capture library named libpcap-MT based on libpcap. libpcap-MT captures and dispatches packets to multiple buffer queues efficiently in kernel mode; in-kernel capturing and dispatching reduces synchronization and memory copy overhead, and lock-free multiple buffer queues allow the kernel and user threads to write and read packets in parallel. libpcap-MT provides a flexible, C-like language for describing dispatching strategies, and its API extends libpcap's API with multi-thread operations while remaining compatible. Each thread can register with a buffer queue and obtain packets either by a traditional read() that copies from the queue, or by mmap() to set up a memory mapping and access the packets directly. Experimental results indicate that with libpcap-MT it is easy to migrate existing systems to a multi-thread model with better performance and scalability.
Content-Based Publish/Subscribe Mechanism and Algorithm Based on Predicate Covering
Pan Yi, Zhang Kailong, and Pan Jingui
2011, 48(5):  765-777. 
The content-based publish/subscribe system is well suited to large-scale distributed interactive applications due to its asynchronous, many-to-many and loosely-coupled communication properties. Efficient matching and routing algorithms and dynamic adaptability are the key issues in large-scale content-based publish/subscribe systems. Consequently, methods that reduce the subscription scale and routing table sizes at internal content-based routers and optimize the structure of subscription expressions are effective ways to enhance matching and routing efficiency. Building on an analysis of publish/subscribe technologies, this paper proposes the concept of the predicate relation and a new structure called the predicate relation binary tree (PRBT), which describes the relations among predicates, and designs and implements subscription maintenance, unsubscription and matching algorithms based on the PRBT. By optimizing the structure of the predicate relations and the subscription selectivity transmission strategy, the approach not only reduces the number of subscriptions maintained at each internal router, but also improves publish/subscribe system performance such as event matching and routing efficiency. In addition, this paper discusses several publish/subscribe scenarios and proves the validity of the properties of the PRBT-based algorithms. Theoretical analysis and extensive experiments reveal that the predicate covering method obtains better results in subscription maintenance overhead, algorithm efficiency and overall publish/subscribe system performance.
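The covering relation that the PRBT organizes is simple to state in code; a sketch (interval-range predicates are an illustrative special case, not the paper's full predicate language):

```python
def covers(p, q):
    """Predicate covering: p covers q when every event matching q also
    matches p. Predicates here are (attribute, low, high) range constraints.
    """
    ap, lo_p, hi_p = p
    aq, lo_q, hi_q = q
    return ap == aq and lo_p <= lo_q and hi_q <= hi_p
```

A broker that already forwards events for a covering predicate p need not propagate a covered subscription q upstream, which is how covering shrinks routing tables.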
Multi-Source Interactive Application Layer Multicast Routing Protocol
Gao Jianmin, Lu Huimei, and Cao Yuanda
2011, 48(5):  778-785. 
Application layer multicast (ALM) can support a wider range of multicast communications without expanding the underlying network, which makes it a popular mechanism for multicast applications. However, compared with traditional IP multicast, ALM has larger network delay and poorer node stability, which makes multi-source interactive multicast a unique and challenging problem. In this paper, an ALM protocol called Thunder is proposed. According to the roles of the members in a group, Thunder divides the interactive group into a core network and a peripheral tree. The core network consists of the multicast data sources; using a mesh-tree structure, it constructs an optimal distribution tree for each data source and pursues fast forwarding to optimize the interactive process. The peripheral tree is a tree structure composed of the unstable members that only receive data or produce a small amount of data, and it allows more members to receive multicast data. Since these unstable nodes do not influence the interaction within the core network, the response speed and the scalability of the interactive process are improved. Experiments demonstrate that Thunder decreases the network delay of interactive ALM and improves scalability, making multi-source interactive multicast applications easier to deploy and implement.
A Distributed Area Coverage Algorithm Based on Delayed Awakening in Wireless Sensor Networks
He Xin, Gui Xiaolin, and An Jian
2011, 48(5):  786-792. 
Area coverage is one of the basic technologies of wireless sensor networks; it is mainly concerned with prolonging the network lifetime while maintaining full area coverage and network connectivity. Existing distributed area coverage algorithms often leave coverage holes and ignore the connectivity issue, or suffer from the nibbling phenomenon and shorten the network lifetime. Therefore, the connectivity issue of distributed area coverage algorithms is analyzed, and a connectivity-critical condition that ensures both full area coverage and network connectivity is proposed, providing a connectivity guarantee for the active node set. On this basis, a distributed area coverage algorithm based on a delayed awakening scheme is proposed. It applies a time round mechanism and makes coverage decisions by exchanging local state information with neighboring nodes, so the network topology need not be known in advance. The active node set selection scheme based on delayed awakening selects active nodes by a circle-intersection coverage evaluation method and a delayed awakening method based on a distance threshold; it ensures full area coverage, avoids coverage holes, and reduces the nibbling phenomenon. Simulation results show that, compared with existing distributed area coverage algorithms, this algorithm prolongs the network lifetime while meeting users' sensing demands.
Message Delivery Properties in Opportunistic Networks
Cai Qingsong, Niu Jianwei, and Liu Yan
2011, 48(5):  793-801. 
Finding an effective message delivery path in opportunistic networks is challenging, since no complete end-to-end path exists in such a network and mobile nodes rely on encounter opportunities to exchange data with each other. Based on trace datasets publicly released by CRAWDAD, we comprehensively analyze nodal encounter occurrences and node contact frequencies, and find that both exhibit distinctive power-law distributions. Most contacts occur within short periods of time, showing that mobile nodes cluster into communities while moving, which indicates spatial dependency among them. The fact that most node pairs encounter each other only a few times implies that network connectivity greatly depends on those rare contacts. Using time evolving graph (TEG) theory, we analyze the single-copy minimal delay path (SC-MDP) for each node pair on the TEG and find that the average hop count of the SC-MDP is relatively small even with a large number of nodes in the network, which indicates that communities are inherently organized into a hierarchical structure, much as human society is, and that some rare encounters have significant influence on both the average length of the MDP and the transport delay. Our results demonstrate that decentralized community detection algorithms based on nodal historical contact information can achieve optimal performance for inter-community message delivery.
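The contact-frequency statistic the paper examines, how many node pairs met exactly k times, can be computed from any encounter trace in a few lines (the (node, node) event format is an assumption about the trace encoding):

```python
from collections import Counter

def contact_frequency_distribution(contacts):
    """Count, for each k, how many node pairs encountered each other k times.

    contacts: iterable of (node_a, node_b) encounter events; pair order in an
    event does not matter, so each pair is normalized to a frozenset.
    """
    per_pair = Counter(frozenset(pair) for pair in contacts)
    return Counter(per_pair.values())
```

Plotting this distribution on log-log axes is the usual way to check for the power-law shape reported in the paper.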
ADFA Protocol for RFID Tag Collision Arbitration
Wu Haifeng and Zeng Yu
2011, 48(5):  802-810. 
When a radio frequency identification (RFID) system identifies multiple tags, tag collisions happen, and the system generally applies a tag anti-collision protocol to resolve them. To reduce identification time, this paper proposes a new adaptive dynamic framed Aloha (ADFA) protocol for RFID tag collision arbitration. Based on the dynamic framed Aloha protocol, ADFA adaptively allocates a slot number to each identified tag; during the next reading round, the tags are identified according to these slot numbers, which reduces collision and idle slots when a reader repeatedly reads tags. In many RFID applications where a reader repeatedly identifies tags, such as supply chain operation and object tracking and locating, the proposed protocol reduces the time for re-identifying tags. Furthermore, to reduce identification time even more, we improve the ADFA protocol with a tag quantity estimate of low computational complexity and an optimal frame length. The tag estimate is based on Vogt's method and reduces computational complexity by narrowing the search range of the tag quantity, while the optimal frame length scheme achieves maximum throughput under the condition that slot durations differ. Both theoretical computation and simulation results show that ADFA reduces identification time when repeatedly reading tags, that the tag estimate in the improved ADFA lowers computational complexity, and that the optimal frame length in the improved ADFA also improves system throughput.
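A single framed-slotted-Aloha reading round, the building block that DFSA and ADFA share, can be simulated directly; a sketch (the simple lower-bound backlog estimate shown here is a common baseline, not Vogt's estimator used in the paper):

```python
import random
from collections import Counter

def aloha_round(num_tags, frame_len, rng=random):
    """One framed-slotted-Aloha round: each tag picks a slot uniformly.

    Returns (idle, singleton, collision) slot counts; tags in singleton
    slots are the ones successfully identified in this round.
    """
    slots = Counter(rng.randrange(frame_len) for _ in range(num_tags))
    singleton = sum(1 for c in slots.values() if c == 1)
    collision = sum(1 for c in slots.values() if c >= 2)
    idle = frame_len - singleton - collision
    return idle, singleton, collision

def backlog_lower_bound(singleton, collision):
    # Every collision slot hides at least two unidentified tags.
    return singleton + 2 * collision
```

ADFA's improvement is to remember each identified tag's slot so that later rounds replay mostly singleton slots instead of re-randomizing.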
Survey on QBF Evaluation Algorithms
Li Zhoujun and Chen Shikun
2011, 48(5):  811-822. 
Dramatic improvements in Boolean satisfiability (SAT) solver technology over the past decade have led to widespread applications of SAT in formal verification; SAT methods have been applied in model checking and theorem proving in a variety of ways. More recently, bounded model checking (BMC) has highlighted the potential of symbolic techniques based on SAT. However, the use of propositional formulas imposes a potential exponential memory blow-up on SAT-based model checking algorithms due to large formula sizes. As a natural extension of SAT, quantified Boolean formulas (QBF) capture a wider class of problems in a natural and compact way and allow an exponentially more succinct representation of the checked formulas. Model checking based on QBF solvers has recently emerged as a promising way to reduce the state explosion problem. In practice, however, QBF-based methods have invariably yielded unfavorable experimental results compared with SAT-based approaches, because of the lack of an efficient decision procedure for QBF. Many QBF solvers are obtained by extending satisfiability techniques; experience shows that although search-based algorithms are considered the most efficient approaches to SAT, decision paradigms other than search may have more chance to succeed in QBF than they had in SAT. This paper presents a survey of the latest developments in algorithms and optimizations for QBF evaluation, and summarizes some noteworthy trends in this area.
Fast Incremental Outlier Mining Algorithm Based on Grid and Capacity
Zhang Jing, Sun Zhihui, Yang Ming, Ni Weiwei, and Yang Yidong
2011, 48(5):  823-830. 
Outlier mining is an important branch of data mining and has been widely applied in fields such as industry and finance, for example in intrusion detection systems and credit card fraud detection. Dealing with massive, high-dimensional data has become a challenge that outlier mining algorithms must face. Based on definitions of density and grid, a fast incremental outlier mining algorithm is proposed. It introduces a seven-tuple information grid to reduce the number and dimensionality of the data, and uses incremental updates to reduce memory requirements. Dense grids, sparse grids and neighbor grids are defined, which makes grid-based computation convenient. By filtering the main data through appropriate representative points, an approximation method is adopted to reduce computation and decrease the complexity of the algorithm. Experiments are performed on different initial and incremental datasets, and the results report the detection rate, false alarm rate, precision and average running time. Tests on real and simulated datasets show that the proposed algorithm maintains the same accuracy as the LOF algorithm while improving execution efficiency significantly.
Composing Semantic Web Service with Description Logic Rules
Liu Sipei, Liu Dayou, Qi Hong, and Guan Jinghua
2011, 48(5):  831-840. 
This paper introduces a semantic Web service composition method based on description logic (DL) rules. First, the method uses DL concepts and roles to describe the input, output, precondition and post-condition characteristics of atomic services in OWL-S, and adopts DL rules to capture the hyponymy relations between concepts of the domain ontology. Second, it defines the DL rules R1, R2 and R3 to describe the dynamic characteristics of semantic Web services, namely the sequence, split and join composition processes in the OWL-S process model, respectively. Third, it introduces DL rule chains to express sequential service composition, and proposes a Web service community (WSC) model that converts parallel service composition into sequential composition based on WSC chains; a WSC discovery algorithm based on eliminating the R2 and R3 rules can then discover the composition result for a given WSR. Analysis and comparison indicate the theoretical superiority of this method: it builds a direct semantic link between the static and dynamic characteristics of services and the domain ontology, and it provides a reasonable solution for sequential and parallel service composition within the semantic Web framework. Finally, simulation experiments show that the WSC search algorithm can find the expected results automatically and efficiently.
Modeling Chess Strategy by Classifier Based on Imbalance Learning and Application in Computer Chinese Chess
Su Pan, Wang Xizhao, and Li Yan
2011, 48(5):  841-847. 
Computer chess game (CCG) playing is an important topic in artificial intelligence, and the technique is widely used in entertainment PC games and chess games on various platforms. Most CCG systems are based on a combination of game tree search and evaluation functions; with game tree search, the level of the computer player depends on the search depth. However, deep game tree search is time-consuming when games run on mobile platforms such as mobile phones and PDAs. In this paper, a novel method is proposed that models Chinese chess strategy by training a classifier. During play, the trained classifier predicts good successor positions for the computer player. The training procedure is based on imbalance learning and uses Chinese chess game records as the training sets. Specifically, the training sets extracted from game records are imbalanced, so imbalance learning methods are employed to modify the original training sets. Compared with a classical CCG system, the new method is as fast as a 1-level game tree search during play, with an offline learning process. Experimental results demonstrate that the proposed method can model Chinese chess strategies and that imbalance learning plays an important role in the modeling process.
Physicomimetics Method for Global Optimization
Xie Liping and Zeng Jianchao
2011, 48(5):  848-854. 
Inspired by the artificial physics (AP) approach, a framework for an artificial physics optimization (APO) algorithm is presented to solve global optimization problems. By comparing the similarities and differences between physical individuals and ideal particles, we construct a mapping between the AP approach and a population-based optimization algorithm. APO is a population-based stochastic search method: each sample point is treated as a physical individual with mass, velocity and position, where the mass of each individual corresponds to a user-defined function of the value of the objective function to be optimized. The better the objective function value, the bigger the mass, and hence the higher the magnitude of attraction. Virtual forces among individuals are defined by Newton's law of gravity, and an attraction-repulsion rule is established among them: each individual attracts others with worse fitness and repels others with better fitness, and the individual with the best fitness attracts all others but is never repelled or attracted itself. This attraction-repulsion rule serves as the search strategy of the optimization algorithm, leading the population toward regions of better fitness. Simulation results indicate the validity of the approach.
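The described mass/force scheme can be turned into a small working sketch (one plausible instantiation under assumed mass and update rules; the gravity constant `g`, the mass formula and the velocity damping are illustrative choices, not the paper's exact settings):

```python
import math
import random

def apo_minimize(f, dim, pop=20, iters=100, lo=-5.0, hi=5.0, g=0.9, seed=1):
    """Artificial-physics-optimization sketch for minimizing f on [lo, hi]^dim.

    Mass grows with fitness quality; better individuals attract worse ones
    via Newton-style virtual forces, and the current best is not moved.
    """
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    vs = [[0.0] * dim for _ in range(pop)]
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        fit = [f(x) for x in xs]
        fb, fw = min(fit), max(fit)
        i_best = fit.index(fb)
        if fb < best_f:
            best_f, best_x = fb, xs[i_best][:]
        # mass in (0, 1]: better fitness -> larger mass
        mass = [math.exp((fb - fi) / (fw - fb + 1e-12)) for fi in fit]
        for i in range(pop):
            if i == i_best:
                continue  # the best individual is never attracted or repelled
            force = [0.0] * dim
            for j in range(pop):
                if j == i:
                    continue
                sign = 1.0 if fit[j] < fit[i] else -1.0  # attract toward better
                for d in range(dim):
                    force[d] += sign * g * mass[i] * mass[j] * (xs[j][d] - xs[i][d])
            for d in range(dim):
                vs[i][d] = 0.7 * vs[i][d] + rng.random() * force[d] / mass[i]
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
    return best_x, best_f
```

Because the running best is recorded every iteration, the returned value never exceeds the best fitness in the initial random population.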
A Method of Text Sentiment Classification Based on Weighted Rough Membership
Wang Suge, Li Deyu, and Wei Yingjie
2011, 48(5):  855-861. 
Facing the rapidly increasing number of reviews on the Web, effectively organizing and processing document data that hides large amounts of information to meet particular needs has become a great challenge for information science and technology. Text sentiment classification aims at developing theories and methods that automatically determine the sentiment orientation of a text by mining and analyzing its subjective information, such as standpoint, view, attitude and mood. A text sentiment classification method based on weighted rough membership is proposed in this paper. In this method, the text representation model is built on two-tuple attributes (feature, feature orientation intensity), introducing feature orientation intensity into the vector space representation. An attribute discretization method based on the sentiment orientation sequence is proposed, unifying feature selection with discretization to reduce data dimensionality. To exploit feature orientation intensity, a weighted rough membership is defined for classifying new sentiment texts. Compared with an SVM classifier on a real car review corpus, the proposed rough-membership-based method achieves the best performance for text sentiment classification after the data are compressed to a certain extent.
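The classical (unweighted) rough membership underlying the method is easy to state in code; a sketch (the paper's weighting by feature orientation intensity is omitted, and the toy decision-table format is an assumption):

```python
from collections import defaultdict

def rough_membership(objects, attrs, labels, target, x):
    """mu_B(x, C): the fraction of x's indiscernibility class (w.r.t. the
    attribute set B = attrs) whose decision label equals C = target.
    """
    blocks = defaultdict(list)
    for obj, lab in zip(objects, labels):
        # objects with identical values on attrs are indiscernible
        blocks[tuple(obj[a] for a in attrs)].append(lab)
    block = blocks[tuple(x[a] for a in attrs)]
    return sum(1 for lab in block if lab == target) / len(block)
```

A new text is then assigned to the class maximizing this membership; the paper's contribution is to weight the count by each feature's orientation intensity.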
A Secure Mobile Code Protocol Based on Committed Garbled Circuit
Ye Jianwei, Zhang Hongli, and Zhang Yongzheng
2011, 48(5):  862-868. 
The lack of protection hinders the application of mobile code, and no sound solution has been proposed so far. Garbled circuits are, to date, the only purely software-based protection technique that is universal and has provable security. The existing CCKM, ACCK, Tate-Xu and Zhong-Yang protocols based on garbled circuits cannot prevent attacks from malicious participants and cannot be applied to mobile code non-interactively. Based on the committed garbled circuit technique of Jarecki et al. and Pedersen's verifiable threshold secret sharing scheme, this paper presents a new secure mobile code protocol that withstands malicious participants. In the new protocol, a group of third-party servers is employed to "challenge" the provers and to share the secrets in every secret sharing scheme. When more than two-thirds of the servers are honest, the new protocol: 1) protects the inputs and outputs of the mobile code simultaneously, offering more protection than existing protocols; 2) suits non-interactive mobile code applications; 3) enables the executors to verify the garbled circuit non-interactively and thus protect themselves from malicious code; and 4) guarantees that the generators and executors obtain correct outputs with full fairness.
A Software Behavior Oriented Requirements Model and Properties Verification
Wu Huaiguang, Wu Guoqing, Chen Shu, and Wan Li
2011, 48(5):  869-876. 
Requirements modeling and its verification are important procedures in software requirements engineering. In this paper, the existing methods of requirements modeling are discussed along with related work on software behavior, and the rationale for utilizing software behavior in requirements modeling is explained. A software-behavior-oriented requirements model, named behavior description language (BDL), is then formally proposed; its syntax and semantics are given, and a hierarchy of behavior-oriented requirements models is presented based on BDL. To compare BDL with existing results and make full use of their tools, a transformational relation is established between BDL and CCS (calculus of communicating systems), a process algebra developed by Robin Milner, and a transformation function M that maps BDL into CCS is constructed. To verify properties of the behavior-oriented requirements model, system consistency, system safety, behavioral trust and behavioral non-termination are expressed in the μ-calculus, which describes properties of labeled transition systems and supports their verification. Finally, an example described in BDL is analyzed and the specified properties are verified with CWB (Concurrency WorkBench), a model checking tool for CCS.
Grid Workflow Scheduling Algorithm Based on Deadline Satisfaction
Li Xi, Hu Zhigang, Hu Zhoujun, and Yan Chaokun
2011, 48(5):  877-884. 
In grids, users usually pay more attention to the execution time of a workflow than to other QoS metrics; consequently, how to effectively guarantee that users' deadline requirements are met is a challenging problem for workflow scheduling in dynamic grid environments. A stochastic service model is utilized to describe the dynamic processing capacity of a grid resource and its dynamic workload. The concept of deadline satisfaction degree (DSD) is defined, and a corresponding method for calculating the deadline satisfaction degree of a workflow (DSDW) is provided. The task precedence relations represented in a DAG are converted into numeric task execution priorities based on task length, and the candidate resource for each task in the workflow is then selected by the rule of maximizing the DSD. Deadline distribution is modeled as a constrained non-linear programming problem and solved with an interior point algorithm. Finally, a deadline satisfaction enhanced scheduling algorithm for workflows (DSESAW), which includes resource selection and overall deadline distribution, is put forward. Extensive simulations using a real-world workflow application and grid system validate the algorithm; the experimental results show that it outperforms two algorithms used in real grid systems in adapting to dynamic grid environments and guaranteeing users' deadlines.
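Converting DAG precedence plus task length into numeric priorities can be done with an upward-rank-style pass; a sketch (one common scheme for this step, not necessarily the paper's exact formula):

```python
def task_priorities(dag, length):
    """Upward-rank-style priorities: a task's priority is its own length plus
    the largest priority among its successors, so predecessors rank higher.

    dag: {task: [successor, ...]}; length: {task: execution_length}.
    """
    memo = {}

    def rank(t):
        if t not in memo:
            memo[t] = length[t] + max((rank(s) for s in dag.get(t, [])),
                                      default=0)
        return memo[t]

    return {t: rank(t) for t in dag}
```

Scheduling tasks in decreasing priority order then never violates the DAG's precedence constraints, since every task outranks all of its successors.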
TGG Based Automatic Transformation Between SBML and Other Biological Modeling Languages
Zhu Shijia, Wang Yadong, Ji Chunguang, and Tao Haijun
2011, 48(5):  885-896. 
XSLT-based transformation between SBML and other biological modeling languages cannot describe comprehensive context-sensitive semantic correspondences among the inner elements of biological modeling objects, cannot guarantee the determinacy and syntactic correctness of transformation results, and thus cannot meet industrial transformation requirements. To address these problems, a triple graph grammar (TGG) based transformation method is presented, which uses graph grammars to define the SBML schema and other biological modeling languages, and uses a TGG to construct the transformation between them. On this basis, a transformation algorithm is presented that has polynomial time complexity and guarantees determinacy and syntactic correctness. Compared with traditional transformations between SBML and other biological modeling languages, the method has the following characteristics: 1) it uses a context-sensitive grammar with strong descriptive capability; 2) it adopts a graph-based approach that simplifies the definition of transformations; 3) it needs only static analysis of the transformation rules at design time, without dynamic analysis, because validity is assured whenever the transformation rules satisfy certain constraints; 4) it implements bi-directional transformation by merely reversing the direction of the transformation rules, without modifying any element; and 5) it supports incremental change propagation, since it preserves the correspondence information between source and target objects. Finally, the correctness and effectiveness of the method are verified through an example transformation between Petri nets and SBML.
A Random Sampling Based SPM Management Mechanism
Deng Ning, Ji Weixing, Shi Feng, and Song Hong
2011, 48(5):  897-905. 
The energy and area of on-chip memory are critical concerns in embedded systems. Scratchpad memory (SPM) has been widely adopted as an alternative to the L1 cache in many modern embedded processors due to its power and area efficiency, and an effective SPM management strategy is of great significance for overall system performance. Previous approaches handle SPM management at compile time in a software manner, but the development of mobile technology and the Internet has diversified the deployment of embedded applications, which imposes severe constraints on such management. This paper proposes a new dynamic SPM management (DSPMM) strategy based on random sampling. The novelty of this method lies in its hardware/software co-design, which predicts the most frequently accessed data items by monitoring the memory access pattern at runtime. Unlike traditional profiling-driven and compiler-based methods, DSPMM depends neither on compiler assistance nor on profiling information gathered beforehand. Experimental results show that DSPMM fully exploits the energy and area advantages of SPM compared with traditional cache systems, and achieves higher flexibility and generality than classical profiling-driven SPM management methods without severe performance degradation.
A Structural Vulnerability Analysis Algorithm for Large-Scale Distributed Systems
Zhao Gang, Kuang Xiaohui, Li Jin, and Zheng Weimin
2011, 48(5):  906-912. 
As largescale distributed system plays an increasingly important role in such fields as national security, critical infrastructure and social life, its vulnerability analysis problem has become a growing focus nowadays. Because of wide area deployment, heterogeneity, dynamism, incapable centralized control and other characteristics, structural vulnerability is a typical type of vulnerabilities on largescale distributed system. Aiming at the complex relation between entities and redundancy mechanism in largescale distributed system, a new entity topology model is proposed. In the model, the simple digraph is used to describe the relation between entities; the fault tolerance mechanism is put forward to describe the redundancy of entities; and weight is introduced to score the influence of the entity or link failure on system function. Based on the entity topology model, a structural vulnerability analysis algorithm based on weight is put forward, which discovers the structural vulnerabilities by computing weight of every entity and link, and validates them by graph pruning. Experimental results by analyzing, implementation and evaluation, show that the algorithm can effectively discover the structural vulnerability of largescale distributed system including single point failure node or link and combination failure of nodes and links.