ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 May 2007, Volume 44 Issue 5
Research on Preprocessing Policies in XACML Admin
Li Xiaofeng, Feng Dengguo, and He Yongzhong
2007, 44(5):  729-736. 
In the XACML administrative policy schema, access policies and administrative policies are mixed together, which degrades decision-making performance. Moreover, whether a policy is trusted is checked only when an access request is being decided, which exposes the policy decision point to denial-of-service (DoS) attacks. In this paper, a scheme is presented to improve on-line decision performance by dividing the policy tree into an access policy tree and an administrative policy tree in the policy decision point or in the policy repository. According to the logical implication of delegation, a method of constructing a delegation graph is proposed, and invalid policies, i.e. those with no path to a trusted policy, are deleted. Deleting invalid policies makes the policies created by attackers inapplicable to access request decisions, so that the policy decision point can resist such DoS attacks. In the XACML administrative policy, delegation elements are processed differently from the other elements of XACML, which is recognized as a bug. An improved policy schema definition is presented to correct this, making the processing of delegations conform to that of the subject, resource, and other elements in the XACML core, and allowing administrative policies to be defined more efficiently. Through these improvements, decision making is accelerated, and the policy decision point can resist DoS attacks to some extent.
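The pruning step described above, deleting every policy that has no delegation path back to a trusted policy, can be sketched as a simple reachability computation over the delegation graph. The data layout and names below are illustrative assumptions, not the paper's actual algorithm or the XACML data model:

```python
# Sketch: keep only policies reachable (via delegation) from a trusted policy.
# `delegations` maps each policy to the set of policies that delegated to it;
# anything left unreachable (e.g. attacker-injected cliques) is discarded
# before any access request is evaluated.

def prune_untrusted(delegations, trusted):
    valid = set(trusted)
    changed = True
    while changed:                       # fixed-point iteration
        changed = False
        for policy, delegators in delegations.items():
            if policy not in valid and delegators & valid:
                valid.add(policy)        # reachable from a trusted policy
                changed = True
    return valid

# Attacker policies that only delegate to each other never become valid:
graph = {"p1": {"root"}, "p2": {"p1"}, "evil": {"evil2"}, "evil2": {"evil"}}
print(prune_untrusted(graph, {"root"}))  # the attacker pair is pruned
```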
XSSA/ADL: An XML-Based Security Requirement Architecture Description Language
Tan Liang, and Zhou Mingtian
2007, 44(5):  737-747. 
It is imperative to consider both functional requirements and security requirements at the architecture level when developing large, complex software systems for the Internet, and a security requirement architecture description language (SADL) is the foundation for researching and implementing security requirement architecture. Because traditional architecture description languages have no components, connectors, or styles dedicated to security requirements, it is difficult to describe these requirements at the architecture level. An XML-based software security requirement architecture description language, XSSRA/ADL, is presented, which extends traditional software architecture with new fundamental units such as security components, security connectors, half-security components, and half-security connectors. XSSRA/ADL can not only describe the security architecture of software systems but also resolve the interactions and dependencies between security requirements and other functional requirements at the architecture level. Moreover, XSSRA/ADL adopts XML, the data interoperation standard, as its meta-language, which gives it interoperability with other ADLs and makes it convenient for supporting refinement and evolution of the system.
A Kind of Group Signature Scheme with Authorization
Zhong Jun and He Dake
2007, 44(5):  748-755. 
Group signatures have wide application in electronic business and electronic government. Many variant group signature schemes have been proposed in the last 15 years. Among them, some papers study the group signature itself, constructing new schemes on the basis of new intractability assumptions; some combine blind signatures with group signatures, yielding group blind signature schemes; and some combine forward-secure signatures with group signatures, yielding forward-secure group signature schemes. However, there has been no research on group signature schemes with authorization. In this paper, a new group signature scheme with authorization is constructed on the basis of the BMW scheme, a group signature scheme whose security rests on standard assumptions. Compared with the BMW scheme, adding an authorization generation algorithm to the key generation algorithm slows down instance generation, but it endows the scheme with an authorization property, which promotes the application of group signatures in electronic business and electronic government; furthermore, the proof of Theorem 2 in this paper is simpler than the corresponding proof for the BMW scheme.
An Algorithm of Webpage Information Hiding Based on Equal Tag
Sun Xingming, Huang Huajun, Wang Baowei, Sun Guang, and Huang Junwei
2007, 44(5):  756-760. 
Information hiding conceals secret messages inside images, video, audio, text, and other digital objects; to a casual observer inspecting these objects, the messages are invisible. It has recently become a research hotspot in information security and in copyright protection of digital multimedia content. A webpage information hiding algorithm uses a webpage as the cover-object. In this paper, an algorithm of webpage information hiding based on equal tags is presented, which overcomes the weak imperceptibility and weak resistance to machine filtering of traditional webpage information hiding algorithms, and improves on the embedding capacity of the algorithm based on the order of attribute pairs. According to the embedding rule, a secret message M is first converted to a large integer N, and then N is embedded into the cover webpage by changing tags to their equal tags. Experimental results and analysis show that the algorithm does not increase the size of the cover webpage and has better imperceptibility and security than traditional algorithms; its embedding capacity is also greater than that of the algorithm based on the order of attribute pairs. The algorithm can therefore be used for webpage content protection and covert communication.
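The embedding rule above (message M → large integer N → tag rewriting) can be sketched as follows. The specific equal-tag pair used here, `<br>` versus `<br/>`, is an illustrative assumption; the paper's actual equal-tag table is not reproduced:

```python
# Sketch: each cover-tag slot carries one bit of N by choosing between two
# visually equivalent ("equal") spellings of the same HTML tag, so the
# rendered page is unchanged and the page size is not increased.

def message_to_int(msg: str) -> int:
    return int.from_bytes(msg.encode("utf-8"), "big")

def int_to_message(n: int) -> str:
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

def embed(num_slots: int, n: int):
    """Rewrite num_slots cover tags so that slot i carries bit i of n."""
    return ["<br/>" if (n >> i) & 1 else "<br>" for i in range(num_slots)]

def extract(tags) -> int:
    n = 0
    for i, tag in enumerate(tags):
        if tag == "<br/>":
            n |= 1 << i
    return n
```

A round trip (`int_to_message(extract(embed(16, message_to_int("hi"))))`) recovers the original message, because embedding only swaps between equivalent spellings and never adds bytes beyond the chosen spelling.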
The Application of Information Leakage Defendable Model in Enterprise Intranet Security
Zhao Yong, Liu Jiqiang, Han Zhen, and Shen Changxiang
2007, 44(5):  761-767. 
Confidentiality, one of the goals of information security, is to prevent information from being accessed by unauthorized entities during its storage and distribution. For the sake of confidentiality, terminals in an enterprise network should not leak sensitive information outside the enterprise application environment. In reality, however, such information can be leaked in several ways: 1) via floppy disks, USB disks, and similar media; 2) by printing documents and carrying them away; and 3) through all kinds of network devices. Unfortunately, there has been no reasonable solution that maintains the availability of the system while protecting the confidentiality of sensitive information. In view of this, an intranet information leakage defense security model based on crypt-isolation is proposed, in which each process's behavior is monitored and its security level is adjusted dynamically. When a high-level process writes information to a medium through which it could leak outside, the system encrypts the information automatically. As a result, user behavior is controlled and no sensitive information can be leaked, intentionally or unintentionally. Furthermore, to achieve crypt-isolation, a new key management solution is presented. Combined with existing symmetric encryption algorithms, this solution provides a valuable "one person encrypts, specified people decrypt" capability.
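The "one person encrypts, specified people decrypt" idea above is essentially hybrid key wrapping: one fresh data key encrypts the payload, and that data key is wrapped once per authorized recipient. The toy sketch below illustrates only the structure; it uses a SHA-256 keystream in place of a real symmetric cipher, and all names are hypothetical rather than the paper's actual scheme:

```python
# Toy sketch (NOT a secure implementation): a random data key encrypts the
# payload; the data key is wrapped separately under each recipient's key, so
# only the specified people can unwrap it and decrypt.
import hashlib
import os

def _stream(key: bytes, length: int) -> bytes:
    """Deterministic keystream from a key (stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _stream(key, len(data))))

def encrypt_for(recipient_keys: dict, plaintext: bytes):
    data_key = os.urandom(32)
    wrapped = {name: _xor(data_key, k) for name, k in recipient_keys.items()}
    return _xor(plaintext, data_key), wrapped

def decrypt(ciphertext: bytes, wrapped: dict, name: str, my_key: bytes):
    data_key = _xor(wrapped[name], my_key)   # unwrap, then decrypt
    return _xor(ciphertext, data_key)
```

A production system would replace `_xor`/`_stream` with a vetted authenticated cipher; the point here is only that adding a recipient costs one extra wrapped key, not a re-encryption of the data.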
Research on Simultaneous Multi-Microthreading Architecture
Li Zusong, Xu Xianchao, Hu Weiwu, and Tang Zhimin
2007, 44(5):  768-774. 
With the development of VLSI technology, a single chip can now contain over one billion transistors, and multithreading is the developing trend of future high-performance processors. A novel processor architecture and implementation scheme, simultaneous multi-microthreading (SMMT), which combines simultaneous multithreading with microthreading, is proposed in this paper. SMMT efficiently combines the small hardware overhead of simultaneous multithreading with the ability of microthreading to speed up a single program, and it can fully exploit the microthread-level parallelism of a single program by means of software and hardware co-design. The architecture is implemented on the Godson-2 processor to evaluate its performance. The evaluation results show that simultaneous multi-microthreading speeds up the execution of a single program significantly, improving processor performance with little hardware overhead.
A Low Power Data Cache Design Based on Very Narrow-Width Value
Ma Zhiqiang, Ji Zhenzhou, and Hu Mingzeng
2007, 44(5):  775-781. 
Today, lowering power consumption has become one of the most critical design concerns. Most modern microprocessors employ on-chip caches to bridge the enormous speed disparity between main memory and the central processing unit (CPU), but these caches consume a significant fraction of the total power, so designing power-efficient cache memories becomes increasingly important. Very narrow-width values (VNVs), which need only a few bits to store, account for a large portion of cache accesses and storage. Based on this observation, a low-power very narrow-width value cache (VNVC), which exploits the prevalence of VNVs stored in the cache, is proposed. In VNVC, the data array is divided into a low-bit array and a high-bit array. Under the control of an additional flag bit, the higher bits of the data cells that store a VNV are turned off to save dynamic and static power. VNVC achieves low power consumption through a modification of the data array alone, without any extra assistant hardware, and does not affect cache performance; it thus suits most kinds of cache organization. Experiments on 12 SPEC 2000 benchmarks show that, on average, a 4-bit-width VNVC obtains the best improvement, providing a 29.85% dynamic and a 29.94% static power reduction.
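The flag-bit condition above can be made concrete: a stored word is a very narrow-width value when its two's-complement value fits in the low-bit subarray, so the high-bit subarray can be gated off for that line. The 32-bit word size and the fits-in-k-bits test below are illustrative assumptions, not the paper's exact definition:

```python
# Sketch: classify 32-bit two's-complement values as "very narrow-width"
# (representable in `width` bits), as a per-word flag bit would, and measure
# what fraction of a value stream could have its high bits powered off.

def is_vnv(value: int, width: int) -> bool:
    """True if `value` fits in a signed `width`-bit field."""
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    return lo <= value <= hi

def vnv_fraction(values, width: int) -> float:
    """Fraction of values whose high bits could be gated off."""
    return sum(is_vnv(v, width) for v in values) / len(values)
```

On a trace dominated by small constants and loop counters, `vnv_fraction` is high, which is exactly the prevalence the VNVC design exploits.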
An Innovative Architecture-Level Power Estimation Methodology for Godson Processor
Huang Kun, Zhang Longbing, Hu Weiwu, and Zhang Ge
2007, 44(5):  782-789. 
Research in computer architecture now focuses on how to use the energy budget of a CPU to attain as much performance as possible, so architecture-level power estimation tools are clearly important. Existing architecture-level power simulators focus only on modeling full-custom dynamic circuits and ignore the power modeling of ASIC designs, which are mainly composed of static circuits from standard cell libraries. This paper is therefore concerned with the implementation of a high-performance, low-power general-purpose CPU, the Godson-2 processor; it analyzes the power characteristics of the CPU and implements an architecture-level power estimation methodology aimed at the Godson-2 processor. The methodology carefully takes the power modeling of CMOS static circuits into account, so it is better suited for estimating current high-performance CPU architectures designed with the ASIC methodology. Compared with RTL power estimation, this methodology offers high speed, high flexibility, and good accuracy. On an Intel Xeon 2.4GHz platform, it runs at about 300K instructions per second, 5000 times the speed of RTL power estimation, with only a small error penalty.
Research on Computational Complexity and Approximation Algorithm for General Facility Location Problem
Pan Rui, Zhu Daming, and Ma Shaohan
2007, 44(5):  790-797. 
The facility location problem, i.e. the UFL problem, is an NP-hard combinatorial optimization problem and one of the hot topics in clustering, with important applications in data mining, classification, and ranking. Approximation algorithms for the problem have been a focus of computer scientists for many years; however, most existing results assume a metric space, and results for the problem in general distance spaces have long been lacking. For the facility location problem in a general distance space where the maximum connection cost is at most ω>1 times the minimum connection cost, it is first proved, by making use of the set cover problem and the method of problem reduction, that there is no polynomial-time approximation algorithm with approximation ratio smaller than 1.243+0.316ln(ω-1), unless NP ⊆ DTIME(n^{O(log log n)}); then a (1+ω)/α local search approximation algorithm for the general facility location problem is proposed, where ω>1 and 1≤α≤2. Finally, experimental results show that the quality of the solutions obtained by the local search algorithm is remarkable. Through elaborate experiments, the local search algorithm is explored further and an improved method for it is proposed.
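A minimal local-search loop for uncapacitated facility location, in the spirit of the algorithm above, repeatedly toggles one facility open or closed while total cost (opening plus connection) improves. The single-toggle neighborhood and the tiny instance below are illustrative simplifications, not the paper's exact neighborhood structure:

```python
# Sketch: local search for UFL. conn[c][f] is the connection cost from
# client c to facility f; open_cost[f] is the cost of opening facility f.

def total_cost(open_set, open_cost, conn):
    if not open_set:
        return float("inf")            # at least one facility must be open
    return (sum(open_cost[f] for f in open_set) +
            sum(min(row[f] for f in open_set) for row in conn))

def local_search(open_cost, conn):
    facilities = range(len(open_cost))
    current = set(facilities)          # start with every facility open
    best = total_cost(current, open_cost, conn)
    improved = True
    while improved:
        improved = False
        for f in facilities:
            cand = current ^ {f}       # toggle facility f open/closed
            cost = total_cost(cand, open_cost, conn)
            if cost < best:
                current, best, improved = cand, cost, True
    return current, best
```

For example, with `open_cost = [10, 1]` and two clients `conn = [[1, 2], [1, 2]]`, the search closes the expensive facility and settles on opening only facility 1.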
A Fault Tolerance Deadlock Detection/Resolution Algorithm for the AND-OR Model
Cheng Xin, Liu Hongwei, Dong Jian, and Yang Xiaozong
2007, 44(5):  798-805. 
Distributed systems provide an effective way to build high-performance computing systems at relatively low cost. During a distributed computation, deadlock can occur when resource allocation and resource requests conflict: all the processes wait for each other to release the resources they hold, and the system stops running. In an unreliable distributed system, failures may prevent deadlock detection algorithms from properly detecting deadlocks, and few algorithms in the literature address the handling of such failures. In this paper, three types of failures are identified and a fault-tolerant deadlock detection and resolution algorithm is proposed, in which failures are treated as different detection termination conditions. The algorithm is based on a generalized AND-OR model and employs diffusion computation and centralized reduction to detect deadlocks. It distinguishes cycles from knots and reports all members of a cycle. When the deadlock topology is a static cycle, the upper bounds of the time and space complexity are d and e+n-1 respectively, where e is the number of edges in the wait-for graph and n and d are numbers of nodes. The performance of the proposed algorithm is equal to or better than that of current algorithms.
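For the AND portion of the AND-OR model, a deadlock corresponds to a cycle in the wait-for graph, and the algorithm above reports all members of a cycle. The centralized depth-first sketch below illustrates that reduction step only; it is a simplification, since the paper's algorithm is distributed and also handles OR requests, knots, and failures:

```python
# Sketch: find one cycle in a wait-for graph (AND-model deadlock) and
# return all of its members; returns None when no cycle exists.

def find_cycle(wait_for):
    color, stack = {}, []              # 1 = on current DFS path, 2 = done

    def dfs(u):
        color[u] = 1
        stack.append(u)
        for v in wait_for.get(u, []):
            if color.get(v) == 1:      # back edge: cycle found
                return stack[stack.index(v):]
            if v not in color:
                cycle = dfs(v)
                if cycle:
                    return cycle
        color[u] = 2
        stack.pop()
        return None

    for node in wait_for:
        if node not in color:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None
```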
Dynamic Binding Mechanism and Its Operational Semantics of Component in Multi-Agent System
Chang Zhiming, Mao Xinjun, Wang Ji, and Qi Zhichang
2007, 44(5):  806-814. 
Recently, with the increasing complexity of network-based applications, many complex systems have appeared that are typically autonomous, open, dynamic, and heterogeneous. These systems confront current software theories and technologies with many challenges, one of which is that mechanisms for such systems need to be provided from the viewpoint of software architecture. Agent technology provides higher-level abstractions and a more natural style, different from object orientation and well suited to tackling this complexity, for specifying and designing software systems. However, the component, and the relationship between component and agent, in the software architecture of a multi-agent system (MAS) are still understood from the perspective of object orientation. Many existing agent-oriented methodologies regard the agent class or agent type as the component, with the agent remaining an instance of an agent class or type, which does not meet the dynamic requirements of complex systems. In this paper, the motivation for components in MAS software architecture is analyzed, a dynamic binding mechanism for the relationship between component and agent is proposed, four basic operations based on Caste (join, quit, activate, and inactivate) are put forward, and their operational semantics is defined, in order to implement a high-level model for MAS architectures.
The Dynamic Deployment Problem and the Algorithm of Service Component for Pervasive Computing
Tang Lei, Liao Yuan, Li Mingshu, and Huai Xiaoyong
2007, 44(5):  815-822. 
Due to resource limitations, embedded devices in a pervasive computing environment connect with each other through the network and share resources to provide flexible services. To reuse components and reduce software cost, service components should be composed to provide new services. Because a composite service cannot run on a single device, the dynamic deployment of multiple service components has become a research focus in pervasive computing. Based on Liquid, an embedded, component-based system, the dynamic deployment problem and deployment algorithms for service components in pervasive computing are presented. First, the service model is described and the dynamic deployment problem in the pervasive computing environment is defined. Second, according to the resource constraints and two deployment goals for service components, a randomized algorithm and a heuristic algorithm are given to solve the problem. Finally, experimental data are given to analyze and compare the performance of the different algorithms. With this method, service components are deployed onto embedded devices so as to satisfy resource requirements and improve resource utilization. Simulation and analysis indicate that the algorithms can be applied to pervasive computing environments with many devices or service components.
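A randomized placement in the spirit of the randomized algorithm above can be sketched as follows: each component is assigned to a random device that still has enough capacity. The single scalar resource and the names below are illustrative assumptions, not the Liquid system's actual model:

```python
# Sketch: randomized deployment of service components onto devices under a
# single capacity constraint. Returns None when some component cannot fit.
import random

def random_deploy(components, capacity, seed=0):
    """components: name -> resource need; capacity: device -> free resource."""
    rng = random.Random(seed)          # seeded for reproducible sketching
    placement = {}
    remaining = dict(capacity)
    for comp, need in components.items():
        fits = [d for d, free in remaining.items() if free >= need]
        if not fits:
            return None                # deployment infeasible
        device = rng.choice(fits)
        placement[comp] = device
        remaining[device] -= need
    return placement
```

A heuristic variant would replace `rng.choice(fits)` with, say, a best-fit rule; the feasibility check stays the same.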
VPGE: An LALR(1) Parser Visualization and Breakpoint Debugging System
Li Hu, Jin Maozhong, and Xu Fu
2007, 44(5):  823-828. 
Parser generators such as YACC are used in a large number of applications, by non-specialist developers and not just for compiler construction. A consequence is that good support is required for understanding LALR(1) parsing techniques in order to develop correct, complete, and conflict-free grammars. Several types of potential problems in grammars input to LALR(1) parser generators are defined, and an LALR(1) parser visualization and debugging system called VPGE is described. VPGE is an interactive system that visualizes the operation of the parser, supporting step-by-step simulation of the generated parser as well as breakpoints attached to grammar productions. Experimental results show that parser generation in VPGE is even faster than in GNU Bison, which makes VPGE a fast LALR(1) grammar debugging environment.
A Formal Framework for Reasoning on Metadata Based on CWM
Zhao Xiaofei and Huang Zhiqiu
2007, 44(5):  829-836. 
During metadata creation based on the Common Warehouse Metamodel (CWM), the different experiences and data-description viewpoints of the organizations involved inevitably cause problems such as inconsistency and redundancy. However, reasoning on CWM metadata to detect these problems automatically is difficult, because the CWM metamodel and metadata are rendered to users as graphs, which lack precise semantics. In this paper, an approach is proposed to formalize and reason about the CWM metamodel and metadata in terms of a description logic, a subset of first-order logic. First, according to the specifics of the CWM metamodel and metadata, the description logic DL_id, which supports identification constraints on concepts, is proposed. Consistency is then distinguished into horizontal consistency and evolution consistency. For evolution consistency, the CWM metamodel is extended with version capabilities so that inconsistencies caused by evolution can be reasoned about, and the formalization of the CWM metamodel and metadata into a DL_id knowledge base for horizontal and evolution consistency checking is studied. Finally, the reasoning engine LOOM is applied to check consistency in both situations, and the results are encouraging. The approach can be exploited to develop intelligent systems that support automated reasoning on CWM metadata, so as to support the development of data warehouse components and thus improve the reliability of metadata integration and of data warehouse systems.
A Highly Condensed and Semantics-Preserving Data Cube
Xiang Longgang and Gong Jianya
2007, 44(5):  837-844. 
To support fast responses to ad-hoc, complex OLAP queries, the data cube was introduced. Among the data cube methods proposed in the literature, the quotient cube and the QC-tree are two important ones, because they condense the size of a data cube while keeping its semantics. However, the former does not store any semantics, and the latter stores the semantics in an obscure, implicit manner. Following this line of work and addressing this problem, the drill-down cube is proposed in this paper. The drill-down cube considers data cube storage from the viewpoint of drill-down semantics: it stores the drill-down relations between classes rather than the contents of classes. In a drill-down cube, each class is represented as a node, and each direct drill-down relation is captured as an edge between two nodes. Analysis and experiments show that the drill-down cube not only reduces the storage size of a data cube dramatically but also captures its drill-down semantics naturally and clearly. Query answering against a drill-down cube, for both point queries and range queries, is also discussed; the key idea is to drill down from the root to all target nodes. The drill-down cube performs fairly well at query answering, especially for range queries, as verified in the empirical evaluation.
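The node-and-edge representation above, with point queries answered by drilling down from the root, can be sketched on a toy lattice. The class structure below is a hypothetical miniature, not the paper's storage format:

```python
# Sketch: a drill-down cube stores one node per class (with its aggregate
# measure) plus the direct drill-down edges; a point query walks down from
# the root node to the target class.

class DrillDownCube:
    def __init__(self):
        self.measure = {}                  # class node -> aggregate value
        self.children = {}                 # class node -> direct drill-downs

    def add(self, node, value, parent=None):
        self.measure[node] = value
        self.children.setdefault(node, [])
        if parent is not None:
            self.children[parent].append(node)   # one drill-down edge

    def point_query(self, target, root):
        stack = [root]                     # drill down from the root
        while stack:
            node = stack.pop()
            if node == target:
                return self.measure[node]
            stack.extend(self.children[node])
        return None
```

A range query would collect every node reached within the queried bounds during the same walk, rather than stopping at one target.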
An Extended Algorithm for Constraint-Based XML Query Rewriting
Jin Xin and Jin Yuanping
2007, 44(5):  845-852. 
Query rewriting is one of the essential issues closely related to data integration, query optimization, and physical data independence. While most previous work has focused on the relational model, the University of Michigan's Timber project group recently presented a novel algorithm for constraint-based XML query rewriting that operates directly on nested structures. However, the algorithm does not consider query rewriting with built-in predicates, so its scope of application is limited. In this paper, an extension of the algorithm is proposed to handle the presence of built-in predicates. The extended algorithm incorporates a concept of inferred constraints within XML mapping rules and, after the translation phase, seeks implicit assumptions with non-Skolem functions to substitute for Skolem terms within the built-in predicates. By this means the problem is solved and the applicability of XML query rewriting is enhanced. It is also proved that the extended algorithm finds the maximally contained rewriting over the XML schema in the presence of built-in predicates. Both performance analysis and measurements of the extended algorithm show that it does not incur a significant increase in cost.
Normalization of Temporal Scheme with Respect to Temporal Multivalued Dependency with Multiple Time Granularities
Hao Zhongxiao and Li Yanjuan
2007, 44(5):  853-859. 
The purpose of good database logical design is to eliminate data redundancy as well as insertion, deletion, and update anomalies. For temporal databases, normalization has been widely studied using constraints of temporal functional dependencies with multiple time granularities. Following the normalization process of traditional relational databases, the concept of temporal multivalued dependency (TMVD) with multiple time granularities is introduced, based on temporal functional dependency and the theory of multivalued dependencies in traditional relational databases, and an axiomatization for TMVDs is given. Because a finite set of TMVDs usually implies an infinite number of TMVDs, the notion of a finite closure is introduced, together with an axiomatization, to effectively capture a finite set of implied TMVDs, which is essential to logical design. The temporal fourth normal form (T4NF) with respect to TMVDs is defined, a decomposition algorithm that yields lossless T4NF decompositions is presented, and the complexity of the algorithm is analyzed, along with a proof of its correctness and termination.
A Curvature-Driven Image Inpainting Model Based on Helmholtz Vorticity Equation
Wu Jiying and Ruan Qiuqi
2007, 44(5):  860-866. 
Image inpainting automatically restores the target region of an image according to the image's known information, so that the processed image conforms to the human vision system: the contour of the target region is smooth and the repair is hard to notice. Based on theoretical analysis, the inviscid Helmholtz vorticity equation, an equation from fluid mechanics, is used to inpaint images, and its equivalence to the inpainting model is proved in this paper. Drawing on kinetic equations of curves and surfaces, curvature is used to drive the isophote transport directions in the inviscid Helmholtz vorticity inpainting model. Because curvature is determined by the geometric structure of the image, the new model preserves linear structures well. Vorticity serves as the image smoothness measure, and in the two-dimensional (2-D) image domain it has a diffusion property; diffusing the result of the Helmholtz vorticity equation lets the information between isophotes interact. The coefficients and the diffusion procedure in the Helmholtz equation are determined by the diffusion and dissipation properties of vorticity, so the equation processes the image on the basis of its geometric properties. The diffused inpainting model is stable and does not produce erroneous transport directions. Both theoretical analysis and experiments verify the validity of the proposed curvature-driven Helmholtz vorticity image inpainting model.
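For reference, the vorticity transport equation referred to above can be written in its standard 2-D fluid-mechanics form (the notation below is the usual convention, an assumption here since the abstract does not fix symbols):

```latex
% 2-D vorticity transport (Helmholtz) equation; \nu = 0 gives the inviscid case
\frac{\partial \omega}{\partial t} + \mathbf{v} \cdot \nabla \omega = \nu \, \Delta \omega
```

In the inpainting analogy, ω plays the role of the image smoothness measure and **v** the curvature-driven transport field along the isophotes; the inviscid case (ν = 0) is the pure transport model, and the diffusion step corresponds to ν > 0.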
An Efficient Color Image Retrieval Technique Based on Multi-Features of Bit-Plane
Wang Xiangyang and Hu Fengli
2007, 44(5):  867-872. 
Content-based image retrieval has become a significant research topic because of the proliferation of video and image data in digital form. Increased Internet bandwidth will soon allow users to search and browse video and image databases at remote sites, so fast retrieval of images from large databases is an important problem that needs to be addressed. The disadvantages of traditional color image retrieval based on color histograms are that it ignores the spatial distribution of color and has high computational complexity; moreover, its retrieval results on noisy images are not as good as expected. An efficient color image retrieval technique based on multiple bit-plane features is therefore proposed in this paper. First, according to the characteristics of noise attacks, the significant bit-planes are extracted from the color image. Second, weighted color histograms are extracted from the significant bit-planes as the color feature, and the spatial information entropy of each significant bit-plane is computed as the spatial feature. Finally, the similarity between color images is computed using a combined index based on the color and spatial features. Experimental results show that the proposed technique retrieves the images of interest more accurately and efficiently; in particular, it can effectively retrieve noisy (including blurred, sharpened, and differently illuminated) images.
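The two feature-extraction steps above can be sketched on a flat list of 8-bit pixel values: extracting a bit-plane, and computing a simple entropy per plane as the spatial feature. Treating the higher planes as the noise-robust "significant" ones, and using plain Shannon entropy, are assumptions for illustration:

```python
# Sketch: extract one bit-plane of an 8-bit channel and compute its
# Shannon entropy, the per-plane spatial feature described above.
import math

def bit_plane(pixels, k):
    """Bit k (0 = least significant) of each 8-bit pixel value."""
    return [(p >> k) & 1 for p in pixels]

def plane_entropy(plane):
    n = len(plane)
    ones = sum(plane)
    entropy = 0.0
    for count in (ones, n - ones):
        if count:
            p = count / n
            entropy -= p * math.log2(p)
    return entropy
```

A uniform plane (all zeros or all ones) has entropy 0, while a balanced plane has entropy 1 bit; comparing these per-plane entropies between images supplies the spatial part of the combined similarity index.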
Synthesized Data Driving: An Approach Toward Signer-Independent Sign Language Recognition
Jiang Feng, Gao Wen, Yao Hongxun, Zhao Debin, and Chen Xilin
2007, 44(5):  873-881. 
The lack of training samples is a pressing problem in sign language recognition research. In this paper, a method of using derivative data is proposed, which facilitates synthesizing the dynamic multi-stream samples employed in sign language training. With the mean-shift algorithm, the direction of the density-function gradient can be obtained easily, thus controlling the direction and intensity of derivation. At the same time, the algorithm ensures that the synthesized samples include as much effective signer-independent information as possible. Moreover, the method realizes a transformation of the data that is irreversible in the initialization process of the recognition system. The driving effect of the synthesized data depends on the capacity of the model as well as on the intensity and direction of synthesis. Assessing the driving effect in various experimental environments shows that in most cases the recognition rate is raised, and in some cases it is even markedly raised.
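A single one-dimensional mean-shift step, of the kind used above to move synthesized samples along the density gradient, can be sketched as a kernel-weighted mean. The Gaussian kernel and bandwidth here are standard choices assumed for illustration, not the paper's configuration:

```python
# Sketch: one mean-shift step. The point x moves toward the weighted mean
# of the samples, i.e. in the direction of the density-function gradient,
# which is what controls the direction and intensity of derivation.
import math

def mean_shift_step(x, samples, bandwidth):
    weights = [math.exp(-((x - s) / bandwidth) ** 2 / 2) for s in samples]
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)
```

Iterating this step converges to a mode of the sample density; stopping early, or scaling the step, tunes how far a synthesized sample drifts from the original signer's data.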
QoS Optimized Spanning Tree for Layer 2 Multicast Applications
Wang Xianlei, and Wu Zhimei
2007, 44(5):  882-889. 
Recently, more and more QoS-sensitive multicast traffic has entered Ethernet, requiring higher quality of service (QoS) than traditional data traffic. The current spanning tree protocol (STP) cannot provide satisfactory QoS for this new type of traffic: multicast traffic has an inherent P2MP characteristic, so where the root bridge of the STP is located greatly affects its QoS at layer 2. However, the current STP simply selects the bridge with the smallest bridge ID as the root bridge, leaving QoS out of consideration. In this paper, from the point of view of multicast receivers, the concept of an L2 multicast QoS-optimized spanning tree is put forward, and it is proved that when the multicast source is mobile, locating it at the root bridge of the optimized spanning tree gives the multicast traffic optimized QoS; when the source is fixed, the bridge at which the source is located is selected as the root bridge of the STP to obtain the best multicast QoS. In addition, it is shown that unicast traffic passing through the root bridge of the optimized spanning tree gains optimized QoS too. Finally, an approximate lookup algorithm for the optimized root bridge is proposed as a complement to the spanning tree algorithm, and its validity and reliability are verified experimentally.
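The idea of selecting a receiver-aware root bridge can be sketched as a search for the bridge that minimizes the worst-case hop distance to the multicast receivers. The minimax criterion below is an illustrative optimization target, not necessarily the paper's exact objective or approximation algorithm.

```python
from collections import deque

def hops_from(graph, src):
    """BFS hop counts from one bridge to every reachable bridge;
    `graph` maps bridge -> list of neighbouring bridges."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def pick_root(graph, receivers):
    """Pick the bridge whose worst-case hop distance to the multicast
    receivers is smallest, instead of the smallest bridge ID."""
    return min(graph, key=lambda b: max(hops_from(graph, b)[r] for r in receivers))
```

On a chain of bridges with receivers at both ends, this picks the middle bridge, whereas standard STP would pick whichever bridge happens to have the lowest ID.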
Routing Algorithms of the Wireless Sensor Network Based on Dynamic Programming
Yang Wenguo, Guo Tiande, and Zhao Tong
2007, 44(5):  890-897. 
Routing is one of the most important issues in wireless sensor networks. In a sensor network, data is transmitted from the source to the sink node in a multi-hop mode, so the dynamic programming principle lends itself well to designing routing algorithms. Based on dynamic programming, a hop value for each node, indicating the number of hops needed to communicate with the sink, is generated by a hop-number generation algorithm. As far as the topology of the sensor network is concerned, the difference between the hop values of a node and its neighbor is no more than 2. Then three algorithms are presented: minimal-hop routing, minimal-hop with maximal-residual-energy routing, and minimal-hop with minimal-cost routing. They can effectively pick out paths between the sink and the source node that meet different design targets. The relationship between minimal-hop-with-minimal-cost routing and minimal-cost routing is studied, and a necessary and sufficient condition under which a minimal-hop-with-minimal-cost route is also a minimal-cost route is given. Energy consumption analysis shows that the proposed routing algorithms can save energy to a great degree.
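The hop-value table and the second of the three algorithms can be sketched as follows, assuming a neighbor map and a residual-energy map per node (both illustrative; the paper's exact message exchange is not reproduced here).

```python
from collections import deque

def assign_hops(neighbors, sink):
    """Flood from the sink so every node learns its hop value, i.e. the
    minimum hop count to the sink (the dynamic-programming table)."""
    hop = {sink: 0}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in neighbors[u]:
            if v not in hop:
                hop[v] = hop[u] + 1
                q.append(v)
    return hop

def min_hop_max_energy_route(neighbors, energy, hop, source):
    """Greedy minimal-hop with maximal-residual-energy routing: at each
    step move to a neighbour one hop closer to the sink, breaking ties
    by maximal residual energy."""
    path = [source]
    while hop[path[-1]] > 0:
        nxt = max((v for v in neighbors[path[-1]] if hop[v] == hop[path[-1]] - 1),
                  key=lambda v: energy[v])
        path.append(nxt)
    return path
```

Minimal-hop routing is the same loop with an arbitrary tie-break, and the minimal-cost variant would replace the energy key with a link-cost comparison.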
A Local Obstacle Avoidance Method Based on Velocity Space Approach in Dynamic Environments
Shi Chaoxia, Hong Bingrong, and Wang Yanqing
2007, 44(5):  898-904. 
Local obstacle avoidance in dynamic environments, as a principal capability of mobile robots, plays an important role in autonomous navigation. Velocity space methods such as the curvature velocity method (CVM), the lane curvature method (LCM), and the beam curvature method (BCM) formulate local obstacle avoidance as constrained optimization in the velocity space, and thus perform better than other local obstacle avoidance techniques by taking into account the physical constraints of the environment and the dynamics of the vehicle. A new local obstacle avoidance approach is presented in this paper to remedy some limitations of the traditional velocity space methods. The conversion from Cartesian space to configuration space makes it possible to use the proposed method in unknown or partially known environments. By adding the beam width to the objective function and combining the proposed collision prediction model with the improved BCM, the method not only inherits the smoothness of CVM, the safety of LCM, and the speed of BCM, but also realizes smooth, safe, and fast navigation in dynamic environments. Comparative navigation experiments with real mobile robots in both static and dynamic scenes demonstrate that the proposed method is both feasible and effective.
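The role of the beam width in the objective function can be sketched as a weighted beam-scoring step. The beam representation and the weights here are assumptions for illustration; the paper's improved BCM objective and collision prediction model are not reproduced.

```python
import math

def best_beam(beams, goal_heading, w_clear=0.6, w_goal=0.3, w_width=0.1):
    """Score candidate beams by clearance, goal alignment, and beam width,
    and return the best one. Each beam is (heading, clearance, width),
    with clearance and width pre-normalised to [0, 1]."""
    def score(b):
        heading, clearance, width = b
        goal_term = 1.0 - abs(goal_heading - heading) / math.pi  # 1 when aligned
        return w_clear * clearance + w_goal * goal_term + w_width * width
    return max(beams, key=score)
```

Including the width term biases the robot toward wide open corridors rather than narrow gaps that merely point at the goal, which is what gives the improved method its extra safety margin in cluttered scenes.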