ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 February 2006, Volume 43 Issue 2
A Distributed Energy-Efficient Location-Independent Coverage Protocol in Wireless Sensor Networks
Mao Yingchi, Liu Ming, Chen Lijun, Chen Daoxu, and Xie Li
2006, 43(2):  187-195. 
Abstract ( 448 )   HTML ( 1)   PDF (604KB) ( 630 )
One of the major challenges in constructing WSNs is to maintain a long network lifetime as well as a sufficient sensing area. In this paper, the broad problems of coverage in WSNs are discussed, and it is observed that existing coverage protocols depend on infrastructure (e.g., GPS or directional antennas) or on localization schemes. To eliminate this reliance on infrastructure, a distributed energy-efficient location-independent coverage protocol (DELIC) is proposed, which preserves coverage using only local neighborhood information. By comparing its residual energy with that of its neighbors, a node can independently decide whether to compete to become a working node. Simulation and analysis demonstrate that DELIC not only provides high-quality area coverage and good scalability but also better energy efficiency: it outperforms the PEAS, GAF-like, sponsor-area, and OGDC algorithms with respect to coverage quality, total energy consumption, and energy-consumption balance.
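The local competition rule can be sketched as follows (an illustrative reading of the abstract, not the authors' code; the function and parameter names are ours, and the tie-break by node id is our assumption so that exactly one of two equal-energy neighbors wins):

```python
# Illustrative sketch: a node elects itself a working node only if its
# residual energy is the highest in its local neighborhood; ties are
# broken by node id so that exactly one contender wins.
def competes_to_work(node_id, energy, neighbors):
    """neighbors: list of (neighbor_id, neighbor_energy) pairs."""
    for nid, e in neighbors:
        if e > energy or (e == energy and nid < node_id):
            return False  # a neighbor has priority; this node sleeps
    return True  # locally highest residual energy: become a working node
```

Each node needs only its one-hop neighbor list to evaluate this rule, which is what makes the decision fully distributed and location-independent.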
A Packet Classification Algorithm Based on Self-Adaptive Cache
Zhang Jianyu, Wei Tao, and Zou Wei
2006, 43(2):  196-203. 
Abstract ( 505 )   HTML ( 0)   PDF (455KB) ( 565 )
An applicable, easy-to-implement, high-performance packet classification algorithm, CSAC (classification on self-adaptive cache), is presented. It caches the search path of the packet set within a field subspace, reuses the search result to classify subsequent packets in the same field subspace, and reduces the cache hit-miss penalty. Tracking how the cache state changes with fluctuations in network traffic, the algorithm introduces a self-adaptive cache scheme that effectively guarantees the cache hit ratio by dynamically adjusting the granularity and structure of the cache and the locations of cache items in hash buckets. Furthermore, CSAC does not need the preprocessing phase required by most heuristic algorithms, and it supports multi-field complex rules (such as layer 4-7 fields and logical match operations) and incremental updates of the rule set. It is suitable for applications with various packet classification requirements, such as network edge security, traffic auditing, and load balancing. Firewall and IDS appliances using CSAC have shown favorable performance in actual network environments.
PFED: A Prediction-Based Fair Active Queue Management Algorithm
Gao Wenyu, Wang Jianxin, and Chen Songqiao
2006, 43(2):  204-210. 
Abstract ( 595 )   HTML ( 2)   PDF (393KB) ( 691 )
A novel active queue management algorithm named PFED is proposed, based on network traffic prediction. The main properties of PFED are: (1) stabilizing the queue length at a desirable level by taking future traffic into account, using an MMSE (minimum mean square error) predictor to forecast future network traffic; (2) imposing effective punishment upon misbehaving flows with a fully stateless method; and (3) keeping the queue arrival rate at or below the queue service rate through a more reasonable calculation of the packet drop probability. To verify its performance, PFED is implemented in NS2 and compared with RED and CHOKe with respect to different performance metrics. Simulation results show that PFED outperforms RED and CHOKe in stabilizing the instantaneous queue length and in fairness. It is also shown that PFED enables the link capacity to be fully utilized by stabilizing the queue length at a desirable level, without incurring an excessive packet loss ratio.
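The prediction-driven drop decision might look roughly like this (a hedged sketch only: the abstract does not give the paper's MMSE predictor or its drop-probability formula, so a simple least-squares AR(1) predictor stands in, and all names are our assumptions):

```python
# Hedged sketch of the PFED idea: predict the next arrival rate with a
# one-step AR(1) MMSE predictor fit by least squares, then drop just
# enough packets to keep the predicted arrival rate at or below the
# service rate.
def ar1_mmse_predict(history):
    """One-step prediction x_hat[n+1] = a * x[n], with a fit by least squares."""
    num = sum(x0 * x1 for x0, x1 in zip(history, history[1:]))
    den = sum(x * x for x in history[:-1])
    a = num / den if den else 0.0
    return a * history[-1]

def drop_probability(history, service_rate):
    predicted = ar1_mmse_predict(history)
    if predicted <= service_rate:
        return 0.0  # predicted load fits: no early drops needed
    return 1.0 - service_rate / predicted  # shed the predicted excess
```

The point of property (3) in the abstract is visible here: the drop probability is sized so that the surviving traffic matches the service rate rather than reacting only after the queue overflows.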
TBSF: A Two-Phase Bluetooth Scatternet Formation Algorithm
Li Xiang and Yang Xiaozong
2006, 43(2):  211-217. 
Abstract ( 559 )   HTML ( 3)   PDF (346KB) ( 579 )
Bluetooth is an emerging low-power, low-cost, short-range radio technology that enables portable devices to form short-range wireless networks, and it is considered a promising platform for constructing low-cost mobile ad hoc networks. In this paper, an asynchronous and completely distributed two-stage algorithm, TBSF, is proposed to construct a scatternet over Bluetooth. First, the randomly distributed Bluetooth nodes form a series of isolated piconets; next, these piconets are interconnected into a scatternet. Masters and bridges are elected based on their number of neighbors, every bridge is assigned an exact role by a role-transition diagram, and every pair of adjacent piconets is connected by one route. The resulting scatternet is connected, and its master and bridge nodes constitute a connected dominating set of the scatternet. Simulation results confirm the good functionality of the Bluetooth scatternet created by TBSF.
A Bidirectional Path Re-Selection Based Load-Balanced Routing Protocol for Ad-Hoc Networks
Zhang Xiangquan, and Guo Wei
2006, 43(2):  218-223. 
Abstract ( 482 )   HTML ( 0)   PDF (298KB) ( 482 )
In order to select the best-cost route and balance the traffic load distributed in ad hoc networks, a cross-layer load-aware and bidirectional path re-selection based load-balanced routing (CLBLR) algorithm combines the total path average estimated delay with the total path traffic load as the primary metric for route selection and adjustment, and re-selects routes bidirectionally during both the route discovery and route maintenance periods. In addition, the protocol exploits updated load information during route discovery by forbidding intermediate nodes from replying to route request packets, and prevents heavily loaded nodes from becoming intermediate nodes of new routes by dropping route request packets, which endows the protocol with congestion-control and admission-control capability. With these properties, the protocol can reduce congested nodes and bottlenecks in the network and improve network performance. Simulation results show that CLBLR achieves good packet delivery ratio, average end-to-end delay, and routing overhead, exhibiting many attractive features of distributed control that adapt to dynamic ad hoc networks.
Modeling of Autonomous Network Information Service
Dong Wenyu, Sun Donghong, Xu Ke, and Li Xuedong
2006, 43(2):  224-230. 
Abstract ( 531 )   HTML ( 2)   PDF (369KB) ( 536 )
As autonomy is inevitable in network information services to achieve more intelligent human-machine interaction, a situation-calculus-based modeling methodology is proposed for this purpose. Based on Reiter's situation calculus, a validity theory is introduced to enrich common-knowledge-based validity computing, a hierarchical knowledge base is introduced to make situation calculus easier for technicians and developers to understand, and an XML-based ad hoc script language, ScML, is developed to make situation calculus processable by computers as structured text. ScML's syntactical verification can be easily achieved with XML Schema, and program generation with XSL. The methodology is demonstrated by an autonomous calendaring service.
An Algorithm Based on Mobility Prediction and Probability for Energy-Efficient Multicasting in Ad Hoc Networks
Luo Yuhong, Chen Songqiao, and Wang Jianxin
2006, 43(2):  231-237. 
Abstract ( 446 )   HTML ( 0)   PDF (426KB) ( 636 )
Untethered nodes in mobile ad hoc networks strongly depend on the efficient use of their batteries. A new metric is proposed, the power cost function (PCF), which reflects both the remaining battery capacity and the usage condition of an active node. A distributed algorithm called M-REMiT (an algorithm based on mobility prediction and probability for refining an energy-efficient multicast tree) is proposed for building an energy-efficient multicast tree in mobile ad hoc networks. M-REMiT takes energy efficiency into account by not only reducing the total energy consumption (TEC) for multicasting in a source-based tree but also extending the lifetime of each node and the system lifetime (SL) of the multicast tree. Simulations show that it offers better performance than previous proposals for energy-efficient multicasting with mobile nodes.
A Cluster Based Security Scheme in Wireless Ad Hoc Networks
Zhang Xiaoning and Feng Dengguo
2006, 43(2):  238-243. 
Abstract ( 370 )   HTML ( 0)   PDF (359KB) ( 498 )
Wireless ad hoc networks are networks that need no communication infrastructure, and they are being used in more and more areas. The security of ad hoc networks has become a hot research topic in recent years. The security of cluster-based ad hoc networks is analyzed, and a new scheme is proposed to solve the security problems existing in such networks. The main contributions of the scheme are as follows: ① a security mechanism to secure the cluster heads; and ② a mechanism, with a related algorithm, that removes the security defects in the roaming of mobile nodes. Simulation is performed to test the scheme, and the experimental results show that it is feasible. Compared with related security schemes for cluster-based ad hoc networks, the proposed scheme is more secure and yields better network performance.
Blind Source Separation Based on Genetic Algorithm
Yi Yeqing, Lin Yaping, Lin Mu, Li Xiaolong, and Wang Lei
2006, 43(2):  244-252. 
Abstract ( 502 )   HTML ( 2)   PDF (558KB) ( 588 )
Recovering unobserved source signals from their mixtures is a typical problem in array processing and data analysis. Independent component analysis (ICA) is a new and powerful method for solving this problem. JADE (joint approximate diagonalization of eigenmatrices), based on 4th-order cumulants, is a common approach to ICA; however, it can only obtain an approximate solution when k>2, and the results are not accurate. In this paper, a blind source separation method based on a genetic algorithm is proposed. Analysis and simulations suggest that the scheme is feasible and effective.
Spatial Reasoning Combining Topological and Cardinal Directional Relation Information
Sun Haibin and Li Wenhui
2006, 43(2):  253-259. 
Abstract ( 501 )   HTML ( 0)   PDF (461KB) ( 517 )
The region-based RCC8 calculus (RCC8) and cardinal direction calculus (CDC) in qualitative spatial reasoning are combined, and interaction tables for the two calculi in both directions, i.e. RCC8-to-CDC and CDC-to-RCC8, are presented. A path-consistency algorithm for constraint satisfaction problems (CSPs) combining RCC8 and CDC knowledge is proposed, which is an adaptation of Allen's well-known algorithm. The algorithm is implemented using two queues that can be operated in parallel, and the interaction operation based on the interaction tables is embedded in it to enforce consistency over the whole CSP combining RCC8 and CDC knowledge. The computational complexity of the algorithm is polynomial.
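A generic path-consistency skeleton of the kind adapted here can be sketched as follows (illustrative only: the RCC8/CDC composition and interaction tables are not reproduced, `compose` is a stand-in supplied by the caller, and the single work list below simplifies the paper's two parallel queues):

```python
# Generic path-consistency enforcement on a qualitative constraint
# network. C[i][j] is the set of base relations allowed between
# entities i and j; `compose(R1, R2)` returns the composition of two
# relation sets under the calculus's composition table.
def path_consistency(C, compose):
    n = len(C)
    queue = [(i, j) for i in range(n) for j in range(n) if i != j]
    while queue:
        i, j = queue.pop()
        for k in range(n):
            if k in (i, j):
                continue
            # Refine C[i][k] by the path i -> j -> k.
            refined = C[i][k] & compose(C[i][j], C[j][k])
            if refined != C[i][k]:
                if not refined:
                    return None  # empty relation: network is inconsistent
                C[i][k] = refined
                queue.append((i, k))  # re-examine the changed edge
    return C
```

The test below exercises the skeleton with the tiny point algebra over {'<', '=', '>'}, whose composition table is easy to verify by hand; the RCC8 and CDC tables would slot in the same way.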
Perceptron for Language Modeling
Yu Hao, Bu Fenglin, and Gao Jianfeng
2006, 43(2):  260-267. 
Abstract ( 575 )   HTML ( 2)   PDF (498KB) ( 737 )
The perceptron is a type of neural network (NN) that can acquire pattern recognition ability through supervised learning. In this paper, two perceptron training rules for language modeling (LM) are introduced as an alternative to traditional training methods such as maximum likelihood estimation (MLE). Variants of perceptron learning algorithms are presented, and the impact of different training parameters on performance is discussed. Since there is a strict restriction on language model size, feature selection is conducted based on the empirical risk minimization (ERM) principle before modeling. Model performance is evaluated on the task of Japanese kana-kanji conversion, which converts phonetic strings into appropriate word strings. An empirical study of the perceptron learning variants is conducted based on the two training rules, and the results show that perceptron methods substantially outperform the traditional methods for LM.
Fusion of Clustering Trigger-Pair Features for POS Tagging Based on Maximum Entropy Model
Zhao Yan, Wang Xiaolong, Liu Bingquan, and Guan Yi
2006, 43(2):  268-274. 
Abstract ( 658 )   HTML ( 0)   PDF (407KB) ( 661 )
Part-of-speech (POS) information is required before more complex analysis can be constructed. Traditional POS taggers are based on hidden Markov models (HMMs); however, an HMM cannot incorporate the long-distance lexical features that help predict the right POS. To solve this problem, a kind of "W_A→W_B:T_B" trigger pair, which contains long-distance lexical information, is first proposed, and then a better correlation measure, average mutual information (AMI) instead of mutual information (MI), is used to extract trigger pairs from the training corpus. To cope with the sparseness of the trigger word W_A, word clustering is performed to build clustering trigger pairs by semantic-similarity calculation based on the vector space model. Finally, the high-quality clustering trigger pairs are added to the POS tagging system as a new kind of feature under the maximum entropy framework. Experiments show that the tagging error of the new model is reduced by 34% compared with the HMM. The idea can also be applied to Pinyin-to-character conversion and word sense disambiguation.
A Web Search Result Clustering Based on Tolerance Rough Set
Yi Gaoxiang and Hu Heping
2006, 43(2):  275-280. 
Abstract ( 344 )   HTML ( 0)   PDF (387KB) ( 541 )
Most Web clustering algorithms consider classes as mutually exclusive concepts; few take the overlap of concepts between clusters into account, so the clustering results are not very good. In fact, a single page usually falls into several categories; that is, there exist indiscernible relations between clusters. Rough set theory, first presented by Professor Pawlak in 1982, is a perfect tool for denoting indiscernible relations between sets. A k-means algorithm for Web search-result clustering based on tolerance rough sets is proposed. First, Web documents are denoted by a vector space model over terms. Then, term co-occurrence values are used to describe the tolerance class of each term, which extends the capability of a term to represent documents. Finally, a Web search-result clustering algorithm is implemented in which the similarity between documents is described by term tolerance classes, and a simple, intuitive T criterion for estimating cluster precision is also presented. The proposed solution is evaluated on search results returned from actual Web search engines and compared with other recent methods; comprehensible cluster labels and a good improvement are gained by using tolerance classes in Web result clustering.
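The construction of term tolerance classes from co-occurrence can be sketched as follows (an illustrative reading of the abstract: the co-occurrence threshold `theta` and the exact tolerance definition are our assumptions, not the paper's formulas):

```python
# Sketch of term tolerance classes: two terms are tolerant of each
# other when they co-occur in at least `theta` documents; a term's
# tolerance class is itself plus every term tolerant with it. Unlike
# an equivalence class, tolerance classes may overlap, which is what
# lets a document belong to several clusters.
def tolerance_classes(docs, theta):
    """docs: list of sets of terms; returns {term: tolerance class}."""
    terms = sorted({t for d in docs for t in d})
    def cooc(a, b):
        return sum(1 for d in docs if a in d and b in d)
    return {t: {t} | {u for u in terms if u != t and cooc(t, u) >= theta}
            for t in terms}
```

Document similarity in the clustering step would then be computed over these (possibly overlapping) classes rather than over raw terms.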
An Efficient Algorithm for Computing an Irreducible Rule Set in Active Database
Hao Zhongxiao, and Xiong Zhongmin
2006, 43(2):  281-287. 
Abstract ( 495 )   HTML ( 0)   PDF (362KB) ( 549 )
Termination decision in active databases is an important problem and has become a focus for many researchers. Several works suggest proving termination at compile time using triggering and activation graphs, for which computing an irreducible rule set is the key technique. But due to the varying conservativeness of the existing approaches, the irreducible rule sets they compute can often be reduced further. This defect impairs not only the correctness of termination decision at compile time but also the efficiency of run-time rule analysis. In this paper, the characteristics of non-termination for an activation rule are analyzed in detail, and concepts such as activation path, inhibited activation cycle, and inhibited activation rule are proposed. Based on these concepts, an efficient algorithm for computing an irreducible rule set is presented, which can further reduce the irreducible rule sets computed by the existing approaches.
Using Histograms to Estimate the Selectivity of XPath Expression with Value Predicates
Wang Yu, Meng Xiaofeng, and Wang Shan
2006, 43(2):  288-294. 
Abstract ( 408 )   HTML ( 0)   PDF (455KB) ( 456 )
Selectivity estimation of path expressions is the basis of XML query optimization and is also of intense research interest. A path expression may contain multiple branches with value predicates, and some values and nodes of the XML data are highly correlated. Previous methods of selectivity estimation rarely take this relation into consideration, assuming instead that the selectivity of attribute values on different nodes and structures is independent and uniform. In this paper, a novel value histogram is proposed that captures the correlation between the structures and the values in XML data. Six operations are also defined on the value histograms as well as on the traditional histograms that capture the positional distribution of nodes. Based on these operations, the selectivity of any node (or branch) in a path expression can be estimated. Experimental results indicate that the method provides accuracy, especially in cases where the distribution of values or structure exhibits a certain correlation, without any independence assumption.
ASGT: An Approach to Concurrency Control in Mobile Transaction Management Based on Prediction and Adaptation
Li Xiaorong, and Shi Baile
2006, 43(2):  295-300. 
Abstract ( 470 )   HTML ( 0)   PDF (382KB) ( 450 )
Mobile transaction management is one of the most important fields in mobile database research. Though disconnection is no longer a main problem given the high quality of today's wireless networks, the high instability of bandwidth still produces large fluctuations in transaction execution time, which leads to a higher blocking rate. Furthermore, due to the high density of mobile units, the database server works under a high workload and is more likely to thrash. A new scheme, called ASGT (active serialization graph technique), is developed to overcome these problems. In ASGT, reading never blocks writing, which substantially reduces the blocking rate. ASGT can detect and break some non-serializable schedules in advance, greatly shortening the suspension time of the transactions involved. Because an explicit serializable sequence is maintained at run time, the scheduler, integrating an aborting method called MWDL, can improve throughput and reduce scheduling cost. Theoretical analysis and a simulation based on C++SIM show that ASGT outperforms an improved 2PL in most conditions of a mobile environment.
Modeling Irregular Dimensions in OLAP
Li Zehai, Sun Jigui, Zhao Jun, and Yu Haihong,
2006, 43(2):  301-306. 
Abstract ( 464 )   HTML ( 2)   PDF (370KB) ( 431 )
Pre-aggregation is one of the most effective techniques in OLAP to ensure quick responses to user queries in data warehouses. The technique usually requires dimension hierarchies to be onto, covering and self-onto. However, in real-world applications, many irregular dimensions do not meet the requirements. To solve the problem, a novel dimension model is proposed to support the modeling of irregular dimensions by defining a partial mapping from a child level to a parent level. Based on the four cases of the partial mappings among different levels, the model can express the special relationships among the levels in irregular dimensions, such as non-covering and non-onto relationships. Furthermore, a cube model is defined based on the dimension model, along with some typical OLAP operations. The mapping from the multidimensional model to relational tables shows that the model provides a feasible way to overcome the difficulties caused by irregular dimensions.
A Norm-Driven Grid Workflow State Machine Model
Zhang Shichao, Xu Yinjun, Gu Ning, and Shi Baile
2006, 43(2):  307-313. 
Abstract ( 442 )   HTML ( 0)   PDF (393KB) ( 540 )
The dynamic nature of the grid environment brings new characteristics to grid workflow, which increases the uncertainty and complexity of the flow of work. A norm-driven grid workflow state machine model named GridWSM is offered, which uses norms to describe the dynamic scheduling of tasks in grid workflow systems. Its advantage is that norms can express different semantics exactly and are competent for depicting complicated systems. The reasoning ability of norms can validate and perfect the description of grid workflow systems, check semantic conflicts among the norms, ensure the completeness of the norm description, and reflect the dynamic changes of the workflow. GridWSM can also depict the real-time states of grid workflow systems, which provides a theoretical foundation for system emulation. Finally, a prototype system named NormTools validates the norm description of a grid sorting flow and finds all errors.
A Negotiation-Based Approach for Software Process Collaboration
Zhao Xinpei, Li Mingshu, Chan Keith, and Wang Qing
2006, 43(2):  314-320. 
Abstract ( 418 )   HTML ( 2)   PDF (370KB) ( 458 )
Large-scale software development typically requires the participation of multiple people. One motivation for the participants to collaborate with others is to maximize the profit they may gain from the software development; therefore, the collaborative relations between participants should be established through negotiation to ensure that all participants can gain profit. Traditional software process modeling approaches model software collaboration as a set of rules or transactions: when entry criteria are satisfied or operations are explicitly invoked, the collaborations necessarily take place and are performed in a predefined manner, and negotiation issues are mostly overlooked. A negotiation-based approach for software process collaboration is proposed. In this approach, a software process is modeled as a group of independent, autonomous, rational, and collaborative process agents, and the collaborative relations between the process agents are established through negotiation. Using this approach, software organizations can carry out software development more efficiently and effectively.
A Simplified Method for Generating Test Path Cases in Branch Testing
Mao Chengying and Lu Yansheng
2006, 43(2):  321-328. 
Abstract ( 291 )   HTML ( 2)   PDF (519KB) ( 609 )
Structural testing is an effective way to test either procedural or object-oriented programs, and the branch coverage criterion has been shown to be the most cost-effective of all its criteria. Through a deep investigation of the properties of the DD-graph and an analysis of the shortcomings of the FTPS algorithm, an approximate algorithm (called Find_SemiUE) for solving the set of unconstrained arcs is presented, which is simple, fast, and well suited to large-scale programs. Based on forward (backward) breadth (depth) spanning trees, a simplified algorithm (called Generate_PathSet) is also proposed for generating the path cases executed in branch testing. This algorithm achieves a dramatic improvement in both time and space complexity. Furthermore, the conclusions obtained are helpful for further research on the DD-graph.
Static Data-Race Detection for Multithread Programs
Wu Ping, Chen Yiyun, and Zhang Jian
2006, 43(2):  329-335. 
Abstract ( 841 )   HTML ( 0)   PDF (431KB) ( 1026 )
Multithreaded concurrent programs are finding wide application, which brings more detrimental data-race errors. Traditional static race detection methods suffer from false positives caused by conservative analysis of concurrency semantics and alias information. In this paper, a precise and effective analysis framework is proposed. The framework applies precise alias analysis and simulates the happens-before order statically. To improve efficiency, an object-based race checker is proposed and a compact equality-class-based alias representation is designed. The framework is implemented in a Java compiler, JTool. Empirical results demonstrate the precision and effectiveness of the proposed algorithm.
Mapping Cobol Data to Java Type System with Functional Equivalence
Shi Xuelin, Zhang Zhaoqing, and Wu Chenggang
2006, 43(2):  336-342. 
Abstract ( 907 )   HTML ( 10)   PDF (384KB) ( 634 )
Migrating Cobol code to a new platform such as Java is an effective way to alleviate the burden of maintaining it. How to migrate Cobol data to the new platform is one of the basic problems to be solved. Most research work directly casts Cobol data to primitive data types such as int or float in a modern programming language; however, this simple data mapping does not preserve Cobol semantics and thus makes the translated code behave inconsistently with the original Cobol code. In this paper, a pure-Java encapsulation method is presented that uses data modeling technology to map Cobol data to the Java type system with functional equivalence. Based on this method, a Cobol2Java translator is implemented and applied to a real business system of about 4 million lines of Cobol code. Test results show that the tool can translate the Cobol system to Java successfully without any human interaction.
A BIST Scheme Based on Selecting State Transition of Folding Counters
Liang Huaguo, Fang Xiangsheng, Jiang Cuiyun, Ouyang Yiming, and Yi Maoxiang
2006, 43(2):  343-349. 
Abstract ( 455 )   HTML ( 0)   PDF (404KB) ( 409 )
In this paper, a BIST scheme based on selecting state transitions of folding counters is presented. On the basis of folding counters, an LFSR is used to encode the seeds of the folding counters, and folding distances are stored to control deterministic test pattern generation, so that the generated test set is exactly equal to the original test set. The scheme solves the compression of the deterministic test set and overcomes the overlap and redundancy of test patterns produced by different seeds. Experimental results show that it not only achieves a high test data compression ratio but also efficiently reduces test application time; the average test application time is only four percent of that of schemes of the same type.
Approach to Analyze the Relationship of High-Level Fault Models
Yang Xiutao, Lu Wei, and Li Xiaowei
2006, 43(2):  350-355. 
Abstract ( 468 )   HTML ( 2)   PDF (294KB) ( 345 )
With the development of integrated circuit design, traditional testing at the gate level has proved time-consuming, so it is necessary to test circuits at a higher level. Unfortunately, no effective fault models have been defined at the high level. To address this problem, two kinds of relationships between fault models are analyzed: the relationship between a high-level fault model and the stuck-at fault model (defined at the gate level), followed by the relationship between two high-level fault models. These relationships are expected to help establish one or a set of effective high-level fault models, which should direct ATPG (automatic test pattern generation) and DFT (design for test) more effectively than traditional ones. Two rules are defined to evaluate high-level fault models, and induction is used to establish the relationships in theory. According to the presented method, if the test patterns generated by one high-level fault model can detect more detectable stuck-at faults at the gate level than those of other fault models, it is proved to be more effective. Experiments conducted on the ITC99 benchmarks demonstrate the approach. Three kinds of high-level fault models are analyzed: the transfer fault model, the state fault model, and the branch fault model.
An Algorithm for Collision Detection and Response Between Human Body Model and Cloth in 3D Garment Simulation
Mao Tianlu, Wang Zhaoqi, and Xia Shihong
2006, 43(2):  356-361. 
Abstract ( 462 )   HTML ( 0)   PDF (331KB) ( 545 )
Garment simulation can augment the fidelity of 3D human animation, but the collision detection and response it needs are too complex to apply in a practical system. In this paper, a fast algorithm for collision detection and response between an animated human body model and cloth is presented. By reconstructing the 3D human body model, the algorithm reduces collision detection between the human skin surface and the cloth model to a simple distance measurement from particles on the cloth model to the human skeleton. This significantly reduces the time complexity to O(n), where n denotes the number of vertices on the cloth. To ensure high accuracy of collision detection and high stability of the whole simulation system, cushion regions are established to forecast collisions. Additionally, reasonable and efficient solutions for collision response are presented. Finally, a fast and stable 3D garment simulation system is realized and achieves realistic results.
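The reduced collision test can be sketched as a point-to-capsule distance check (illustrative only: the bone representation as capsules with per-bone radii, and all names, are our assumptions rather than the paper's reconstruction):

```python
# Sketch of the reduced collision test: a cloth particle collides when
# its distance to the nearest skeleton bone segment falls below that
# bone's body radius, making the whole pass O(n) in cloth vertices.
def point_segment_dist(p, a, b):
    """Euclidean distance from 3D point p to segment a-b."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    t = max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom)) if denom else 0.0
    closest = [a[i] + t * ab[i] for i in range(3)]
    return sum((p[i] - closest[i]) ** 2 for i in range(3)) ** 0.5

def collides(particle, bones):
    """bones: list of (a, b, radius) capsules approximating the body."""
    return any(point_segment_dist(particle, a, b) < r for a, b, r in bones)
```

The cushion regions mentioned in the abstract would correspond to testing against a slightly inflated radius so that an approaching particle is flagged before actual penetration.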
R-tree Method of Matching Algorithm for Data Distribution Management
Jiang Xiajun, Wu Huizhong, and Li Weiqing
2006, 43(2):  362-367. 
Abstract ( 579 )   HTML ( 2)   PDF (338KB) ( 586 )
Data distribution management (DDM) is one of the services defined in the interface specification of the high level architecture (HLA), and the matching algorithm is the most important problem in implementing DDM. Existing algorithms can only be used under certain conditions, and their efficiency is not high enough for large-scale distributed simulations. The R-tree method is a new matching algorithm based on spatial index technology: the rectangles of DDM regions are organized as an R-tree, and the operations related to matching can be performed through rectangle queries in the R-tree. To improve query efficiency, the method also uses a hash index pointing to the leaf nodes of the R-tree. The performance of the R-tree method is studied in depth in this paper. Experiments show that the R-tree method is better than other matching algorithms: it can efficiently save the cost of the matching process in dynamic DDM and can meet the needs of real-time distributed interactive simulations. Changing the number of entries in R-tree nodes can further optimize the matching algorithm.
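The brute-force form of DDM region matching, which the R-tree accelerates by replacing the double loop with rectangle queries, can be sketched as follows (the region representation as per-dimension extents is our assumption):

```python
# Minimal sketch of DDM region matching: an update region and a
# subscription region match when their rectangles overlap in every
# dimension. An R-tree replaces the O(U*S) double loop below with a
# logarithmic rectangle query per region.
def overlaps(r1, r2):
    """A region is a list of (lo, hi) extents, one per dimension."""
    return all(lo1 < hi2 and lo2 < hi1
               for (lo1, hi1), (lo2, hi2) in zip(r1, r2))

def match(update_regions, subscription_regions):
    """Return (update index, subscription index) pairs that overlap."""
    return [(i, j)
            for i, u in enumerate(update_regions)
            for j, s in enumerate(subscription_regions)
            if overlaps(u, s)]
```

Matched pairs are what drive data routing in HLA: an update in region i is delivered only to federates whose subscription region j appears with it in this list.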
Human Motion Recognition and Simulation Based on Retrieval
Zhao Guoying, Li Zhenbo, Deng Yu, and Li Hua
2006, 43(2):  368-373. 
Abstract ( 423 )   HTML ( 1)   PDF (395KB) ( 492 )
Motion recognition and simulation are important in human motion analysis. In this paper, an experimental retrieval-based motion analysis system is implemented. Wavelet moments are translation, rotation, and scale invariant and can extract local multi-level features, so they are applied as features to describe motion sequences and actions. Motion recognition is implemented based on the similarity of motion sequences, while action matching is addressed using dynamic time warping (DTW). The resulting motion simulation is rendered with 3D models. Experiments show that the retrieval-based algorithm is efficient for motion recognition and simulation. Retrieval results can simulate human motion or provide coarse human motion configuration data for further analysis.
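Classic DTW, as named in the abstract for action matching, can be sketched over 1-D feature sequences (the paper's features are wavelet-moment vectors; plain numbers stand in here, with absolute difference as the local cost):

```python
# Classic dynamic time warping: D[i][j] is the minimum cumulative cost
# of aligning the first i elements of seq_a with the first j elements
# of seq_b, allowing stretches and compressions along the time axis.
def dtw(seq_a, seq_b):
    inf = float('inf')
    n, m = len(seq_a), len(seq_b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Two performances of the same action at different speeds yield a small DTW distance even though a frame-by-frame comparison would not, which is exactly why DTW suits action matching.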