ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 September 2006, Volume 43 Issue 9
BH_GRAPH: A Real Time 3D Graphics Platform
Zhao Qinping, Hao Aimin, Wang Lili, and He Bing
2006, 43(9):  1491-1497. 
BH_GRAPH, developed by the Virtual Reality Laboratory of Beihang University, is a high-performance real-time 3D graphics platform for simulation system developers. Not only does the platform support an extensible software architecture and a standard scene management mechanism, but it also provides highly efficient scene rendering methods and convenient application programming interfaces. BH_GRAPH can greatly accelerate the development of 3D graphics application systems and provides comprehensive technical support for their efficient execution. BH_GRAPH is composed of three core components: a 3D graphics rendering engine, a 3D model building tool, and a 3D scene editor. Current mainstream rendering and modeling techniques are implemented in the platform, including real-time shadows, high-dynamic-range materials, large-scale terrain rendering and management, GPU-based particle systems, swaying trees, and the real-time rendering of tens of millions of faces. An overview of BH_GRAPH is given, and then the architecture, main functions, and related key techniques of the three components are introduced in detail.
An Accelerating Rendering Method of Hybrid Point and Polygon for Complex Three-Dimensional Model
Hao Aimin and Tian Guifen
2006, 43(9):  1498-1504. 
Presented in this paper is an accelerated hybrid rendering method for complex three-dimensional models that uses both points and polygons. On current PC hardware, the method, which integrates the advantages of both GBMR (graphics-based modeling and rendering) and PBMR (point-based modeling and rendering), works well with complex geometric models consisting of millions of small triangles. In the pre-processing phase, model faces are segmented into regions; the triangles and vertex point clouds of each region are stored together, and all vertices are sorted in ascending order of importance and serialized into a linear structure. In the real-time rendering phase, view-dependent frustum culling and back-face culling are performed, and each region is rendered with points or triangles according to its distance from the viewpoint. Both phases fully exploit the parallel nature of the GPU (graphics processing unit) and efficiently implement continuous multi-resolution rendering for HPPOs (hybrid point and polygon objects).
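The distance-based switch between representations can be sketched as follows (a hypothetical illustration, not the paper's code; the thresholds, the linear point-budget reduction, and the function name are all assumed):

```python
import math

# Hypothetical sketch (not the paper's code): each pre-segmented region is
# drawn as triangles when near the viewpoint and as a prefix of its
# importance-sorted point cloud when far, with the point budget shrinking
# linearly as distance grows.

def choose_representation(region_center, viewpoint, near_dist, far_dist, n_points):
    """Return ('triangles', None) or ('points', k) for one region."""
    d = math.dist(region_center, viewpoint)
    if d <= near_dist:
        return ("triangles", None)       # full geometric detail up close
    t = min(1.0, (d - near_dist) / (far_dist - near_dist))
    k = max(1, int(n_points * (1.0 - 0.5 * t)))   # halve the budget at far_dist
    return ("points", k)

print(choose_representation((0, 0, 0), (0, 0, 100.0), 5.0, 50.0, 1000))  # -> ('points', 500)
```

Because the vertices are stored in importance order, drawing the first k points of a region is a constant-time prefix selection, which is what makes the multi-resolution transition continuous.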
Pre-Computed Radiance Transport All-Frequency Shadows Algorithm on GPU
Wang Jing, Wang Lili, and Li Shuai
2006, 43(9):  1505-1510. 
In this paper, the problem of real-time rendering of objects with all-frequency shadows is discussed. Current techniques are limited by memory-inefficient, CPU-intensive algorithms. In the proposed approach, a wavelet-transform-based pre-computed radiance transport (PRT) algorithm is used to generate the PRT matrix, which is then encoded into a sparse form that the GPU can utilize easily. Rendering is performed by a fast, highly parallel relighting algorithm written as a GPU fragment shader. The algorithm fully exploits the parallel nature of the GPU and achieves an order-of-magnitude increase in scene relighting speed over the best current CPU technique. At the same time, the runtime algorithm decreases memory usage and achieves a better load balance between the CPU and GPU.
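The relighting step amounts to a sparse matrix-vector product per frame. A minimal sketch of that idea in plain Python (the sparse row format and names are assumed; the paper performs this in a GPU fragment shader):

```python
# Minimal sketch of the relighting step (assumed form; the paper runs this
# in a GPU fragment shader): per-vertex exitant radiance is a sparse dot
# product between one row of the pre-computed transfer matrix and the
# wavelet-projected lighting vector.

def relight(sparse_rows, light_coeffs):
    """sparse_rows: one {coeff_index: transfer_value} dict per vertex.
    light_coeffs: dense wavelet lighting coefficients."""
    return [sum(v * light_coeffs[j] for j, v in row.items()) for row in sparse_rows]

rows = [{0: 0.5, 3: 0.25}, {1: 1.0}]     # most wavelet coefficients are zeroed
light = [2.0, 3.0, 0.0, 4.0]
print(relight(rows, light))  # -> [2.0, 3.0]
```

Sparsity is what makes the GPU mapping pay off: wavelet projection zeroes most transfer coefficients, so each fragment touches only a short row.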
A LIC Method for Visualization of Unstructured Vector Field
He Bing, Chen Kai, and Yu Zhaohai
2006, 43(9):  1511-1515. 
Traditional LIC (line integral convolution) methods have two drawbacks when visualizing unstructured vector field data: they are restricted to a fixed resolution, and they reveal vector magnitude poorly. In this paper, a new method is presented for computing LIC images from unstructured vector data. Numerical integrators are utilized to allow computation at arbitrary resolution, and a mask method is employed to mask vector data by the region of the DDA curve. The method reduces computational cost by minimizing the total number of DDA lines. Because the size and shape of a DDA region are determined by the vector magnitudes along the DDA line, vectors with large magnitude are surrounded by dense DDA lines in the LIC image, while vectors with small magnitude are surrounded by sparse ones. The inner structure of the field, as well as the magnitude of the vectors, can be depicted more clearly than with traditional LIC.
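The core LIC idea, averaging a noise texture along streamlines, can be sketched on a regular grid (the paper handles unstructured data with numerical integrators and DDA masking; this simplified structured-grid version, with fixed-step unit-speed Euler tracing, only illustrates the convolution itself):

```python
import math

# Simplified structured-grid LIC (the paper targets unstructured data; this
# sketch only shows the convolution idea): each output pixel averages a
# noise texture along a short streamline traced forward and backward
# through the vector field with fixed-step, unit-speed Euler integration.

def lic(noise, field, steps=8, h=0.5):
    ny, nx = len(noise), len(noise[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):              # both streamline directions
                px, py = x + 0.5, y + 0.5         # start at the pixel center
                for _ in range(steps):
                    i, j = math.floor(py), math.floor(px)
                    if not (0 <= i < ny and 0 <= j < nx):
                        break                     # streamline left the grid
                    total += noise[i][j]
                    count += 1
                    vx, vy = field[i][j]
                    norm = math.hypot(vx, vy) or 1.0
                    px += sign * h * vx / norm
                    py += sign * h * vy / norm
            out[y][x] = total / max(count, 1)
    return out
```

Along a constant field the filter simply smears the noise in the flow direction; full implementations weight the samples with a convolution kernel instead of a plain average.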
Low Complexity Mode Decision for H.264 Inter Frame Encoding
Zhang Dongming, Shen Yanfei, Lin Shouxun, and Zhang Yongdong
2006, 43(9):  1516-1522. 
The new video coding standard H.264 gives better encoding performance than previous video standards, at the cost of expensive computation, since it performs motion estimation over tree-structured macroblock partitions and multiple reference frames. In this paper, a fast algorithm, FIMDA, is proposed to accelerate mode decision for inter macroblocks. FIMDA makes full use of valuable cues from previously searched reference frames, such as mode and rate-distortion cost, to eliminate unnecessary modes and reference frames in subsequent searching. FIMDA effectively reduces encoding complexity, while the quality degradation relative to full search is negligible. Simulation results show that the average complexity reduction exceeds 85% while the average quality degradation is only about 0.07 dB.
Video Texts Tracking and Segmentation Based on Multiple Frames
Mi Congjie, Liu Yang, and Xue Xiangyang
2006, 43(9):  1523-1529. 
Superimposed text provides important semantic clues for video indexing and retrieval. Text in video often spans tens or even hundreds of frames, and many researchers have exploited this temporal redundancy to improve text detection accuracy and text region quality. Described in this paper is a novel approach to tracking and segmenting static superimposed text by utilizing information from multiple video frames. For text detection, multiple frames are used to verify the appearance of text regions detected on a single frame. A binary-search-based text tracking method is proposed, which tracks static text objects efficiently by utilizing features of the edge bitmap. To refine the text regions, text detection is performed again on a synthesized image produced by a minimum/maximum pixel search over consecutive tracked frames. For text segmentation, edge features are exploited to further remove complex background, in addition to traditional gray-value integration. Experimental results show the effectiveness of the proposed method.
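The minimum/maximum pixel search can be sketched as a per-pixel reduction over the tracked frames (an assumed reading of the abstract: minimum for bright text on a darker, varying background, maximum for the dark-text case):

```python
# Sketch of the synthesis step (assumed reading of the abstract): for
# bright text on a darker, varying background, the per-pixel minimum over
# consecutive tracked frames suppresses the moving background while the
# static text survives; a per-pixel maximum would serve the dark-text case.

def min_synthesize(frames):
    """frames: equally sized 2-D grayscale images (lists of lists)."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[min(f[y][x] for f in frames) for x in range(w)] for y in range(h)]

f1 = [[200, 30], [40, 200]]    # text pixels (200) stay bright in every frame
f2 = [[200, 90], [10, 200]]    # background pixels fluctuate between frames
print(min_synthesize([f1, f2]))  # -> [[200, 30], [10, 200]]
```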
Cell Nucleus Segmentation Based on Fuzzy Growing Snake
Hu Min, Ping Xijian, and Ding Yihong
2006, 43(9):  1530-1536. 
In this paper, an automated cell nucleus segmentation method based on a novel fuzzy growing snake model is proposed for color cell images from esophageal smears. After estimating the ellipse parameters of each cell nucleus, every image point is mapped into several fuzzy domains that measure the tinctorial uniformity and positional proximity of the pixel to the nucleus. Based on these fuzzy measures, a growing active contour model is developed to track the nucleus boundary. To help the model escape local minima, a self-adaptive growing energy is added to inflate the contour until it expands beyond the nucleus. The use of ellipse estimation gives the model strong edge reconstruction ability for overlapping nuclei. Experiments show that the method produces excellent segmentation results with stable performance.
Adaptive Many-Knot Splines Image Interpolation Based on Local Gradient Features
Zhao Qianjin, Hu Min, and Tan Jieqing
2006, 43(9):  1537-1542. 
In order to obtain interpolated images of superior quality, a new C² cubic many-knot spline interpolation kernel with compact support (-2, 2) is presented. The cubic many-knot spline interpolation formula is constructed using the degrees of freedom gained from inserting knots, and the approximation order of the interpolation under appropriate boundary conditions and constraints is analyzed. The one-dimensional many-knot spline interpolation algorithm is extended to two dimensions and applied to image processing. In general, images interpolated by bicubic many-knot spline interpolation are blurred in edge regions because the local features of the image are ignored. To solve this problem, an adaptive interpolation method is presented that applies an inverse gradient to the bicubic many-knot spline interpolation formula. Experimental results show that the adaptive interpolation algorithm reconstructs images of superior quality, in terms of error and PSNR, compared with the conventional bicubic convolution method and the adaptive bicubic convolution method.
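The many-knot spline kernel itself is not reproduced here; for reference, this is the conventional cubic convolution (Keys) kernel with the same compact support (-2, 2) that the adaptive method is compared against:

```python
# Conventional cubic convolution (Keys) kernel, a = -0.5, with compact
# support (-2, 2); shown as the baseline the many-knot spline is compared
# against, not as the paper's kernel.

def keys_cubic(x, a=-0.5):
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp1d(samples, t):
    """Interpolate a 1-D sample list at real position t (edges clamped)."""
    i0 = int(t)
    return sum(
        samples[min(max(i, 0), len(samples) - 1)] * keys_cubic(t - i)
        for i in range(i0 - 1, i0 + 3)           # the four support samples
    )

print(interp1d([0, 1, 2, 3], 1.5))  # -> 1.5 (the kernel reproduces linear ramps)
```

The 2-D image case applies such a kernel separably in x and y; the adaptive method then weights the contributions by inverse gradients so that edges are not averaged across.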
An Efficient Data Transfer Protocol for P2P-Based High Performance Computing
Jin Hai, Luo Fei, Zhang Qin, and Zhang Hao
2006, 43(9):  1543-1549. 
For the high-performance distributed computing platform P2HP, the design and implementation of a fast data transfer protocol (FDTP) are presented in this paper. Based on a persistent connection, a network transfer pipe is constructed for each transfer session between client and server, consisting of a message channel for control and a data channel for data. The transfer process in FDTP is divided into three steps: pipe construction, data transfer, and pipe closing. The query styles and the data stream format are defined, and the management of the data pipe and the fault-tolerance control during transfer are also described. By simplifying the control and reducing the number of connections, communication latency is reduced and the transfer efficiency of FDTP is enhanced, boosting the high-performance computing capability of P2HP.
Enhanced Survivability of Time Synchronization Network
Zhang Yong, Fang Binxing, Ye Jianwei, Tian Zhihong, and Bao Xiuguo
2006, 43(9):  1550-1556. 
The survivability problem of large-scale time synchronization networks is studied. With autonomous configuration of the nodes, the network topology is no longer static and hierarchical but dynamic and self-organizing. A new concept, the search world, is presented, and a BA model of the time synchronization network is built upon it. The model is further optimized to make the network more survivable. After some nodes fail, the other affected nodes can self-heal, and a self-healing algorithm is given to mitigate the network load caused by self-healing nodes.
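The underlying BA (Barabási-Albert) growth process the paper builds on can be sketched as preferential attachment (plain BA only; the search-world variant and the survivability optimizations are not reproduced):

```python
import random

# Plain Barabási-Albert (BA) preferential attachment, the growth model the
# paper's search-world variant builds on: each arriving node links to m
# existing nodes chosen with probability proportional to their degree.

def ba_graph(n, m, seed=0):
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))     # the first arrival links to the seed nodes
    repeated = []                # node ids, repeated once per unit of degree
    for new in range(m, n):
        for t in set(targets):
            edges.add((min(new, t), max(new, t)))
            repeated.extend([new, t])
        targets = [rng.choice(repeated) for _ in range(m)]  # degree-weighted
    return edges

g = ba_graph(100, 2)
print(len(g), max(u for e in g for u in e))
```

The degree-weighted choice yields the scale-free degree distribution whose hubs make survivability a concern: random node failures are tolerated well, but hub failures are exactly what the self-healing algorithm must repair.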
An Algorithm of Constructing ALM Tree Based on User Behavior Analysis
Luo Jianguang, Zhao Li, and Yang Shiqiang
2006, 43(9):  1557-1563. 
Application layer multicast (ALM), which uses end hosts to relay data, holds great promise for one-to-many media content distribution. Compared with IP multicast, ALM is more flexible and more easily deployed. However, data delivery in an ALM tree can easily be interrupted by the departure of end hosts, which may degrade QoS in time-sensitive applications such as live streaming. Proposed in this paper is a new ALM tree construction algorithm based on the analysis of over 10,000,000 real log traces from a popular live streaming system in China. The algorithm significantly reduces the number of interruptions in the ALM tree during live streaming, and its effectiveness has been verified through simulation.
A Distributed Trust Model Based on Vector Space in P2P Networks
Guo Leitao, Yang Shoubao, Wang Jing, and Zhou Jinyang
2006, 43(9):  1564-1570. 
Peers in P2P networks share resources voluntarily. They may arbitrarily terminate services, and some peers may provide fake services, so the QoS of services is not reliable. Moreover, peers are self-interested and rational: some always consume resources but seldom contribute their own, which induces free-riding in P2P networks. Traditional security schemes can hardly prevent fake service provision and free-riding in P2P networks. Reputation-based trust models can restrain such malicious behaviors, but they suffer from peers that strategically alter their behavior and from dishonest feedback by malicious peers. Drawing on the trust model of social networks, a distributed trust model based on vector space is proposed. The model uses a time-sensitive factor to improve the sensitivity of detecting peers' behaviors, and a recommendation trust based on the vector space model to prevent collusion and bad-mouthing among peers. Furthermore, a distributed implementation scheme based on R-Chain is proposed. Simulation and analysis show that the trust model is effective and feasible to implement.
Research of a Load Balancing Model Based on Mobile Agent
Tian Junfeng, Liu Yuling, and Du Ruizhong
2006, 43(9):  1571-1578. 
In order to implement load balancing more effectively in distributed systems, this paper, working with a self-designed distributed database server system, abstracts a three-layer management framework for distributed redundant systems and studies the objective description and definition of load indices. The idea of active load balancing is introduced into the system, and a load balancing model based on mobile agents (MMA) is presented. Finally, the performance of the system is analyzed. The analysis indicates that the MMA model has low cost and scores load indices more intelligently and more accurately than some existing systems.
A Load Balancing Algorithm for DHT-Based P2P Systems
Li Zhenyu, and Xie Gaogang
2006, 43(9):  1579-1585. 
In DHT-based P2P systems, the DHT and the heterogeneity of node capacities can cause load imbalance. Existing load balancing approaches have two limitations: they do not take link latency into account when moving load, and they rely heavily on nodes at fixed locations in the system. A distributed load balancing algorithm is presented in this paper. Every node periodically gathers local load balancing information and chooses a physically close node to transfer load to. The algorithm relies on all nodes in the system, avoiding single points of failure, and load is moved between nodes with small link latency. Simulation experiments show that the algorithm achieves good load balance under different system utilizations, with a load movement cost reduction rate above 45%.
MFPQT:A Streaming Media Scheduler Algorithm During IPv4-IPv6 Transition
Wan Bin, Miao Zhongliang, Xia Shilin, Jin Weijian, Zhang Sabing, and Wu Jieyi
2006, 43(9):  1586-1592. 
An optimized algorithm and scheme are proposed to solve the streaming media scheduling problem during the IPv4-IPv6 transition, because present scheduling schemes are based on IPv4. Since pure IPv4 nodes and pure IPv6 nodes coexist in the network, the proposed algorithm subdivides these nodes and the multicast channels and handles them separately. A theoretical proof and modeling tests show that applying the proposed scheme improves the programs' scheduling efficiency and QoS.
A New Friendly Worm Propagation Strategy Based on Diffusing Balance Tree
Wang Bailing, Fang Binxing, Yun Xiaochun, Zhang Hongli, Chen Bo, and Liu Yixuan
2006, 43(9):  1593-1602. 
Worms are a serious and growing threat to networks, and traditional anti-virus technologies do not currently scale to deal with them. Friendly worms have become a new active countermeasure. A friendly worm propagation strategy based on a diffusing balance tree is proposed, and auto-generation rules and survivability enhancements are given to provide expansibility and stability. As a case study, a simulation of worm cross-spreading is presented. The results show that the balance-tree-based propagation strategy is much more effective: compared with traditional strategies, it accelerates propagation and reduces the traffic impact on the network.
A Distributed Dynamic Description Logic
Jiang Yuncheng, Shi Zhongzhi, Tang Yong, and Wang Ju
2006, 43(9):  1603-1608. 
The current research progress on, and existing problems of, description logic (DL) are analyzed in this paper, especially the insufficiency of dynamic description logic (DDL) as a logical foundation for the semantic Web. According to the characteristics and requirements of the semantic Web, a new description logic, distributed dynamic description logic (D3L), is presented, and its syntax and semantics are given. The reasoning mechanism of D3L is studied in detail, and two reasoning methods, the direct reasoning method and the transform reasoning method, are presented. Compared with DDL, D3L provides a more reasonable logical foundation for the semantic Web and overcomes DDL's insufficiency in this role.
Information Geometric Analysis of Pruning Algorithm
Liu Yunhui, Luo Siwei, Huang Hua, and Li Aijun
2006, 43(9):  1609-1614. 
Pruning is an important method for constructing and optimizing the structure of neural network models. Current research on pruning mostly focuses on algorithm descriptions, while less effort is spent on the underlying mechanism; studying this mechanism can provide a theoretical basis for pruning strategies. The mechanism of pruning is analyzed here based on information geometry, and a theoretical explanation of pruning is given: the pruning process is depicted as a series of information projections from the current model manifold to its submanifolds, utilizing the hierarchical structure of the neural manifold parameter space. A new pruning algorithm is presented based on this theoretical analysis, and its validity and efficiency are verified by experiments.
Improving Optimization for Genetic Algorithms Based on Level Set
Li Qinghua, Yang Shida, and Ruan Youlin
2006, 43(9):  1624-1629. 
Most genetic algorithms available today do not provide a convergence rule and suffer from premature and slow convergence. To address these problems, a new kind of genetic algorithm is presented in this paper. First, proceeding from the dependent variables of the optimized function, a new concept, the level set, is introduced. It classifies the population of each generation and organizes all objective-relevant information effectively, yielding an algorithm with high search speed. Second, an improved mutation operator increases population diversity, which effectively avoids premature convergence; the new operator is proved to improve population diversity, and the new algorithm is proved to converge to the optimal solution. Finally, a convergence rule for the algorithm is given. Computer simulations show that it converges faster, searches faster, and is more efficient than other algorithms.
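The level-set classification of a generation can be sketched as banding individuals by objective value (a hypothetical illustration of the concept; the cut points and representation are assumed):

```python
# Hypothetical illustration of the level-set idea: individuals are grouped
# by which band of objective values they fall into, so each generation is
# classified before selection; the cut points here are assumed.

def level_sets(population, fitness, thresholds):
    """thresholds: ascending cut points; returns len(thresholds)+1 classes."""
    classes = [[] for _ in range(len(thresholds) + 1)]
    for ind in population:
        f = fitness(ind)
        k = sum(f > t for t in thresholds)    # band index the value lands in
        classes[k].append(ind)
    return classes

print(level_sets([0.1, 0.4, 0.6, 0.9], lambda x: x, [0.3, 0.7]))
# -> [[0.1], [0.4, 0.6], [0.9]]
```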
Inference Based Security Database Auditing Framework
Yan Heping, Wang Zhengfei, Wang Wei, and Shi Baile
2006, 43(9):  1630-1638. 
The development of information technology brings new challenges to secure databases. Privacy principles are being mandated internationally through legislation and guidelines, which requires a secure database to verify that it adheres to its declared data disclosure policy. An auditing system satisfies this desideratum well, but auditing only query results is not enough, because a malicious user can obtain sensitive information by inference from query results. The auditing system therefore needs basic inference capability. In this paper, a secure database inference auditing framework is proposed that supports MVD, FFD, and FD inference while remaining fast, precise, and fine-grained.
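The FD-inference capability such an auditor needs rests on the standard attribute-closure algorithm (shown here as a generic sketch, not the paper's framework): from the declared functional dependencies, compute everything a disclosed attribute set determines, and flag a query if the closure reaches a sensitive attribute.

```python
# Standard attribute-closure algorithm: repeatedly fire functional
# dependencies whose left-hand side is already covered, until a fixpoint.

def closure(attrs, fds):
    """fds: list of (lhs_set, rhs_set) functional dependencies."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closed and not rhs <= closed:
                closed |= rhs                 # lhs -> rhs fires
                changed = True
    return closed

fds = [({"ssn"}, {"name"}), ({"name"}, {"salary"})]
print(closure({"ssn"}, fds))  # reaches 'salary' transitively via 'name'
```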
An Efficient Data Stream Outliers Detection Algorithm Based on k-Means Partitioning
Ni Weiwei, Lu Jieping, Chen Geng, and Sun Zhihui
2006, 43(9):  1639-1643. 
Outlier detection is an important issue in data mining. It is difficult to find outliers in data streams because streams are dynamic, one-pass readable, and very large. In this paper, a data stream outlier detection algorithm based on k-means partitioning, DSOKP, is proposed. It applies k-means clustering to each partition of the data stream to generate a set of mean reference points, and then picks out the potential outliers of each period according to the definition of outliers. Theoretical analysis and experimental results indicate that DSOKP is effective and efficient.
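The per-partition step can be sketched as follows (an assumed reading of the abstract: a few k-means iterations per window, then distance-to-nearest-mean as the outlier test; k, the radius, the 1-D data, and the fixed iteration count are all illustrative):

```python
# Assumed reading of the per-partition step: cluster each window with a few
# k-means iterations, then report points far from every resulting mean
# reference point as potential outliers.

def kmeans(points, k, iters=5):
    centers = points[:k]                      # deterministic initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def window_outliers(window, k, radius):
    centers = kmeans(window, k)
    return [p for p in window if min(abs(p - c) for c in centers) > radius]

window = [1.0, 1.2, 0.9, 9.0, 9.2, 8.8, 30.0]
print(window_outliers(window, k=2, radius=10.0))  # -> [30.0]
```

Keeping only the k mean reference points per window, rather than the raw points, is what makes the approach compatible with one-pass stream processing.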
Web Information Extraction Based on HTML Pattern Algebra
Li Shijun, Yu Junqing, and Ou Weijie
2006, 43(9):  1644-1650. 
Efficiently generating wrappers for extracting Web data has broad application prospects, but it remains a difficult problem that has not yet been solved efficiently. To tackle it, a pattern algebra for HTML documents is introduced, including key concepts such as the consistent pattern set and the addition operation on patterns, and a new approach to Web information extraction is presented based on it. The approach induces the consistent pattern set, which represents the identifying rules of each attribute, by exploring the whole sample set, and then extracts data using the consistent pattern set with multiple patterns. It can be applied to Web pages with tabular structure, including pages with missing attributes, multi-valued attributes, differing attribute orders, and hierarchical structure, and it has been validated experimentally in a prototype.
A Bottom-Up Distance-Based Index Tree for Metric Space
Liu Bing, Yan Heping, Duan Jiangjiao, Wang Wei, and Shi Baile
2006, 43(9):  1651-1657. 
Similarity search is important in many new database applications; such operations can generally be described as similarity search in metric space. A new index construction algorithm is proposed for similarity search in metric space. The new data structure, called the bu-tree (bottom-up tree), is built by constructing the index tree from the bottom up, rather than by the traditional top-down approaches. The construction algorithm of the bu-tree and a range search algorithm based on it are given, and updates to the bu-tree are also discussed. Experiments show that the bu-tree outperforms the sa-tree in search efficiency, especially when the objects are not uniformly distributed or the query has low selectivity.
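Index-based metric search generally rests on triangle-inequality pruning; a generic single-pivot sketch (not the bu-tree itself) shows the principle such an index exploits per subtree:

```python
# Generic single-pivot sketch (not the bu-tree itself): with precomputed
# distances d(p, x) to a pivot p, the triangle inequality prunes x from a
# range query (q, r) whenever |d(q, p) - d(p, x)| > r, without ever
# computing d(q, x).

def range_search(objects, pivot_dists, pivot, dist, q, r):
    dqp = dist(q, pivot)
    results, evaluated = [], 0
    for x, dpx in zip(objects, pivot_dists):
        if abs(dqp - dpx) > r:                # pruned: cannot be within r
            continue
        evaluated += 1
        if dist(q, x) <= r:
            results.append(x)
    return results, evaluated

objs = [0.0, 1.0, 5.0, 9.0, 10.0]
dist = lambda a, b: abs(a - b)
pd = [dist(0.0, o) for o in objs]
print(range_search(objs, pd, 0.0, dist, q=9.5, r=1.0))  # -> ([9.0, 10.0], 2)
```

Only two of the five distance computations survive pruning here; an index like the bu-tree organizes such bounds hierarchically so whole subtrees are discarded at once.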
Phrase Alignment Based on Head-Phrase Extending
Zhang Chunxiang, Li Sheng, and Zhao Tiejun
2006, 43(9):  1658-1665. 
Phrase equivalence pairs are very useful for bilingual lexicography, machine translation, and cross-language information retrieval. In this paper, a new phrase alignment method is proposed, in which a translation head-phrase is obtained from highly reliable dictionary-based word alignment, and a statistical translation boundary is determined based on translation extending reliability. All candidate translations of a source language phrase are extracted by combining the translation head-phrase with the statistical translation boundary. A linear combination model evaluates all candidate translations of the source language phrase, and the most probable one is selected. Meanwhile, a greedy algorithm is used to eliminate crossing conflicts between the translation boundaries of source language phrases. Experimental results show that the new method achieves a precision of 82.76%, outperforming other approaches in open tests.
Fast Block-Matching Motion Estimation Based on a Dual-Cross Search Algorithm
Liu Haihua, Lei Yi, and Xie Changsheng
2006, 43(9):  1666-1673. 
In block motion estimation, search patterns with different shapes and sizes have a large impact on search speed and quality. Statistical analysis of the motion vector probability distribution reveals a directional characteristic in addition to the cross-center-biased characteristic. A novel dual-cross search algorithm (DCS) is proposed. It first employs the small cross search pattern (SCSP) and large cross search pattern (LCSP) to find small motion vectors with few search points, exploiting the cross-center-biased property. For large motion vectors, it then uses a non-fully-symmetrical cross search pattern (NFSCSP) in subsequent steps, exploiting the directional characteristic of the motion vector probability distribution. DCS achieves speedups of about 70% and 40% over DS and CDS, respectively, while maintaining comparable search quality. Experimental results show that DCS is effective and robust.
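One stage of such a search can be sketched with the small cross search pattern (SCSP) alone (details assumed; the toy gradient image makes the greedy path easy to verify by hand):

```python
# SCSP-style greedy search (sketch, details assumed): evaluate the center
# and its four immediate neighbors by SAD and move until the center wins.

def sad(ref, cur, bx, by, dx, dy, bs):
    return sum(
        abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
        for y in range(bs) for x in range(bs)
    )

def small_cross_search(ref, cur, bx, by, bs, max_steps=16):
    mx = my = 0
    for _ in range(max_steps):
        cands = [(mx, my), (mx + 1, my), (mx - 1, my), (mx, my + 1), (mx, my - 1)]
        best = min(cands, key=lambda v: sad(ref, cur, bx, by, v[0], v[1], bs))
        if best == (mx, my):
            break                             # center wins: vector found
        mx, my = best
    return mx, my

ref = [[x + 12 * y for x in range(12)] for y in range(12)]          # toy ramp
cur = [[ref[y][x + 2] if x + 2 < 12 else 0 for x in range(12)] for y in range(12)]
print(small_cross_search(ref, cur, bx=4, by=4, bs=2))  # -> (2, 0)
```

DCS combines this center-biased pattern with a large cross and the directional NFSCSP so that large motion vectors do not strand the greedy walk in a local SAD minimum.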
Adaptive Skin-Color Detection in Still Images
Zhang Mingji, Wang Weiqiang, Zheng Qingfang, and Gao Wen
2006, 43(9):  1674-1680. 
In this paper, a new skin detection method based on adaptive thresholds is proposed. Compared with the widely used fixed-threshold histogram method, this method finds optimal thresholds for different complex backgrounds. Four clues are summarized from the skin probability distribution histogram (SPDH) to help search for candidate optimal thresholds, and an ANN classifier is trained to select the final optimal threshold. A novel image relation operation is also proposed to eliminate confusing backgrounds. The method involves no iterative operation, so it is fast and appropriate for real-time applications. Experimental results show that the proposed method achieves better performance than the fixed-threshold histogram method.
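One conventional way to derive a threshold from an SPDH-like histogram is Otsu's criterion, shown here as an assumed stand-in for the paper's clue-plus-ANN selection:

```python
# Otsu's criterion (an assumed stand-in, not the paper's ANN-assisted
# selection): on a bimodal histogram, choose the cut that maximizes the
# between-class variance of the two modes.

def otsu_threshold(hist):
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]                         # class 0: bins 0..t
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

hist = [9, 8, 7, 1, 0, 1, 6, 9, 8]            # two modes split near the middle
print(otsu_threshold(hist))  # -> 3
```

Like the proposed method, this single-pass selection needs no iteration over the image itself, which is what keeps adaptive thresholding compatible with real-time use.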