ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 March 2006, Volume 43 Issue 3
Paper
On the Deployment Approach of IPSec and IP Filter in Routers
Wang Li, Xu Mingwei, and Xu Ke
2006, 43(3):  375-380. 
IPSec and IP Filter are among the most important security modules of IPv6 routers. Similar to IP Filter, the security-association query engine of IPSec also needs to filter and match IP packets, so a packet flowing through the router may be filtered by IP Filter and IPSec more than once. The way the two modules are deployed therefore has a direct influence on packet-processing performance. In this work, the inter-relationship between the two security modules is examined from the perspective of global router security, and a novel deployment approach is proposed. Compared with the open-source IPv6 protocol stack KAME, the approach improves the processing performance of IPSec and reduces the negative influence of IP Filter on IPSec. Meanwhile, duplicated packet filtering within the router is reduced, which improves packet-processing performance.
A Cluster-Based Multipath Dynamic Source Routing in MANET
An Huiyao, Lu Xicheng, Peng Wei, and Gong Zhenghu
2006, 43(3):  381-388. 
Numerous studies have shown that it is difficult for a routing protocol to scale to large mobile ad hoc networks. A scheme called cluster-based multipath dynamic source routing (CMDSR) is proposed, which is designed to adapt to network dynamics. It uses a hierarchy to perform route discovery and distributes traffic among diverse multiple paths. CMDSR is based on a 2-level hierarchical scheme: the level-1 cell cluster and the level-2 server cluster. The main idea is to move the route discovery procedure up to the server level, preventing the network-wide flooding caused by DSR route discovery. Route discovery thus requires no flooding mechanism and its overhead is minimized, improving network scalability. Furthermore, CMDSR addresses reliability by selectively choosing more reliable paths and by providing soft end-to-end QoS guarantees. Implementation of the algorithm in the OPNET environment shows that CMDSR can balance network load and deal effectively with frequent changes of network topology, thus efficiently improving the reliability and robustness of the network.
Self-Similar Traffic Synthesizing Using Gaussian Mixture Model in Wavelet Domain
Ji Qijin and Dong Yongqiang
2006, 43(3):  389-394. 
It has been recognized that the self-similarity of Internet traffic significantly affects network performance, and traffic modeling and generation is a primary step in network performance evaluation. An algorithm for self-similar traffic modeling and generation based on a Gaussian mixture model in the wavelet domain is proposed in this paper. The approximate Karhunen-Loève transform inherent in the wavelet transform endows it with the power to decorrelate long-range dependence, and the Gaussian mixture model accurately captures the non-Gaussian distribution of the wavelet coefficients. Both statistical analysis and queueing performance simulation are conducted to evaluate the proposed method. Numerical results suggest that this method can model and synthesize actual network traffic more accurately, and in particular has the advantage of low computational complexity in traffic generation.
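A minimal sketch of the general wavelet-domain mixture approach the abstract describes: decompose a trace, fit a Gaussian mixture per detail scale, sample new coefficients, and invert. The db4 wavelet, decomposition level, and two mixture components are illustrative assumptions, not the paper's settings.

# Sketch: model a traffic trace in the wavelet domain with Gaussian
# mixtures per scale, then synthesize by sampling and inverting.
# Assumed parameters (not from the paper): db4 wavelet, 2 components.
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def synthesize(trace, wavelet="db4", level=6, components=2, seed=0):
    rng = np.random.default_rng(seed)
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    new_coeffs = [coeffs[0]]                 # keep approximation coefficients
    for detail in coeffs[1:]:                # fit one GMM per detail scale
        gmm = GaussianMixture(n_components=components, random_state=seed)
        gmm.fit(detail.reshape(-1, 1))
        sample, _ = gmm.sample(len(detail))  # draw same number of coefficients
        new_coeffs.append(rng.permutation(sample.ravel()))
    return pywt.waverec(new_coeffs, wavelet)

trace = np.abs(np.random.default_rng(1).standard_normal(1024)).cumsum()
synthetic = synthesize(trace)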
Small World Based Adaptive Probabilistic Search (SWAPS) for Unstructured Peer-to-Peer File Systems
Feng Guofu, Mao Yingchi, Lu Sanglu, and Chen Daoxu
2006, 43(3):  395-401. 
One of the essential problems in P2P is the strategy for resource discovery. Existing methods in unstructured P2P systems either depend on flooding and its variations or use various indices, which results in heavy traffic load from message forwarding or high cost of index maintenance. Presented in this paper is an adaptive, bandwidth-efficient and easy-to-maintain search algorithm for unstructured P2P file systems: small world based adaptive probabilistic search (SWAPS). In SWAPS, users' access-interest attributes are mined based on an ontology tree, and following the behavior patterns of users, an interest-attribute-based small-world overlay network is constructed spontaneously. The key factors influencing locating performance in SWAPS are analyzed, and efficient routing algorithms (based on interest rank, ontology distance, and interest breadth) are designed. Simulation experiments show that the small-world-based locating algorithm in unstructured P2P can remarkably improve search efficiency, with small average path length, high success rate, very low bandwidth consumption, and eminent adaptability to users' access behaviors.
A Best-Effort Adaptive Sampling Method for Flow-Based Traffic Monitoring
Yang Jianhua, Xie Gaogang, and Li Zhongcheng
2006, 43(3):  402-409. 
Flow-based traffic monitoring and analysis is widely used in usage accounting, QoS monitoring, attack detection, and network traffic engineering. Accurate and efficient sampling technology is required to implement flow-based analysis of high-speed network traffic. Owing to hardware and software limitations, the packet-processing capacity of the monitoring system is a parameter that must be taken into account. In this paper, a best-effort adaptive sampling method is proposed based on the characteristics of traffic flows and stratified sampling technology. The goal of the method is to take as many sample points as possible within the processing capacity of the monitoring system. Experiments show that the method adjusts the sampling probability very well.
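A toy sketch of the best-effort idea only: raise the sampling probability when the monitor has spare capacity and scale it down when the offered load exceeds what it can process. The capacity figure and smoothing factor are illustrative assumptions, not the paper's method.

# Toy best-effort adaptive sampling: per measurement interval, pick the
# sampling probability so the expected sampled rate stays within the
# monitor's processing capacity. All numbers are illustrative.
CAPACITY = 50_000      # packets/s the monitor can process (assumed)
ALPHA = 0.5            # smoothing factor for the rate estimate (assumed)

def next_probability(prev_rate_est, observed_rate):
    rate_est = ALPHA * observed_rate + (1 - ALPHA) * prev_rate_est
    if rate_est <= 0:
        return 1.0, rate_est
    # Best effort: sample everything if possible, otherwise scale down.
    return min(1.0, CAPACITY / rate_est), rate_est

est = 0.0
for rate in [20_000, 80_000, 120_000, 60_000]:   # offered load per interval
    p, est = next_probability(est, rate)
    print(f"offered={rate:>7}  est={est:>9.0f}  p={p:.2f}")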
Privacy Protection in the Relative Position Determination for Two Spatial Geometric Objects
Luo Yonglong, Huang Liusheng, Jin Weiwei, and Xu Weijiang
2006, 43(3):  410-416. 
Privacy-preserving computational geometry is a special class of secure multi-party computation problems. It can be defined as the problem of several users computing a cooperative task over their geometric inputs in a distributed network, where no user is willing to disclose his secret inputs to anyone else. This problem may be applied in the research and exploitation of outer space. Privately determining whether two sets of data are correspondingly proportional is a basic problem of secure multi-party computation, and it also plays an important role in determining the relative position of two spatial geometric objects. In this paper, a protocol for determining whether two sets of data are correspondingly proportional is developed; its correctness, security, and efficiency are analyzed; and the corresponding algorithms for determining the relative positions of points, lines, and planes in space are also presented.
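For reference, the underlying (non-private) test is elementary; the protocol's job is to evaluate it without revealing the inputs. For nonzero real vectors, correspondence proportionality is equivalent to vanishing cross-products, or to equality in Cauchy-Schwarz:

% X = (x_1,...,x_n) and Y = (y_1,...,y_n) are correspondingly proportional
% (y_i = k x_i for a fixed k) iff all 2x2 cross-products vanish:
\[
\exists k:\; y_i = k\,x_i \;(i=1,\dots,n)
\iff
x_i\,y_j - x_j\,y_i = 0 \quad \text{for all } 1 \le i < j \le n ,
\]
% equivalently, iff equality holds in the Cauchy-Schwarz inequality:
\[
\Bigl(\sum_i x_i y_i\Bigr)^{2} \;=\; \Bigl(\sum_i x_i^{2}\Bigr)\Bigl(\sum_i y_i^{2}\Bigr).
\]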
Research of a Network Scan Detection Algorithm Based on the FSA Model
Liu Lijun and Huai Jinpeng
2006, 43(3):  417-422. 
A network scan is often the prelude to a network intrusion, so precise scan detection plays an important role in intrusion pre-alerting. However, current scan detection technologies are too simple and may easily be evaded by attackers. In this paper, based on an analysis of both scan and detection technologies, a detection algorithm called SBIPA (FSA-based intrusion pre-alert algorithm) is proposed on the basis of the FSA (finite state automaton) model, and its key implementation technology is analyzed. A state transition diagram is used to represent network scan packet series, and three different FSA-based mechanisms are designed to detect scan events. Experiments reveal that this algorithm not only detects single-type scan activity more precisely, but also detects inconspicuous scans, such as distributed and mixed multi-type scans, which cannot be detected by other technologies. It thus eliminates the limitations of current scan detection technology and has important research and practical value.
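To make the FSA idea concrete, here is an illustrative per-source automaton that advances on probe packets and reports a scan once enough distinct ports are touched. The states and thresholds are assumptions for the sketch, not SBIPA's actual design.

# Illustrative per-source finite state automaton for scan detection.
# States and thresholds are assumed for the sketch, not taken from SBIPA.
from collections import defaultdict

NORMAL, SUSPECT, SCAN = "NORMAL", "SUSPECT", "SCAN"
SUSPECT_PORTS, SCAN_PORTS = 3, 10   # distinct-port thresholds (assumed)

class ScanFSA:
    def __init__(self):
        self.state = NORMAL
        self.ports = set()

    def on_probe(self, dst_port):
        """Advance the automaton on one SYN-style probe packet."""
        self.ports.add(dst_port)
        if len(self.ports) >= SCAN_PORTS:
            self.state = SCAN
        elif len(self.ports) >= SUSPECT_PORTS:
            self.state = SUSPECT
        return self.state

automata = defaultdict(ScanFSA)     # one automaton per source address
packets = [("10.0.0.9", p) for p in (22, 23, 25, 80, 110, 143, 443, 445, 3306, 8080)]
for src, port in packets:
    state = automata[src].on_probe(port)
print(src, "->", state)             # 10.0.0.9 -> SCAN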
Design and Implementation of a Multi-Layered Privilege Control Mechanism
Shen Qingni, Qing Sihan, and Li Liping
2006, 43(3):  423-428. 
As an important component of high-level secure operating systems, the privilege control mechanism can provide an appropriate level of security assurance for the system. This paper presents a multi-layered privilege control mechanism implemented in Ansheng OS V4.0, a copyrighted secure operating system that satisfies all the specified requirements of criteria class 4, "Structured Protection", in GB17859-1999 (equivalent to the B2 level in TCSEC). The mechanism enforces privilege control and management at the user, function, and program levels of the system. It enables responsibility separation with the roles defined in the role-based access control policy, dynamic functionality separation with the domains defined in the domain-and-type enforcement policy, and the least-privilege principle required by the POSIX standard, thereby ensuring system security through such controlled use of privilege.
A New Data Visualization Algorithm Based on SOM
Shao Chao and Huang Houkuan
2006, 43(3):  429-435. 
Due to its topology-preserving nature, the SOM (self-organizing map) algorithm can be used to visualize high-dimensional data. However, because of the fixed regular lattice of neurons, the distance information between the data is lost, and the structure of the data may often appear in a distorted form. For the map to visualize the structure of the data more naturally, the distance or similarity information between the data should be preserved as much as possible on the map directly through the positions of the neurons, along with the topology. To achieve this, the positions of the neurons should be adjustable on the map. In this paper, a novel position-adjustable SOM algorithm, DPSOM (distance-preserving SOM), is proposed, which adaptively adjusts the positions of the neurons on the map according to the corresponding distances in the data space and thus visualizes the structure of the data naturally. Moreover, DPSOM automatically avoids excessive contraction of the neurons without any additional parameter, greatly improving the controllability of the algorithm and the quality of data visualization.
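A toy sketch of the position-adjustable idea, under our own assumptions: the usual SOM weight update, plus a step that nudges map distances toward the corresponding weight-space distances. DPSOM's actual update rule and its parameter-free contraction control are not reproduced here.

# Toy position-adjustable SOM step in the spirit of DPSOM (assumed form):
# standard weight update, plus pulling each neuron's map distance to the
# BMU toward the corresponding weight-space distance. Rates are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim = 25, 2
weights = rng.random((n_neurons, dim))       # codebook vectors
positions = rng.random((n_neurons, 2))       # adjustable map positions

def train_step(x, lr_w=0.1, lr_p=0.05, sigma=0.3):
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
    # neighborhood computed on the *map*, using current adjustable positions
    d_map = np.linalg.norm(positions - positions[bmu], axis=1)
    h = np.exp(-(d_map ** 2) / (2 * sigma ** 2))
    weights[:] += lr_w * h[:, None] * (x - weights)
    # distance preservation: move map positions so d_map tracks the
    # weight-space distance to the BMU
    d_w = np.linalg.norm(weights - weights[bmu], axis=1)
    direction = positions - positions[bmu]
    norm = np.where(d_map > 1e-9, d_map, 1.0)
    positions[:] += lr_p * h[:, None] * ((d_w - d_map) / norm)[:, None] * direction

for x in rng.random((500, dim)):
    train_step(x)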
Improvements of Particle Swarm in Binary CSPs with Maximal Degree Variables Ordering
Yang Qingyun, Sun Jigui, and Zhang Juyang
2006, 43(3):  436-441. 
Constraint satisfaction problems (CSPs) are an important research area in artificial intelligence, and particle swarm intelligence is attracting increasing attention as a way to solve them. However, the fitness evaluation in particle swarm approaches to CSPs merely determines whether a variable has zero conflicts with its related variables, treating each variable equally. This paper proposes adding a max-degree static variable ordering to the fitness function, so that each variable is treated differently. In this way, the instantiations of certain variables satisfy some constraints first with high probability and influence the direction of the whole swarm through the selection of the global best and local best particles. Experiments on randomly generated constraint satisfaction problems show that this improvement is efficient, offering better search capacity and faster convergence to a global solution.
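A minimal sketch of the degree-weighted fitness idea: conflicts involving variables of higher constraint degree count more, so the swarm tends to satisfy the most-constrained variables first. The particular weighting scheme is our assumption.

# Sketch of a degree-weighted CSP fitness (weighting scheme assumed).
def fitness(assignment, constraints, degree):
    """assignment: dict var -> value; constraints: list of
    ((u, v), allowed) where allowed is a set of permitted value pairs."""
    penalty = 0
    for (u, v), allowed in constraints:
        if (assignment[u], assignment[v]) not in allowed:
            # weight the conflict by the max degree of its endpoints
            penalty += max(degree[u], degree[v])
    return penalty   # lower is better; 0 means a solution

constraints = [(("a", "b"), {(0, 1), (1, 0)}),
               (("b", "c"), {(0, 0), (1, 1)})]
degree = {"a": 1, "b": 2, "c": 1}
print(fitness({"a": 0, "b": 1, "c": 0}, constraints, degree))  # 2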
SVM Fast Training Algorithm Research Based on Multi-Lagrange Multiplier
Ye Ning, Sun Ruixiang, and Dong Yisheng
2006, 43(3):  442-448. 
A multi-Lagrange multiplier support vector machine fast training method (MLSVM), based on the coordinated optimization of multiple Lagrange multipliers, is proposed, and the formula defining the feasible region of each multiplier is presented. The algorithm approaches the optimum more precisely and quickly because analytic expressions are adopted in the optimization of each multiplier. The SMO algorithm is proved to be an instance of MLSVM. Under the theoretical guidance of this method, three individual algorithms, MLSVM1, MLSVM2 and MLSVM3, are presented according to different learning strategies. The learning speed of MLSVM1 and MLSVM2 is about the same as that of SMO when the test data set is small (<5000), but they fail when the data set becomes larger. MLSVM3 improves on the former two algorithms and on SMO: it not only overcomes the failure of MLSVM1 and MLSVM2, but also runs faster than SMO, with improvements of 7.4% to 4130% on several test data sets.
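Since SMO is shown to be an instance of the method, the familiar two-multiplier analytic step gives a concrete reference point. This is standard SMO, not the paper's multi-multiplier generalization:

% Standard SMO analytic update for the pair (alpha_1, alpha_2):
\[
\eta = K(x_1,x_1) + K(x_2,x_2) - 2K(x_1,x_2), \qquad
\alpha_2^{\text{new}} = \alpha_2 + \frac{y_2\,(E_1 - E_2)}{\eta},
\]
\[
\alpha_2^{\text{clipped}} = \min\bigl(H, \max(L, \alpha_2^{\text{new}})\bigr), \qquad
\alpha_1^{\text{new}} = \alpha_1 + y_1 y_2\,\bigl(\alpha_2 - \alpha_2^{\text{clipped}}\bigr),
\]
% where E_i is the prediction error on x_i, and [L, H] is the feasible box
% implied by 0 <= alpha <= C and the linear equality constraint.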
Effectively Implementing a Pattern Learning Method in the Question Answering System
Du Yongping, Huang Xuanjing, and Wu Lide
2006, 43(3):  449-455. 
Open domain question answering (QA) is a challenging natural language processing task, aiming at returning exact answers in response to natural language questions. A novel pattern learning method for QA is developed. The key idea is to obtain answers using answer patterns learned from the Web. Although many other QA systems use pattern-based methods, the method in this paper is implemented automatically and can handle problems at which other systems fail, such as the weakness of pattern restriction. Experimental results on TREC data indicate that the method is effective. It solves not only questions relying on simple patterns, but also questions that require complex patterns for answer extraction; the latter account for about 80% of the TREC question set.
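For flavor, here are surface answer patterns of the general kind such systems learn from the Web and apply at extraction time. These example patterns and the extract helper are ours, not the paper's learned set.

# Illustrative surface answer patterns (examples are ours, not the paper's).
import re

PATTERNS = [
    r"{q} was born in (?P<answer>\d{{4}})",
    r"{q} \((?P<answer>\d{{4}})-\d{{4}}\)",
]

def extract(question_term, snippets):
    """Try each pattern against each Web snippet; return the first answer."""
    for snippet in snippets:
        for template in PATTERNS:
            m = re.search(template.format(q=re.escape(question_term)), snippet)
            if m:
                return m.group("answer")
    return None

print(extract("Mozart", ["Mozart (1756-1791) was a composer."]))  # 1756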
Frequent Subtree Mining Based on Projected Branch
Zhao Chuanshen, Sun Zhihui, and Zhang Jing
2006, 43(3):  456-462. 
Discovering frequent subtrees from ordered labeled trees is an important research problem in data mining, with broad applications in bioinformatics, web logs, XML documents, and so on. In this paper, a new concept of the projected branch is introduced, and a new algorithm, FTPB (frequent subtree mining based on projected branch), is proposed. The algorithm distinguishes isomorphism while computing projected branches, which decreases its complexity and improves its efficiency. Theoretical analysis and experimental results show that the FTPB algorithm is efficient and effective.
Locality Preserving Clustering for Image Database
Zheng Xin and Lin Xueyin
2006, 43(3):  463-469. 
It is important and challenging to make the growing image repositories easy to search and browse. Image clustering is a technique that helps in several ways, including image data preprocessing, user interface design, and search result presentation. Spectral clustering has been one of the most promising clustering methods in recent years, because it can cluster data with complex structure and a (nearly) global optimum is guaranteed. However, existing spectral clustering algorithms, such as normalized cut (NCut), are difficult to apply to data points outside the training set. In this paper, a clustering algorithm named LPC (locality preserving clustering) is proposed, which shares many of the data representation properties of nonlinear spectral methods, yet provides an explicit mapping function that is defined everywhere, on both training and testing points. Experimental results show that LPC is more accurate than both direct k-means and PCA + k-means, and that it produces results comparable with NCut while being more efficient.
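A sketch of the "explicit mapping" idea under our own assumptions: learn a linear locality-preserving projection (LPP-style generalized eigenproblem) and cluster in the embedded space. LPC's exact objective may differ; this follows the generic locality-preserving formulation, so the learned matrix P applies to unseen points too.

# LPP-style explicit mapping + k-means (generic formulation, assumed).
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def lpp_kmeans(X, n_clusters=3, n_dims=2, sigma=1.0):
    # heat-kernel affinity graph over the training points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W
    # generalized eigenproblem  X^T L X a = lambda X^T D X a
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])   # regularize for stability
    vals, vecs = eigh(A, B)
    P = vecs[:, :n_dims]      # smallest eigenvalues preserve locality
    Z = X @ P                 # explicit linear map: works on unseen points
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z), P

X = np.random.default_rng(0).random((60, 8))
labels, P = lpp_kmeans(X)     # new points embed as X_new @ P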
Audio-Visual Bimodal Speaker Identification Using Dynamic Bayesian Networks
Wu Zhiyong and Cai Lianhong
2006, 43(3):  470-475. 
Studied in this paper is the use of dynamic Bayesian networks (DBNs) for text-prompted audio-visual bimodal speaker identification. The task is to determine the identity of a speaker from a temporal sequence of audio and visual observations, obtained from the acoustic speech and the shape of the mouth respectively. Following the hierarchical structure of audio-visual bimodal modeling, a new DBN is constructed to describe the natural asynchrony between the audio and visual states as well as their conditional dependency over time. Experimental results show that the dynamic Bayesian network is a powerful and flexible methodology for representing and modeling audio-visual correlations, and that the proposed DBN improves the accuracy of audio-only speaker identification at all acoustic signal-to-noise ratios (SNR) from 0 to 30 dB.
A Medium Vocabulary Visual Recognition System for Chinese Sign Language
Zhang Liangguo, Gao Wen, Chen Xilin, Chen Yiqiang, and Wang Chunli
2006, 43(3):  476-482. 
As one of the most important parts of human-computer interaction, the research and implementation of sign language recognition (SLR) has significant academic value as well as broad application prospects. In this paper, a framework of tied-mixture hidden Markov models (TMHMM) is proposed for vision-based SLR. TMHMM can efficiently speed up vision-based SLR without significant loss of recognition accuracy compared with continuous hidden Markov models (CHMM): the modeling resolution of TMHMM approximates that of CHMM, while its computational cost is greatly reduced by tying similar Gaussian mixture components. For sign feature extraction, an effective hierarchical feature characterization scheme is proposed, which is more suitable for medium or larger vocabulary SLR, employing principal component analysis to characterize the finger area distribution feature more elaborately. Further, by integrating robust hand detection, background subtraction and pupil detection to extract feature information precisely, with the aid of simple colored gloves against an unconstrained background, a medium vocabulary vision-based recognition system for Chinese sign language (CSL) is implemented. Experimental results show that the proposed methods work well for medium vocabulary CSL recognition in environments without special constraints.
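The tying that makes TMHMM cheaper is the standard tied-mixture construction: all states share one pool of Gaussians, and only the mixture weights are state-specific (standard formulation, not the paper's notation):

% Tied-mixture emission density: states j share a common pool of M
% Gaussians; only the weights c_{jm} depend on the state.
\[
b_j(\mathbf{o}) \;=\; \sum_{m=1}^{M} c_{jm}\,
\mathcal{N}\bigl(\mathbf{o};\,\boldsymbol{\mu}_m, \boldsymbol{\Sigma}_m\bigr),
\qquad c_{jm} \ge 0, \quad \sum_{m=1}^{M} c_{jm} = 1 .
\]
% The M Gaussians are evaluated once per frame and reused by every state,
% which is where the computational saving over a CHMM comes from.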
A Novel Robust Estimation Algorithm Based on Linear EIV Model
Hu Yusuo and Chen Zonghai
2006, 43(3):  483-488. 
Robust estimation from multiple-structured data is a fundamental problem in computer vision. Based on the linear EIV model, a novel robust growing algorithm is proposed to estimate the inherent model parameters from contaminated data. The algorithm starts from an initial subset within the area of one structure, then iteratively adds model data points to the subset and updates the parameter estimate. At each iteration, the C-step method originating from the MCD estimator is adopted to adjust the subset, ejecting outliers to ensure the robustness of the algorithm. Based on the density assumption that gross errors should be no denser than the structured data (otherwise the structured data would be indistinguishable from the gross errors), the mean shift algorithm is adopted to ensure a good initialization for the robust growing algorithm. Experiments show that the proposed algorithm can efficiently deal with contaminated data containing multiple structures and a high percentage of gross errors, and has higher robustness and accuracy than existing robust estimation algorithms.
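A minimal sketch of the C-step iteration from the MCD literature, applied here to an ordinary least-squares fit rather than the paper's EIV formulation: refit on the current subset, keep the h points with the smallest residuals, repeat until the subset stabilizes. In practice many (or well-chosen) starts are needed.

# C-step sketch for a linear model y ~ X beta (OLS stand-in, not EIV).
import numpy as np

def c_step(X, y, subset, h, max_iter=50):
    subset = np.asarray(sorted(subset))
    for _ in range(max_iter):
        beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
        resid = np.abs(y - X @ beta)
        new_subset = np.sort(np.argsort(resid)[:h])   # h best-fitting points
        if np.array_equal(new_subset, subset):
            break
        subset = new_subset
    return beta, subset

rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.uniform(0, 10, 100)]
y = 2 + 3 * X[:, 1] + rng.normal(0, 0.1, 100)
y[:30] += 40                        # 30% gross errors
beta, inliers = c_step(X, y, subset=rng.choice(100, 10, replace=False), h=60)
print(beta)                         # with a good start, close to [2, 3]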
Research on Hierarchical Topic Detection in Topic Detection and Tracking
Yu Manquan, Luo Weihua, Xu Hongbo, and Bai Shuo
2006, 43(3):  489-495. 
Topic detection and tracking (TDT) aims to develop a series of technologies for event-based information organization, and hierarchical topic detection (HTD) is a new task within it. Through a series of large-scale evaluations, TDT has become a hot research topic worldwide in natural language processing, especially in information retrieval. In this paper, an effective method of topic detection focusing on the features of events is proposed, and an algorithm named MLCS is presented to organize topics into hierarchical structures. The proposed methods are very effective, scoring second in the HTD evaluation of TDT2004.
A Mode for Developing OpenMP Programs Based on Dynamic Parallel Region
Li Jianjiang, Shu Jiwu, Chen Yongjian, Wang Dingxing, and Zheng Weimin
2006, 43(3):  496-502. 
Generally, the development of OpenMP programs is separated from correctness testing and performance analysis. Therefore, the concept of the dynamic parallel region, and a mode for developing OpenMP programs based on it, are proposed; this mode combines the development of OpenMP programs with correctness testing and performance analysis. At every stage of development, the correctness of the OpenMP program is ensured, while its performance is improved step by step through refined performance analysis and careful tuning. Test results for the NPB2.3 OpenMP Fortran version, developed according to the mode based on dynamic parallel regions, show that the mode is feasible.
A Software Reliability Growth Model Considering Differences Between Testing and Operation
Zhao Jing, Liu Hongwei, Cui Gang, and Yang Xiaozong
2006, 43(3):  503-508. 
The testing and operation environments may be essentially different, so the fault detection rate in testing differs from that in the operation phase. Based on the G-O model, a representative non-homogeneous Poisson process (NHPP) model, the fault detection rate is transformed from testing to operation taking into account the differences between the profiles of the two phases, yielding a more precise NHPP model (TO-SRGM) that accounts for the different fault intensities of the testing and operation phases. Finally, the unknown parameters are estimated by the least-squares method on a normalized data set. Experiments show that the goodness-of-fit of TO-SRGM is better than those of the G-O model and PZ-SRGM on a data set.
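For reference, the G-O mean value function is standard; the piecewise phase-switch form below is one common way to encode different testing- and operation-phase detection rates (notation ours; the paper's exact TO-SRGM transformation may differ):

% G-O model: m(t) = a(1 - e^{-bt}), with intensity lambda(t) = a b e^{-bt},
% where a is the expected total number of faults. One way to encode two
% phase-specific detection rates b_T (testing) and b_O (operation):
\[
m(t) =
\begin{cases}
a\bigl(1 - e^{-b_T t}\bigr), & 0 \le t \le T_{\text{release}},\\[2pt]
m(T_{\text{release}}) + \bigl(a - m(T_{\text{release}})\bigr)
\bigl(1 - e^{-b_O (t - T_{\text{release}})}\bigr), & t > T_{\text{release}}.
\end{cases}
\]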
Algorithms for Incremental Aggregation over Distributed Data Stream
Wang Yongli, Xu Hongbing, Dong Yisheng, Qian Jiangbo, and Liu Xuejun
2006, 43(3):  509-515. 
Many stream-oriented systems are inherently geographically distributed, so distributed processing is a promising route towards a more effective and adaptive data stream processing model. Aggregation over data streams is an important class of continuous operators for distributed processing. Because aggregation queries must be computed continuously and their results transmitted continuously, this model incurs significant communication overhead, and the continual transmission of a large number of rapid data streams can be impractical or expensive. A new approximate incremental aggregation technique with provable guarantees on the approximation error is therefore proposed to reduce this overhead. A new structure called the VSB-tree is introduced, which can effectively incorporate and store the aggregations of all child stations and incrementally transmit changes of the aggregation value to its parent station. Theoretical analysis and experimental results show the feasibility and effectiveness of the algorithm.
Cube Navigation Based on Exceptional Distribution
Yu Hui, Tang Shiwei, Yang Dongqing, and Ma Xiuli
2006, 43(3):  516-521. 
Although OLAP (on-line analytical processing) provides various kinds of exploratory and analytical functions, analysts relying on hypothesis-driven exploration may overlook important information in a large search space. Moreover, existing discovery-driven exploration is based on exceptional cells, which are easily affected by noise. Cube navigation is an effective method that can guide an analyst to the most surprising parts of the cube. To overcome these problems, a new navigation method is proposed, which regards dimensions and dimension members as the skeleton of the data cube. By extracting the distribution features of the corresponding data sets, the dimensions and their members are assigned appropriate surprise values that serve as navigation lights. Experiments show that the method is practical and effective.
Real-Time Multiversion Concurrency Control Based on Validation Factor
Hao Zhongxiao and Han Qilong
2006, 43(3):  522-527. 
To solve the problem of system overload caused by unnecessary transaction restarts under optimistic concurrency control in real-time database systems, a validation factor (VF) concept and a new method, real-time multiversion concurrency control based on VF (MVOCC-VF), are proposed. By checking the VF, transactions with a higher degree of completion are scheduled preferentially. Combined with the multiversion mechanism, the number of unnecessary transaction restarts is decreased, in particular ensuring that nearly completed transactions are accomplished. Theoretical analysis and experimental results demonstrate that the new method outperforms previous ones.
A Topology Complexity Based Method to Approximate Isosurface with Trilinear Interpolated Triangular Patch
Liang Xiuxia, Zhang Caiming, Liu Yi, and Zhang Aiwu
2006, 43(3):  528-535. 
In approximating an isosurface with triangular patches, the selection of sample points is pivotal to topological correctness and approximation accuracy. The marching cubes method and its variations do not take the topology of the original surface into account and select only one kind of isopoint, so they cannot guarantee the correct topology of the approximated isosurface. In this paper, Morse theory is incorporated into the study of triangular approximation, and a new method based on topology complexity is presented to approximate the isosurface patch inside a cell. According to the topology complexity of the original isosurface, approximated isosurfaces can be constructed adaptively by triangulating two kinds of isopoints: critical points and the isopoints on cell edges. Because critical points are the key isopoints defining the surface topology, the new method guarantees correct topology and high accuracy of the approximated isosurface without adding much computation or data. Examples are given comparing the approximated isosurfaces generated by the new method with those of other methods.
An Efficient Spatial Error Concealment for Video Transmission
Xu Zhiliang, Xie Shengli, and Zhou Zhiheng
2006, 43(3):  536-541. 
The encoded bit streams of video transmitted over error-prone channels are vulnerable to transmission errors, which usually result in missing blocks at the decoder. In this paper, an efficient spatial error concealment method for video transmission is proposed. Missing blocks are classified into uniform blocks and edge blocks using edge information extracted from the surrounding correctly received blocks. For uniform blocks, the error is concealed by simple linear interpolation. For edge blocks, initial values of the missing pixels are obtained by gradient adaptive prediction (GAP), and the maximum a posteriori (MAP) method is then used to optimize from these initial values. Experimental results demonstrate that the proposed algorithm not only obtains excellent image quality but also, thanks to its low computational complexity, can be applied to real-time video transmission.
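A sketch of the uniform-block path only: conceal a missing block by linear interpolation between its correctly received one-pixel borders. The block size and interpolation details are assumptions; the edge-block path (GAP prediction plus MAP refinement) is omitted.

# Conceal a missing n x n block by linear interpolation from its borders
# (uniform-block case only; parameters assumed for the sketch).
import numpy as np

def conceal_uniform(frame, r, c, n=8):
    """Fill frame[r:r+n, c:c+n] from the four surrounding 1-pixel borders."""
    top, bottom = frame[r - 1, c:c + n], frame[r + n, c:c + n]
    left, right = frame[r:r + n, c - 1], frame[r:r + n, c + n]
    wy = np.linspace(0, 1, n + 2)[1:-1][:, None]   # vertical weights
    wx = np.linspace(0, 1, n + 2)[1:-1][None, :]   # horizontal weights
    vert = (1 - wy) * top[None, :] + wy * bottom[None, :]
    horiz = (1 - wx) * left[:, None] + wx * right[:, None]
    frame[r:r + n, c:c + n] = (vert + horiz) / 2   # average both directions
    return frame

frame = np.tile(np.linspace(0, 255, 32), (32, 1))
frame[12:20, 12:20] = 0            # simulate a lost 8x8 block
conceal_uniform(frame, 12, 12)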
An Assembly Constraint Semantic Model in Distributed Virtual Environment
Sui Aina, Wu Wei, Chen Xiaowu, and Zhao Qinping
2006, 43(3):  542-550. 
Assembly constraints are the key information supporting assembly operations in distributed virtual environments. The abstraction of assembly constraints determines how well the actions of assembly units can be supported, and the structuring mechanism of assembly constraints determines the efficiency of implementing distributed virtual assembly. To solve the existing issues in the abstraction and application of assembly constraints, the semantic abstraction and expression of assembly constraints are first studied and used to capture knowledge from the field of mechanical product assembly and to induce the basic semantics of assembly constraints. Secondly, an extended object semantic modeling method is put forward, which structures assembly constraints with function and behavior characteristics and constructs assembly constraint semantic models. Experimental results from the VEADAM system show that the assembly constraint semantic models can effectively support the implementation of distributed virtual assembly and can adapt to a variety of applications.
An Efficient and Complete Method for Solving Mixed Constraints
Ji Xiaohui and Zhang Jian
2006, 43(3):  551-556. 
Constraints involving Boolean and numerical variables are widely used, but are difficult to solve, especially when they contain nonlinear numerical expressions. Many existing methods for solving such constraints are incomplete. A new method is presented in this paper to solve Boolean combinations of nonlinear numerical constraints completely. The method combines numerical methods with interval analysis. It has been implemented in a prototype tool, and experiments have been conducted. The experimental results show that the method is effective, efficient, and complete.
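A minimal illustration of why interval analysis buys completeness: interval evaluation gives a sound enclosure of a function over a box, so a box whose enclosure excludes zero provably contains no root and can be discarded, unlike purely numerical sampling. This toy branch-and-prune is our sketch, far simpler than the paper's combined method.

# Toy interval branch-and-prune: sound pruning plus bisection.
class Interval:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def contains_zero(self): return self.lo <= 0 <= self.hi
    def width(self): return self.hi - self.lo

def solve(f, box, eps=1e-6, out=None):
    """Collect sub-boxes of width <= eps that may contain a root of f."""
    if out is None: out = []
    if not f(box).contains_zero():     # sound pruning: no root in this box
        return out
    if box.width() <= eps:
        out.append((box.lo, box.hi))
        return out
    mid = (box.lo + box.hi) / 2        # branch: bisect and recurse
    solve(f, Interval(box.lo, mid), eps, out)
    solve(f, Interval(mid, box.hi), eps, out)
    return out

f = lambda x: x * x - Interval(2, 2)       # f(x) = x^2 - 2
print(solve(f, Interval(0.0, 2.0))[:1])    # brackets sqrt(2) ~ 1.4142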
Emulation and Forecast of HPL Test Performance
Zhang Wenli, Chen Mingyu, and Fan Jianping
2006, 43(3):  557-562. 
HPL is a Linpack benchmark package widely used in performance testing of massively parallel systems. Based on algorithm analysis and real HPL tests, a law is found for determining the block size NB theoretically, breaking the dependence on trial-and-error experiments. According to this law, an estimated execution time can be obtained from the algorithm's complexity. An emulation model is then constructed to estimate HPL execution time in more detail. Verified against real system tests, the model is used to forecast the Linpack benefits of intended system improvements, and is expected to serve as a reference for optimizing future HPL tests and system upgrades.
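For orientation, the coarse execution-time estimate follows from HPL's standard LU flop count; the emulation model's added value is predicting the efficiency term rather than assuming it (the decomposition of R below is our illustration):

% HPL reports performance using the LU flop count, so a first-order
% estimate of execution time for problem size N at sustained rate R is
\[
T(N) \;\approx\; \frac{\tfrac{2}{3}N^{3} + 2N^{2}}{R},
\qquad
R = \text{(nodes)} \times \text{(flops/cycle)} \times \text{(clock)}
    \times \text{(efficiency)} ,
\]
% where the efficiency term depends on NB, the P x Q process grid,
% and communication costs.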
A Statistical Admission Control Algorithm for Storage Systems with Mixed Multimedia Workloads
Li Zhong, Wang Gang, and Liu Jing
2006, 43(3):  563-570. 
Retrieving data from storage systems with mixed multimedia workloads is a complex process. Different classes of multimedia applications require the storage system to guarantee different QoS (quality of service) requirements. To support as many multimedia applications as possible, the storage system needs an effective admission control algorithm. A systematic study of this problem is carried out in this paper: the QoS parameters of multimedia applications are clearly defined, a workload model of aggregate multimedia applications is established, and the constraint condition for admission control is derived. Based on the workload model and the constraint condition, a statistical admission control algorithm is implemented. Simulation experiments show that the novel algorithm is efficient and sufficiently accurate.
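A sketch of a generic statistical admission test, not the paper's derived constraint: admit a new stream only if the aggregate demand stays under the disk bandwidth with high probability, using a normal approximation of the workload. The bandwidth figure, confidence level, and independence assumption are all ours.

# Generic statistical admission test (normal approximation; assumed numbers).
from math import sqrt

DISK_BW = 60e6        # bytes/s the storage system sustains (assumed)
Z_99 = 2.326          # one-sided 99% quantile of the standard normal

def admit(streams, new, bw=DISK_BW, z=Z_99):
    """streams/new: (mean_rate, std_rate) in bytes/s per stream."""
    candidates = streams + [new]
    mean = sum(m for m, s in candidates)
    std = sqrt(sum(s * s for m, s in candidates))   # independence assumed
    return mean + z * std <= bw                     # QoS kept w.p. ~99%

running = [(4e6, 1e6), (8e6, 2e6), (6e6, 1.5e6)]
print(admit(running, (5e6, 1e6)))    # True: fits with 99% confidence
print(admit(running, (40e6, 5e6)))   # False: would risk QoS violations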