ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 November 2011, Volume 48 Issue 11
Paper
Classification of Scenes with Different Weather Conditions Based on Stationarity and Correlation Tests of Image Sequences
Zhao Xudong, Liu Peng, Liu Jiafeng, and Tang Xianglong
2011, 48(11):  1973-1982. 
Classification of scenes with different weather conditions provides first-step support for outdoor scene modeling, which is a core component in many applications of image sequence analysis and computer vision. In this paper, an objective two-step method for classifying scenes with different weather conditions is presented, based on stationarity and correlation tests of image sequences. First, scenes with various weather conditions are described in detail, and an objective classification standard is emphasized. Second, based on a stationarity test on sub-sequences of intensity averages taken in counter order, the corresponding expectations and deviations of the patterns are formulated and proved; scenes with different weather conditions are thus primarily classified into stationary and nonstationary ones. Finally, a correlation test on the autocorrelation function of intensity values in image sequences with different weather conditions is organized, and descriptions of sudden changes of the autocorrelation function are established. Consequently, a classification into static or dynamic scenes is ultimately accomplished. The two-step method needs no parameters, which avoids estimating the parameters of a population distribution when inferring the classification standard. The method is demonstrated to be effective in experiments on seven videos with different weather conditions, which supports later applications such as scene modeling under different weather conditions.
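As an illustration of the kind of statistics such a two-step test builds on, the following minimal Python sketch (not the authors' implementation; the function names and toy data are hypothetical) computes the per-frame mean-intensity series of a sequence and its sample autocorrelation function, the quantities on which stationarity and correlation tests operate.

```python
import numpy as np

def frame_means(frames):
    """Per-frame average intensity of an image sequence with shape (T, H, W)."""
    return frames.reshape(len(frames), -1).mean(axis=1)

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a 1-D series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

# Toy usage: a slowly drifting (nonstationary-looking) sequence decays much
# more slowly in autocorrelation than pure noise does.
rng = np.random.default_rng(0)
frames = rng.random((200, 32, 32)) + np.linspace(0, 1, 200)[:, None, None]
print(autocorrelation(frame_means(frames), max_lag=10))
```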
Background Modeling and Object Detection Based on Two Models
Zhang Shuifa, Ding Huan, and Zhang Wensheng
2011, 48(11):  1983-1990. 
Traditional pixel-based background models cannot interpret background motion effectively, although they are fast to compute. Optical flow can represent object motion accurately, but its computational complexity cannot meet the requirements of real-time applications. In this paper, the traditional pixel-based background model and optical flow are fused to combine their advantages, yielding a novel two-model background modeling approach that detects moving objects quickly and accurately. The traditional pixel-based background model uses statistics of pixel intensity to model static backgrounds, while statistics on the intensity, spatial and temporal information of pixels are extracted to generate the optical flow field, which is used to model moving backgrounds. The two models are then used together to detect moving objects quickly and accurately. The advantage is that the intensity background model discriminates foreground from static background quickly and accurately, so a global optical flow field is not necessary and the computational complexity is reduced; the optical flow background model for moving backgrounds represents background motion very well, mitigates the noise caused by background motion remarkably, and detects moving objects accurately, and is thus superior to the two previous methods. As illustrated in our experiments, this two-model background modeling strategy significantly reduces the noise generated by background motion and detects moving objects quickly and robustly.
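A minimal sketch of the pixel-statistics half of such a two-model scheme is given below (the optical-flow model is omitted); it keeps a running per-pixel mean and variance and flags deviating pixels as foreground. The learning rate alpha and threshold k are hypothetical choices, not values from the paper.

```python
import numpy as np

class PixelBackground:
    """Running per-pixel Gaussian background model for grayscale frames."""
    def __init__(self, first_frame, alpha=0.02, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # Update statistics only where the pixel still looks like background.
        upd = ~foreground
        self.mean[upd] += self.alpha * diff[upd]
        self.var[upd] += self.alpha * (diff[upd] ** 2 - self.var[upd])
        return foreground
```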
Texture Classification Based on Multiresolution Co-occurrence Matrix
Zhong Hua,Yang Xiaoming, and Jiao Licheng
2011, 48(11):  1991-1999. 
The gray level co-occurrence matrix (GLCM) is widely used for describing different textures because of its ability to represent texture structure. In order to capture the multiresolution property and spectral information simultaneously, the GLCM is often computed from each wavelet subband, which, however, leads to a much higher feature dimension. To overcome this problem, a new feature extraction algorithm named the multiresolution co-occurrence matrix (MCM) is proposed, whose feature vector has a comparatively low dimension. For the purpose of dimension reduction, although the MCM is computed from each subband of the undecimated wavelet transform as the GLCM is, the parameters of the MCM are carefully chosen taking into consideration the scale and orientation of the wavelet subbands. In addition, the feature dimension of the MCM can be further reduced through a new feature selection method based on correlation analysis of inter-subband and intra-subband features. A detailed performance analysis is made for the MCM statistics and the wavelet energy feature. The proposed MCM is evaluated through classification tests on the Brodatz album. Experimental results demonstrate that the MCM statistics outperform other features such as wavelet energy, the GLCM, and the concatenation of these two features. Extensive experiments on feature selection show the effectiveness of the proposed method, which reduces the dimension successfully without any loss of classification performance.
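To make the underlying feature computation concrete, here is a minimal Python sketch (assuming NumPy and PyWavelets are available) that computes a plain GLCM on each subband of a single-level decimated wavelet decomposition and extracts the contrast statistic; the MCM's specific parameter choices, undecimated transform and feature selection are not reproduced.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def glcm(img, dx=1, dy=0, levels=16):
    """Normalized co-occurrence matrix for displacement (dx, dy), dx, dy >= 0."""
    edges = np.linspace(img.min(), img.max(), levels + 1)
    q = np.digitize(img, edges[1:-1])          # quantize to 0..levels-1
    h, w = q.shape
    a, b = q[:h - dy, :w - dx], q[dy:, dx:]    # pixel pairs at the displacement
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    return P / P.sum()

def contrast(P):
    """GLCM contrast statistic."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

img = np.random.default_rng(1).random((64, 64))
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')      # one decomposition level
print([contrast(glcm(sub)) for sub in (cA, cH, cV, cD)])
```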
Adaptive Spatially Neighborhood Information Gaussian Mixture Model for Image Segmentation
Zhu Feng, Luo Limin, Song Yuqing, Chen Jianmei, and Zuo Xin
2011, 48(11):  2000-2007. 
One important characteristic of an image is that neighboring pixels are highly correlated; in other words, neighboring pixels possess similar feature values and the probability that they belong to the same cluster is high. Unfortunately, when the Gaussian mixture model is applied to image segmentation, spatial information other than intensity values is usually not taken into account, which can lead to misclassification on boundaries and in inhomogeneous regions with noise. To solve this problem, a new image segmentation method using an adaptive spatially neighborhood information Gaussian mixture model without any control parameters is proposed in this paper. Firstly, an adaptive spatial information function is defined to deal with the spatial correlation of neighboring pixels, which not only handles noise effectively but also preserves edge properties well. Secondly, neighborhood-information-weighted class probabilities of every pixel are designed according to Bayesian rules, and these class probabilities are proved to satisfy the two norms of polarity and spatial continuity. Finally, an expectation maximization algorithm is used to derive the iterative formulas of the E-step and M-step as the optimization method. Experiments on synthetic and real images demonstrate that the proposed method obtains better classification results and is less affected by noise.
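A toy version of the idea, with hypothetical parameter choices, can be written as an EM loop in which the per-pixel responsibilities are averaged over a local window before the M-step; this is only a sketch of spatial regularization, not the paper's adaptive weighting function.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # assumed available

def spatial_gmm_segment(img, K=3, iters=20, win=3):
    """Toy EM for a pixel-intensity GMM with neighborhood-averaged responsibilities."""
    x = img.astype(float)
    mu = np.quantile(x, np.linspace(0.1, 0.9, K))
    var = np.full(K, x.var() / K)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: Gaussian likelihoods, then spatially smoothed responsibilities.
        lik = np.stack([pi[k] * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                        / np.sqrt(2 * np.pi * var[k]) for k in range(K)], axis=-1)
        r = lik / lik.sum(axis=-1, keepdims=True)
        r = np.stack([uniform_filter(r[..., k], size=win) for k in range(K)], axis=-1)
        r /= r.sum(axis=-1, keepdims=True)
        # M-step: standard GMM parameter updates from the smoothed responsibilities.
        Nk = r.reshape(-1, K).sum(axis=0)
        mu = (r * x[..., None]).reshape(-1, K).sum(axis=0) / Nk
        var = (r * (x[..., None] - mu) ** 2).reshape(-1, K).sum(axis=0) / Nk + 1e-6
        pi = Nk / Nk.sum()
    return r.argmax(axis=-1)   # per-pixel class labels
```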
Image Segmentation Based on Non-Parametric Mixture Models of Chebyshev Orthogonal Polynomials of the Second Kind
Liu Zhe, Song Yuqing, Chen Jianmei, Xie Conghua, and Song Wenshan
2011, 48(11):  2008-2014. 
To solve the problem of over-reliance on prior assumptions in parametric methods for finite mixture models, an image segmentation method based on a nonparametric mixture model of Chebyshev orthogonal polynomials of the second kind is proposed in this paper. Firstly, an image nonparametric mixture model based on Chebyshev orthogonal polynomials of the second kind is designed. The mixture identification step, based on maximization of the likelihood, can be realized without any hypothesis on the distribution of the conditional probability density function (PDF). We give simulation results for the determination of the smoothing parameter, and use the mean integrated squared error (MISE) to estimate the smoothing parameter for each model. Secondly, the stochastic expectation maximization (SEM) algorithm is used to estimate the Chebyshev orthogonal polynomial coefficients and the weights of the model. This method does not require any prior assumptions on the model, and it can effectively overcome the "model mismatch" problem. The algorithm finds the most likely number of classes and their associated model parameters and generates a segmentation of the image by classifying the pixels into these classes. Compared with segmentation methods based on other orthogonal polynomials, the new method is much faster and produces better segmentation quality. Experimental results on image segmentation show that this method outperforms Gaussian mixture model segmentation.
An HD Video Motion Estimation Coprocessor Supporting Multiple Coding Standards
Gu Huitao, Chen Shuming, and Sun Shuwei
2011, 48(11):  2015-2022. 
Motion estimation is one of the most important parts of video coding standards, as it removes most of the temporal redundancy. In order to satisfy the real-time computational complexity and flexibility requirements of motion estimation, a motion estimation coprocessor supporting multiple coding standards for real-time high-definition video is presented in this paper. The coprocessor is designed on a very long instruction word architecture and can efficiently perform various motion estimation algorithms. In the proposed hardware architecture, a two-dimensional data-reused processing element array, an SAD tree structure, and a multiple-mode cost comparator are employed. The processing element array and the SAD tree structure efficiently handle the huge computational complexity of motion estimation, and the multiple-mode cost comparator supports the different block partition modes of various video coding standards. With a 0.13 μm CMOS technology, the coprocessor is implemented with 145.5 K gates and 4.25 KB of memory at 550 MHz. To validate the proposed hardware architecture and evaluate its performance, a fast full search algorithm modified from the H.264 reference software JM10.2 is executed on it. The experimental results show that when encoding high-definition video sequences with a 1920×1080 frame size and a 32×32 search window, the frame rate reaches 60 fps.
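For reference, the block-matching computation that the processing element array and SAD tree accelerate can be stated in a few lines of Python; this scalar full search is a functional sketch only and ignores the hardware's parallelism and multi-mode partitioning.

```python
import numpy as np

def full_search(cur_block, ref_frame, top, left, search=8):
    """Exhaustive SAD search for one block inside a +/- search window."""
    n = cur_block.shape[0]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue                                   # candidate outside the frame
            cand = ref_frame[y:y + n, x:x + n]
            sad = np.abs(cur_block.astype(int) - cand.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad      # best motion vector and its SAD cost
```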
Multi-Window Target Tracking
An Guocheng, Zhang Fengjun, Wang Hongan, and Dai Guozhong
2011, 48(11):  2023-2030. 
Because of movement in complex environments, the characteristics of a tracked target are influenced by illumination and camera angle. To make matters worse, the target may be occluded by part of itself or by similar objects in the scene during tracking, so tracking accuracy is greatly affected by these factors. To solve these problems, a multi-window tracking method is proposed, which uses several unfixed windows, each corresponding to a tracker (for example a mean shift tracker), to represent the tracked target under varied circumstances. It is worth noting that there is no restriction on selecting these windows: the windows may overlap, they may be selected within the target, or they may include objects around the tracked target. For example, in face tracking some windows could contain the color of the neck or the clothing. The target position is then easily estimated from the relation of the windows and their similarity in the successive tracking procedure. Experimental results show that the proposed tracking method is robust to partial occlusion and illumination changes in complex backgrounds.
A Method for Object Reference in Collaborative Modeling System
Jing Shuxu, He Fazhi, Cai Xiantao, Cheng Yuan
2011, 48(11):  2031-2038. 
In order to support natural and free multi-user interaction, a replicated architecture and optimistic concurrency control are adopted in collaborative 3D modeling systems. On the one hand, topological elements in the geometric model are referenced as operation parameters to define a complete modeling operation, which is exchanged and executed among replicated sites; only when the topological objects of the geometric model are correctly referenced can the modeling operations be executed with a correct result. On the other hand, optimistic concurrency control leads to geometric model inconsistency between the moment a reference is generated at the local site and the moment it is used at remote sites. This inconsistency leads to incorrect naming and consequently to the failure of object references. In this paper, the functions of referenced topological elements are analyzed and referenced objects are categorized into replaceable and irreplaceable ones. For replaceable object references, a temporary geometric model is constructed to guarantee correct object referencing by providing the geometric information needed for positioning. For irreplaceable object references, the geometric model is restored and the object reference is applied, and finally all the concurrent operations undone during the model restoration procedure are redone. The proposed method has been implemented in a prototype system and works well.
A Method of Combining Multi-Aspect Information for Qualitative Spatial Reasoning
Song Xiaohua and Ouyang Dantong
2011, 48(11):  2039-2046. 
Qualitative spatial reasoning has been an important topic in the area of artificial intelligence. Spatial information includes topology, size, shape, distance, etc. Single-aspect spatial information has been studied for many years, but how to combine single-aspect information in one framework for representation and reasoning is an important problem. In this paper, we propose a new method for combining multi-aspect information using an operator called "combine". With the "combine" operator, one can represent new relations using single-aspect relation sets that are jointly exhaustive and pairwise disjoint, and obtain the rough composition table very easily. We then give two models: the first combines topology and size information, and the second combines topology and far-near information. We propose a new concept called the "neighborhood partition graph", which presents the relationships among the atomic relations in a relation set that is jointly exhaustive and pairwise disjoint. One can easily convert the neighborhood partition graph of a new model combining multi-aspect information into its conceptual neighborhood graph. We thereby address the problem posed by Galton in 1994: why the case of line-of-sight relations differs interestingly from the standard spatial and temporal relations in that the result of composing two relations does not always form a conceptual neighborhood graph.
An Anytime Coalition Structure Generation Based on the Grouping Idea of Cardinality Structure
Li Shaofang, Hu Shanli, and Shi Chunyi
2011, 48(11):  2047-2054. 
Coalition formation is a key topic in multi-agent systems; however, finding the optimal coalition structure is NP-complete. Sandholm, Larson et al. showed that it is necessary and sufficient to search the lowest two levels of the coalition structure graph in order to establish a worst-case bound k. How to continue the search after the lowest two levels of the coalition structure graph is a problem that has not been resolved well for a long time. In practical problems such as task assignment, different coalitions often have the same cardinality and the same value, or the values of two coalitions with the same cardinality differ only slightly. This paper studies the problem of optimal cardinality structure generation, analyzes the grouping idea of cardinality structures, and presents a new anytime algorithm for coalition structure generation. The algorithm performs further search that decreases the bound to 2. After this minimal search, a complementary bottom-up search is also discussed that lowers the bound from 2 to 1. The algorithm is clearly better than those of Sandholm et al. and Dang et al. in the number of cardinality structures and coalition structures searched to attain a given bound, which is an important advance in coalition structure generation based on cardinality structures.
Minimized Upper Bound for #3-SAT Problem in the Worst Case
Zhou Junping, Yin Minghao, Zhou Chunguang, Zhai Yandong, and Wang Kangping
2011, 48(11):  2055-2063. 
Propositional model counting, or #SAT, is the problem of computing the number of models of a given propositional formula. Rigorous theoretical analyses of algorithms for solving #SAT have been proposed in the literature. The time complexity is calculated based on the size of the #SAT instances, which depends not only on the number of variables but also on the number of clauses. Research on the worst-case upper bound for #SAT with the number of clauses as the parameter can not only measure the efficiency of the algorithms but also correctly reflect their performance. Therefore, it is significant to establish the minimized worst-case upper bound of the #SAT problem using the number of clauses as the parameter. In this paper, we first analyze the CER algorithm, which we previously proposed for solving #SAT, and prove an upper bound of O(2^m) with the number of clauses as the parameter. In order to increase the efficiency, an algorithm MCDP based on Davis-Putnam-Logemann-Loveland (DPLL) for solving #3-SAT is presented. By analyzing the algorithm, we obtain the worst-case upper bound O(1.8393^m) for #3-SAT, where m is the number of clauses in a formula.
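To make the counting-by-branching idea concrete, the sketch below is a plain DPLL-style model counter in Python (not CER or MCDP; the quoted bounds come from case analysis that this toy does not implement):

```python
def count_models(clauses, n_vars):
    """Count satisfying assignments of a CNF formula over variables 1..n_vars.

    Clauses are lists of nonzero ints; literal v means variable v is true,
    -v means it is false.  Plain branching without component caching, so it
    is exponential, but it shows the DPLL-style counting discussed above.
    """
    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:
                continue                     # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                  # empty clause: conflict
            out.append(reduced)
        return out

    def rec(cls, free):
        if cls is None:
            return 0
        if not cls:
            return 2 ** free                 # remaining variables unconstrained
        v = abs(cls[0][0])                   # branch on a variable of the first clause
        return rec(simplify(cls, v), free - 1) + rec(simplify(cls, -v), free - 1)

    return rec([list(c) for c in clauses], n_vars)

print(count_models([[1, 2], [-1, 3]], 3))    # (x1 or x2) and (not x1 or x3) -> 4
```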
Knowledge Compilation Using Extension Rule Based on MCN and MO Heuristic Strategies
Gu Wenxiang, Wang Jinyan, and Yin Minghao
2011, 48(11):  2064-2073. 
The key idea of theorem proving with the extension rule is to use the inverse of resolution and the inclusion-exclusion principle to circumvent the problem of space complexity. Knowledge compilation using the extension rule, called KCER, is a new method for knowledge compilation in which both the compilation and the querying are based on the extension rule, so KCER can be considered a counterpart of other existing knowledge compilation methods. After in-depth research on the method, two heuristic strategies, MCN and MO, are proposed. They utilize the information contained in the clause set to choose relevant clauses and variables, respectively, in order to reduce the number of applications of the extension rule and further decrease the size of the compiled knowledge base. Furthermore, we apply MCN, MO, and both of them to KCER, and the algorithms MCN_KCER, MO_KCER and MCN_MO_KCER are designed and implemented. Experimental results indicate that MCN and MO play a great role in minimizing the size of the compiled knowledge base, and the efficiency is even better when MCN and MO are used together. The sizes obtained by MCN_MO_KCER are 1/3 to 1/39 of the sizes obtained by knowledge compilation using the extension rule without any heuristic strategy. Consequently, the efficiency of the online reasoning phase is improved considerably.
Trajectory Outlier Detection Based on Semi-Supervised Technology
Huang Tianqiang, Yu Yangqiang, Guo Gongde, and Qin Xiaolin
2011, 48(11):  2074-2082. 
With the rapid progress of GPS, RFID and wireless technologies, moving objects are becoming increasingly attractive to the data mining community. Trajectory outlier detection has many practical applications and has therefore become a popular data mining task. Existing trajectory outlier detection algorithms are sensitive to certain parameters and require manual parameter tuning, which makes them unstable and less robust. Furthermore, most existing algorithms detect outlying trajectories only according to self-defined dimensions and make little use of the information of known trajectories to help detect more potential outlying trajectories. A new trajectory outlier detection algorithm based on semi-supervised technology is proposed in this paper. It makes full use of the small amount of known trajectory information, including the distribution of the known data and the user's understanding of abnormal data, to calculate the sensitive key parameters through semi-supervised technology, and thus overcomes the instability of existing algorithms caused by sensitive key parameters. The algorithm detects valuable outlying trajectories along a new dimension from both global and local views. Experimental results show that the proposed algorithm discovers more valuable outlying trajectories with less parameter adjustment.
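One hedged way to picture how a few labeled trajectories can replace hand-tuned parameters is sketched below: an inter-trajectory distance, outlier scores against a labeled normal set, and a threshold calibrated from the labeled examples. This illustrates the semi-supervised idea only; it is not the paper's algorithm, and all names are hypothetical.

```python
import numpy as np

def traj_distance(a, b):
    """Symmetric mean closest-point distance between trajectories of shape (N, 2) and (M, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def outlier_scores(trajs, labeled_normal):
    """Score each trajectory by its distance to the nearest labeled normal trajectory."""
    return np.array([min(traj_distance(t, n) for n in labeled_normal) for t in trajs])

def calibrate_threshold(labeled_normal, labeled_outliers):
    """Threshold midway between normal and outlier scores (needs >= 2 labeled normals)."""
    normal_s = [min(traj_distance(t, n) for j, n in enumerate(labeled_normal) if j != i)
                for i, t in enumerate(labeled_normal)]
    outlier_s = outlier_scores(labeled_outliers, labeled_normal)
    return 0.5 * (max(normal_s) + outlier_s.min())
```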
A Performance Model of k-Ary n-Cube Under Communication Locality
Hu Kai, Wang Zhe, Jiang Shu, and Yin Baolin
2011, 48(11):  2083-2093. 
The interconnection network is a key component of a parallel computer and significantly influences the efficiency of parallel applications. The k-ary n-cube topology is widely adopted in many current large-scale parallel computers such as Blue Gene/L and Cray XT4. Spatial communication locality is implicit in many real parallel programs, but to our knowledge the existing analytical models give only rough and fragmented results on network performance under communication locality. Two parameters, the local message fraction and the local domain radius, are proposed to measure a parallel application's spatial communication locality. Based on queueing theory, an analytical model of the k-ary n-cube is then established to evaluate the impact of communication locality on message latency and network throughput. The situation in which the message length is shorter than the network radius, which has not been considered in existing analytical models, is also discussed. The impacts of different parameters on network performance are compared carefully. Finally, an improved network simulator is used to validate the analytical model, and the analytical results are shown to agree well with the simulation experiments. The proposed model provides an efficient method to predict the message latency and network throughput of large-scale parallel applications whose communication patterns exhibit locality.
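As a generic illustration (not the paper's queueing model) of how the two locality parameters can enter such an analysis, let β be the local message fraction and r the local domain radius; the mean hop distance of a message then decomposes as

```latex
\bar{d} \;=\; \beta\,\bar{d}_{\text{local}}(r) \;+\; (1-\beta)\,\bar{d}_{\text{uniform}},
\qquad
\bar{d}_{\text{uniform}} \;\approx\; \frac{nk}{4}
```

where nk/4 is the usual approximation of the average distance in a k-ary n-cube with wraparound links; the per-hop waiting times supplied by the queueing model are then accumulated over the resulting number of hops.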
An Adaptive Scheduling Method of Weight Parameter Adjustment on Virtual Machines
Wang Kai and Hou Zifeng
2011, 48(11):  2094-2102. 
In a virtual machine architecture consisting of a Service OS and Guest OSs, a Guest OS accesses the real hardware with the aid of the Service OS. However, current scheduling algorithm optimizations focus on I/O-intensive domains; they neglect CPU-intensive domains and the effect of the I/O processing capacity of the Service OS on the overall performance of the virtual machines. To address these problems, an adaptive weight-parameter scheduling method for virtual machines based on the Credit scheduling algorithm is given in this paper. It takes the I/O processing capacity of the Service OS as an important parameter and pays attention to both I/O-intensive and CPU-intensive domains, which helps system resources to be used more reasonably. The experimental results show that, by reasonably adjusting the virtual machines' weights according to the current I/O requests of the Guest OSs and the processing capacity of the Service OS, the proposed method achieves better CPU processing and real-hardware access performance for the Guest OS than existing approaches such as the Credit scheduling algorithm. The scheduling algorithm of virtual machines has a direct effect on performance, and further study of virtual machine scheduling algorithms is a significant task.
Computation Accelerator Virtualization for Domain Specific Applications
Chen Lili, Shen Li, Wang Zhiying, Xiao Nong, and Yao Yiping
2011, 48(11):  2103-2110. 
In recent years, many embedded systems have had to meet the increasing demand of performing computation-intensive tasks such as video decoding and signal processing. General purpose processors (GPPs), although inexpensive, may fail to meet the performance and power demands of embedded applications. Thus, it is increasingly common to use application specific instruction set processors (ASIPs) in embedded system designs. These ASIPs can customize hardware computation accelerators for an application domain. Along with instruction set extensions (ISEs), the customized accelerators can significantly improve the performance of embedded processors, which has already been demonstrated in previous research work and industrial products. However, these accelerators in ASIPs can only accelerate applications compiled with ISEs; applications compiled without ISEs cannot benefit from the hardware accelerators at all. Software dynamic binary translation (DBT) is a technique typically used in virtual machines (VMs) to solve the binary compatibility problem and improve performance. In this paper, we propose using software DBT to overcome this problem, i.e., computation accelerator virtualization. Unlike a static approach, dynamically utilizing an accelerator poses many new problems. This paper comprehensively explores the techniques and design choices for dynamic accelerator utilization, and demonstrates their effectiveness with experimental results.
Branch Obfuscation: An Efficient Binary Code Obfuscation to Impede Symbolic Execution
Jia Chunfu, Wang Zhi, Liu Xin, and Liu Xinhai
2011, 48(11):  2111-2119. 
Symbolic execution can collect branch conditions along a concrete execution trace of a program and build a logical formula describing the program execution path. This logical formula is then used to reason about the dependence of control flow on user inputs, network inputs and other inputs from the execution environment, which can effectively direct dynamic analysis to explore the program's execution path space. Symbolic execution has been widely used in vulnerability detection, code reuse, protocol analysis and so on, but it can also be used for malicious purposes, e.g., software cracking, software tampering and software piracy. Reverse engineering based on symbolic execution is a new threat to software protection. This paper proposes a novel binary code obfuscation scheme that obfuscates branch conditions to make it difficult for symbolic execution to collect branch conditions from the execution trace. It conceals branch information by substituting conditional jump instructions with conditional exception code and uses an exception handler to transfer control. It also introduces opaque predicates into the obfuscated program to impede statistical analysis. Furthermore, this paper provides insight into the potency, resilience and cost of the branch obfuscation. Experimental results show that branch obfuscation is able to protect various branch conditions and reduces the leakage of branch information at run time, which impedes reverse engineering based on symbolic execution from analyzing a program's internal logic.
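The core trick, replacing an explicit conditional jump with an exception whose handler performs the control transfer, can be mimicked in a few lines of Python as a toy analogy (the actual scheme operates on x86 binaries and also inserts opaque predicates):

```python
def secret_branch_plain(x, key):
    # Original form: the comparison with `key` is an explicit branch that
    # symbolic execution can collect from the trace.
    if x == key:
        return "granted"
    return "denied"

def secret_branch_obfuscated(x, key):
    # Obfuscated form: no conditional jump on (x == key); the equality shows
    # up only implicitly, as a division by zero handled by the exception path.
    try:
        _ = 1 // (x - key)
        return "denied"
    except ZeroDivisionError:
        return "granted"

print(secret_branch_obfuscated(42, 42), secret_branch_obfuscated(7, 42))
```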
QPi: A Calculus to Enforce Trustworthiness Requirements
Fu Ning, Zhou Xingshe, and Zhan Tao
2011, 48(11):  2120-2130. 
As service becomes the core concept for abstracting and wrapping diverse resources in open environments, a new software development paradigm that constructs applications from services and service compositions has become the mainstream technology and the direction of distributed computing. As related research and applications develop in depth, more and more people have realized that, when constructing service applications in open, dynamic computing environments, trustworthiness must receive close attention in addition to functional correctness. Software theories and formal methods are regarded as the key to assuring the correctness and trustworthiness of software. Trustworthy computing requires theories for modeling and analyzing business processes in terms of both behavior and trustworthiness. A calculus for assuring the satisfaction of trustworthiness requirements in service-oriented systems is proposed in this paper. We investigate a calculus called QPi for representing both the behavior and the trustworthiness properties of processes. QPi combines the Pi calculus with constraint semirings, which gives it a unique advantage when problems with multiple trustworthiness dimensions must be tackled. The notion of quantified bisimulation of processes provides a measure of the degree of equivalence of processes based on bisimulation distance. QPi-related properties of bisimulation and bisimilarity are also discussed. Demonstrative examples show the effectiveness of the calculus.
Trust Model Based on Trust Area and Evaluation Credibility
Cai Hongyun, Tian Junfeng, Li Zhen, and He Lihui
2011, 48(11):  2131-2138. 
With the development of network technologies such as P2P and grid, system environments become more open and the relationships among entities become looser; hence, trust models have become a focus of intense research. A trust model establishes a management framework for trust relationships between entities, but its open nature makes it vulnerable to malicious evaluations. To minimize the influence of malicious evaluations on the trust model, a novel trust model based on trust area and evaluation credibility is proposed, which is suitable for open distributed network environments. The workflow of the model and the relevant definitions are also given in the paper. The model is built on trust areas, and the reputation value of a node is composed of its service credibility and feedback credibility. A distinguishing algorithm for colluding cliques is presented based on the similarity of peers' evaluations. A new method for determining the feedback weight is proposed, which depends on the evaluation support and the evaluation consistency factor. Simulation and analysis show that the proposed model can distinguish and resist malicious evaluations effectively.
Software Aging Pattern Analysis of the Video on Demand System
Du Xiaozhi, Qi Yong, Lu Huimin, Hou Di, Xu Chongan, Chen Ying, and Zhong Xiao
2011, 48(11):  2139-2146. 
Video-on-demand (VOD) systems are essential and widely used multimedia applications, and the importance of their software reliability and availability is well recognized. The phenomenon of software aging refers to the exhaustion of operating system resources, fragmentation and the accumulation of errors, which results in progressive performance degradation, transient failures or even crashes of applications. The software aging patterns of a real VOD system are investigated. Firstly, data on several system resource usage and application server parameters are collected. Then, the Mann-Kendall test is adopted to detect aging trends, and Sen's slope estimator is applied to estimate the degree of aging in the data sets. Finally, radial basis function (RBF) network models are constructed to model the extracted data series of system parameters and to predict software aging of the VOD system. In order to reduce the complexity of the RBF networks and to improve their efficiency, principal component analysis (PCA) is used to reduce the dimensionality of the networks' input variables. The experimental results show that software aging exists in the VOD system and that the RBF-network-based software aging prediction model is superior to time series models in prediction precision. Based on the models employed here, software rejuvenation policies can be triggered by actual measurements.
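The two trend statistics used in the aging analysis are short enough to sketch directly; the version below is simplified (no tie correction) and assumes NumPy and SciPy are available, while the RBF prediction model is not shown.

```python
import numpy as np
from scipy.stats import norm  # assumed available

def mann_kendall(x):
    """Mann-Kendall trend statistic S, its normal approximation Z, and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # no tie correction in this sketch
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

def sens_slope(x):
    """Sen's slope estimator: median of all pairwise slopes."""
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return float(np.median(slopes))

usage = np.cumsum(np.random.default_rng(0).normal(0.1, 1.0, 200))  # drifting resource usage
print(mann_kendall(usage), sens_slope(usage))
```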
A Volume Molecular Solution for the Maximum Matching Problem on DNA-Based Computing
Zhou Xu, Li Kenli, Yue Guangxue, and Yang Zhibang
2011, 48(11):  2147-2154. 
DNA computing uses biomolecules as information storage materials and biological laboratory experiments as information processing operators. At present, DNA computing has been applied to many decision and combinatorial optimization problems and has led to important breakthroughs in computing. The power of parallel, high-density computation by molecules in solution also allows DNA computers to solve hard computational problems such as NP-complete problems. The maximum matching problem is a famous NP-complete problem. With the objective of decreasing the solution volume of the DNA computing algorithm for the maximum matching problem, a pruning strategy is introduced into DNA supercomputing and a DNA computing algorithm is proposed. The new algorithm consists of a graph composer, a resolvent generator, a parallel matching generator and a maximum matching searcher. Compared with existing molecular algorithms for the maximum matching problem, the maximum number of DNA strands required is reduced from O(2^m) to O(1.618^m) without changing the time complexity. Therefore, the problem size that can theoretically be solved in one test tube may be enlarged from 60 to 86, so the proposed algorithm improves on past research. The algorithm is space-efficient and error-tolerant compared with conventional brute-force searching, and can thus be scaled up to solve large and hard maximum matching problems.
Measurement-Based Quantum Circuits Model
Xi Zhengjun and Li Yongming
2011, 48(11):  2155-2160. 
Quantum circuits are still widely used as a convenient formalism for describing quantum computation, and they provide a framework for the structure of quantum algorithms and the physical realization of quantum computers. Measurement-based quantum computation has emerged from the physics community as a new approach to quantum computation. This paper develops a measurement-based quantum circuit model in terms of the measurement calculus and distributed quantum computation. We consider how measurement results influence the unitary evolution and their relations. We discuss how measurement-based quantum circuits act on pure states and on mixed states, where the mixed states come from some pure-state ensemble. Then we define a union of two primitive actions in terms of pure states, prove that the union of any two primitive actions is a primitive action, and generalize this result to mixed states. Since the union and the connection are closed for primitive actions, we can define the primitive actions of the measurement-based quantum circuit model. Finally, based on these discussions, we prove that any measurement-based quantum circuit model is equivalent to a quantum operation, and give an example to explain it. It is shown that quantum operations can describe any measurement-based quantum computation.
XML Structural Clustering Based on Cluster-Core
Zhang Chong, Tang Jiuyang, Xiao Weidong, and Tang Daquan
2011, 48(11):  2161-2176. 
With the increasing application and development of XML, XML structural clustering plays an important role both in the management and in the mining of XML documents. Although many XML structural clustering algorithms have been proposed, in practice they are ineffective, inefficient and sensitive to input order; in addition, they cannot support incremental clustering, which is required in certain settings. This paper addresses these problems by proposing a novel concept, the cluster-core, and points out that incremental clustering can be supported if the cluster-cores are maintained correctly in a dynamic environment. An effective XML structural clustering algorithm, COXClustering, is presented, which covers both static clustering and incremental clustering. In static clustering, COXClustering extracts sub-trees to measure the similarity between XML structures, uses classification to improve clustering efficiency, and reduces sensitivity to input order through the orthogonality of cluster-cores. In incremental clustering, it dynamically adjusts the cluster-cores based on the newly added XML documents, and then guides incremental clustering adaptively through both instant adjustment and batch adjustment. Finally, comprehensive experiments on both synthetic and real datasets show that COXClustering improves clustering efficiency and quality and is insensitive to input order in static clustering. The experiments also show that incremental clustering greatly speeds up clustering and that its quality is close to that of static clustering.
Temporal Indexing Technique Based on Structural Summary
Guo Huan, Tang Yong, and Ye Xiaoping
2011, 48(11):  2177-2186. 
With the extensive application of temporal data management technology, temporal indexing has become an important means of efficient querying and management of temporal data. The B+-tree is still the most widely used index structure in commercial databases, so in order to operate effectively on temporal data in existing databases, it is necessary to research B+-tree-based temporal index structures. A structural-summary-based temporal index structure, the CMap-tree, is proposed, which maps time intervals to one-dimensional points while preserving the lexicographic order of the time intervals and uses a B+-tree as its basic storage structure. Firstly, a main-memory structural summary is introduced; by storing the necessary structural summary information of nodes, the invalid nodes visited during temporal operations (temporal insertion, querying and updating) are reduced to the greatest degree. Secondly, the concept of the temporal matrix of time intervals is proposed, and based on the temporal matrix, the result set corresponding to each temporal relation is analyzed in detail. Then, based on the structural summary, the temporal insertion, querying and updating algorithms of the CMap-tree are discussed. Finally, the performance of the CMap-tree is systematically analyzed and compared with that of other competitors in terms of space utilization and querying and updating performance. Extensive experiments show that the CMap-tree has obvious advantages.
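The interval-to-point mapping can be pictured with a toy ordered index that keeps intervals in lexicographic (start, end) order, here emulated with Python's bisect in place of a real B+-tree; the key scheme and the single query shown are hypothetical simplifications, not the CMap-tree itself.

```python
import bisect

class IntervalIndex:
    """Toy ordered index over [start, end] intervals kept in lexicographic order."""
    def __init__(self):
        self.keys = []      # sorted list of (start, end, payload) tuples

    def insert(self, start, end, payload=None):
        bisect.insort(self.keys, (start, end, payload))

    def starting_in(self, lo, hi):
        """All intervals whose start falls in [lo, hi] (one simple temporal query)."""
        i = bisect.bisect_left(self.keys, (lo, float("-inf"), None))
        j = bisect.bisect_right(self.keys, (hi, float("inf"), None))
        return self.keys[i:j]

idx = IntervalIndex()
idx.insert(1, 5, "a"); idx.insert(2, 3, "b"); idx.insert(4, 9, "c")
print(idx.starting_in(2, 4))   # -> [(2, 3, 'b'), (4, 9, 'c')]
```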