ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 September 2010, Volume 47 Issue 9
Paper
A Normalized Structure Selection Algorithm Based on Coupling for Gaussian Mean Fields
Chen Yarui and Liao Shizhong
2010, 47(9):  1497-1503. 
Gaussian Markov random field is a probabilistic model with a multivariate Gaussian distribution and conditional independence assumptions. Gaussian mean field is a basic variational inference method for the Gaussian Markov random field, which computes a lower bound of the objective function through a variational transformation, with the free distribution factorized into clusters of variables. The structure selection of the free distribution plays an important role in variational inference, and it is critical to the tradeoff between variational accuracy and computational complexity. This paper deals with the structure selection criterion and algorithm issues for the Gaussian mean field, and provides a new structure selection criterion along with an efficient structure selection algorithm. First, the concepts of coupling and quasi-coupling are proposed to measure the dependence among variable clusters of the Gaussian Markov random field model, and the coupling-accuracy theorem is proved for the Gaussian mean field, which establishes the quasi-coupling as the new structure selection criterion. Then a normalized structure selection algorithm is designed based on the quasi-coupling criterion and a normalization technique for the Gaussian mean field, which avoids unbalanced computational complexity among clusters through cluster normalization. Finally, numerical comparison experiments demonstrate the validity and efficiency of the normalized structure selection algorithm.
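A minimal sketch of such a criterion-driven selection loop, assuming a Frobenius-norm stand-in for the paper's quasi-coupling score and a simple cluster-size budget (both hypothetical, not the paper's exact definitions):

```python
import numpy as np

def quasi_coupling(J, a, b):
    """Hypothetical stand-in for the quasi-coupling score between clusters
    a and b: the Frobenius norm of the precision-matrix block linking them."""
    return np.linalg.norm(J[np.ix_(a, b)])

def select_structure(J, max_cluster_size=3):
    """Greedy sketch: start from singleton clusters and repeatedly merge the
    most strongly quasi-coupled pair, subject to a size budget, so strongly
    dependent variables end up in the same factor of the free distribution."""
    clusters = [[i] for i in range(J.shape[0])]
    while True:
        best, pair = 0.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if len(clusters[i]) + len(clusters[j]) > max_cluster_size:
                    continue
                score = quasi_coupling(J, clusters[i], clusters[j])
                if score > best:
                    best, pair = score, (i, j)
        if pair is None:
            return clusters
        i, j = pair
        clusters[i] += clusters.pop(j)
```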
A Pareto Coevolutionary Algorithm Integrated with Dimension Extraction
Yang Liping and Huang Houkuan
2010, 47(9):  1504-1513. 
Coevolution offers an adaptive evaluation method for problems where performance can be measured using tests. How to ensure the reliability and efficiency of evaluation is a central challenge in coevolution research. Recent studies have shown that within coevolutionary problem domains there exists a set of implicit dimensions that structure the evolution information of the problem domain, on which the performance of each individual can be accurately evaluated. Therefore, by means of the dimension information of a problem, reliable evaluation can in principle be provided using only a possibly small subset of all tests. Based on these studies, the characteristic outcome relationships between individuals possessed by dimension structures are first analyzed, and then an online dimension extraction approach based on these characteristic outcome relationships is proposed. Furthermore, a coevolutionary algorithm integrated with the dimension extraction approach is designed. The algorithm extracts the dimensions of the problem during execution and utilizes this information to provide accurate evaluation for individuals and to guide selection and retention so as to guarantee monotonic progress. Experimental results on abstract test problems demonstrate the feasibility of the proposed algorithm, and show that it outperforms other existing similar algorithms in both performance and accuracy of dimension extraction.
Finite Basis for gfp-Model of Description Logic FLε
Tang Suqin, Cai Zixing, Wang Ju, and Jiang Yuncheng
2010, 47(9):  1514-1521. 
Description logics (DLs) are a class of knowledge representation formalisms in the tradition of semantic networks and frames, which can be used to represent the terminological knowledge of an application domain in a structured and formally well-understood way. The finite basis problems in DLs, especially their fundamental role and the current research progress on them, are analyzed in this paper. Attribute implications and the Duquenne-Guigues basis in formal concept analysis (FCA) are studied. Based on the existence of the Duquenne-Guigues basis in FCA and the work of F. Baader, a new context, namely the description context, and implications between description logic formulas are defined. It is proved that there exists a unique greatest fixed-point (gfp) model for terminological cycles in the description logic system FLε. Based on the gfp-model, the existence of a finite basis (a finite set of implications) for terminological cycles in the description logic system FLε is proved. Moreover, the soundness and completeness of the finite basis are proved as well. Such a finite basis provides knowledge engineers with interesting information on the application domain described by the description context. Knowledge engineers can use these implications as a starting point for building an ontology describing the application domain.
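To make the notion of an implication concrete: in FCA, an implication A → B holds in a context when every object having all attributes in A also has all attributes in B. A minimal check, with a toy context represented as a hypothetical object-to-attributes mapping:

```python
def implication_holds(context, premise, conclusion):
    """A -> B holds iff every object with all attributes in A (premise)
    also has all attributes in B (conclusion)."""
    return all(conclusion <= attrs
               for attrs in context.values() if premise <= attrs)

# Toy context: objects mapped to their attribute sets.
ctx = {"o1": {"a", "b", "c"}, "o2": {"a", "b"}, "o3": {"b"}}
assert implication_holds(ctx, {"a"}, {"b"})       # every a-object has b
assert not implication_holds(ctx, {"b"}, {"a"})   # o3 violates b -> a
```

A basis, such as the Duquenne-Guigues basis, is a minimal set of such implications from which all implications valid in the context follow.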
Theorem Proving Algorithm Based on Semi-Extension Rule
Zhang Liming, Ouyang Dantong, and Bai Hongtao
2010, 47(9):  1522-1529. 
The classical NP-complete problem of ATP (automated theorem proving) has attracted much interest, not just in the theoretical computer science community, but also in areas where practical solutions to this problem enable significant applications. NP-completeness, however, does not exclude the possibility of finding algorithms that are efficient enough for many interesting satisfiability instances, which arise from diverse practical problems such as AI planning, circuit testing, and verification. ATP has always been one of the central concerns of AI; resolution-based theorem proving tries to deduce the empty clause to check satisfiability, while extension-rule-based theorem proving proceeds inversely to resolution, checking satisfiability by deducing the set of clauses consisting of all the maximum terms. Scholars have investigated the extension-rule method; for example, Murray has applied the extension rule to the generation of the target language in knowledge compilation, with good results. After a deep investigation of the extension rule, the concept of the semi-extension rule is proposed. Based on the above work, a semi-extension-rule-based theorem proving algorithm called SER is presented. Moreover, the approach's soundness, completeness, and complexity are analyzed and proved. Finally, results show that the efficiency of algorithm SER is greatly improved: it clearly outperforms both the directional resolution algorithm DR and the extension-rule-based algorithms IER and NER.
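For intuition: the extension rule splits a clause C into C∨v and C∨¬v for a variable v not occurring in C, and a clause set is unsatisfiable exactly when all 2^n maximum terms are derivable from it. A small sketch of that counting check by inclusion-exclusion (exponential in the number of clauses, so illustrative only, not the SER algorithm itself):

```python
from itertools import combinations

def maxterms_covered(clauses, n_vars):
    """Count, by inclusion-exclusion, how many of the 2^n maximum terms can
    be derived from the clause set via the extension rule. Clauses are sets
    of nonzero ints: v stands for x_v, -v for its negation."""
    total = 0
    for k in range(1, len(clauses) + 1):
        for group in combinations(clauses, k):
            union = set().union(*group)
            if any(-lit in union for lit in union):
                continue  # complementary literals: no common maximum term
            total += (-1) ** (k + 1) * 2 ** (n_vars - len(union))
    return total

def is_unsatisfiable(clauses, n_vars):
    # UNSAT iff every maximum term is derivable from the clause set.
    return maxterms_covered(clauses, n_vars) == 2 ** n_vars

# is_unsatisfiable([{1}, {-1}], 1) -> True. The exponential blow-up in the
# number of clauses is what IER/NER/SER-style algorithms attack.
```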
A Novel Neural Network Ensemble Method and Its Application in Precision Fertilization
Yu Helong, Liu Jie, Jin Di, Yang Yupeng, and Liu Dayou
2010, 47(9):  1530-1538. 
In order to solve the problem of determining precise fertilization rates, a novel neural network ensemble method is introduced in this paper. In this method, sampling with replacement is used to produce the set of individual neural networks. A novel formula measuring network similarity is given, and Frey's affinity propagation (AP) clustering algorithm is used to select networks with high precision and greater diversity. Then the selected networks are combined by the Lagrange multiplier ensemble (LME) and forecasting effective measure ensemble (FEME) methods. Experiments on standard datasets show that the LME algorithm has higher accuracy and stronger generalization than a single neural network; furthermore, as a linear weighted ensemble method, LME is better than FEME. Generally, the root mean squared error (RMSE) decreases as the number of subnets and clusters increases, but beyond a certain point the decrease slows. Then, based on fertilizer response data from maize fields in black soil, the LM ensemble is used to build a precision fertilization model, where soil nutrients and target yield are taken as inputs and fertilization rate is taken as output. Practice shows that the LM-ensemble-based fertilization model is better than traditional fertilization models and existing neural-network-based fertilization models.
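The Lagrange multiplier weighting admits a classical closed form: minimizing the ensemble's squared error subject to the weights summing to one gives w = C⁻¹1 / (1ᵀC⁻¹1), where C is the members' error covariance. A minimal sketch of that step under this standard formulation (array names hypothetical; the paper's exact derivation may differ):

```python
import numpy as np

def lagrange_ensemble_weights(member_preds, y):
    """Closed-form weights minimizing ensemble squared error subject to
    sum(w) = 1, obtained with a Lagrange multiplier:
        w = C^{-1} 1 / (1^T C^{-1} 1),
    where C is the error covariance of the member networks.
    member_preds: (m, n) array, one row of predictions per member."""
    errors = member_preds - y
    C = errors @ errors.T / errors.shape[1]
    ones = np.ones(C.shape[0])
    v = np.linalg.solve(C + 1e-8 * np.eye(C.shape[0]), ones)  # ridge for stability
    return v / (ones @ v)

# Usage: w = lagrange_ensemble_weights(train_preds, y_train)
#        ensemble_output = w @ test_preds
```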
VOTCL and the Study of Its Application on Cross-Selling Problems
Zhou Guangtong, Yin Yilong, Guo Xinjian, and Dong Cailing
2010, 47(9):  1539-1547. 
Cross-selling is regarded as one of the most promising strategies for making profits. The authors first describe a typical cross-selling model, followed by an analysis showing that class imbalance and cost sensitivity usually co-exist in the data sets collected from this domain. In fact, the central issue in real-world cross-selling applications is the identification of potential cross-selling customers, yet the performance of customer prediction suffers when class imbalance and cost sensitivity arise simultaneously. To address this problem, an effective method called VOTCL is proposed. In the first stage, VOTCL generates a number of balanced training data sets by combining under-sampling and over-sampling techniques; a base learner is then trained on each of the data sets in the second stage; finally, VOTCL obtains the decision-making model by using an optimal-threshold-based voting scheme. The effectiveness of VOTCL is validated on the cross-selling data set provided by the PAKDD 2007 competition, where an AUC value of 0.6037 is achieved by the proposed method. The ensemble model also outperforms a single base learner, which to some extent shows the efficacy of the proposed optimal-threshold-based voting scheme.
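A minimal sketch of the first two stages under common sampling choices (base learner, per-set size, and class coding are assumptions, not the competition pipeline's actual settings):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def votcl_stage12(X, y, n_models=10, seed=0):
    """Stages 1-2: build balanced training sets by undersampling the majority
    class and oversampling the minority class with replacement, then train
    one base learner per set."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    size = (len(minority) + len(majority)) // 2   # per-class size, assumed
    models = []
    for _ in range(n_models):
        idx = np.concatenate([rng.choice(majority, size, replace=False),
                              rng.choice(minority, size, replace=True)])
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

# Stage 3 (not shown) tunes each learner's decision threshold on held-out
# data and lets the thresholded learners vote.
```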
Research on a Common Feature Selection Method for Multiple Supervised Models
Wang Bo, Huang Jiuming, Jia Yan, and Yang Shuqiang
2010, 47(9):  1548-1557. 
Feature selection is one of the most important problems in pattern recognition, machine learning and data mining, serving as a basic pre-processing step for compressing data. Most current algorithms were proposed separately for some special domain, which limits their extension. In particular, different applications often operate under different supervision models: supervised, semi-supervised, and unsupervised. A concrete feature selection algorithm is usually designed for a given environment; when the setting changes, the original algorithm, which ran smoothly and efficiently, can become inefficient or even useless, and a new algorithm has to be devised. This paper presents a common feature selection method based on the Hilbert-Schmidt independence criterion (HSIC), which evaluates the correlation between a feature subset and the target concept. The method exploits intrinsic properties of feature selection under multiple supervision models (supervised, semi-supervised, and unsupervised) in a uniform formulation. Furthermore, some existing algorithms can be explained from the viewpoint of kernel-based methods, which brings a deeper understanding, and a novel algorithm derived from this method can solve the challenging problem known as interactive feature selection. The experimental results not only demonstrate the efficiency and stability of the algorithm, but also suggest that the method can give considerable guidance for the design of novel feature selection algorithms.
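The empirical HSIC underlying such a criterion is (n−1)⁻² tr(KHLH), where K is a kernel on the candidate features, L a kernel encoding the target concept, and H the centering matrix; choosing L from labels, partial labels, or the data itself is what lets one criterion span the three supervision models. A greedy forward-selection sketch (the linear kernel on features is an assumption):

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC: (n-1)^{-2} tr(K H L H), H the centering matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def greedy_hsic_selection(X, L, n_feats):
    """Forward selection: grow the subset maximizing HSIC between a linear
    kernel on the chosen features and the target kernel L. Supervision enters
    only through L (label kernel, partly-labeled kernel, or a data-derived
    kernel), which is what makes the criterion model-agnostic."""
    chosen = []
    for _ in range(n_feats):
        candidates = [f for f in range(X.shape[1]) if f not in chosen]
        def score(f):
            S = X[:, chosen + [f]]
            return hsic(S @ S.T, L)
        chosen.append(max(candidates, key=score))
    return chosen
```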
Information Pattern Analysis of Maneuvering Target Interceptor
Fan Hongqi, Lu Zaiqi, Wang Sheng, and Fu Qiang
2010, 47(9):  1558-1566. 
An interception scenario against a highly maneuvering target is formulated as a stochastic pursuit-evasion differential game, whose information pattern has essential effects on the solution structure of the problem. By analyzing the information pattern of a highly-maneuvering-target interceptor, the solution structure of the differential game and its relationship with the information pattern are studied and revealed in this paper. Firstly, the definition of the information pattern and its related concepts are formalized, and the information pattern of the interceptor is analyzed. The conclusion shows that the interceptor has a no-sharing pattern under the typical scenario of highly maneuvering target interception; therefore, the estimation problem of the interception is a hybrid estimation problem. Then the equivalence between the no-sharing pattern and the control-delayed sharing pattern is proved when the evader's maneuver satisfies the observability condition. Finally, theorems about the information pattern and the solution of the differential game under the control-delayed sharing pattern are also proved. In particular, the theorem showing that the value of the differential game increases monotonically with the delay of the control-delayed sharing pattern, called the monotonic theorem in the paper, is proved as well. This work points out a direction for improving homing accuracy.
Extended Domain Model Based Named Attribute Extraction
Wang Yu, Tan Songbo, Liao Xiangwen, and Zeng Yiling
2010, 47(9):  1567-1573. 
Web information extraction is an important task in Web mining, and various applications could benefit from advances in this area, including the semantic Web, vertical search, and sentiment analysis. Current techniques require a lot of human interaction, which precludes the universal application of Web information extraction. To automate the extraction process, recent research identifies specific features of special domains and extracts information with machine learning techniques; however, because of the dependence on specific features, it is very difficult to extend such methods to other domains. In this paper, the Web information extraction problem is analyzed and a subtask, called the named attribute extraction task, is proposed. Statistics from multiple datasets show that named attributes cover more than 60% of the attributes in these domains, which demonstrates the importance of this subtask. Named attributes are attributes of objects encoded in name-value pair form; that is, the names and values of attributes are placed adjacently in Web pages. Therefore, once the names of attributes are located, the values can be extracted automatically. In this paper, an extended domain model is proposed to summarize the attribute names of a domain, and an information extraction method based on this model is developed. Experiments show that the method can extract named attributes with a precision of 80% and a recall higher than 90%.
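A toy sketch of the value-locating step once attribute names are known (the vocabulary and patterns here are hypothetical; the paper's extended domain model learns the names rather than listing them by hand):

```python
import re

# Hypothetical vocabulary produced by an extended domain model.
ATTRIBUTE_NAMES = {"price", "brand", "model", "weight"}

def extract_named_attributes(text):
    """Locate each known attribute name and grab the value placed next to it
    (simple "name: value" / "name - value" patterns)."""
    pairs = {}
    for name in ATTRIBUTE_NAMES:
        m = re.search(rf"\b{name}\b\s*[:\-]\s*([^\n<;]+)", text, re.IGNORECASE)
        if m:
            pairs[name] = m.group(1).strip()
    return pairs

# extract_named_attributes("Brand: Acme; Weight: 1.2 kg")
#   -> {'brand': 'Acme', 'weight': '1.2 kg'}
```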
Research on Network-Warning Model Based on Dynamic Peer-to-Peer Overlay Hierarchy
Xu Jia, Feng Dengguo, and Su Purui
2010, 47(9):  1574-1586. 
The growing wave of attacks against the Internet, launched from distributed platforms fabricated through the rapid diffusion of malware such as worms and botnets, has become a hotspot of network security research. Traditional network warning models have infrastructures too inefficient to integrate widely scattered data and computational resources, leading to incompetence in detecting and preventing Internet-scale threats. In this paper, the notion of "collaborative security" is put forward as an inevitable approach to resisting Internet-scale attacks originated by widely spreading malware, and a novel network-warning model based on a dynamic peer-to-peer overlay hierarchy is proposed. The infrastructure of this model is a two-level dynamic P2P overlay hierarchy, which consists of four roles of peers from the top downward and endows the global network defense framework with the ability of self-adaptive adjustment and collaboration across various security domains. As a fundamental characteristic of this model, a compatible XML-based distributed message sharing method is also presented, which effectively integrates the data resources of heterogeneous network security facilities. Preliminary experiments on a proof-of-concept prototype system demonstrate that this model not only carries out alert aggregation, correlation analysis, attack scenario generation and active defense with improved performance and accuracy, but also shows prominent robustness, scalability and manageability.
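For flavor, a tiny illustration of assembling such an XML alert message programmatically; the element and attribute names here are assumptions, not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical alert message in the spirit of the model's XML-based sharing.
alert = ET.Element("Alert", id="a-0001", domain="security-domain-A")
ET.SubElement(alert, "Source").text = "192.0.2.17"
ET.SubElement(alert, "Classification").text = "worm-scan"
ET.SubElement(alert, "Severity").text = "high"
print(ET.tostring(alert, encoding="unicode"))
# <Alert id="a-0001" domain="security-domain-A"><Source>192.0.2.17</Source>...
```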
Certificateless Signcryption Scheme Without Pairing
Zhu Hui, Li Hui, and Wang Yumin
2010, 47(9):  1587-1594. 
Signcryption is a cryptographic primitive that fulfills both the functions of public key digital signature and public key encryption in a single logical step, at a computational and communication cost significantly lower than that of the traditional "signature then encryption" approach. Based on the discrete logarithm, a new certificateless signcryption scheme without bilinear pairings is presented in this paper; building certificateless signcryption schemes without bilinear pairings appears not to have been addressed in the existing literature. A security analysis of the proposed scheme in the random oracle model is presented, and the results show that the scheme is secure under the assumption that the computational Diffie-Hellman problem and the discrete logarithm problem are hard. The proposed scheme also has the security properties of confidentiality, unforgeability, public verifiability, non-repudiation, perfect forward security, etc. Moreover, the implementation of the proposed scheme requires only three exponentiations and no bilinear pairing operations. Compared with other existing certificateless signcryption schemes in computational complexity, the proposed scheme is more efficient.
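To illustrate the certificateless setting only (not this paper's construction): one common pairing-free, discrete-log approach gives each user a Schnorr-style partial key from the key generation center (KGC) plus a self-chosen secret, so no certificate is needed and the KGC alone cannot recover the full private key. A toy sketch with demo-sized parameters:

```python
import hashlib, secrets

p, q, g = 2579, 1289, 4           # toy Schnorr group; demo sizes only

def H(*parts):
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

s = secrets.randbelow(q)          # KGC master secret
P_pub = pow(g, s, p)              # KGC public key

def partial_key(identity):
    """KGC issues a Schnorr-style partial private key bound to the identity,
    replacing a certificate."""
    r = secrets.randbelow(q)
    R = pow(g, r, p)
    return R, (r + s * H(R, identity)) % q

def user_keygen(identity):
    """Full key = KGC partial key + self-chosen secret x; the KGC alone
    cannot reconstruct it, and no certificate authority is involved."""
    R, d = partial_key(identity)
    assert pow(g, d, p) == (R * pow(P_pub, H(R, identity), p)) % p
    x = secrets.randbelow(q)
    return (d, x), (R, pow(g, x, p))   # (private key, public key)
```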
Reversible Data Hiding Using Full-Context Prediction and Error Expansion
Zeng Xiao, Chen Zhenyong, Chen Ming, and Xiong Zhang
2010, 47(9):  1595-1603. 
Proposed in this paper is a data hiding scheme featuring high capacity and low distortion. It is a reversible scheme, in which secret data is embedded into the host image by expanding the prediction errors of pixels, and the original host image can be exactly recovered once the embedded data is extracted. Three strategies are proposed to improve performance. Different from most existing predictors, in which only a partial prediction context is available, a predictor with a full context for every pixel is proposed to enhance accuracy. An expansion method derived from difference expansion is presented for embedding secret data into the prediction errors. And a new boundary map strategy is proposed to avoid the overflow problem, introducing only a little overhead data. Experimental results demonstrate that the proposed full-context predictor is more accurate than other predictors and produces a more concentrated error histogram; with the proposed expansion method and boundary map, the scheme provides a larger embedding capacity and better image quality than those reported in other state-of-the-art literature. In addition, the scheme is able to adjust the payload to embed different numbers of bits into one prediction error.
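The core of single-bit prediction-error expansion is e′ = 2e + b: doubling the error frees its least significant bit for the payload, so both the bit and the original pixel are exactly recoverable. A minimal sketch of that step (overflow handling via the boundary map, and the paper's multi-bit variant, omitted):

```python
def pee_embed(pixel, predicted, bit):
    """Prediction-error expansion of one bit: e' = 2e + b."""
    e = pixel - predicted
    return predicted + 2 * e + bit

def pee_extract(marked, predicted):
    """Invert the expansion: recover the bit and the exact original pixel."""
    e2 = marked - predicted
    return e2 & 1, predicted + (e2 >> 1)

# pee_embed(100, 98, 1) -> 103; pee_extract(103, 98) -> (1, 100)
```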
Identity-Based Authenticated Key Agreement Protocols in the Standard Model
Ren Yongjun, Wang Jiandong, Wang Jian, Xu Dazhuan, and Zhuang Yi
2010, 47(9):  1604-1610. 
There has been a recent rapid growth of interest in efficient cryptographic protocols that carry proofs in the standard model; avoiding the random oracle model is preferable, given the known problems with instantiating it in practice. However, among existing authenticated key agreement protocols, some are based on identity-based encryption schemes that have not been proven secure, so the security of those protocols cannot be guaranteed; others have proofs that hold only in relatively weak models which do not fully support session-state reveal or ephemeral-key reveal queries, leading to weaker security guarantees. In this paper, the idea of the MTI protocols is adopted to devise a new identity-based authenticated key agreement protocol for two parties in the standard model, based on the truncated decisional augmented bilinear Diffie-Hellman exponent and decisional bilinear Diffie-Hellman assumptions. A formal proof shows that the proposed scheme is provably secure in the enhanced Canetti-Krawczyk (eCK) model, which better supports the adversary's queries. To the best of our knowledge, the scheme is the first identity-based authenticated key agreement protocol secure in the eCK model and the standard model. Moreover, the proposed protocol is more efficient in computation and communication than all known protocols in the standard model.
The Trust Expansion and Control in Social Network Service
Kang Le, Jing Jiwu, and Wang Yuewu
2010, 47(9):  1611-1621. 
Social network services (SNS) are an emerging form of Web application. With the growth of SNS applications, the trust that connects people brings both good user experience and threats. Trust expansion is not only the means by which SNS users construct their online social networks, but is also exploited by attackers to collect victims. Hence, it is desirable to detect malicious trust expansion behaviors in order to prevent subsequent attacks. By analyzing the forming process of the SNS complex network via a growth model (SNDM), it is discovered that malicious users are quite likely to adopt measures to avoid exposure. This inevitably introduces differences in behavior features, and these differences are a weak point that can be exploited to identify the malicious users. In this paper a detailed analysis of the above issue is given, and a practical evaluation-based filter is designed to detect the attackers. Based on the filter, a resilient trust control strategy is proposed to restrict and weaken the malicious users without bothering normal users. The analysis and conclusions are supported by simulation and by experiment in a real SNS scenario.
Performance Evaluation of Adaptive Spray Routing for Opportunistic Networks
Xu Jia, Li Qianmu, Zhang Hong, and Liu Fengyu
2010, 47(9):  1622-1632. 
Opportunistic networks are sparse wireless networks in which, most of the time, no complete path exists from source to destination. Messages are ultimately forwarded to the destination by asynchronous transmission that relies on contact opportunities between nodes. This characteristic of opportunistic networks can greatly extend the space-time scope of information collection and processing. Although opportunistic networks often show large delays, this does not mean quality assurance is unnecessary: for example, electronic notices in campus networks and short-term weather information in large national parks must be forwarded within a specific delay. On the other hand, as access networks, QoS configuration and forecasting are also necessary in order to provide acceptable services (such as e-mail) in opportunistic networks. Hence opportunistic networks must provide acceptable and resilient service in the face of challenging environments. To address the shortcomings of traditional spray routing in dynamic opportunistic networks, a class of adaptive spray mechanisms is proposed in this paper. Adaptive spray mechanisms let relay nodes make spray decisions in order to perceive and react to changes in network conditions precisely, and they are least-cost delay-bounded routing protocols under specific spray mechanisms. Theoretical analyses of adaptive spray routing in terms of routing cost, copy redundancy, and expected delay are also given. Simulation results show that adaptive spray mechanisms have prominent superiority in routing cost, adaptability and extensibility, and constitute a class of correct and efficient delay-bounded routing mechanisms for opportunistic networks.
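A minimal sketch of the per-contact decision such a mechanism makes; the utility-skewed split shown here is an assumed illustration of "adaptive", not the paper's exact rule:

```python
def spray_decision(copies, relay_utility):
    """Spray decision at a contact. Classic binary spray hands over half of
    the remaining copies; an adaptive variant skews the split by a relay
    utility in [0, 1] (hypothetical, e.g. the relay's recent contact rate
    with the destination)."""
    if copies <= 1:
        return 0, copies                      # wait phase: direct delivery only
    give = min(max(1, round(copies * relay_utility)), copies - 1)
    return give, copies - give                # (handed to relay, kept)

# spray_decision(8, 0.5) -> (4, 4): binary spray falls out as a special case.
```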
dK Series Analysis on Annotated AS Topology Graph
Yang Guoqiang, Dou Qiang, and Dou Wenhua
2010, 47(9):  1633-1642. 
Topology property analysis and topology generation are important tasks for Internet topology researchers. The topology properties of the Internet, and generated topologies with the same properties as the Internet topology, are valuable for many other research fields, such as routing protocol design, network performance analysis and next generation network construction. The dK series has proved to be an efficient tool for systematic topology property analysis, and the d=2 case is sufficient for most practical purposes. When using the dK series, the Internet topology is described as an undirected graph. However, the AS (autonomous system) level Internet topology is better described by a graph annotated with AS relationships, owing to the complicated commercial relationships between ASs. In this paper, based on the definition of the dK series, a new series called the dK′ series is proposed to analyze the properties of the annotated AS topology graph, and a novel approach is presented to generate graphs with a given 2K′ distribution. The generation approach is based on an improved 2K graph generation algorithm which outperforms the existing 2K graph generation algorithm. Experimental analysis shows that the d=2 case is sufficient to describe the most important properties of the annotated AS graph.
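The 2K statistic is the joint degree distribution: for every edge, the unordered pair of its endpoints' degrees. A minimal sketch; the 2K′ variant described above would additionally key each entry on the edge's AS-relationship label (e.g. customer-provider vs. peer-peer):

```python
from collections import Counter

def joint_degree_distribution(edges):
    """2K statistic of an undirected graph given as a list of (u, v) edges:
    count edges by the sorted pair of endpoint degrees."""
    deg = Counter(v for edge in edges for v in edge)
    return Counter(tuple(sorted((deg[u], deg[v]))) for u, v in edges)

# joint_degree_distribution([(1, 2), (1, 3), (2, 3), (3, 4)])
#   -> Counter({(2, 3): 2, (2, 2): 1, (1, 3): 1})
```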
The Architecture and the Programming Model of a Data-Flow-Like Driven Tiled Stream Processor
Xu Guang, An Hong, Xu Mu, Liu Gu, Yao Ping, Ren Yongqing, and Wang Fang
2010, 47(9):  1643-1653. 
In view of the increasing wire delay brought by technology scaling, distributed, tiled processor architectures become increasingly attractive. In a conventional stream processor, the control signals dispatched by the stream controller face this increasing wire delay, and the clusters, each consisting of a variety of functional units, communicate through a centralized architecture whose wire-delay scalability is poor. In this paper, a tiled architecture for the stream processor (TPA-PD) is introduced, in which a distributed network connects the tiled components to address the increasing wire delay of the control signals. A data-flow-like driven execution model, namely explicit data graph execution, is employed at the kernel level; the dependence relations are encoded in the instruction set, and the centralized communication model of clusters is converted into a dynamically dispatched, distributed communication model that is wire-delay scalable. The instruction set, the mapping of the stream programming model onto TPA-PD, and the microarchitecture are described. Finally, the authors analyze the factors affecting kernel-level execution time on a cycle-accurate simulator; TPA-PD achieves an average 20% speedup over a traditional stream processor on eight benchmarks.
A Method on Analyzing Performance Sensitivity of Applications Based on Partial Derivatives of Non-linear Regression Equation
Li Shengmei, Cheng Buqi, Gao Xingyu, Qiao Lin, and Tang Zhizhong
2010, 47(9):  1654-1662. 
Performance sensitivity reflects how sensitive performance is to its influencing factors. Analyzing the performance sensitivity of different applications can guide architects in architecture design and help programmers optimize applications. In this paper, a performance sensitivity non-linear regression model (PS-NLRM) is set up to quantitatively analyze the performance sensitivity of different applications. In the model, principal component analysis is used to eliminate the linear correlations among influencing factors, which are quantified with performance events, and non-linear independent variables are introduced by curve fitting. By regression analysis, a non-linear regression model is set up between cycles per instruction (CPI) and performance events. The model is applied to the SPEC CPU2006 integer benchmarks, using the benchmarks as samples, and is verified by t tests and F tests with a goodness of fit over 90%. From the partial derivatives of the model's non-linear regression equation, performance sensitivity is obtained, expressed as the quantitative change of CPI with corresponding changes in the performance events. Based on performance sensitivity, application performance can be predicted: the average relative error of the predicted performance of the SPEC CPU2006 integer benchmarks is about 4.5%, roughly half that of traditional linear regression models.
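A minimal sketch of the derivative-reading idea, assuming simple quadratic terms as the non-linear variables (the paper's actual event set and curve-fitted transforms are not reproduced here):

```python
import numpy as np

def fit_quadratic(X, cpi):
    """Least-squares fit of cpi ~ b0 + sum_i (b_i x_i + c_i x_i^2); the
    quadratic terms stand in for the model's curve-fitted non-linear
    variables. X: (samples, events), cpi: (samples,)."""
    design = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(design, cpi, rcond=None)
    return coef

def sensitivity(coef, x):
    """Partial derivatives of the fitted equation at working point x:
    dCPI/dx_i = b_i + 2 c_i x_i, i.e. the performance sensitivity to event i."""
    k = (len(coef) - 1) // 2
    return coef[1:k + 1] + 2 * coef[k + 1:] * x
```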
Mean Shift Object Tracking Based on Adaptive Multi-Features Fusion
Yuan Guanglin, Xue Mogen, Han Yusheng, and Zhou Pucheng
2010, 47(9):  1663-1671. 
The traditional mean shift tracking algorithm has achieved considerable success in object tracking due to its simplicity and robustness. However, it models the object to be tracked with a single color feature, which makes it prone to ambiguity, especially when the scene contains other objects characterized by a color distribution similar to that of the object of interest. To address this problem, a formula for target localization with mean shift based on multiple features is derived. In addition, a method is proposed that evaluates the discriminability of each feature with respect to foreground-background separability and adaptively calculates the feature fusion weights by a probabilistic separability criterion. With the derived formula and the proposed method, a novel mean shift tracking algorithm based on adaptive multi-feature fusion is presented. Mean shift is run for each feature independently, and the output for each feature is weighted by its fusion weight; the state of the target in the current frame is computed by integrating the outputs of mean shift. Experiments conducted on color and gray sequences show that the proposed algorithm performs better than the classical mean shift tracking algorithm.
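A minimal sketch of the fusion step just described, assuming the per-feature shift vectors and separability scores are already computed (names hypothetical):

```python
import numpy as np

def fuse_shifts(per_feature_shifts, separabilities):
    """Run mean shift once per feature (e.g. color, gradient), weight each
    feature's shift vector by its foreground/background separability score,
    and combine into a single displacement."""
    w = np.asarray(separabilities, dtype=float)
    w /= w.sum()                               # adaptive fusion weights
    shifts = np.asarray(per_feature_shifts, dtype=float)
    return (w[:, None] * shifts).sum(axis=0)

# new_center = old_center + fuse_shifts(shifts_from_each_feature, scores)
```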