ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 July 2015, Volume 52 Issue 7
Mixture of Probabilistic Canonical Correlation Analysis
Zhang Bo, Hao Jie, Ma Gang, Yue Jinpeng, Zhang Jianhua, Shi Zhongzhi
2015, 52(7):  1463-1476.  doi:10.7544/issn1000-1239.2015.20140236
Canonical correlation analysis (CCA) is a statistical tool for analyzing the correlation between two sets of random variables. A critical limitation of CCA is that it can only detect linear correlation between the two domains that is globally valid throughout both data sets, which is insufficient to reveal the large amount of non-linear correlation present in the real world. There are three main ways to address this limitation: kernel mapping, neural networks, and localization. In this paper, a mixture model of local linear probabilistic canonical correlation analysis (PCCA), called MixPCCA, is constructed based on the idea of localization, and a two-stage EM algorithm is proposed to estimate the model parameters. Determining the number of local linear models is a fundamental issue, which we address within the framework of cluster ensembles. In addition, a theoretical framework for applying the MixPCCA model to pattern recognition is put forward. The results on both the USPS and MNIST handwritten image datasets demonstrate that the proposed MixPCCA model not only captures complex global non-linear correlation, but also detects correlation that exists only in local regions, which traditional CCA or PCCA fails to discover.
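For reference, a minimal sketch of the generative model behind probabilistic CCA (Bach and Jordan's formulation) and its mixture extension, on which a localized model such as MixPCCA can be built; the exact parameterization used in the paper may differ:

\[
\begin{aligned}
z &\sim \mathcal{N}(0, I_d),\\
x_1 \mid z &\sim \mathcal{N}(W_1 z + \mu_1, \Psi_1), \qquad
x_2 \mid z \sim \mathcal{N}(W_2 z + \mu_2, \Psi_2),\\
p(x_1, x_2) &= \sum_{k=1}^{K} \pi_k\, p(x_1, x_2 \mid \theta_k) \quad \text{(mixture of $K$ local PCCA models)},
\end{aligned}
\]
where each component $\theta_k = (W_1^{(k)}, W_2^{(k)}, \mu^{(k)}, \Psi^{(k)})$ captures the linear correlation valid only in its local region.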
Second Order Newton’s Method for Training Radial Basis Function Neural Networks
Cai Xun, Chen Zhi, Kanishka Tyagi, Yu Kuan, Li Ziqiang, Zhu Bo
2015, 52(7):  1477-1486.  doi:10.7544/issn1000-1239.2015.20140373
A hybrid two-step second-order batch approach is presented for constructing and training radial basis function (RBF) neural networks. Unlike other RBF neural network learning algorithms, the proposed paradigm uses Newton's method to train each set of network parameters, i.e. the spread parameters, mean vector parameters, weighted distance measure (DM) coefficients and output weight parameters. To efficiently solve the second-order equations of Newton's method, all the optimal parameters are found using orthogonal least squares (OLS), with multiple optimal learning factors (MOLFs) for training the mean vector parameters. The simulation results of the proposed hybrid training algorithm on a real dataset are compared with those of the recursive least squares based RBF (RLS-RBF) and Levenberg-Marquardt based RBF (LM-RBF) training algorithms, and the training performance of optimizing each set of parameters is analyzed. The experimental results show that the proposed hybrid optimal weighted DM training algorithm, which optimizes the mean vectors, weighted DM coefficients and spread parameters, converges significantly faster than RLS-RBF and has a much lower computation cost than LM-RBF. This confirms that Newton's method solved by OLS is a valuable approach for training RBF neural networks.
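For illustration, a minimal RBF-network sketch assuming Gaussian basis functions with per-unit spreads; the output weights are obtained by an ordinary linear least-squares solve, which stands in for the OLS step above, while the Newton/MOLF updates of the means, spreads and DM coefficients from the paper are not reproduced.

import numpy as np

def rbf_design_matrix(X, centers, spreads):
    # Phi[n, k] = exp(-||x_n - c_k||^2 / (2 * spread_k^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spreads[None, :] ** 2))

def fit_output_weights(X, y, centers, spreads):
    Phi = rbf_design_matrix(X, centers, spreads)
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear-in-weights solve
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
centers = X[rng.choice(200, 10, replace=False)]   # e.g. centers picked from the data
spreads = np.full(10, 1.0)
W = fit_output_weights(X, y, centers, spreads)
pred = rbf_design_matrix(X, centers, spreads) @ W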
An Algorithm of Robust Online Extreme Learning Machine for Dynamic Imbalanced Datasets
Zhang Jing, Feng Lin
2015, 52(7):  1487-1498.  doi:10.7544/issn1000-1239.2015.20140182
With the coming of the big data age, dynamic data has gradually appeared in various application fields, such as safety monitoring, financial forecasting, and medical diagnostics. Although existing knowledge discovery and data mining techniques have shown great success in many real-world applications, dynamic data is characterized by imbalanced and unstable class distributions and dynamically changing data volume, which makes its classification difficult. To solve these problems, a robust weighted online sequential extreme learning machine algorithm (RWOSELM), based on the online sequential extreme learning machine (OSELM), is presented in this paper. RWOSELM generates a local dynamic weight matrix with the help of cost-sensitive learning theory, thereby optimizing the empirical risk of the classification model. Meanwhile, RWOSELM takes into account the changes in data distribution caused by the temporal properties of dynamic data, and introduces a forgetting factor to enhance the sensitivity of the classifier to such changes. The method is able to deal with data with imbalanced class distributions and maintains good robustness on dynamic data. Experiments on 24 datasets with different distributions show that RWOSELM achieves good results on imbalanced dynamic datasets.
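A hedged sketch of a sequential ELM update with a forgetting factor, written in the standard recursive-least-squares form; the cost-sensitive local weighting that makes RWOSELM robust to class imbalance is omitted, and all names here are illustrative.

import numpy as np

def hidden_layer(X, Win, bias):
    # random-feature hidden layer with a sigmoid activation
    return 1.0 / (1.0 + np.exp(-(X @ Win + bias)))

class SequentialELM:
    def __init__(self, n_in, n_hidden, lam=0.98, seed=0):
        rng = np.random.default_rng(seed)
        self.Win = rng.normal(size=(n_in, n_hidden))
        self.bias = rng.normal(size=n_hidden)
        self.lam = lam                     # forgetting factor in (0, 1]
        self.P = np.eye(n_hidden) * 1e3    # large initial covariance
        self.beta = None

    def partial_fit(self, X, T):
        H = hidden_layer(X, self.Win, self.bias)
        if self.beta is None:
            self.beta = np.zeros((H.shape[1], T.shape[1]))
        # RLS-style update; lam < 1 discounts old chunks
        S = self.lam * np.eye(H.shape[0]) + H @ self.P @ H.T
        K = self.P @ H.T @ np.linalg.inv(S)
        self.beta = self.beta + K @ (T - H @ self.beta)
        self.P = (self.P - K @ H @ self.P) / self.lam

    def predict(self, X):
        return hidden_layer(X, self.Win, self.bias) @ self.beta

elm = SequentialELM(n_in=3, n_hidden=20)
rng = np.random.default_rng(1)
for _ in range(5):                          # five arriving data chunks
    X = rng.normal(size=(50, 3))
    T = (X[:, :1] > 0).astype(float)        # toy binary target, shape (50, 1)
    elm.partial_fit(X, T)
pred = elm.predict(X)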
Word Semantic Similarity Measurement Based on Naïve Bayes Model
Wang Junhua, Zuo Wanli, Yan Zhao
2015, 52(7):  1499-1509.  doi:10.7544/issn1000-1239.2015.20140383
Measuring semantic similarity between words is a classical and active problem in natural language processing, whose achievements have a great impact on many applications such as word sense disambiguation, machine translation, ontology mapping and computational linguistics. A novel approach is proposed to measure word semantic similarity by combining a naïve Bayes model with a knowledge base. First, attribute variables are extracted based on WordNet; then, conditional probability distributions are generated by statistics and piecewise linear interpolation; after that, the posterior probability is obtained through Bayesian inference; finally, word semantic similarity is quantified. The main contributions are the definition of distance and depth between word pairs, which requires little computation while distinguishing the characteristics of word senses well, and a word semantic similarity measurement based on the naïve Bayes model. On the benchmark data set R&G(65), experiments are conducted through 5-fold cross validation. The sample Pearson correlation between test results and human judgments is 0.912, a 0.4% improvement over the existing best practice and a 7%~13% improvement over classical methods. The Spearman correlation between test results and human judgments is 0.873, a 10%~20% improvement over classical methods. The computational complexity of the method is as low as that of the classical methods, which indicates that integrating the naïve Bayes model with a knowledge base to measure word semantic similarity is reasonable and effective.
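A hedged sketch of the overall pipeline only: WordNet-derived distance/depth features for a word pair are fed to a Gaussian naive Bayes classifier, and the posterior for the "similar" class is read off as a similarity score. The feature definitions, toy labels and classifier choice below are illustrative assumptions, not the paper's exact construction.

from nltk.corpus import wordnet as wn          # requires nltk.download('wordnet')
from sklearn.naive_bayes import GaussianNB
import numpy as np

def pair_features(w1, w2):
    best = (0.0, 0)
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            sim = s1.path_similarity(s2) or 0.0          # inverse of path distance
            lch = s1.lowest_common_hypernyms(s2)
            depth = lch[0].min_depth() if lch else 0     # depth of common ancestor
            if sim > best[0]:
                best = (sim, depth)
    return list(best)

# toy training data: word pairs labelled similar (1) / dissimilar (0)
pairs = [("car", "automobile", 1), ("coast", "shore", 1), ("noon", "string", 0)]
X = np.array([pair_features(a, b) for a, b, _ in pairs])
y = np.array([lab for *_, lab in pairs])
clf = GaussianNB().fit(X, y)
score = clf.predict_proba([pair_features("journey", "voyage")])[0, 1]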
An Overlapping Semantic Community Detection Algorithm Based on Local Semantic Cluster
Xin Yu, Yang Jing, Tang Chuheng, Ge Siqiao
2015, 52(7):  1510-1521.  doi:10.7544/issn1000-1239.2015.20140308
The semantic social network (SSN) is a new kind of complex network, so traditional community detection algorithms that depend only on adjacency in the social network are not effective on it. To solve this problem, an overlapping community detection method for semantic social networks is proposed based on local semantic clusters (LSC). Firstly, taking latent Dirichlet allocation (LDA) as the semantic model, the algorithm uses Gibbs sampling to establish a quantization mapping from the semantic information of nodes into a semantic space. Secondly, it builds the similarity matrix of the SSN, using the relative entropy of the semantic coordinates as the similarity measure between nodes. Thirdly, according to the local small-world character of social networks, it proposes the S-fitness model of local community structure in SSNs and establishes the LSC method on top of it. Finally, it proposes a semantic model by which the community structure of the SSN is measured, and the efficiency and feasibility of the algorithm and of the semantic modularity are verified by experimental analysis.
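A minimal sketch of the similarity-matrix step: each node carries an LDA topic distribution (its "semantic coordinate"), and pairwise similarity is derived from relative entropy (KL divergence). The symmetrized divergence and the exp(-d) mapping below are illustrative choices, not necessarily the paper's.

import numpy as np

def kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def similarity_matrix(theta):
    # theta: (n_nodes, n_topics), rows sum to 1
    n = theta.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = 0.5 * (kl(theta[i], theta[j]) + kl(theta[j], theta[i]))
            S[i, j] = np.exp(-d)          # high similarity = low divergence
    return S

rng = np.random.default_rng(1)
theta = rng.dirichlet(np.ones(5), size=4)  # 4 nodes, 5 topics
S = similarity_matrix(theta)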
A Cache Approach for Large Scale Data-Intensive Computing
Zhou Enqiang, Zhang Wei, Lu Yutong, Hou Hongjun, Dong Yong
2015, 52(7):  1522-1530.  doi:10.7544/issn1000-1239.2015.20148073
With HPC systems widely used in modern scientific computing, more and more data-intensive applications are generating and analyzing datasets of increasing scale, which confronts HPC storage systems with new challenges. By comparing different storage architectures and the corresponding file system approaches, a novel cache approach named DDCache is proposed to improve the efficiency of data-intensive computing. DDCache leverages a distributed storage architecture as a performance booster for the centralized storage architecture by fully exploiting the potential of the node-local storage distributed across the system. To supply a much larger cache volume than a volatile memory cache, DDCache aggregates the node-local disks into a huge non-volatile cooperative cache. A high cache hit ratio is then achieved by keeping intermediate data in DDCache as long as possible during the overall execution of applications. To make the node-local storage efficient enough to act as a data cache, a locality-aware data layout is used to keep cached data close to compute tasks and evenly distributed. Furthermore, concurrency control is introduced to throttle I/O requests flowing into or out of DDCache and retain the particular advantage of node-local storage. Evaluations on typical HPC platforms verify the effectiveness of DDCache. Scalable I/O bandwidth is achieved in the well-known HPC scenario of checkpoint/restart, and the overall performance of a typical data-intensive application is improved by up to 6 times.
Concurrent Skiplist Based Double-Layer Index Framework for Cloud Data Processing
Zhou Wei, Lu Jin, Zhou Keren, Wang Shipu, Yao Shaowen
2015, 52(7):  1531-1545.  doi:10.7544/issn1000-1239.2015.20140358
Cloud data processing is an essential infrastructure component of cloud systems. Without efficient index structures, cloud systems cannot support the necessary high throughput and provide services for millions of users. However, most existing cloud storage systems adopt a distributed hash table (DHT) approach to index data, which lacks support for range queries and dynamic real-time behavior. A scalable, dynamic and multi-query index structure is therefore necessary in the cloud environment. Based on a summary and analysis of double-layer index systems for cloud storage, this paper presents a novel concurrent skiplist based double-layer index (referred to as CSD-index) for cloud data processing. A two-layer architecture, which can break through the memory and hard drive limitations of a single machine, is used to extend the indexing scope. An online migration algorithm for skiplist nodes between local servers is used to achieve dynamic load balancing. The details of the design and implementation of the concurrent skiplist are discussed, and optimistic concurrency control (OCC) is introduced to enhance concurrency. Through the concurrent skiplist, CSD-index improves the load-bearing capacity of the global index and enhances the overall throughput of the index. Experimental results show the efficiency of the concurrent skiplist based double-layer index and its viability as an alternative approach for cloud-suitable data structures.
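A hedged, single-threaded skiplist sketch to illustrate the ordered data structure underlying CSD-index; the paper's concurrent version with optimistic concurrency control (validate before commit) and the cross-server node migration are not reproduced here.

import random

class Node:
    def __init__(self, key, value, level):
        self.key, self.value = key, value
        self.forward = [None] * level          # one successor per level

class SkipList:
    MAX_LEVEL, P = 16, 0.5

    def __init__(self):
        self.head = Node(None, None, self.MAX_LEVEL)
        self.level = 1

    def _random_level(self):
        lvl = 1
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        x = self.head
        for i in reversed(range(self.level)):   # descend level by level
            while x.forward[i] and x.forward[i].key < key:
                x = x.forward[i]
        x = x.forward[0]
        return x.value if x and x.key == key else None

    def insert(self, key, value):
        update, x = [self.head] * self.MAX_LEVEL, self.head
        for i in reversed(range(self.level)):
            while x.forward[i] and x.forward[i].key < key:
                x = x.forward[i]
            update[i] = x                        # predecessor at each level
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        node = Node(key, value, lvl)
        for i in range(lvl):
            node.forward[i] = update[i].forward[i]
            update[i].forward[i] = node

sl = SkipList()
for k in [3, 1, 7]:
    sl.insert(k, str(k))
assert sl.search(7) == "7"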
Object-Based Data De-Duplication Method for OpenXML Compound Files
Yan Fang, Li Yuanzhang, Zhang Quanxin, Tan Yu’an
2015, 52(7):  1546-1557.  doi:10.7544/issn1000-1239.2015.20140093
Content defined chunking (CDC) is a prevalent data de-duplication algorithm for removing redundant data segments in storage systems. Current research on CDC does not consider the distinct content characteristics of different file types: chunk boundaries are determined in a random way and a single strategy is applied to all file types. Such a method has been shown to be suitable for text and simple content, but it does not achieve optimal performance for compound files. A compound file is composed of unstructured data, usually occupies large storage space and often contains multimedia data. Object-based data de-duplication is currently the most advanced method and an effective solution for detecting duplicate data in such files. We analyze the content characteristics of OpenXML files and develop an object extraction method, together with an algorithm that determines the de-duplication granularity based on object structure and distribution. The purpose is to effectively detect identical objects within a file or across different files, and to keep de-duplication effective even when the physical layout of a compound file changes. Simulation experiments with a typical unstructured data collection show that the de-duplication efficiency is improved by 10% compared with the CDC method on general unstructured data.
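For contrast with the object-based approach, a hedged sketch of plain content-defined chunking: a rolling hash over a sliding window declares a chunk boundary whenever the hash matches a mask. The simple polynomial hash stands in for a Rabin fingerprint, and the window, mask and size limits are illustrative; the object-aware granularity selection described above is not shown.

import os

def cdc_chunks(data, window=48, mask=0x1FFF, min_size=2048, max_size=65536):
    MOD, BASE = (1 << 31) - 1, 263
    power = pow(BASE, window - 1, MOD)
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        if i - start < window:
            h = (h * BASE + b) % MOD                           # still filling the window
        else:
            h = ((h - data[i - window] * power) * BASE + b) % MOD
        size = i - start + 1
        at_boundary = (h & mask) == mask and size >= min_size  # content-defined cut point
        if at_boundary or size >= max_size:
            chunks.append(bytes(data[start:i + 1]))
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(bytes(data[start:]))
    return chunks

chunks = cdc_chunks(os.urandom(200_000))   # average chunk size governed by the mask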
File System Level Wear Leveling Mechanism for Non-Volatile Memory Based Storage
Cai Tao, Zhang Yongchun, Niu Dejiao, Ni Xiaorong, Liang Dongying
2015, 52(7):  1558-1566.  doi:10.7544/issn1000-1239.2015.20140019
The access speed of STT-RAM, MRAM and other non-volatile memories is close to that of DRAM, so they are very useful for high-performance storage systems and for improving the performance of large computer systems, but their limited write endurance is one of their most important limitations. We introduce file system level wear leveling technology for them. A hash function is used to disperse files across the storage system, so that certain blocks are not allocated repeatedly when files are created and deleted. When allocating space for a file, blocks with lower write counts are chosen to avoid concentrating write operations on a few blocks. An active hot-data migration strategy is designed to reduce the I/O performance impact of the wear leveling mechanism. Finally, we implement a prototype of the file system level wear leveling mechanism based on Open-osd, an open source object-based storage device. Tests with Filebench, Postmark and several traces show that the difference in write counts between blocks is reduced to about one twentieth of the original, while system I/O performance decreases by only 6% and the amount of data written increases by about 0.5%. This verifies that the file system level wear leveling mechanism is effective and stable.
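A hedged sketch of the allocation idea: free blocks are kept in a min-heap keyed by write count, so allocation always returns the least-worn block and freeing re-inserts a block with its updated count. Block granularity, the counters and the absence of hash-based file placement and hot-data migration are simplifying assumptions.

import heapq

class WearAwareAllocator:
    def __init__(self, n_blocks):
        self.write_count = [0] * n_blocks
        self.free_heap = [(0, b) for b in range(n_blocks)]   # (writes, block)
        heapq.heapify(self.free_heap)

    def allocate(self):
        # pick the free block with the lowest write count
        _, block = heapq.heappop(self.free_heap)
        self.write_count[block] += 1          # allocation implies a write
        return block

    def free(self, block):
        heapq.heappush(self.free_heap, (self.write_count[block], block))

alloc = WearAwareAllocator(8)
blocks = [alloc.allocate() for _ in range(3)]
for b in blocks:
    alloc.free(b)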
Asyn-SimRank: An Asynchronous Large-Scale SimRank Algorithm
Wang Chunlei, Zhang Yanfeng, Bao Yubin, Zhao Changkuan, Yu Ge, Gao Lixin
2015, 52(7):  1567-1579.  doi:10.7544/issn1000-1239.2015.20140313
The SimRank algorithm, which exploits network structure to measure the similarity between node pairs, has been widely used in many areas, such as online social networks and link prediction. In recent years, with the development of big data, the input data sets of the SimRank algorithm have been growing constantly, and people are utilizing distributed computing models, such as MapReduce, to design large-scale SimRank algorithms for big data problems. However, since SimRank contains a high-cost iterative process with synchronization barriers between iterations and each iteration has high computational complexity, large-scale SimRank computation does not achieve satisfactory performance. In this paper, 1) we propose Asyn-SimRank, which employs an iterate-cumulate approach to execute the core iterative computation asynchronously, avoiding the high-cost synchronization barriers in large-scale distributed environments and effectively reducing the amount of computation and communication; 2) we further propose a keypoint priority scheduling mechanism to accelerate convergence; 3) we prove the accuracy and convergence of Asyn-SimRank as well as the efficiency of keypoint priority scheduling; 4) we implement Asyn-SimRank on Maiter, a distributed framework supporting asynchronous iteration. Experimental results show that, compared with SimRank and Delta-SimRank implemented on Hadoop and Spark, the large-scale Asyn-SimRank significantly improves computational efficiency and accelerates convergence.
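For reference, the standard SimRank recurrence that such systems iterate (the asynchronous iterate-cumulate reformulation of the paper is not reproduced here):

\[
s(a,b) =
\begin{cases}
1, & a = b,\\[4pt]
\dfrac{C}{|I(a)|\,|I(b)|}\displaystyle\sum_{u \in I(a)}\sum_{v \in I(b)} s(u,v), & a \neq b,
\end{cases}
\]
where $I(x)$ is the set of in-neighbors of node $x$, $C \in (0,1)$ is the decay factor, and $s(a,b)=0$ when $a \neq b$ and $I(a)$ or $I(b)$ is empty.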
Abstract Modeling Formalisms in Software Model Checking
Wei Ou, Shi Yufeng, Xu Bingfeng, Huang Zhiqiu, Chen Zhe
2015, 52(7):  1580-1603.  doi:10.7544/issn1000-1239.2015.20140413
Abstraction is a fundamental technique for solving the state-explosion problem in software model checking. In this paper, we survey a variety of abstract modeling formalisms that have been developed for this purpose over the years. We first provide an overview of abstract software model checking based on the theoretical framework of abstract interpretation. We then discuss in detail several abstract modeling formalisms, represented by 1) Boolean Kripke structures, supporting traditional over-approximation or under-approximation; 2) Kripke modal transition systems and mixed transition systems, corresponding to 3-valued and 4-valued Kripke structures respectively, supporting both over-approximation and under-approximation on a single model; and 3) models with hyper transitions, including generalized Kripke modal transition systems and hyper transition systems, allowing for more precise abstract model checking. We discuss the corresponding approximation relations and optimal abstract models, and highlight their shortcomings and the motivations behind the development of new formalisms. We also introduce the completeness results for abstract modeling formalisms. Finally, we discuss theoretical and practical problems of abstract models and point out future research directions.
Modeling, Verification and Test of Interactive Behaviors in Distributed Software Systems
Zhang Chen, Duan Zhenhua, Tian Cong, Yu Bin
2015, 52(7):  1604-1619.  doi:10.7544/issn1000-1239.2015.20140244
To ensure the correctness of interactive behaviors in distributed software systems, an abstraction approach for modular interaction is proposed. First, system state machine diagrams and object state machine diagrams are utilized to describe state transitions, while UML2.0 sequence diagrams are adopted to express the interactive behaviors. Further, model checking with propositional projection temporal logic (PPTL) is applied to verify whether the interaction modules satisfy the system properties. Using the proposed algorithms, the object state machine diagram is transformed into a Promela model, and the desired property of the system is specified by a PPTL formula. As a property specification language, PPTL has the expressive power of both full regular and omega-regular expressions, and is well suited to specifying state-sensitive properties. With the Spin-based model checker, if the system does not satisfy the property, a counterexample and its execution path can be found. Finally, a novel framework for testing distributed software is given: based on the verified sequence diagram, a test case generation method is used to obtain the set of test cases. Experimental results show that the proposed method is useful in improving the effectiveness and reliability of interactive behaviors in distributed software systems.
Research on Flow Sensitive Demand Driven Alias Analysis
Pang Long, Su Xiaohong, Ma Peijun, Zhao Lingling
2015, 52(7):  1620-1630.  doi:10.7544/issn1000-1239.2015.20140336
In order to improve the response efficiency of pointer alias queries in interactive environments, researchers are interested in demand-driven strategies that avoid the cost of analyzing pointer variables unrelated to the query objectives. Demand-driven alias analysis based on context-free grammars has been proposed; however, its precision is limited to flow-insensitivity. Flow-insensitive pointer aliasing restricts the precision of client analyses, so bug detection produces more false alarms than with flow-sensitive alias analysis. In this paper, we propose a demand-driven pointer alias analysis based on graph reachability and context-free grammars that provides flow-sensitive precision with tolerable additional overhead compared with flow-insensitive alias analysis. First, the updates of pointer variables are discriminated by partial static single assignment form to filter out unrelated pointer variables as early as possible. Then the control flow sequence along these assignments is expressed as level linearization code, which is used to construct the assignment flow graph. Finally, a demand-driven alias query is formalized as a reachability search for the target nodes in the assignment flow graph, achieving flow-sensitive precision. The experiments demonstrate that the presented method improves the precision of demand-driven alias analysis to flow sensitivity with tolerable overhead.
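A hedged toy illustration of answering an alias query as graph reachability: nodes stand for pointer definitions and an edge x -> y means the points-to information of x flows into y. The real analysis matches paths against a context-free grammar on the assignment flow graph; plain BFS reachability is shown only to convey the demand-driven query shape.

from collections import deque

def reachable(graph, source, target):
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# p = &a; q = p; r = q;  -- does a's address reach r?
flow = {"a": ["p"], "p": ["q"], "q": ["r"]}
print(reachable(flow, "a", "r"))   # True: r may alias a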
A Generalized Non-interference Based on Refinement of Interfaces
Sun Cong, Xi Ning, Gao Sheng, Zhang Tao, Li Jinku, Ma Jianfeng
2015, 52(7):  1631-1641.  doi:10.7544/issn1000-1239.2015.20140306
In the design and implementation of complicated component-based software, the security requirements of the whole system are hard to achieve due to the difficulty of enforcing the compositionality of security properties. The specification and verification of security properties are critical issues in the development of component-based software. In this paper, we focus on generalizing the security lattice underlying the information flow security properties enforceable on component-based software. Previous definitions of information flow security properties in component-based design are restricted to a binary security lattice model. In this work, we extend the interface structure for security to the generalized interface structure for security (GISS). We define a refinement relation and use it to give a non-interference definition (SME-NI) based on the principle of secure multi-execution. This is the first application of secure multi-execution to the information flow security verification of component-based systems. The new definition of non-interference can be adapted to any finite security lattice model. The Coq proof assistant is used to implement a certified library for interface automata, as well as a decision procedure for the refinement relation. The experimental results show the characteristics of SME-NI and the validity of the decision procedure.
Risk Evaluation of Complex Information System Based on Threat Propagation Sampling
Ma Gang, Du Yuge, An Bo, Zhang Bo, Wang Wei, Shi Zhongzhi
2015, 52(7):  1642-1659.  doi:10.7544/issn1000-1239.2015.20140184
Information security has been one of the focuses of social concern in the Internet age, and accurately assessing the security of information media has undoubtedly become the focus of current information security work. For evaluating the risk of large-scale distributed complex information systems, we propose a risk evaluation method based on threat propagation sampling. Firstly, as threats propagate in the complex information system, the number of threat propagation trees (TPTs) is reduced by sampling the transition states of asset nodes and the threat propagation edges they emit; the expected value loss of each node in a threat propagation tree and the probability of each tree are then computed to evaluate the risk of the complex information system. The experimental analysis shows that the proposed risk evaluation not only is more time-efficient than the traditional combination strategy when producing threat propagation trees, but also yields an objective and accurate risk evaluation. Furthermore, it can give more comprehensive and rational security guidance to security-risk managers when they develop protection strategies for the asset nodes of the complex information system.
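A hedged Monte Carlo sketch of the general idea: repeatedly sample which propagation edges fire from a compromised entry node, and average the total asset value lost over the sampled propagations. The edge probabilities, node values and BFS-style propagation are illustrative, not the paper's threat-propagation-tree model.

import random
from collections import deque

def sample_expected_loss(edges, node_value, entry, trials=10_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        compromised, queue = {entry}, deque([entry])
        while queue:
            u = queue.popleft()
            for v, p in edges.get(u, []):
                if v not in compromised and rng.random() < p:   # edge fires
                    compromised.add(v)
                    queue.append(v)
        total += sum(node_value[n] for n in compromised)
    return total / trials

edges = {"web": [("app", 0.6)], "app": [("db", 0.4), ("files", 0.3)]}
value = {"web": 1.0, "app": 5.0, "db": 20.0, "files": 8.0}
print(sample_expected_loss(edges, value, "web"))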
Trust-Distributed-Based Authentication Mechanism Using Hierarchical Identity-Based Cryptography
Tian Junfeng, Sun Kehui
2015, 52(7):  1660-1671.  doi:10.7544/issn1000-1239.2015.20140295
The relationships among cloud service providers become more and more complex when these providers are integrated on a public large-scale cloud computing platform, where cooperative and competitive relations coexist. Although unified authentication is necessary for such integration, the providers are unable to fully trust a single central authority. A single sign-on architecture can suffer from the problems brought by the central authority, such as a security bottleneck, mandatory dependencies and key escrow. To solve these problems, an authentication mechanism based on trust dispersion theory using hierarchical identity-based cryptography is proposed in this paper. The secret value of the central authority is shared among service providers; as a result, unified authentication is achieved while the providers' ability of self-control is guaranteed. The central authority hands its core work of generating private keys over to the cooperation of the main participants at the first level. The fake public key idea and a sliding window increase the difficulty of adversarial attacks. Cross-domain authentication and a key exchange method are also supported. Comparative analysis shows that our scheme has advantages in not relying on a central authority, avoiding certificate maintenance and key escrow, supporting cross-domain authentication, providing a monitoring mechanism, and so on.
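A hedged sketch of the secret-dispersion step only: a (t, n) Shamir scheme splits a master secret among n providers so that any t of them can jointly reconstruct it. The small prime and integer secret are toy choices, and the paper's combination of this idea with hierarchical identity-based key generation is not shown.

import random

P = 2**127 - 1   # a Mersenne prime large enough for a toy example

def make_shares(secret, t, n, rng=random.Random(0)):
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # Lagrange basis evaluated at x = 0
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)      # 5 providers, any 3 suffice
assert reconstruct(shares[:3]) == 123456789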
A Multiple Replica Possession Proving Scheme Based on Public Key Partition
Fu Wei, Wu Xiaoping, Ye Qing, Xiao Nong, Lu Xicheng
2015, 52(7):  1672-1681.  doi:10.7544/issn1000-1239.2015.20140353
In an outsourced cloud storage environment, users cannot completely trust storage service providers, and it is a challenging problem to validate whether providers are faithfully maintaining as many replicas as they promised to users. Most existing solutions suffer from several disadvantages, such as low efficiency, high computation overhead and the absence of support for dynamic data updates. A multiple replica cloud storage model with a Collector is presented, and a novel multiple replica possession proving scheme based on public key partition, namely MRP-PKP, is proposed. In the preparation phase, a public key is divided into several private shares that are distributed to the corresponding storage servers. In the validation phase, a challenge is accepted as successful only after all storage servers present their possession evidence. The scheme is designed to defeat colluding adversaries and can easily support dynamic data update operations at the block level. It is the first scheme to validate the possession of all replicas with just one challenge. Both theoretical analysis and simulation experiments show that the MRP-PKP scheme offers higher security guarantees, lower communication cost and lower computation overhead than existing schemes.
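A hedged, simplified challenge-response illustration of possession checking (HMAC-based, not the public-key-partition construction of MRP-PKP): the verifier pre-computes a tag over randomly challenged block indices and a nonce, and each replica server must reproduce it from the data it actually stores.

import hmac, hashlib, os, random

def possession_tag(key, blocks, indices, nonce):
    mac = hmac.new(key, nonce, hashlib.sha256)
    for i in indices:
        mac.update(blocks[i])          # the server must hold these blocks
    return mac.digest()

key = os.urandom(32)
blocks = [os.urandom(4096) for _ in range(16)]      # the outsourced file
replica = list(blocks)                              # what one server claims to store

nonce = os.urandom(16)
indices = random.sample(range(len(blocks)), 4)      # random spot-check
expected = possession_tag(key, blocks, indices, nonce)
response = possession_tag(key, replica, indices, nonce)
print(hmac.compare_digest(expected, response))      # True only if the replica is intact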
Segmentation of Color Overlapping Cells Image Based on Sparse Contour Point Model
Guan Tao, Zhou Dongxiang, Fan Weihong, Liu Yunhui
2015, 52(7):  1682-1691.  doi:10.7544/issn1000-1239.2015.20140324
Based on an analysis of cell contour structure, a sparse contour point model that describes the characteristics of the cell contour is proposed in this paper. In this model, the cell contour is divided into two parts, a light contour and a dark contour, and the whole contour is then approximately described as a set of sparse contour points. Based on this model, color and grayscale image segmentation techniques are combined to locate the basic contour, which lies between the cell and the background. A circular dynamic contour searching method is then proposed to search, along the basic contour, for the dark contour lying in the overlapping cell region. The contour points located by this search are arranged to construct the initial contour of a gradient vector flow (GVF) Snake model, which is then evolved to obtain the final accurate segmentation of the cell image. Various cell images containing single cells, overlapping cells of similar colors and overlapping cells of different colors have been tested to show the validity and effectiveness of the proposed method. The proposed techniques are useful for the development of automatic cervical cell image analysis systems.
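A hedged sketch of the gradient vector flow (GVF) field used by the Snake step: the external force (u, v) is obtained by diffusing the edge-map gradient. The iteration count, mu, step size and the synthetic ring-shaped edge map are illustrative, and the contour-evolution step itself is not shown.

import numpy as np
from scipy.ndimage import laplace

def gvf(edge_map, mu=0.2, iters=200, dt=0.5):
    fy, fx = np.gradient(edge_map.astype(float))
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        # diffuse the field where the edge map is weak, keep it near edges
        u += dt * (mu * laplace(u) - mag2 * (u - fx))
        v += dt * (mu * laplace(v) - mag2 * (v - fy))
    return u, v

# toy edge map: a bright ring roughly marking a cell boundary
yy, xx = np.mgrid[0:64, 0:64]
r = np.sqrt((xx - 32)**2 + (yy - 32)**2)
edge_map = np.exp(-((r - 20)**2) / 4.0)
u, v = gvf(edge_map)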
Robust Fragments-Based Tracking with Multi-Feature Joint Kernel Sparse Representation
Hu Zhaohua, Yuan Xiaotong, Li Jun, He Jun
2015, 52(7):  1692-1704.  doi:10.7544/issn1000-1239.2015.20140152
Most existing sparse representation based trackers use only a single feature to describe the objects of interest and tend to be unstable when processing challenging videos. To address this issue, we propose a particle filter tracker based on multi-feature joint sparse representation. The main idea of our algorithm is to partition each particle region into multiple overlapping image fragments. Every local fragment of a candidate is sparsely represented as a linear combination of all the atoms of a dynamically updated dictionary template, and is reconstructed only from the local fragments of the dictionary template located at the same position. The weights of the particles are determined by their reconstruction errors to realize particle filter tracking. Our method simultaneously enforces structural sparsity and considers the interactions among particles by using mixed-norm regularization. We further extend the sparse representation module of our tracker to a multiple kernel joint sparse representation module, which is efficiently solved by a kernelizable accelerated proximal gradient (KAPG) method. Both qualitative and quantitative evaluations demonstrate that the proposed algorithm is competitive with state-of-the-art trackers on challenging benchmark video sequences with occlusion, rotation, shifting and illumination changes.
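For reference, a generic joint sparse coding objective with an $\ell_{2,1}$ mixed norm, of the kind used to couple the representations of multiple features or particles; it is a hedged stand-in, not necessarily the paper's exact formulation or its kernelized extension:

\[
\min_{W}\; \tfrac{1}{2}\,\lVert X - D W\rVert_F^2 \;+\; \lambda\,\lVert W\rVert_{2,1},
\qquad
\lVert W\rVert_{2,1} = \sum_{i}\Big(\sum_{j} W_{ij}^2\Big)^{1/2},
\]
where the columns of $X$ collect the observations (e.g. fragments of the candidate particles), $D$ is the dictionary of template fragments, and the row-wise $\ell_{2,1}$ norm drives the same few dictionary atoms to be selected jointly across observations.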