ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 September 2013, Volume 50 Issue 9
Deep Learning: Yesterday, Today, and Tomorrow
Yu Kai, Jia Lei, Chen Yuqiang, and Xu Wei
2013, 50(9):  1799-1804. 
Machine learning is an important area of artificial intelligence. Since the 1980s, huge successes have been achieved in algorithms, theory, and applications. Since 2006, a new machine learning paradigm named deep learning has gained popularity in the research community and has become a major technology trend for big data and artificial intelligence. Deep learning simulates the hierarchical structure of the human brain, processing data from lower levels to higher levels and gradually composing more and more semantic concepts. In recent years, Google, Microsoft, IBM, and Baidu have invested considerable resources in the R&D of deep learning, making significant progress on speech recognition, image understanding, natural language processing, and online advertising. In terms of contribution to real-world applications, deep learning is perhaps the most successful progress made by the machine learning community in the last ten years. In this article, we give a high-level overview of the past and current state of deep learning, discuss the main challenges, and share our views on its future development.
Trust Strength Aware Social Recommendation Method
Guo Lei, Ma Jun, and Chen Zhumin
2013, 50(9):  1805-1813. 
With the advent of social networks, trust-aware recommendation methods have been well studied. Most of these algorithms assume that trusted users have similar tastes. However, this assumption ignores the fact that two users may establish a trust connection for social purposes or simply out of etiquette, which may not imply similar opinions on the same item. Motivated by this observation, a novel trust strength aware social recommendation method, StrengthMF, is first proposed. Compared with previous methods, this new approach assumes that a trust relation does not necessarily guarantee similar preferences between two users. Specifically, StrengthMF learns the trust strength and distinguishes users with more similar interests through a shared user latent feature space, i.e., the user latent feature space underlying the trust network is the same as that underlying the rating matrix. This allows us to acquire a better understanding of the relationship between trust relations and rating similarity. To validate the learned trust strength, an InfluenceMF method is then proposed, which retrains SocialMF with the estimated trust relations. Experimental results on the real-world product rating dataset Epinions show that the proposed approaches outperform state-of-the-art algorithms in terms of RMSE and MAE, and that the learned trust strength can further improve traditional recommendation methods.
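To illustrate the shared-latent-space idea behind StrengthMF, here is a minimal numpy sketch that factorizes a rating matrix and a trust matrix through one shared user factor; the function name, objective, and update rule are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def shared_latent_mf(R, T, k=10, lam=0.1, alpha=0.5, lr=0.01, iters=200):
    """Toy sketch (not the paper's model): factorize the rating matrix R
    (users x items) and the trust matrix T (users x users) through one
    shared user factor U, so trust strength is learned in the same latent
    space as ratings."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((n_users, k))   # shared user factors
    V = 0.1 * rng.standard_normal((n_items, k))   # item factors
    W = 0.1 * rng.standard_normal((n_users, k))   # trust-side factors
    R_mask, T_mask = R > 0, T > 0                 # observed entries only
    for _ in range(iters):
        E_r = (U @ V.T - R) * R_mask              # rating residuals
        E_t = (U @ W.T - T) * T_mask              # trust residuals
        U -= lr * (E_r @ V + alpha * E_t @ W + lam * U)
        V -= lr * (E_r.T @ U + lam * V)
        W -= lr * (alpha * E_t.T @ U + lam * W)
    return U, V, W   # U @ W.T approximates the learned trust strength
```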
A Long Tail Distribution Constrained Recommendation Method
Yin Guisheng, Zhang Yanan, Dong Hongbin, and Dong Yuxin
2013, 50(9):  1814-1824. 
The sales of online shopping follow a long tail distribution, so the total sales of unpopular goods are very large, and recommendations for unpopular goods are as important as recommendations for popular ones. However, many existing recommendation methods focus only on popular goods and assign an average recommendation weight to unpopular goods, which have only a small number of ratings; as a result, it is hard to bring unpopular goods to users' attention and their sales are depressed. It is therefore important to raise the recommendation weight of unpopular goods. In this paper, a long tail distribution constrained recommendation (LTDCR) method is proposed to raise the recommendation weight of unpopular goods appropriately. The weight of recommendation in LTDCR is calculated from the similarity relationships among users, where similarity is determined by the similarity of users' behaviors and is propagated under the constraint of distrust relationships. To raise the recommendation weight of unpopular goods, the weights are constrained by the long tail distribution, of which an accurate description is also given in this paper. Experimental results on a dataset containing a large number of unpopular goods show that LTDCR needs a smaller training set to improve the effectiveness of recommendations for unpopular goods.
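As a flavor of the long-tail constraint, the sketch below damps popularity so that long-tail items receive proportionally larger recommendation weights; the power-law damping form and the parameter gamma are assumptions for illustration, not LTDCR's actual constraint:

```python
import numpy as np

def long_tail_reweight(scores, popularity, gamma=0.3):
    """Illustrative sketch, not LTDCR's exact constraint: damp each item's
    score by a power of its popularity so that long-tail (rarely rated)
    items receive proportionally larger recommendation weights."""
    scores = np.asarray(scores, dtype=float)
    pop = np.asarray(popularity, dtype=float)
    w = scores / np.power(pop + 1.0, gamma)   # boost decays with popularity
    return w / w.sum()                        # normalized weights
```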
Extracting Relations from the Web via Weakly Supervised Learning
Chen Liwei, Feng Yansong, and Zhao Dongyan
2013, 50(9):  1825-1835. 
In the era of big data, large-scale information extraction has become an important topic in natural language processing and information retrieval. In particular, weak supervision, a novel framework that requires no human involvement and can be easily adapted to new domains, is receiving increasing attention. Current work on weak supervision is aimed primarily at English, with conventional features such as word-segment-based lexical features and dependency-based syntactic features. However, lexical features of this kind often suffer from data sparsity, while syntactic features rely heavily on the availability of syntactic analysis tools. This paper proposes to use n-gram features, which relieve to some extent the data sparsity problem of lexical features. We also observe that n-gram features are important for multilingual relation extraction; in particular, they can make up for syntactic features in languages where syntactic analysis tools are not reliable. To deal with the quality of the training data used in weakly supervised learning models, a bootstrapping approach, co-training, is introduced into the framework, and we study strategies for combining the outputs of the different training views. Experimental results on both English and Chinese datasets show that the proposed approach effectively improves the performance of weak supervision in both languages and has the potential to work well in multilingual scenarios with more languages.
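The co-training loop used to clean the training data can be sketched generically as follows, using scikit-learn classifiers as stand-ins for the paper's models; the two views would be, e.g., n-gram versus lexical/syntactic features, and the confidence-based selection rule is an assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1_l, X2_l, y, X1_u, X2_u, rounds=5, k=10):
    """Generic co-training sketch over two feature views: each round, the
    more confident of the two view classifiers pseudo-labels the k most
    confident unlabeled examples, which join the shared labeled pool."""
    for _ in range(rounds):
        c1 = LogisticRegression(max_iter=1000).fit(X1_l, y)
        c2 = LogisticRegression(max_iter=1000).fit(X2_l, y)
        if len(X1_u) == 0:
            break
        p1, p2 = c1.predict_proba(X1_u), c2.predict_proba(X2_u)
        conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
        pick = np.argsort(conf)[-k:]                    # most confident pool items
        pseudo = np.where(p1.max(axis=1) >= p2.max(axis=1),
                          c1.predict(X1_u), c2.predict(X2_u))[pick]
        X1_l = np.vstack([X1_l, X1_u[pick]])
        X2_l = np.vstack([X2_l, X2_u[pick]])
        y = np.concatenate([y, pseudo])
        keep = np.setdiff1d(np.arange(len(X1_u)), pick)  # shrink unlabeled pool
        X1_u, X2_u = X1_u[keep], X2_u[keep]
    return c1, c2
```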
Efficiently Training Ball Vector Machine in Online Way
Yang Haifeng, Liu Yuan, Xie Zhenping, and Ding Xuedong
2013, 50(9):  1836-1842. 
In many real classification problems, labeled samples arrive step by step, so how to train classifiers efficiently and progressively deserves serious attention. The ball vector machine (BVM), compared with the support vector machine and the core vector machine, can be trained more efficiently with comparable performance. Based on the intrinsic features of BVM, an online BVM (OBVM) is presented in this paper, and its effectiveness is supported analytically. OBVM transforms a binary classification problem into two one-class problems, in which each class is modeled by a hyper-center. The two hyper-centers are incrementally updated using the same strategy as in BVM, and the perpendicular bisector plane of the two hyper-centers is used to classify data. Experimental results on several standard classification datasets and a practical application indicate that OBVM has remarkable advantages in the time complexity of training one sample at a time while achieving high classification accuracy. OBVM thus provides a new, generally effective solution for training classifiers in an online way and can be applied to many practical problems.
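A minimal stand-in for the OBVM idea is sketched below, assuming a running-mean center update in place of BVM's actual enclosing-ball update: one prototype per class, updated incrementally, with classification by the side of the centers' perpendicular bisector:

```python
import numpy as np

class TwoCenterOnline:
    """Minimal stand-in for OBVM: keep one prototype (hyper-center) per
    class, update it incrementally as labeled samples arrive, and classify
    by which side of the centers' perpendicular bisector a point falls on
    (equivalently, by the nearest center). The real BVM update maintains
    an enclosing ball; a running mean is used here purely for illustration."""
    def __init__(self, dim):
        self.c = np.zeros((2, dim))   # one center per class
        self.n = np.zeros(2)          # samples seen per class

    def partial_fit(self, x, label):  # label in {0, 1}
        self.n[label] += 1
        self.c[label] += (x - self.c[label]) / self.n[label]

    def predict(self, x):
        d = np.linalg.norm(self.c - x, axis=1)
        return int(np.argmin(d))      # nearest center = bisector side
```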
Acquiring Topical Concept Network from Multiple Web Information Sources
Xu Yan, Jin Zhi, Li Ge, and Wei Qiang
2013, 50(9):  1843-1854. 
Wikipedia provides conceptual descriptions for specific entries and organizes these entries into a concept category system. It has become a common information source for automatic knowledge acquisition. However, relying on Wikipedia's information alone is not enough for acquiring the relationships between concepts, even though such relationships are one of the important components of symbolic knowledge representation; other kinds of information sources are needed for this purpose. We therefore propose an approach for acquiring the relationships between concepts from multiple Web information sources, forming a topical concept network. The approach proceeds in the following steps. First, given a concept, named the topical term, it obtains a group of concepts and the links between them from the Wikipedia category system; the concept group is centered on the topical term by some measure of relevance. Second, it exploits a search engine to collect related Web information as references for discovering and verifying the relationships between the concepts in the group, integrating well-established methods from information retrieval and natural language processing. Finally, it produces a topical concept network in which the nodes are the concepts obtained in the first step and the edges are the relationships obtained in the second step. Experiments on several topical terms from different domains show the feasibility and effectiveness of the proposed approach.
Three-Step Bayesian Combination of SVM on Regularization Path
Wang Mei and Liao Shizhong
2013, 50(9):  1855-1864. 
Model combination integrates and leverages multiple models in the hypothesis space to improve the reliability and generalization performance of learning systems. In this paper, a novel three-step method for model combination of support vector machines (SVM) based on the regularization path is proposed. The Lh-risk consistency of model combination for SVM is defined and proved, giving the mathematical foundation of the proposed method. Traditionally, the model set for SVM combination is constructed by data sampling; in our method, it is constructed from the SVM regularization path, trained on the same original training set. First, the initial model set is obtained according to the piecewise linearity of the regularization path. Then the average of GACV is applied to exclude models with poor performance and prune the initial model set; this pruning policy improves not only the computational efficiency of combination but also the generalization performance. In the testing or prediction phase, the input-sensitive combination model set is determined with the minimal neighborhood method, and Bayesian combination is performed. Compared with traditional model combination methods for SVM, the proposed method does not need to tune the regularization parameter for each individual SVM model, so the training procedure is simplified considerably. Experimental results demonstrate the effectiveness and efficiency of three-step Bayesian combination of SVM on the regularization path.
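Once the pruned model set is fixed, the final combination step can be sketched as posterior-weighted averaging of the models' decision values; the use of validation log-likelihoods as posterior weights below is an illustrative assumption, not the paper's exact Bayesian scheme:

```python
import numpy as np

def bayesian_combine(decision_values, val_log_liks):
    """Posterior-weighted combination sketch: decision_values has one row
    per retained SVM model (columns = test points); val_log_liks gives each
    model's validation log-likelihood, used here as an illustrative
    stand-in for the paper's Bayesian weights. Returns class predictions."""
    w = np.exp(val_log_liks - np.max(val_log_liks))   # softmax-style weights
    w /= w.sum()
    return np.sign(w @ decision_values)               # combined prediction
```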
Non-Redundant Multi-View Clustering Based on Information Bottleneck
Lou Zhengzheng, Ye Yangdong, and Liu Ruina
2013, 50(9):  1865-1875. 
Typical clustering algorithms output a single partition of the data. In real-world applications, however, data can often be interpreted in many different ways and admits different reasonable partitions from multiple views. Instead of committing to one clustering solution, we introduce a novel algorithm, NrMIB (non-redundant multi-view information bottleneck), which provides the user with several non-redundant clustering solutions from multiple views. Our approach employs the information bottleneck (IB) method, which maximizes the relevant information preserved by the clustering results, to ensure the quality of the clustering solutions, while the mutual information between the new clustering labels and the known data partitions is minimized to ensure that the new solutions are non-redundant. By adopting mutual information and MeanNN differential entropy to estimate the preserved information, NrMIB can analyze both co-occurrence data and Euclidean space data. The algorithm is also suitable for high-dimensional data and can discover both linear and non-linear cluster shapes. We perform experiments on synthetic data, face recognition, and document clustering to assess our method against a wide range of clustering algorithms from the literature. The experimental results show that NrMIB can discover the multiple reasonable partitions residing in the data and that its performance is superior to the three non-redundant multi-view clustering algorithms examined here.
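The non-redundancy term that NrMIB minimizes is the mutual information between the new clustering labels and a known partition; a direct plug-in estimate for two label vectors looks like this:

```python
import numpy as np

def clustering_mutual_info(a, b):
    """Plug-in estimate of the mutual information I(A; B) between two
    cluster label vectors -- the quantity NrMIB drives toward zero so a
    new clustering stays non-redundant w.r.t. a known partition."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for i in np.unique(a):
        for j in np.unique(b):
            p_ij = np.mean((a == i) & (b == j))   # joint probability
            if p_ij > 0:
                mi += p_ij * np.log(p_ij / (np.mean(a == i) * np.mean(b == j)))
    return mi
```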
A Weighted Self Adaptive Similarity Measure
Xiao Yu and Yu Jian
2013, 50(9):  1876-1882. 
Cluster analysis is one of the important techniques in data mining. A key problem for any clustering algorithm is the dissimilarity or similarity measure, on which the clustering results directly depend; this is especially true for algorithms based on a similarity matrix, such as spectral clustering. Spectral clustering is a recently developed approach that, compared with traditional partitioning algorithms, is not limited to spherical clusters and can successfully discover irregularly shaped clusters. The Gaussian kernel is the most commonly used similarity measure in the spectral clustering literature. In this paper, building on the Gaussian kernel similarity measure and its modified variants, we propose a weighted self-adaptive similarity measure. The proposed measure not only describes similarity well for datasets whose clusters have different densities, but also reduces the similarities between outliers (noise) and other data points. Experimental results show that the proposed measure gives a better description of the similarities between data points in various types of datasets, leading to better clustering results.
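For concreteness, here is a baseline self-adaptive similarity in the spirit of local-scaling kernels, where each point gets its own bandwidth from its k-th neighbor distance; the paper's weighting scheme is its own, so treat this as a reference point rather than the proposed measure:

```python
import numpy as np

def self_adaptive_similarity(X, k=7):
    """Self-tuning Gaussian similarity: each point i gets its own bandwidth
    sigma_i (distance to its k-th nearest neighbor), so clusters of
    different densities are handled and isolated outliers, whose sigma is
    large but whose neighbors' sigmas are small, get small similarities."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    sigma = np.sort(D, axis=1)[:, k]                            # k-th neighbor distance
    S = np.exp(-(D ** 2) / (sigma[:, None] * sigma[None, :] + 1e-12))
    np.fill_diagonal(S, 0.0)                                    # no self-similarity
    return S
```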
Planar Object Recognition Based on Characteristic Ratio
Jia Qi, Tian Xiaoyu, Fan Xin, Luo Zhongxuan, and Guo He
2013, 50(9):  1883-1892. 
Pattern recognition is an important research field in artificial intelligence. The stability of recognition under projective and affine transformations, especially under severe deformation, has long been recognized as an important and difficult problem. In this paper, a novel geometric invariant named the characteristic ratio (CHR), which is resistant to affine deformation, is proposed. A new feature descriptor is constructed on this basis: it represents an image by the relationships of collinear points lying on a series of straight lines crossing the image. The characteristic ratio is calculated from the positions of the collinear points, and the image is represented by a sequence of characteristic ratio spectra (CHRS). Finally, the dynamic time warping (DTW) algorithm is employed to compare the similarity of spectra and to find the pixel-level correspondence between two planar objects. Experiments show that the proposed method offers not only good resistance to severe affine deformation but also good discriminating ability on similar images.
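The spectrum-matching step relies on standard dynamic time warping; a textbook implementation for two 1-D spectra is:

```python
import numpy as np

def dtw_distance(s, t):
    """Standard dynamic time warping between two 1-D sequences, the matcher
    the paper uses to compare characteristic ratio spectra (CHRS)."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```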
Traffic Sign Recognition Based on Parameter-free Detector and DT-CWT
Gu Mingqin and Cai Zixing
2013, 50(9):  1893-1901. 
For traffic signs that are difficult to detect in real traffic environments, a traffic sign detection and recognition algorithm is proposed. First, the main colors of the traffic sign are segmented, the region of interest is expanded, and its edge is extracted. The edge is then roughly partitioned by line fitting and removal of spurious points, the turning-angle curvature is computed, and the vertex types are classified according to the relations among the curvatures of the vertices. Standard shapes such as circles, triangles, and rectangles are detected by a parameter-free detector, and the candidate regions are sent to a shape classifier to determine their type and exclude interference. Finally, the type of traffic sign is recognized by the dual-tree complex wavelet transform and two-dimensional independent component analysis. Experimental results show that the detection and recognition rates of the proposed algorithm are high under conditions such as occluded signs, uneven illumination, and color distortion, and that it achieves real-time processing.
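The first stage, color segmentation, can be sketched with an HSV threshold (shown here for red signs, whose hue wraps around the axis; the threshold values are illustrative, not the paper's):

```python
import cv2

def red_sign_mask(bgr):
    """Sketch of the color segmentation stage for red signs: threshold in
    HSV space, combining two ranges because red wraps around the hue axis.
    Threshold values are illustrative, not the paper's."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    low = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))      # low-hue reds
    high = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))  # high-hue reds
    return cv2.bitwise_or(low, high)
```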
Learning Parts for Object Detection
Chen Yaodong and Li Renfa
2013, 50(9):  1902-1913. 
Previous work on part-based models for object detection has concentrated on searching for locally discriminative features that represent objects in terms of parts. There is little research on how to select parts effectively and on what kinds of parts improve object detection. This paper investigates the problem of learning object parts from weakly labeled data and proposes an adaptive approach to part selection. Without part-level supervision, for each training example the approach first detects seed windows of parts using single-part classifiers and then localizes parts in local regions via an image-specific distribution. The selected parts, which contain discriminative and relevant features, are used to train the global parameters. To address partial object occlusion in training examples, a pruning strategy is introduced to reduce the proportion of noisy parts during the learning iterations. Experimental results on the PASCAL VOC 2007 and 2010 datasets demonstrate that the proposed part learning method improves object detection compared with three classical part models, and that the pruning strategy speeds up the convergence of model learning.
Personalized Fusion Method Based on Finger Vein and Finger Contour
Xi Xiaoming, Yin Yilong, Yang Gongping, and Meng Xianjing
2013, 50(9):  1914-1923. 
Finger vein is a promising biometric for authentication due to its many advantages. However, when capturing the finger vein image, pose variation of the finger or illumination variation may cause verification failure. To overcome these limitations of a single biometric, multiple biometrics can be combined to improve performance. Compared with fusing other biometrics, fusing finger vein and finger contour has the advantage that acquiring the two biometric images is convenient: both the finger vein image and the finger contour image can be obtained from a single finger vein capturing device. In this paper, we propose a personalized fusion method based on finger vein and finger contour at the score level. The samples are first classified based on the original matching scores, and a classification confidence score (CCS) is then obtained from the classification result. Compared with the traditional score, the CCS carries additional classification information that may be useful for the final fusion. In addition, since subjects differ from one another, a more effective personalized weighted fusion based on CCS is proposed. Finally, we conduct extensive experiments on our database to evaluate the effectiveness of the proposal.
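A score-level fusion of this kind can be sketched as a per-subject weighted sum of normalized scores; the CCS computation itself is omitted and the weighting form below is an assumption for illustration:

```python
import numpy as np

def personalized_fusion(vein_scores, contour_scores, w_vein):
    """Score-level fusion sketch: min-max normalize each modality's match
    scores and combine them with a per-subject weight w_vein learned on
    enrollment data. The paper's CCS adds classifier confidence before
    this step; the weighted-sum form here is an illustrative assumption."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return w_vein * norm(vein_scores) + (1.0 - w_vein) * norm(contour_scores)
```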
Face Recognition Based on Null-Space Kernel Discriminant Analysis
Chen Dayao, Chen Xiuhong, and Dong Changjian
2013, 50(9):  1924-1932. 
For high-dimensional data, extracting effective features is important for pattern recognition. Null-space linear discriminant analysis (NLDA) shows desirable performance, but it is still a linear technique in nature. To extract nonlinear features of a dataset effectively, a novel null-space kernel discriminant analysis (NKDA) is proposed for face recognition. First, a kernel function is used to project the original samples into an implicit feature space by a nonlinear kernel mapping. Then the discriminant vectors in the null space of the kernel within-class scatter matrix are extracted by a single economical QR decomposition. Finally, one Cholesky decomposition is used to obtain the orthogonal discriminant vectors in the kernel space. Compared with NLDA, NKDA not only achieves better performance but is also applicable to large-sample-size problems. Furthermore, based on NKDA, an incremental NKDA method is developed that can accurately update the discriminant vectors when new samples are inserted into the training set. Experiments on the ORL and Yale face databases and a PIE subset demonstrate the effectiveness and efficiency of the algorithms, and show that they can reduce the dimensionality of the data and improve its discriminant ability.
Image Segmentation Based on Higher Order Markov Random Field
Liu Lei, Shi Zhiguo, Su Haoru, and Li Hong
2013, 50(9):  1933-1942. 
Image segmentation is an ill-posed problem: accurate segmentation is possible only if users supply enough constraint information, which is usually obtained interactively, with foreground and background brushes used to label part of the image pixels. In recent years, thanks to great progress in graph cuts for solving the Markov random field (MRF) problem, many interactive image segmentation tools based on graph cuts have been developed. Among these, segmentation from a bounding box is especially attractive for its user-friendliness and prospective applications. Based on the popular grid MRF and superpixel MRF models, we introduce higher-order potentials into MRF-based image segmentation. Higher-order potentials capture not only pixel-level segmentation accuracy but also local-range pixel correlation, so adding them to the Gibbs energy function greatly enhances segmentation performance. Compared with the pairwise MRF, the higher-order MRF model describes images with higher precision. Experimental results on an image database show that our method outperforms GrabCut. Finally, we extend our segmentation-from-a-box method to video object segmentation and obtain good results.
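As a baseline for what the higher-order terms extend, the pairwise (Potts) Gibbs energy that graph cuts minimize can be written as follows; the higher-order potentials the paper adds would contribute additional region terms on top:

```python
import numpy as np

def gibbs_energy(labels, unary, pairs, lam=1.0):
    """Pairwise (Potts) Gibbs energy: per-pixel unary costs plus a penalty
    lam for every neighboring pair with different labels. unary is an
    (n_pixels, n_labels) cost table; pairs is an iterable of (i, j)
    neighbor index pairs."""
    e_unary = unary[np.arange(len(labels)), labels].sum()
    e_pair = sum(labels[i] != labels[j] for i, j in pairs)  # Potts smoothness
    return e_unary + lam * e_pair
```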
A Partial Comparative Cross Collections LDA Model
Tan Wentang, Wang Zhenwen, Yin Fengjing, Ge Bin, and Xiao Weidong
2013, 50(9):  1943-1953. 
Comparative text mining (CTM), such as spatiotemporal and cross-cultural text mining, is concerned with extracting common and unique themes from a set of comparable text collections. State-of-the-art cross-collection topic models suffer from the important flaw that they can only analyze the topics common to the document collections. We introduce a generative topic model, PCCLDA (partial comparative cross-collections LDA), for multi-collection CTM that detects both common topics and collection-specific topics, and that models text more exactly based on hierarchical Dirichlet processes. We present a Gibbs sampler for model inference and evaluate the model with a variety of qualitative and quantitative measures, including model perplexity and log-likelihood. PCCLDA discovers both common and collection-specific topics, and shows improvements in model perplexity and held-out likelihood over two main CTM topic models.
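One of the quantitative measures, perplexity, follows directly from the held-out log-likelihood:

```python
import numpy as np

def perplexity(heldout_log_lik, n_tokens):
    """Perplexity of a topic model on held-out text: the exponential of
    the negative per-token log-likelihood. Lower is better."""
    return float(np.exp(-heldout_log_lik / n_tokens))
```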
Modeling and Diagnosis of the Intelligent World
Wang Nan, Ouyang Dantong, and Sun Shanwu
2013, 50(9):  1954-1962. 
A unified abstraction modeling framework and formal representation help to realize automated reasoning. With the rapid development of the Internet of Things (IoT), various smart or intelligent objects are embedded in the physical world, partly changing the characteristics of the composed entities. Our familiar world is accordingly turning into an intelligent world, which greatly increases the complexity of modeling and reasoning. In this paper, based on the unified knowledge reformulation and abstraction (KRA) model framework presented by Saitta and Zucker, we propose a distinguishable knowledge reformulation and abstraction (dKRA) model suited to the characteristics of the intelligent world. Unlike the KRA model, which represents the intelligent world in a unified way, the proposed dKRA model is formalized through three interrelated sub-models and their relationships. We give definitions and theorems, based on concepts from model-based diagnosis, showing that the diagnosis process can be confined to one or more of the sub-models. Focusing on model verification during the system design phase, the diagnosis algorithm running on the dKRA model of an intelligent world is analyzed from both theoretical and experimental perspectives, showing an improvement over diagnosis running directly on the KRA model of the same intelligent world.
Machine Proofs in Geometry Based on Complex Number Method
Li Tao and Zhang Jingzhong
2013, 50(9):  1963-1969. 
With the rapid development of machine proofs in geometry, more and more automated geometric proving methods have been proposed. But when dealing with difficult geometric problems that involve a great deal of symbolic computation, existing proving programs fail because of information explosion or the complexity of the algorithm. Thus a new proving method, the complex number method, is proposed. Built on the software Mathematica, which has powerful symbolic computation capabilities, a new prover, CNMP (complex number method prover), is created. The CNMP easily solves all 180 geometric problems in TGTP, a Web-based library of geometry problems that aims to become a comprehensive common problem library of significant size with an unambiguous reference mechanism, easily accessible to all researchers in the automated reasoning in geometry community. Experiments show that the CNMP is very effective and that its proofs are satisfyingly readable. In particular, the CNMP can prove some very difficult geometric theorems, such as the five circles theorem, Morley's theorem, the Lemoine circle theorem, Thebault's theorem, and the Brocard circle theorem.
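The paper's prover runs in Mathematica; to give a flavor of the complex number method, here is a tiny symbolic proof in Python/sympy of Varignon's theorem (not one of the paper's examples):

```python
import sympy as sp

# Varignon's theorem via the complex number method: the midpoints of the
# sides of ANY quadrilateral form a parallelogram. The vertices are free
# complex symbols, so the identity below is a symbolic proof, not a
# numerical check.
a, b, c, d = sp.symbols('a b c d', complex=True)
m1, m2, m3, m4 = (a + b)/2, (b + c)/2, (c + d)/2, (d + a)/2
# Opposite sides of the midpoint quadrilateral are equal as vectors:
assert sp.simplify((m2 - m1) - (m3 - m4)) == 0
print("m2 - m1 == m3 - m4: Varignon's parallelogram verified")
```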
Solving Strong Cyclic Planning in Nondeterministic Reversible Planning Domain
Tang Jie, Wen Zhonghua, Wang Quan, and Huang Wei
2013, 50(9):  1970-1980. 
The execution of actions is deterministic under ideal conditions, but in the real world many accidents introduce uncertainty and have a negative impact. In response, a new nondeterministic planning model is established with two added constraints: 1) the execution of every action is reversible; 2) an action executed during an accident can only deviate from the target states. A method for solving strong cyclic planning in this model is proposed. First, the execution of all actions is assumed to occur under ideal conditions; the planning subgraph is converted into a planning subtree and the reachability of each state in the tree is computed. Then the accident situations caused by action execution are considered: if an action executed during an accident cannot reach the target states, the action is deleted and the planning subgraph and subtree are updated. Finally, the algorithm finds a strong cyclic plan by traversing the planning subgraph and subtree. Since some accidents are unpredictable, the algorithm updates only the part of the solution that has become invalid instead of recomputing a solution from scratch. Experimental results show that the algorithm can quickly update the planning solution, with update cost largely independent of problem size.
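The reachability computation at the heart of the method can be sketched as a backward search over the planning graph; the graph encoding below is an assumption for illustration:

```python
from collections import deque

def reachable_states(pred, goals):
    """Backward reachability sketch: pred maps each state to the set of
    states from which some action can reach it. Returns every state from
    which a goal state is reachable -- the kind of check the strong cyclic
    algorithm performs on its planning subgraph and subtree."""
    seen, queue = set(goals), deque(goals)
    while queue:
        s = queue.popleft()
        for p in pred.get(s, ()):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen
```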
Environment Perception for AUV in Uncertain Ocean Environment
Zhang Rubo, Yin Lili, and Gu Hengwen
2013, 50(9):  1981-1991. 
During a mission, an AUV needs to perceive in real time how current uncertain ocean-environment events, the AUV's status, and mission execution affect the task, and to provide trigger conditions for re-planning. An environment perception method for AUVs in uncertain ocean environments is proposed. It is built on the idea that heterogeneous real-world data of different types must be processed by an uncertain event detection layer and an uncertain event recognition layer. Uncertain event detection is based on ontology reasoning and fuzzy logic, and uncertain event recognition is based on Bayesian networks. The results of detection and recognition are used to update an uncertain-environment ontology model extended with probabilistic descriptions. In this way, the method extends the ability of ontologies to represent and reason about uncertainty. Navigation experiments show the effectiveness of the proposed method.
A Matrix-Based Approach for Maintenance of Approximations under the Variation of Object Set
Wang Lei, Li Tianrui, Liu Qing, and Li Min
2013, 50(9):  1992-2004. 
At present, most methods for calculating the upper and lower approximations of a concept assume that the information system is static. In fact, an information system usually varies with time, including variations of the universe, the attribute set, and the attribute values, and each of these variations changes the approximations of a concept in rough sets. How to update the approximations rapidly and efficiently is one of the hot issues in rough-set-based dynamic knowledge discovery. Incremental updating, which fully utilizes pre-existing knowledge, is one of the effective approaches to updating approximations dynamically. In this paper, a matrix-based incremental method for updating approximations under variable precision rough sets is presented from a new viewpoint, for the case where the universe of the information system evolves over time. The corresponding algorithms are designed, their computational time complexities are analyzed, and the algorithms are implemented in MATLAB. Finally, experiments on UCI datasets compare the proposed matrix-based incremental method with the matrix-based non-incremental method; the results demonstrate the feasibility, conciseness, and validity of the proposed incremental method.
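The matrix view of approximations can be sketched as follows for the classical (non-incremental, non-variable-precision) case, which is what the paper's incremental updates accelerate:

```python
import numpy as np

def approximations(R, X):
    """Classical lower/upper approximations of a set X from a Boolean
    indiscernibility relation R (n x n matrix; X is a Boolean membership
    vector). lower[i] is True iff the class [x_i]_R is contained in X;
    upper[i] is True iff [x_i]_R intersects X. The paper's incremental
    updates and variable-precision thresholds build on this matrix view."""
    R, X = np.asarray(R, dtype=bool), np.asarray(X, dtype=bool)
    lower = np.array([not R[i][~X].any() for i in range(len(X))])
    upper = np.array([R[i][X].any() for i in range(len(X))])
    return lower, upper
```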
Artificial Bee Colony Algorithm Based on Inductive Pheromone Updating and Diffusion
Ji Junzhong, Wei Hongkai, Liu Chunnian, and Yin Baocai
2013, 50(9):  2005-2014. 
The artificial bee colony (ABC) algorithm is a search algorithm that simulates the intelligent foraging behavior of a honeybee swarm to solve practical problems. However, the current ABC algorithm has only one behavioral communication channel (dancing), which causes a lack and lag of collaboration among bees and limits the algorithm's solving performance. Inspired by the fact that real bees also exchange information chemically, a new ABC algorithm is proposed that introduces a chemical communication channel based on inductive pheromone, making the algorithm more faithful to information transmission in real bee colonies; it is applied to multidimensional knapsack problems (MKP). By combining the behavioral and chemical communication channels through a scheme of inductive pheromone updating and diffusion, the new algorithm makes the bees cooperate with each other more effectively. Simulation experiments and comparisons on MKP benchmark datasets demonstrate that the new algorithm outperforms the original ABC algorithm. Its performance has also been compared with several typical meta-heuristic search algorithms, and the computational results show that the new ABC algorithm obtains better-quality solutions than all the other approaches.
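The chemical channel can be sketched as a pheromone-biased onlooker selection; the product form and the exponent beta below are illustrative assumptions, not the paper's exact updating and diffusion scheme:

```python
import numpy as np

def onlooker_select(fitness, pheromone, beta=1.0, rng=None):
    """Sketch of the hybrid cue: an onlooker bee picks a food source with
    probability proportional to the dance signal (fitness) times an
    inductive-pheromone signal. The product form and exponent beta are
    illustrative assumptions, not the paper's exact scheme."""
    rng = rng or np.random.default_rng()
    w = np.asarray(fitness, dtype=float) * np.power(pheromone, beta)
    return int(rng.choice(len(w), p=w / w.sum()))
```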
Lake-Energy Optimization Algorithm for Travelling Salesman Problem
Feng Xiang, Ma Meiyi, and Yu Huiqun
2013, 50(9):  2015-2027. 
Lake freezing is a common natural phenomenon in winter. Inspired by this phenomenon, a novel intelligent parallel algorithm, the lake-energy optimization (LEO) algorithm, is proposed in this paper. LEO simulates the process of lake freezing: as the temperature of the lake drops, the water loses energy to the environment and begins to precipitate and freeze once its energy falls below the freezing-point threshold. The approach consists of two major models: the lake-energy model and the wind-blow model. In the lake-energy model, the center energy of the lake, the atmospheric energy, the molecular energy of the lake, and the wind are the main factors affecting the lake's energy, with different weights at each stage of freezing. The wind-blow model plays a supporting role, agitating the lake after freezing to keep the solution from falling into local optima. Furthermore, the correctness, convergence, and effectiveness of the algorithm are verified theoretically through the convergence theorem and Lyapunov's second theorem on stability. Numerous simulations on TSPLIB and comparisons with other classical algorithms illustrate the LEO algorithm's high efficiency, low computational complexity, and strong convergence, which are especially crucial for large-scale distribution problems.