Abstract:
Ensemble learning has recently become a major research topic in machine learning. By training and combining a set of accurate and diverse classifiers, ensemble learning provides a novel approach to improving the generalization performance of classification systems. This paper studies architectures and methods for combining multiple classifiers in support vector machine (SVM) ensembles for multi-class classification. After analyzing the defects of the known architectures, namely the multi-class-level SVM ensemble and the binary-class-level SVM ensemble, a two-layer architecture is proposed for constructing SVM ensembles. Fusion methods for the measurement-level output of SVMs are then studied based on evidence theory. Different basic probability assignment functions are defined according to the strategy used for multi-class extension, i.e., one-against-all or one-against-one, and different evidence combination rules are adopted according to the degree of conflict among the evidence. In the one-against-all case, the classical Dempster's rule can be used, while in the one-against-one case a new rule is proposed to combine the heavily conflicting evidence. The experimental results show that the two-layer architecture outperforms the multi-class-level architecture. Moreover, the evidence-theory-based methods can effectively utilize the measurement-level output of the binary SVMs and thereby achieve satisfactory classification accuracy.
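The abstract refers to combining basic probability assignments (BPAs) with the classical Dempster's rule. The paper's own BPA definitions and its new rule for conflicting evidence are not reproduced here; the following is only a minimal sketch of standard Dempster combination over a toy three-class frame, with focal elements represented as frozensets (all names and mass values below are illustrative assumptions, not taken from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) with the
    classical Dempster's rule. Each BPA maps frozenset focal
    elements of the frame of discernment to masses summing to 1."""
    combined = {}
    conflict = 0.0  # total mass falling on the empty intersection
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    norm = 1.0 - conflict  # normalize away the conflicting mass
    return {a: m / norm for a, m in combined.items()}

# Toy example (hypothetical masses): evidence from two binary SVMs
# over the class frame {A, B, C}, in a one-against-all style where
# one classifier supports {A} versus its complement {B, C}.
A, B, C = "A", "B", "C"
m1 = {frozenset({A}): 0.6, frozenset({B, C}): 0.4}
m2 = {frozenset({A}): 0.5, frozenset({A, B, C}): 0.5}
m = dempster_combine(m1, m2)
# Here the conflict K = 0.4 * 0.5 = 0.2, so the combined mass is
# m({A}) = 0.6 / 0.8 = 0.75 and m({B, C}) = 0.2 / 0.8 = 0.25.
```

Note that when evidence is heavily conflicting (K close to 1), the normalization in Dempster's rule is known to behave counter-intuitively, which is the situation the paper's proposed rule for the one-against-one strategy is meant to address.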