ISSN 1000-1239 CN 11-1777/TP


Study of a Neural Network Ensemble Algorithm for Small Data Sets

Li Kai1 and Huang Houkuan2

  1. School of Mathematics and Computer Science, Hebei University, Baoding 071002
  2. Institute of Computational Intelligence, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044
  (likai@mail.hbu.edu.cn)
  • Online: 2006-07-15

Abstract: Ensemble learning has become an active topic in machine learning because it can dramatically improve the generalization performance of a classifier. This paper studies neural network classifier ensembles for small data sets and presents an ensemble approach, Novel-NNE, suited to such data. To increase ensemble diversity, Novel-NNE generates a diverse data set and adds it to the training set, so that diverse individual neural network classifiers are created. Several combination methods are then evaluated for Novel-NNE on UCI benchmark data sets. Experimental results show that Novel-NNE with the relative majority vote and the Bayes combination methods achieves higher predictive accuracy than with the behavior-knowledge-space (BKS) combination method.

Key words: neural network ensemble, small data set, diversity, generalization
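The relative majority (plurality) vote mentioned in the abstract can be sketched as follows. This is a generic illustration of the fusion rule, not the paper's implementation: the label names and the five-member ensemble are made up for the example, and ties are broken by first occurrence among the voters, a detail the abstract does not specify.

```python
from collections import Counter

def relative_majority_vote(predictions):
    """Plurality vote: return the label predicted by the most ensemble
    members; no absolute majority (>50%) is required. Ties break by
    first occurrence among the voters (Counter preserves insertion order)."""
    return Counter(predictions).most_common(1)[0][0]

# Each inner list holds one test sample's predictions from 5 ensemble members.
ensemble_outputs = [
    ["A", "A", "B", "A", "C"],  # "A" wins with 3 of 5 votes
    ["B", "C", "C", "B", "C"],  # "C" wins with only a relative majority
]
fused = [relative_majority_vote(p) for p in ensemble_outputs]
```

A Bayes combination would instead weight each member's vote by class-conditional probabilities estimated from its confusion matrix on a validation set, rather than counting votes equally.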