    Liu Yu, Qin Zheng, Lu Jiang, Shi Zhewen. Multimodal Particle Swarm Optimization for Neural Network Ensemble[J]. Journal of Computer Research and Development, 2005, 42(9): 1519-1526.


    Multimodal Particle Swarm Optimization for Neural Network Ensemble


  Abstract: A multimodal particle swarm optimization algorithm is proposed for neural network ensembles. In each training iteration, an improved fast clustering algorithm dynamically partitions the particles into several clusters in the network weight space, and the best particle of each cluster is identified. The pairwise dissimilarity between these best particles is then measured in the output space, and any two clusters whose best particles differ too little are merged. The procedure therefore yields clusters that differ not only in weight space but also in output space. Each cluster searches the weights of one component network, with its best particle defining that network; the best particles of all clusters together form the ensemble, and the number of component networks is determined automatically by the algorithm. This provides a more direct and effective way of controlling the diversity of the networks. Experimental comparisons with evolutionary ensembles trained by negative correlation learning, bagging, and boosting show that the proposed method considerably improves the generalization ability of the neural network ensemble.
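The cluster-then-merge step described in the abstract can be sketched roughly as follows. This is an illustration, not the paper's actual implementation: plain k-means stands in for the improved fast clustering algorithm, a linear model stands in for a neural network, and the merge threshold `tau` is an assumed parameter.

```python
import numpy as np

def cluster_particles(positions, k, iters=10):
    """Partition particles in weight space with plain k-means, seeded with
    the first k particles for determinism. This is only a stand-in for the
    paper's improved fast clustering algorithm."""
    centers = positions[:k].copy()
    for _ in range(iters):
        # Squared distance of every particle to every cluster center.
        d2 = ((positions[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = positions[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def output_dissimilarity(w1, w2, X, net):
    """Mean squared difference of two candidate networks' outputs on X."""
    return float(np.mean((net(w1, X) - net(w2, X)) ** 2))

def merge_similar_clusters(labels, best, X, net, tau):
    """Merge every pair of clusters whose best particles produce nearly
    identical outputs (dissimilarity below tau), so that the surviving
    clusters differ in output space as well as in weight space."""
    ids = sorted(set(labels.tolist()))
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if a in best and b in best and \
               output_dissimilarity(best[a], best[b], X, net) < tau:
                labels[labels == b] = a   # fold cluster b into cluster a
                del best[b]
    return labels, best

# Toy usage: a linear model w -> X @ w stands in for a neural network.
net = lambda w, X: X @ w
X = np.array([[1.0, 2.0], [3.0, 4.0], [0.5, -1.0]])
labels = np.array([0, 0, 1, 1, 2, 2])
best = {0: np.array([1.0, 0.0]),      # clusters 0 and 1 behave alike...
        1: np.array([1.0, 1e-4]),
        2: np.array([5.0, 5.0])}      # ...cluster 2 does not
labels, best = merge_similar_clusters(labels, best, X, net, tau=0.01)
print(sorted(best))  # → [0, 2]  (clusters 0 and 1 were merged)
```

The surviving cluster-best weight vectors would then define the component networks of the ensemble, so the merge threshold directly controls both the diversity and the number of members.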

       

