    Wang Quan and Chen Songcan. Ensemble Learning of ELM Regressors Based on l1-regularization[J]. Journal of Computer Research and Development, 2012, 49(12): 2631-2637.


    Ensemble Learning of ELM Regressors Based on l1-regularization

Abstract: The extreme learning machine (ELM) is a recently proposed learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) that combines extremely fast training with good generalization performance. However, because ELM assigns its hidden-layer weights at random, its generalization performance can be unstable. SERELM (sparse ensemble regressors of ELM) compensates for this deficiency by combining several such unstable ELM regressors with sparsely learned weights. On one hand, regression experiments on standard time-series datasets show that SERELM not only outperforms a single ELM regressor but also two other recently proposed ensemble methods. On the other hand, the quality of an ensemble is generally held to depend closely on the diversity of its members, yet how to define and measure diversity for regression remains an open problem: many diversity measures have been proposed, and none of them is generally accepted. SERELM sidesteps this dilemma through l1-norm regularization, dispensing with explicit diversity measurement altogether. The experimental results further show that: 1) l1-norm regularization automatically assigns relatively large weights to the more accurate ELM regressors; 2) negative correlation among individual regressors, a measure commonly used in regression, is largely ineffective for measuring diversity.
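
The abstract describes the approach only at a high level, so the following is a minimal, hypothetical Python sketch of the general scheme it names: several ELM regressors with random hidden weights, whose predictions are then combined with sparse weights learned under an l1 penalty. The names (ELMRegressor, sparse_elm_ensemble, ensemble_predict), the tanh activation, the held-out validation split, the use of scikit-learn's Lasso as the l1-regularized combiner, the non-negativity constraint on the weights, and the value of alpha are all illustrative assumptions rather than the paper's actual implementation.

    import numpy as np
    from sklearn.linear_model import Lasso

    class ELMRegressor:
        """Minimal single-hidden-layer ELM: random hidden weights, analytic output weights."""
        def __init__(self, n_hidden=50, rng=None):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng() if rng is None else rng

        def fit(self, X, y):
            # Hidden-layer weights and biases are drawn at random and never trained,
            # which is the source of ELM's speed and of its instability.
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)                    # hidden-layer activations
            self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # least-squares output weights
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    def sparse_elm_ensemble(X_train, y_train, X_val, y_val, n_models=20, alpha=0.01):
        """Train several ELMs, then learn sparse combination weights for their
        predictions with an l1-regularized least-squares fit on held-out data."""
        models = [ELMRegressor().fit(X_train, y_train) for _ in range(n_models)]
        P = np.column_stack([m.predict(X_val) for m in models])  # one column per member
        combiner = Lasso(alpha=alpha, positive=True, fit_intercept=False).fit(P, y_val)
        return models, combiner.coef_                            # many coefficients end up exactly zero

    def ensemble_predict(models, weights, X):
        P = np.column_stack([m.predict(X) for m in models])
        return P @ weights

Because the l1 penalty drives many of the combination weights exactly to zero, the weaker members drop out of the ensemble, which is consistent with the abstract's observation that l1-regularization automatically gives the more accurate regressors the larger weights.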

       
