Ensemble Learning of ELM Regressors Based on l1-regularization
Abstract
Recently, the extreme learning machine (ELM) has been proposed for single-hidden-layer feedforward neural networks (SLFNs); it not only provides good generalization performance but also maintains an extremely fast learning speed. However, choosing the hidden-layer weights randomly inevitably leads to unstable generalization performance. To address this deficiency, SERELM (sparse ensemble regressors of ELM) is proposed, which sparsely combines a number of unstable ELM regressors. On one hand, experimental results on standard time-series datasets show that SERELM not only provides better generalization performance than a single ELM regressor, but also outperforms two related ensemble methods. On the other hand, it is generally accepted that measuring diversity is important in ensemble learning. Many researchers have focused on diversity, yet how to define and measure it remains an open problem: many diversity measures have been proposed, but none is generally accepted. Facing this dilemma, the proposed SERELM circumvents the problem through l1-norm regularization, abandoning explicit diversity measurement altogether. The experimental results show that: 1) l1-norm regularization automatically assigns relatively large weights to the relatively accurate ELM regressors; 2) negative correlation is largely ineffective for measuring diversity in regression applications.
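The scheme described above can be sketched in a few lines of NumPy. The details here are illustrative assumptions, not the paper's exact configuration: the hidden-layer size, the sigmoid activation, the toy dataset, and the use of a simple ISTA solver for the l1-regularized combination weights are all choices made only for this sketch. The structure, however, follows the abstract: several ELM regressors are trained with random input weights and least-squares output weights, and their predictions are combined with weights learned under an l1-norm penalty, so that weights of less useful regressors shrink toward (often exactly to) zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=20):
    """One ELM regressor: random input weights, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer (assumed)
    beta = np.linalg.pinv(H) @ y             # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def sparse_ensemble_weights(P, y, lam=0.01, n_iter=500):
    """l1-regularized combination weights for the base predictions P,
    minimizing (1/2n)||Pw - y||^2 + lam*||w||_1 by ISTA (illustrative solver)."""
    n, m = P.shape
    w = np.zeros(m)
    L = np.linalg.norm(P.T @ P, 2) / n       # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(n_iter):
        g = P.T @ (P @ w - y) / n            # gradient of the squared-error term
        w = w - step * g
        # soft-thresholding: the l1 penalty drives small weights exactly to zero
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

# toy regression task (illustrative, not one of the paper's datasets)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

models = [train_elm(X, y) for _ in range(10)]
P = np.column_stack([elm_predict(m, X) for m in models])  # base predictions
w = sparse_ensemble_weights(P, y)
y_hat = P @ w                                             # sparse ensemble output
```

Because each base ELM draws its input weights at random, the individual regressors vary in accuracy; the lasso-style combination step is what lets the ensemble weight the accurate ones heavily without any explicit diversity measure.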