    He Yixiao, Pang Ming, Jiang Yuan. Mondrian Deep Forest[J]. Journal of Computer Research and Development, 2020, 57(8): 1594-1604. DOI: 10.7544/issn1000-1239.2020.20200490

    Mondrian Deep Forest


      Abstract: Most studies about deep learning are built on neural networks, i.e., multiple layers of parameterized differentiable nonlinear modules trained by backpropagation. Recently, deep forest was proposed as a non-NN style deep model, which has far fewer hyperparameters than deep neural networks. It shows robust performance under different hyperparameter settings and across different tasks, and the model complexity can be determined in a data-dependent style. Represented by gcForest, the study of deep forest provides a promising way of building deep models based on non-differentiable modules. However, deep forest is currently a batch learning method, which inhibits its application in many real tasks, e.g., in the context of learning from data streams. In this work, we explore the possibility of building deep forest under the incremental setting and propose Mondrian deep forest. It has a cascade forest structure to do layer-by-layer processing. We further enhance its layer-by-layer processing by devising an adaptive mechanism that adjusts the weights of the original features versus the features transformed by the previous layer, thereby notably mitigating the deficiency of Mondrian forest in handling irrelevant features. Empirical results show that, while inheriting the incremental learning ability of Mondrian forest, Mondrian deep forest significantly improves predictive performance. Using the same default hyperparameter setting, Mondrian deep forest achieves strong performance across different datasets. In the incremental training setting, Mondrian deep forest achieves predictive accuracy close to that of a periodically retrained gcForest while training an order of magnitude faster.
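    The cascade structure described above can be sketched in code. The following is a minimal gcForest-style illustration, not the paper's algorithm: it uses scikit-learn's standard random forests in place of Mondrian forests (so it is not incremental), and the fixed coefficient `alpha` stands in for the paper's adaptive mechanism that weighs original features against the class vectors produced by the previous layer. Function names (`train_cascade`, `predict_cascade`) are hypothetical.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

    def train_cascade(X, y, n_layers=3, alpha=0.5, seed=0):
        """Train a gcForest-style cascade.

        Each layer holds two forests; their class-probability outputs are
        concatenated with the (down-weighted) original features and fed to
        the next layer.  `alpha` plays the role of the adaptive weight
        between original and transformed features (fixed here for brevity).
        """
        layers = []
        features = X
        for _ in range(n_layers):
            forests = [
                RandomForestClassifier(n_estimators=50, random_state=seed),
                ExtraTreesClassifier(n_estimators=50, random_state=seed),
            ]
            probas = []
            for f in forests:
                f.fit(features, y)
                probas.append(f.predict_proba(features))
            layers.append(forests)
            # Augmented input for the next layer: weighted original features
            # plus this layer's class-probability vectors.
            features = np.hstack([alpha * X] + [(1 - alpha) * p for p in probas])
        return layers

    def predict_cascade(layers, X, alpha=0.5):
        """Run the cascade layer by layer; average the last layer's votes."""
        features = X
        for forests in layers:
            probas = [f.predict_proba(features) for f in forests]
            features = np.hstack([alpha * X] + [(1 - alpha) * p for p in probas])
        return np.mean(probas, axis=0).argmax(axis=1)

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)
    layers = train_cascade(X, y)
    pred = predict_cascade(layers, X)
    print("train accuracy:", (pred == y).mean())
    ```

    In the actual method, `alpha` would be adjusted from the data rather than fixed, which is what lets the model downplay irrelevant original features as the cascade deepens.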

       
