

    Deep Forest for Multiple Instance Learning

    • Abstract: Multi-instance learning has been widely applied in many fields, such as image retrieval, text classification, and face recognition. Deep neural networks have likewise been applied successfully to a wide range of tasks and problems, and MI-Nets are a successful application of deep neural networks to multi-instance learning. Although MI-Nets perform strongly, their strength lies mainly in image-related tasks; on non-image tasks such as text classification, their performance is unsatisfactory. Deep forest, which has emerged over the last two years, achieves good results on non-image tasks and is favored for having fewer parameters and more stable performance than deep neural networks, so using deep forest to improve multi-instance learning is feasible. However, owing to the structure of deep forest, one cannot simply replace each constituent forest with a bag-level forest; the structure itself must be modified. This paper proposes a new deep forest architecture, multiple instance deep forest (MIDF). In MIDF, each instance in a bag is itself regarded as a bag, so that the class-distribution output of a middle layer can be concatenated with the instances of the original bag and the cascade structure remains valid. The architecture also determines the number of cascade layers automatically. Experimental results show that on image tasks the proposed method performs comparably to MI-Nets, which excel at such tasks, while on text data it outperforms MI-Nets and the other baseline methods.
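The cascade idea summarized in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses scikit-learn's `RandomForestClassifier` as the forest learner, average pooling over instances as the bag-level representation, and training accuracy plateauing as a stand-in for MIDF's automatic depth selection; the names `pool`, `fit_cascade`, and `predict_bags` are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def pool(bag):
    # Bag-level representation: each instance is itself treated as a
    # (singleton) bag, so averaging instance vectors stays well defined.
    return np.asarray(bag).mean(axis=0)

def fit_cascade(bags, y, max_layers=5, n_estimators=50):
    """Grow cascade layers; stop when training accuracy stops improving
    (a simplified stand-in for automatic depth determination)."""
    X = np.stack([pool(b) for b in bags])
    layers, best_acc = [], 0.0
    for _ in range(max_layers):
        forest = RandomForestClassifier(n_estimators=n_estimators,
                                        random_state=0)
        forest.fit(X, y)
        layers.append(forest)
        acc = forest.score(X, y)
        if acc <= best_acc:
            break
        best_acc = acc
        # Cascade step: concatenate this layer's class-distribution
        # output with the bag features for the next layer's input.
        X = np.hstack([X, forest.predict_proba(X)])
    return layers

def predict_bags(layers, bags):
    X = np.stack([pool(b) for b in bags])
    for forest in layers[:-1]:
        X = np.hstack([X, forest.predict_proba(X)])
    return layers[-1].predict(X)
```

The key point mirrored from the paper is the concatenation step: because the middle layer emits a fixed-length class distribution per bag, it can be stacked next to the pooled bag features, keeping each layer's input a valid fixed-length vector.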

       
