
A Lightweight Noise-Label Learning Method Based on Online Distillation

Abstract: Training deep learning models on noisy data containing corrupted labels is an active research topic in machine learning. Studies have shown that deep learning models are prone to overfitting when trained on noisy data. Recently, a method combining meta-learning with label correction has enabled models to adapt better to noisy data and mitigate overfitting. However, this meta-label correction approach depends on the performance of the underlying model, and lightweight models do not generalize well on noisy data. To address this problem, this paper combines meta-learning with online distillation and proposes KDMLC (knowledge distillation-based meta-label correction learning), a lightweight noise-label learning method. KDMLC treats the meta label correction (MLC) model, composed of a deep neural network and a multilayer perceptron, as a teacher that corrects noisy labels and guides the training of a lightweight student model. At the same time, it adopts a bilevel optimization strategy to train the teacher and strengthen its generalization ability, thereby generating higher-quality pseudo-labels for training the lightweight model. Experiments show that at high noise levels KDMLC improves accuracy by 5.50 percentage points over MLC; with Cutout data augmentation on the CIFAR10 dataset, KDMLC improves accuracy by 9.11 percentage points over MLC at high noise levels; and on the real-world noisy dataset Clothing1M, KDMLC also outperforms other methods, verifying its feasibility and effectiveness.
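The abstract describes two computations: a label-correction head that maps the teacher's features plus the noisy label to a corrected soft label, and a student objective that blends cross-entropy on the corrected labels with a distillation term against the teacher. The following is a minimal NumPy sketch of those two pieces only; all function names, shapes, and the temperature/weighting parameters are illustrative assumptions, and the paper's actual architecture and bilevel meta-update of the correction MLP on a clean meta-set are not reproduced here.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def correct_labels(teacher_feats, noisy_onehot, mlp_w):
    """Label-correction head (hypothetical single-layer form): an MLP maps
    the teacher's features concatenated with the noisy one-hot label to a
    corrected soft-label distribution. In KDMLC this head would be trained
    via bilevel optimization on a small clean meta-set (omitted here)."""
    x = np.concatenate([teacher_feats, noisy_onehot], axis=1)
    return softmax(x @ mlp_w)

def student_loss(student_logits, soft_labels, teacher_logits, T=2.0, alpha=0.5):
    """Student objective: cross-entropy against the corrected soft labels,
    blended with a temperature-scaled KL distillation term toward the
    teacher's predictions. T and alpha are assumed hyperparameters."""
    p = softmax(student_logits)
    ce = -(soft_labels * np.log(p + 1e-12)).sum(axis=1).mean()
    pt = softmax(teacher_logits / T)
    ps = softmax(student_logits / T)
    kd = (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(axis=1).mean() * T * T
    return alpha * ce + (1 - alpha) * kd

# Toy usage with random tensors: 4 samples, 8 features, 3 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
noisy = np.eye(3)[[0, 1, 2, 0]]            # noisy one-hot labels
w = rng.normal(size=(8 + 3, 3))            # correction-head weights
soft = correct_labels(feats, noisy, w)     # rows sum to 1
loss = student_loss(rng.normal(size=(4, 3)), soft, rng.normal(size=(4, 3)))
```

The corrected soft labels act as pseudo-labels for the lightweight student, so both loss terms pull the student toward the teacher's (corrected) view of the data rather than toward the raw noisy labels.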

       
