ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development (计算机研究与发展), 2021, Vol. 58, Issue (1): 60-69. doi: 10.7544/issn1000-1239.2021.20190838

• Artificial Intelligence •




Safe Tri-training Algorithm Based on Cross Entropy

Zhang Yong, Chen Rongrong, Zhang Jing   

  1. (School of Computer & Information Technology, Liaoning Normal University, Dalian, Liaoning 116081)
  • Online: 2021-01-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61772252, 61902165), the Program for Liaoning Innovative Talents in Universities (LR2017044), and the Natural Science Foundation of Liaoning Province (2019-MS-216).



Abstract: Semi-supervised learning methods improve learning performance by exploiting a small amount of labeled data together with a large amount of unlabeled data. The Tri-training algorithm is a classic divergence-based semi-supervised learning method; because it requires neither redundant views of the dataset nor any particular type of base classifier, it has become one of the most widely used divergence-based techniques. However, Tri-training can introduce label noise during learning, which degrades the final model. To reduce the prediction bias that this noise causes on unlabeled data and to learn a better semi-supervised classification model, cross entropy is used in place of the error rate, since it better reflects the gap between the model's predicted distribution and the true distribution, and a convex optimization method is combined with it to suppress label noise and preserve model quality. On this basis, we propose a Tri-training algorithm based on cross entropy, a safe Tri-training algorithm, and a safe Tri-training algorithm based on cross entropy. The effectiveness of the proposed methods is verified on benchmark datasets such as those from the UCI (University of California Irvine) machine learning repository, and their performance is further validated statistically with significance tests. The experimental results show that the proposed semi-supervised learning methods outperform the traditional Tri-training algorithm in classification performance, with the cross-entropy-based safe Tri-training algorithm achieving the highest classification performance and generalization ability.
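The core idea summarized above, scoring pseudo-labels with the cross entropy between two classifiers' predicted distributions rather than a 0/1 error rate, can be sketched as one Tri-training-style round. This is an illustrative reconstruction, not the paper's implementation: the nearest-centroid base learner, the bootstrap resampling, the agreement check, and the quantile cutoff on the cross-entropy scores are all assumptions made to keep the example self-contained.

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """Per-sample cross entropy H(p, q) between two predicted distributions."""
    return -np.sum(p * np.log(np.clip(q, eps, 1.0)), axis=1)

class NearestCentroid:
    """Tiny stand-in base classifier (not from the paper): softmax over
    negative distances to class centroids gives soft predictions."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict_proba(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        z = np.exp(-d)
        return z / z.sum(axis=1, keepdims=True)
    def predict(self, X):
        return self.classes_[self.predict_proba(X).argmax(axis=1)]

rng = np.random.default_rng(0)
# toy two-class data: small labeled set, larger unlabeled pool
X_lab = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y_lab = np.array([0] * 20 + [1] * 20)
X_unlab = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])

# three base classifiers trained on bootstrap samples, as in Tri-training
clfs = []
for _ in range(3):
    idx = rng.choice(len(X_lab), len(X_lab), replace=True)
    clfs.append(NearestCentroid().fit(X_lab[idx], y_lab[idx]))

# one round of updating classifier 0: classifiers 1 and 2 pseudo-label the
# pool; cross entropy between their soft predictions (in place of classic
# Tri-training's 0/1 error rate) scores their consistency, and only the
# most consistent agreeing samples are added as pseudo-labeled data.
p1 = clfs[1].predict_proba(X_unlab)
p2 = clfs[2].predict_proba(X_unlab)
ce = cross_entropy(p1, p2)
agree = clfs[1].predict(X_unlab) == clfs[2].predict(X_unlab)
confident = agree & (ce <= np.quantile(ce, 0.5))

X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, clfs[1].predict(X_unlab[confident])])
clfs[0].fit(X_aug, y_aug)
```

The safe variants described in the abstract would additionally constrain the update (e.g., via convex optimization) so the retrained classifier is never worse than one trained on labeled data alone; that safeguard is omitted here for brevity.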

Key words: semi-supervised learning, Tri-training algorithm, cross entropy, convex optimization, sample labeling