
    Semi-Supervised Classification Based on Transformed Learning

    Abstract: In recent years, graph-based semi-supervised classification has been one of the research hotspots in machine learning and pattern recognition. Methods of this kind generally mine the information hidden in the data by constructing a graph and classify unlabeled samples using the structural information of that graph, so the performance of semi-supervised classification depends heavily on the quality of the graph, especially on the graph-construction method and the quality of the data. To address this problem, we propose a semi-supervised classification algorithm based on transformed learning (TLSSC). Unlike most existing semi-supervised classification algorithms, which learn the graph from raw features, TLSSC seeks a transformed representation and performs graph learning and label propagation on that representation. Specifically, the algorithm establishes a unified joint optimization framework consisting of three parts: 1) transformed learning maps the raw data into a transformed space; 2) following the idea of data self-expression, a graph is learned in the transformed space; 3) labels are propagated on the graph. These three steps alternate and reinforce one another, avoiding the sub-optimal solutions caused by a low-quality graph. Extensive experiments on face and object data sets show that the proposed TLSSC algorithm outperforms other state-of-the-art algorithms in most cases.
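The three-step loop described in the abstract (transformed learning, graph learning by self-expression, and label propagation on the learned graph) can be sketched as below. This is a minimal illustrative sketch, not the paper's actual optimization: the soft-thresholded orthogonal transform update, the ridge-regularized closed-form self-expression, and the Zhou-style label propagation are all assumed stand-ins for the corresponding steps of TLSSC, and all function and variable names are invented for illustration.

```python
import numpy as np

def tlssc_sketch(X, Y, n_iters=10, alpha=0.9, lam=0.1):
    """Illustrative TLSSC-style loop (assumed update rules, not the paper's).

    X : (d, n) data matrix, one sample per column.
    Y : (n, c) label-indicator matrix; all-zero rows mark unlabeled samples.
    Returns a length-n array of predicted class indices.
    """
    d, n = X.shape
    rng = np.random.default_rng(0)
    T = rng.standard_normal((d, d))  # transform operator (assumed square)

    for _ in range(n_iters):
        # 1) Transformed learning: map raw data into the transformed space.
        Z = T @ X
        # Soft-threshold to encourage sparse transformed coefficients.
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)
        # Refit T as the closest orthogonal map toward Z (Procrustes step).
        U, _, Vt = np.linalg.svd(Z @ X.T)
        T = U @ Vt

        # 2) Self-expression in the transformed space: Z ≈ Z C,
        #    here solved in ridge-regularized closed form (a simplification).
        G = Z.T @ Z
        C = np.linalg.solve(G + lam * np.eye(n), G)
        W = (np.abs(C) + np.abs(C.T)) / 2  # symmetric affinity graph

    # 3) Label propagation on the learned graph (Zhou et al.-style closed form).
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12)))
    S = Dinv @ W @ Dinv                     # normalized affinity
    F = np.linalg.solve(np.eye(n) - alpha * S, (1 - alpha) * Y)
    return F.argmax(axis=1)

# Toy usage: two clusters along orthogonal directions, one labeled point each.
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(0, 0.1, (2, 10)) + np.array([[3.0], [0.0]]),
               rng.normal(0, 0.1, (2, 10)) + np.array([[0.0], [3.0]])])
Y = np.zeros((20, 2))
Y[0, 0] = 1.0    # one labeled sample for class 0
Y[10, 1] = 1.0   # one labeled sample for class 1
pred = tlssc_sketch(X, Y)
```

Because the three updates share the transformed representation Z, alternating them lets a better transform improve the graph and a better graph (via propagated labels, in the full method) refine the transform, which is the mutual-reinforcement the abstract describes.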

       
