
    A Novel Unified Manifold Learning Framework and an Improved Laplacian Eigenmap

    • Abstract: Manifold learning is an important research topic in many fields. By examining a variety of manifold learning methods, this paper proposes a novel unified framework for manifold learning and analyzes the Laplacian eigenmap (LE) method within it. Further, based on this framework, an improved Laplacian eigenmap (ILE) method is proposed. Built on LE and the maximum variance unfolding (MVU) algorithm, ILE preserves the graph-Laplacian character of the manifold while maximizing the difference between any two points. ILE effectively addresses the sensitivity of LE to the choice of neighborhood as well as the heavy computational cost and overly strict local constraints of MVU, and it preserves the clustering property of the data while uncovering its intrinsic features. Experiments demonstrate the effectiveness of ILE.
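    The abstract only names the two ingredients of ILE (keep the graph-Laplacian character, maximize the difference between any two points); the exact formulation is not given here. A minimal sketch, assuming the usual spectral-embedding notation (W: neighborhood weights, D: degree matrix, L = D - W: graph Laplacian, Y = [y_1, ..., y_n]^T: low-dimensional embedding) and a hypothetical trade-off weight lambda, of how such a two-term objective can be written:

        % Hypothetical "maintaining item + expecting item" objective;
        % the paper's actual formulation and constraints may differ.
        \min_{Y}\;
          \underbrace{\operatorname{tr}\!\left(Y^{\top} L\, Y\right)}_{\text{maintaining item (LE-style locality)}}
          \;-\; \lambda\,
          \underbrace{\sum_{i<j} \lVert y_{i}-y_{j} \rVert^{2}}_{\text{expecting item (MVU-style spread)}}
        \quad \text{s.t.}\quad Y^{\top} D\, Y = I .

    Each method covered by the framework can then be read as one particular choice of maintaining item combined with one choice of expecting item.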

       

      Abstract: Manifold learning is crucial in many research fields, such as pattern recognition, data mining, and computer vision. However, little work has focused on developing a common framework that can unify the various approaches. Meanwhile, since the Laplacian eigenmap (LE) is a local manifold learning approach, it is very sensitive to the neighborhood size. Considering all kinds of manifold learning approaches, a novel unified manifold learning framework is proposed in this paper. It consists of two functional items, i.e., the maintaining item and the expecting item. Most approaches can be analyzed and improved within this framework. For illustration, LE is analyzed within the proposed framework, and an improved Laplacian eigenmap (ILE) is then presented. ILE is mainly based on LE and maximum variance unfolding (MVU): the local character of the graph Laplacian, which serves as the maintaining item, is kept, while the variances between any two points, which correspond to the expecting item, are maximized. ILE inherits the advantages of LE and MVU. Compared with LE, it is less sensitive to the neighborhood size, and the overly strict local constraint of MVU is relaxed. Moreover, ILE maintains the clustering property and discovers the intrinsic character of the original data. Several experiments on both toy examples and real data sets are given for illustration.
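      For reference, a minimal sketch of the standard Laplacian eigenmap that ILE starts from (not the paper's ILE itself), assuming a k-nearest-neighbor graph with binary weights; the parameter n_neighbors is exactly the neighborhood size the abstract describes LE as being sensitive to:

        # Standard Laplacian eigenmap (LE) baseline; ILE itself is not reproduced here.
        import numpy as np
        from scipy.linalg import eigh
        from sklearn.datasets import make_swiss_roll
        from sklearn.neighbors import kneighbors_graph

        def laplacian_eigenmap(X, n_components=2, n_neighbors=10):
            # Symmetric k-NN adjacency matrix W (binary weights for simplicity).
            W = kneighbors_graph(X, n_neighbors=n_neighbors, mode='connectivity')
            W = 0.5 * (W + W.T)
            W = W.toarray()
            D = np.diag(W.sum(axis=1))      # degree matrix
            L = D - W                       # unnormalized graph Laplacian
            # Generalized eigenproblem L y = lambda D y; eigenvalue 0 belongs to the
            # constant eigenvector, so keep the next n_components eigenvectors.
            eigvals, eigvecs = eigh(L, D)
            return eigvecs[:, 1:n_components + 1]

        X, _ = make_swiss_roll(n_samples=800, noise=0.05, random_state=0)
        Y = laplacian_eigenmap(X, n_components=2, n_neighbors=12)
        print(Y.shape)                      # (800, 2)

      Rerunning this with different n_neighbors values changes the embedding noticeably, which is the sensitivity to neighborhood size that ILE is designed to reduce.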

       
