ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2020, Vol. 57 ›› Issue (8): 1639-1649. doi: 10.7544/issn1000-1239.2020.20200219

Special Topic: 2020 Data Mining and Knowledge Discovery

• Artificial Intelligence •




Adaptive Neighborhood Embedding Based Unsupervised Feature Selection

Liu Yanfang1,2, Li Wenbin1, Gao Yang1   

  1. 1(State Key Laboratory for Novel Software Technology (Nanjing University), Nanjing 210023);2(College of Mathematics and Information Engineering, Longyan University, Longyan, Fujian 364012)
  • Online: 2020-08-01
  • Supported by: 
    This work was supported by the National Key Research and Development Program of China (2017YFB0702600, 2017YFB0702601), the National Natural Science Foundation of China (61806096), the Education Scientific Research Project of Young Teachers of Fujian Province (JAT170577, JAT190743), and the Science and Technology Project of Longyan City (2019LYF13002).



Abstract: Unsupervised feature selection algorithms can effectively reduce the dimensionality of high-dimensional unlabeled data, which not only lowers the time and space complexity of data processing but also helps avoid overfitting of the learning model. However, most existing unsupervised feature selection algorithms use the k-nearest neighbor method to capture the local geometric structure of the data, ignoring the problem of uneven data distribution. To address this problem, an adaptive neighborhood embedding based unsupervised feature selection (ANEFS) algorithm is proposed. The algorithm determines the number of neighbors of each sample according to the distribution of the dataset and then constructs a sample similarity matrix. Meanwhile, an intermediate matrix that maps the high-dimensional space to a low-dimensional space is introduced, and the Lagrange multiplier method is used to optimize the objective function. Experimental results on six UCI datasets show that the proposed algorithm selects representative feature subsets with higher clustering accuracy and normalized mutual information.
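The abstract contrasts a fixed-k neighbor graph with one whose neighborhood size adapts to the local data distribution. The sketch below is a generic illustration of that idea only, not the ANEFS rule from the paper (whose exact neighbor-selection criterion, similarity weights, and objective are not given here): each sample's neighborhood size is chosen between assumed bounds `k_min` and `k_max` from a simple local-density proxy, and a Gaussian-weighted similarity matrix is built on those adaptive neighborhoods.

```python
import numpy as np

def adaptive_knn_similarity(X, k_min=3, k_max=10):
    """Build a similarity graph with a per-sample neighborhood size.

    Illustrative sketch only: the neighborhood-size rule (rank the
    samples by a local-density proxy and interpolate k between k_min
    and k_max) and the Gaussian weights are generic choices, not the
    construction used by ANEFS.
    """
    n = X.shape[0]
    # Pairwise Euclidean distances; exclude self-distances.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    order = np.argsort(d, axis=1)  # neighbors sorted by distance

    # Local-density proxy: mean distance to the k_min closest samples
    # (smaller value = denser region).
    local = np.take_along_axis(d, order[:, :k_min], axis=1).mean(axis=1)
    # Map each sample's density rank to a neighborhood size in
    # [k_min, k_max]; denser samples get fewer neighbors here.
    rank = np.argsort(np.argsort(local))          # 0 = densest sample
    ks = k_min + (rank * (k_max - k_min)) // max(n - 1, 1)

    # Gaussian similarity restricted to each adaptive neighborhood.
    S = np.zeros((n, n))
    sigma = np.median(d[np.isfinite(d)])
    for i in range(n):
        nb = order[i, :ks[i]]
        S[i, nb] = np.exp(-d[i, nb] ** 2 / (2 * sigma ** 2))
    return np.maximum(S, S.T)                     # symmetrize
```

A matrix like this would then serve as the input to a graph-based feature-selection objective; the paper's intermediate high-to-low-dimensional mapping matrix and its optimization are beyond what this sketch covers.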

Key words: k-nearest neighbor, adaptive neighborhood, manifold learning, feature selection, unsupervised learning