ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2018, Vol. 55 ›› Issue (10): 2321-2330. doi: 10.7544/issn1000-1239.2018.20170748

• Artificial Intelligence •




A Feature Selection Method for Small Samples

Xu Hang1, Zhang Kai1, Wang Wenjian1,2   

  1. 1(School of Computer and Information Technology, Shanxi University, Taiyuan 030006); 2(Key Laboratory of Computational Intelligence and Chinese Information Processing (Shanxi University), Ministry of Education, Taiyuan 030006)
  • Online: 2018-10-01



Abstract: For small samples, common machine learning algorithms often fail to obtain good results, because the feature dimension of small-sample data is usually larger than the number of samples and irrelevant or redundant features often exist. Reducing the feature dimension through feature selection is an effective way to solve this problem. This paper proposes a filter feature selection method based on mutual information for small samples. First, a feature grouping criterion based on mutual information is defined; it considers both the correlation between features and the class and the redundancy among different features, and the features are grouped according to it. Then the feature with maximal correlation to the class in each group is chosen to compose a candidate feature subset, which also keeps the time complexity of the algorithm low. After that, the grouping-based selection is combined with the Boruta algorithm to determine the optimal feature subset automatically from the candidate feature subset, so that the feature dimension can be reduced greatly. Compared with five classical feature selection algorithms, experimental results on benchmark data sets with three kinds of classifiers demonstrate that the feature subset selected by the proposed method achieves better classification performance and running efficiency.
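The grouping-then-select idea in the abstract can be illustrated with a minimal sketch. This is not the paper's exact criterion: the pairwise-MI redundancy threshold, the equal-width binning used to discretize features, and the greedy seeding order are all illustrative assumptions, and the subsequent Boruta refinement step is not reproduced here. The sketch uses scikit-learn's `mutual_info_classif` for feature-class relevance and `mutual_info_score` for feature-feature redundancy.

```python
# Illustrative sketch of mutual-information-based feature grouping:
# features with high pairwise MI are grouped together (redundancy),
# and the feature with maximal MI with the class (relevance) is kept
# from each group to form the candidate subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def group_and_select(X, y, redundancy_threshold=0.3):
    """Greedily group features by pairwise MI, then keep the most
    class-relevant feature per group. Threshold/binning are assumptions."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)  # MI(feature; class)
    # Discretize continuous features so pairwise MI is well defined.
    Xd = np.array([np.digitize(col, np.histogram_bin_edges(col, bins=10))
                   for col in X.T]).T
    assigned = np.full(n_features, -1)
    groups = []
    # Seed each group with the most relevant not-yet-assigned feature.
    for seed in np.argsort(-relevance):
        if assigned[seed] != -1:
            continue
        gid = len(groups)
        members = [seed]
        assigned[seed] = gid
        for j in range(n_features):
            if (assigned[j] == -1 and
                    mutual_info_score(Xd[:, seed], Xd[:, j]) > redundancy_threshold):
                assigned[j] = gid
                members.append(j)
        groups.append(members)
    # Candidate subset: per group, the feature most relevant to the class.
    return sorted(max(g, key=lambda j: relevance[j]) for g in groups)

# Small-sample setting: 60 samples, 30 features, many redundant.
X, y = make_classification(n_samples=60, n_features=30, n_informative=5,
                           n_redundant=10, random_state=0)
subset = group_and_select(X, y)
print(subset)
```

In the paper's method, this candidate subset would then be passed to Boruta, which compares each candidate against randomized shadow features to confirm or reject it automatically.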

Key words: small samples, feature selection, mutual information, feature grouping, filter algorithm