    Xu Hang, Zhang Kai, Wang Wenjian. A Feature Selection Method for Small Samples[J]. Journal of Computer Research and Development, 2018, 55(10): 2321-2330. DOI: 10.7544/issn1000-1239.2018.20170748


    A Feature Selection Method for Small Samples


      Abstract: For small samples, common machine learning algorithms often fail to obtain good results, because the feature dimension of small-sample data is typically larger than the number of samples and irrelevant or redundant features are often present. Reducing the feature dimension through feature selection is an effective way to address this problem. This paper proposes a filter feature selection method based on mutual information for small samples. First, a feature grouping criterion based on mutual information is defined; this criterion considers both the correlation between each feature and the class and the redundancy among different features, and the features are grouped accordingly. Then, the feature with the maximal correlation to the class in each group is chosen to compose a candidate feature subset, which ensures that the time complexity of the algorithm remains low. After that, the Boruta algorithm is applied to the candidate feature subset to determine the optimal feature subset automatically, so that the feature dimension can be reduced greatly. Compared with five classical feature selection algorithms, experimental results on benchmark data sets demonstrate that the feature subset selected by the proposed method achieves better classification performance and running efficiency with three kinds of classifiers.
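      The grouping-and-selection stage described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the redundancy threshold, the greedy grouping order, and the use of scikit-learn's MI estimators (`mutual_info_classif` for feature-class relevance, `mutual_info_regression` for feature-feature redundancy) are all assumptions made for the sketch.

      ```python
      # Sketch: group features by mutual-information redundancy, then keep the
      # most class-relevant feature of each group as a candidate (hypothetical
      # parameters; not the paper's exact procedure).
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

      def group_and_select(X, y, redundancy_threshold=0.5, random_state=0):
          n_features = X.shape[1]
          # Relevance of each feature to the class label.
          relevance = mutual_info_classif(X, y, random_state=random_state)
          order = np.argsort(relevance)[::-1]  # most relevant first
          selected = []
          assigned = np.zeros(n_features, dtype=bool)
          for i in order:
              if assigned[i]:
                  continue
              # The most relevant unassigned feature represents a new group.
              selected.append(int(i))
              assigned[i] = True
              # Features highly redundant with it join the group and are dropped.
              for j in order:
                  if not assigned[j]:
                      red = mutual_info_regression(
                          X[:, [i]], X[:, j], random_state=random_state)[0]
                      if red > redundancy_threshold:
                          assigned[j] = True
          return selected

      # Small-sample setting: fewer samples than is comfortable for 20 features.
      X, y = make_classification(n_samples=60, n_features=20, n_informative=4,
                                 random_state=0)
      candidates = group_and_select(X, y)
      print(len(candidates), candidates)
      ```

      In the full method, the Boruta stage would then be run on `X[:, candidates]` to pick the final subset automatically, e.g. with the third-party `BorutaPy` package wrapping a random forest (an assumption about tooling; the paper does not prescribe a specific Boruta implementation).
      
      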

       

