    Hao Xiulan, Tao Xiaopeng, Xu Hexiang, Hu Yunfa. A Strategy to Class Imbalance Problem for kNN Text Classifier[J]. Journal of Computer Research and Development, 2009, 46(1): 52-61.


    A Strategy to Class Imbalance Problem for kNN Text Classifier


      Abstract: Class imbalance is one of the problems plaguing practitioners in the data mining community, and various strategies have been proposed to handle it. When the training set is skewed, the popular kNN text classifier tends to mislabel instances of rare categories as common ones, degrading macro F1. To alleviate this defect, a novel concept, the critical point (CP) of a text training set, is proposed, its properties are explored, and algorithms are given for computing CP and its lower approximation (LA) and upper approximation (UA). The traditional kNN decision function is then adapted by integrating LA or UA together with the class training sample counts; this version is called the self-adaptive kNN classifier with weight adjustment. To verify its effectiveness, two groups of experiments are carried out. The first group compares the performance of different shrink factors, which can be viewed as a comparison with Tan's work, and confirms that at LA or UA the classifier exhibits better macro F1. The second group compares against random re-sampling, with traditional kNN as the baseline. Experiments on four corpora show that the self-adaptive weighted kNN text classifier outperforms random re-sampling, improving macro F1 markedly. The proposed method is, to some extent, similar to cost-sensitive learning.
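The idea in the abstract can be illustrated with a minimal sketch: a kNN text classifier whose per-class votes are down-weighted for large classes so that rare categories are not swamped. The specific weighting formula below (capping a class's weight at `cp / class_size`, where `cp` stands in for the LA or UA value) is an assumption for illustration only; the paper's actual decision function is not given in the abstract.

```python
# Hedged sketch of a weight-adjusted kNN text classifier.
# ASSUMPTION: the weight w = min(1, cp / class_size) is illustrative;
# the paper integrates LA/UA and class sizes into the decision
# function, but the exact form is not stated in the abstract.
import math
from collections import Counter, defaultdict

def cosine(u, v):
    # Cosine similarity between sparse vectors (dict: term -> tf-idf).
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def weighted_knn(train, doc, k=5, cp=100.0):
    # train: list of (sparse_vector, label) pairs.
    # cp: assumed critical-point value (LA or UA) that shrinks the
    # influence of classes larger than cp.
    sizes = Counter(label for _, label in train)
    neighbors = sorted(train, key=lambda s: cosine(s[0], doc),
                       reverse=True)[:k]
    scores = defaultdict(float)
    for vec, label in neighbors:
        # Down-weight votes from classes with more than cp samples.
        w = min(1.0, cp / sizes[label])
        scores[label] += w * cosine(vec, doc)
    return max(scores, key=scores.get)
```

With `cp` set at or above the largest class size, this degenerates to ordinary similarity-weighted kNN, which matches the paper's use of traditional kNN as the baseline.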

