Abstract:
Text categorization is an important technique in the data mining domain. The extremely high dimensionality of the feature space makes text categorization complex and expensive, so effective dimension reduction methods are strongly desired. Feature selection is widely used to reduce dimensionality, and many feature selection methods have been proposed in recent years. To the authors' best knowledge, however, no existing method performs well on unbalanced datasets. This paper proposes a feature selection framework based on the category distribution difference of features, named category distribution-based feature selection (CDFS). This approach selects features with strong discriminative power using the distribution information of features. At the same time, weights can be flexibly assigned to categories; if larger weights are assigned to rare categories, performance on those categories can be improved. The framework is therefore suitable for unbalanced data and highly extensible. Moreover, OCFS and the feature filter based on category distribution difference can be viewed as special cases of this framework. Several implementations of CDFS are given. Experimental results on the Reuters-21578 corpus and the Fudan corpus (both unbalanced) show that the implementations of CDFS given in this paper outperform IG, CHI and OCFS in both MacroF1 and MicroF1.
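To make the core idea concrete, the following is a minimal, hypothetical sketch of a CDFS-style scoring function. It is not the paper's exact criterion (the abstract does not give the formula); it only illustrates the general scheme described above: score a feature by how far its distribution over categories deviates from uniform, with per-category weights that can be raised for rare categories. The function name, its arguments, and the deviation measure are all assumptions made for illustration.

```python
def cdfs_score(doc_counts, category_weights):
    """Hypothetical CDFS-style score (illustration only, not the paper's formula).

    doc_counts: {category: number of documents in that category containing
                 the feature}.
    category_weights: {category: weight}; assigning larger weights to rare
                 categories biases selection toward features useful for them.

    The score is the weighted absolute deviation of the feature's category
    distribution from the uniform distribution: features concentrated in a
    few categories (strong discriminative power) score high, features spread
    evenly across categories score low.
    """
    total = sum(doc_counts.values())
    if total == 0:
        return 0.0
    uniform = 1.0 / len(doc_counts)
    score = 0.0
    for category, count in doc_counts.items():
        p = count / total  # feature's empirical probability in this category
        score += category_weights.get(category, 1.0) * abs(p - uniform)
    return score

# A feature concentrated in one category scores higher than an evenly spread one.
weights = {"earn": 1.0, "trade": 1.0, "crude": 1.0}
spiky = cdfs_score({"earn": 90, "trade": 5, "crude": 5}, weights)
flat = cdfs_score({"earn": 34, "trade": 33, "crude": 33}, weights)
```

Ranking all features by such a score and keeping the top k is the usual filter-style selection step; the framework's flexibility comes from swapping in different deviation measures and category weightings.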