    Zhang Wenjun, Jiang Liangxiao, Zhang Huan, Chen Long. A Two-Layer Bayes Model: Random Forest Naive Bayes[J]. Journal of Computer Research and Development, 2021, 58(9): 2040-2051. DOI: 10.7544/issn1000-1239.2021.20200521

    A Two-Layer Bayes Model: Random Forest Naive Bayes


      Abstract: Text classification is an essential task in natural language processing. The high dimensionality and sparsity of text data bring many problems and challenges to text classification. Naive Bayes (NB) is widely used in text classification due to its simplicity, efficiency, and comprehensibility, but its attribute conditional independence assumption is rarely met in real-world text data, which affects its classification performance. To weaken the attribute conditional independence assumption required by NB, scholars have proposed a variety of improved approaches, mainly including structure extension, instance selection, instance weighting, feature selection, and feature weighting. However, all of these approaches construct NB classification models based on independent term features, which restricts their classification performance to a certain extent. In this paper, we try to improve the naive Bayes text classification model by feature learning and thus propose a two-layer Bayes model called random forest naive Bayes (RFNB). RFNB is divided into two layers. In the first layer, a random forest (RF) is used to learn high-level features of term combinations from the original term features. The learned features are then passed to the second layer, where they are one-hot encoded and used to construct a Bernoulli naive Bayes model. Experimental results on a large number of widely used text datasets show that the proposed RFNB significantly outperforms existing state-of-the-art naive Bayes text classification models and other classical text classification models.
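      The two-layer pipeline described in the abstract can be sketched with scikit-learn. This is only an illustration of the idea, not the authors' implementation: here the "high-level features of term combinations" learned in the first layer are taken to be the leaf index each tree of the random forest assigns to a document, which are then one-hot encoded and fed to a Bernoulli naive Bayes model in the second layer. The toy data and labels are invented for the example.

      ```python
      # A minimal sketch of the RFNB two-layer idea (an assumption about the
      # exact feature-learning step, not the paper's reference implementation).
      # Layer 1: a random forest learns features from the original term features;
      # each tree's leaf index serves as a learned "term combination" feature.
      # Layer 2: leaf indices are one-hot encoded, then a Bernoulli NB is fit.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.naive_bayes import BernoulliNB
      from sklearn.preprocessing import OneHotEncoder

      rng = np.random.default_rng(0)
      # Toy bag-of-words data: 200 documents, 50 binary term features.
      X = rng.integers(0, 2, size=(200, 50))
      y = (X[:, 0] | X[:, 1]).astype(int)  # toy labels driven by two terms

      # Layer 1: random forest as a feature learner.
      rf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
      leaves = rf.apply(X)  # shape (n_samples, n_trees): leaf index per tree

      # Layer 2: one-hot encode the learned features, then Bernoulli NB.
      enc = OneHotEncoder(handle_unknown="ignore")
      X_new = enc.fit_transform(leaves)
      nb = BernoulliNB().fit(X_new, y)

      print(nb.score(X_new, y))  # training accuracy of the second layer
      ```

      In practice the first layer would be trained on a training split and the encoder reused (with `handle_unknown="ignore"`) to transform unseen documents, so that test-time leaf indices never seen during training are simply dropped.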
