ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2021, Vol. 58 ›› Issue (9): 2040-2051. doi: 10.7544/issn1000-1239.2021.20200521

• Artificial Intelligence •

A Two-Layer Bayes Model: Random Forest Naive Bayes

Zhang Wenjun1, Jiang Liangxiao1,2, Zhang Huan1, Chen Long1

  1(School of Computer Science, China University of Geosciences, Wuhan 430074); 2(Hubei Key Laboratory of Intelligent Geo-Information Processing (China University of Geosciences), Wuhan 430074) (wjzhang@cug.edu.cn)
  • Online: 2021-09-01
  • Supported by:
    The work was supported by the Joint Fund Key Projects of the National Natural Science Foundation of China (U1711267) and the Fundamental Research Funds for the Central Universities (CUGGC03).


Abstract: Text classification is a fundamental task in natural language processing. The high dimensionality and sparsity of text data pose many problems and challenges for text classification. The naive Bayes (NB) model is widely used in text classification because it is simple, efficient, and easy to understand, but its attribute conditional independence assumption is rarely satisfied by real-world text data, which harms its classification performance. To weaken this assumption, scholars have proposed many improved approaches, mainly including structure extension, instance selection, instance weighting, feature selection, and feature weighting. However, all of these approaches build the NB classification model from independent term features, which limits their classification performance to some extent. We therefore try to improve the NB text classification model by feature learning and propose a two-layer Bayes model: random forest naive Bayes (RFNB). RFNB consists of two layers. In the first layer, a random forest learns high-level features of term combinations from the original term features. The learned new features are then fed into the second layer where, after one-hot encoding, they are used to build a Bernoulli naive Bayes model. Experimental results on a large number of widely used text datasets show that the proposed RFNB model significantly outperforms existing state-of-the-art naive Bayes text classification models and other classical text classification models.

Key words: naive Bayes (NB), random forest, feature learning, feature representation, text classification


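The two-layer pipeline described in the abstract can be sketched with off-the-shelf components. This is a minimal illustration of the idea, not the authors' implementation: it assumes the "high-level features of term combinations" in the first layer can be represented by the leaf each document reaches in every tree of a random forest, which are then one-hot encoded and fed to a Bernoulli naive Bayes model in the second layer.

```python
# Hedged sketch of the RFNB idea (assumption: leaf indices stand in for the
# learned term-combination features; the paper's exact construction may differ).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
# Toy stand-in for a high-dimensional, sparse bag-of-words matrix (hypothetical data).
X = (rng.random((200, 50)) < 0.1).astype(float)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Layer 1: a random forest learns high-level features from the original term
# features; rf.apply maps each document to its leaf index in every tree.
rf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
leaves = rf.apply(X)                      # shape: (n_samples, n_trees)

# Layer 2: one-hot encode the leaf indices, then fit Bernoulli naive Bayes
# on the resulting binary indicator features.
enc = OneHotEncoder(handle_unknown="ignore")
Z = enc.fit_transform(leaves)
nb = BernoulliNB().fit(Z, y)

print(nb.score(Z, y))                     # training accuracy of the stacked model
```

At prediction time a new document would pass through the same chain: `rf.apply`, then `enc.transform`, then `nb.predict`. The `handle_unknown="ignore"` setting keeps the encoder from failing on leaf indices unseen during fitting.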
