ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2018, Vol. 55 ›› Issue (8): 1717-1725. doi: 10.7544/issn1000-1239.2018.20180197

Special Topic: 2018 Frontier Advances in Data Mining

• Artificial Intelligence •



  • Supported by: This work was supported by the National Natural Science Foundation of China (61432008, 61503178).

A Measurable Bayesian Network Structure Learning Method

Qi Xiaolong1,2, Gao Yang1, Wang Hao1, Song Bei1, Zhou Chunlei3, Zhang Youwei3

  1 (Department of Computer Science and Technology, Nanjing University, Nanjing 210046); 2 (Department of Electronics and Information Engineering, Yili Normal University, Yining, Xinjiang 835000); 3 (Jiangsu Frontier Electric Technology Co. Ltd., Nanjing 211102)
  • Online: 2018-08-01


Abstract: In this paper, a Bayesian network structure learning method via variable ordering based on mutual information (BNS_vo-learning) is presented. It consists of two components: metric information matrix learning and a "lazy" heuristic strategy. The metric information matrix characterizes the degree of dependency between variables and implies a comparison of dependency strengths, which effectively resolves misjudgments caused by variable order during the independence-testing process. Guided by the metric information matrix, the "lazy" heuristic strategy selectively adds variables to the conditioning set, effectively avoiding high-order tests and reducing the total number of tests. We theoretically prove the reliability of the new method and experimentally demonstrate that it searches significantly faster than other search procedures, and that BNS_vo-learning extends easily to small and sparse data sets without sacrificing the quality of the learned structure.
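The core quantities described in the abstract can be illustrated with a small sketch: an empirical pairwise mutual-information matrix over discrete variables, and an ordering of candidate conditioning variables by dependence strength so that the most dependent variables are tried first. This is an illustrative approximation under assumed conventions, not the authors' implementation; all function names here are hypothetical.

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # (c/n) * log( p(x,y) / (p(x)p(y)) ), with counts substituted
        mi += (c / n) * log((c * n) / (px[x] * py[y]))
    return mi

def metric_information_matrix(data):
    """Symmetric pairwise MI matrix over the columns of `data` (list of rows).

    A stand-in for the paper's metric information matrix: entry (i, j)
    quantifies the dependency between variables i and j.
    """
    cols = list(zip(*data))
    d = len(cols)
    m = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1, d):
            m[i][j] = m[j][i] = mutual_information(cols[i], cols[j])
    return m

def ordered_candidates(m, target, candidates):
    """Order candidate conditioning variables for `target` by decreasing MI,
    so a lazy strategy adds the most strongly dependent variables first."""
    return sorted(candidates, key=lambda v: m[target][v], reverse=True)
```

For example, with binary data where variable 0 always equals variable 1 and variable 2 is independent of both, `ordered_candidates(m, 0, [2, 1])` places variable 1 before variable 2, since its mutual information with the target is log 2 rather than 0.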

Key words: Bayesian network structure, mutual information, conditional independence test, variable order, false positive node, false negative node