    Qi Xiaolong, Gao Yang, Wang Hao, Song Bei, Zhou Chunlei, Zhang Youwei. A Measurable Bayesian Network Structure Learning Method[J]. Journal of Computer Research and Development, 2018, 55(8): 1717-1725. DOI: 10.7544/issn1000-1239.2018.20180197


    A Measurable Bayesian Network Structure Learning Method


      Abstract: In this paper, a Bayesian network structure learning method via variable ordering based on mutual information (BNS-vo-learning) is presented, addressing the order-dependence and high-order-test problems of constraint-based methods. It includes two components: metric information matrix learning and a “lazy” heuristic strategy. The metric information matrix characterizes the degree of dependency among variables and implicitly encodes comparisons of dependency strength, which effectively resolves misjudgments caused by variable ordering during independence testing. Under the guidance of the metric information matrix, the “lazy” heuristic strategy selectively adds variables to the condition set, effectively avoiding high-order tests and reducing the number of tests performed. We theoretically prove the reliability of the new method and experimentally demonstrate that it searches significantly faster than other search processes; moreover, BNS-vo-learning extends easily to small and sparse data sets without losing the quality of the learned structure.
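      The core idea described above, computing a pairwise mutual-information matrix and using it to order candidate variables by dependency strength before independence testing, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; all function names are hypothetical.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete variables."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def metric_matrix(data):
    """Pairwise MI matrix over the columns of `data` (a list of sample tuples).

    This plays the role of the paper's metric information matrix: entry
    (i, j) quantifies the dependence between variables i and j, so the
    matrix also implies an ordering of dependency strengths."""
    cols = list(zip(*data))
    d = len(cols)
    m = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1, d):
            m[i][j] = m[j][i] = mutual_information(cols[i], cols[j])
    return m

def candidate_order(m, i):
    """Other variables sorted by decreasing dependence on variable i --
    the kind of ordering a 'lazy' strategy could follow when choosing
    which variables to admit into the condition set first."""
    return sorted((j for j in range(len(m)) if j != i),
                  key=lambda j: m[i][j], reverse=True)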


