ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2016, Vol. 53 ›› Issue (8): 1819-1828. doi: 10.7544/issn1000-1239.2016.20160197

Special Topic: 2016 Frontier Technologies in Data Mining

• Artificial Intelligence •

  • Supported by: National Basic Research Program of China (973 Program) (2014CB340405, 2013CB329602); National Key Research and Development Program of China (2016YFB1000902); National Natural Science Foundation of China (61173008, 61232010, 61272177, 61303244, 61402442); Beijing Natural Science Foundation (4154086)

SparkCRF: A Parallel Implementation of CRFs Algorithm with Spark

Zhu Jizhao1,2, Jia Yantao2, Xu Jun2, Qiao Jianzhong1, Wang Yuanzhuo2, Cheng Xueqi2

  1(College of Computer Science and Engineering, Northeastern University, Shenyang 110819); 2(Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190) (zhujzh.paper@gmail.com)
  • Online: 2016-08-01



Abstract: Conditional random fields (CRFs) have been successfully applied to a wide range of text analysis tasks in natural language processing (NLP), such as sequence labeling, Chinese word segmentation, named entity recognition, and relation extraction. Traditional single-node CRFs tools face serious challenges when dealing with large-scale text: a personal computer quickly hits its performance bottleneck, and even a single server cannot process such workloads efficiently, while upgrading the server's hardware is not always feasible due to cost constraints. To tackle these problems, following the idea of "divide and conquer", we design and implement SparkCRF, a distributed CRFs implementation that runs in a cluster environment on top of Apache Spark. We perform three experiments using the NLPCC2015 and the 2nd International Chinese Word Segmentation Bakeoff datasets to evaluate SparkCRF in terms of performance, scalability, and accuracy. The results show that: 1) compared with CRF++, SparkCRF runs almost 4 times faster on our cluster in the sequence labeling task; 2) it scales well as the number of working cores is adjusted; and 3) SparkCRF achieves accuracy comparable to state-of-the-art CRF tools such as CRF++ in text analysis tasks.
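The "divide and conquer" training pattern described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not SparkCRF's actual code: plain Python lists stand in for Spark's RDD partitions, and a toy least-squares objective stands in for the CRF log-likelihood gradient (which a real implementation would compute per sequence with the forward-backward algorithm). All function names here are hypothetical.

```python
from functools import reduce

def split_into_partitions(data, n):
    """The 'divide' step: split the corpus into n roughly equal partitions,
    as Spark would distribute an RDD across worker nodes."""
    return [data[i::n] for i in range(n)]

def local_gradient(weights, partition):
    """Per-partition gradient of a toy least-squares objective.
    Stands in for the CRF log-likelihood gradient that each worker
    would compute over its share of the training sequences."""
    grad = [0.0] * len(weights)
    for x, y in partition:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for j, xi in enumerate(x):
            grad[j] += err * xi
    return grad

def aggregate(grads):
    """The 'conquer' step: sum the partial gradients on the driver
    (Spark's reduce / treeAggregate)."""
    return reduce(lambda a, b: [ai + bi for ai, bi in zip(a, b)], grads)

def train(data, dim, n_partitions=4, lr=0.1, iters=100):
    """Batch gradient descent driven by the map/reduce pattern above."""
    weights = [0.0] * dim
    partitions = split_into_partitions(data, n_partitions)
    for _ in range(iters):
        grads = [local_gradient(weights, p) for p in partitions]  # map
        total = aggregate(grads)                                  # reduce
        weights = [w - lr * g / len(data) for w, g in zip(weights, total)]
    return weights
```

Because the gradient of a log-linear model's objective decomposes as a sum over training examples, summing the per-partition gradients yields exactly the full-batch gradient, so the result is independent of how the corpus is split across workers.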

Key words: big data, machine learning, distributed computing, Spark, conditional random fields (CRFs)
