    Zhu Jizhao, Jia Yantao, Xu Jun, Qiao Jianzhong, Wang Yuanzhuo, Cheng Xueqi. SparkCRF: A Parallel Implementation of CRFs Algorithm with Spark[J]. Journal of Computer Research and Development, 2016, 53(8): 1819-1828. DOI: 10.7544/issn1000-1239.2016.20160197


    SparkCRF: A Parallel Implementation of CRFs Algorithm with Spark

    • Abstract: Conditional random fields (CRFs) can be applied to a wide range of text analysis problems in natural language processing (NLP), such as sequence labeling, Chinese word segmentation, named entity recognition, and relation extraction between entities. Traditional CRFs implementations that run on a single node face a series of challenges when processing large-scale text: on the one hand, a personal computer quickly hits a processing bottleneck and cannot cope with the workload; on the other hand, a single server executes the task with low efficiency. Upgrading the server's hardware to raise its computing capacity cannot fundamentally solve the problem for large-scale text analysis tasks. To address this, following the idea of "divide and conquer", we design and implement SparkCRF, a distributed CRFs implementation that runs in a cluster environment on the Apache Spark big data processing framework. Experiments show that SparkCRF delivers efficient computation and good scalability on text analysis tasks, while achieving accuracy on par with the traditional single-node CRF++.

      Abstract: Conditional random fields (CRFs) have been successfully applied to various text analysis tasks, such as sequence labeling, Chinese word segmentation, named entity recognition, and relation extraction in natural language processing. Traditional CRFs tools running on a single-node computer face many challenges when dealing with large-scale texts. For one thing, a personal computer hits a performance bottleneck; for another, a single server cannot carry out the analysis efficiently, and upgrading the server's hardware to increase its computing power is not always feasible due to cost constraints. To tackle these problems, in light of the idea of "divide and conquer", we design and implement SparkCRF, a distributed CRFs implementation that runs in a cluster environment on Apache Spark. We perform three experiments using the NLPCC2015 and the 2nd International Chinese Word Segmentation Bakeoff datasets to evaluate SparkCRF in terms of performance, scalability, and accuracy. The results show that: 1) compared with CRF++, SparkCRF runs almost 4 times faster on our cluster in the sequence labeling task; 2) it scales well as the number of working cores is adjusted; and 3) SparkCRF achieves accuracy comparable to state-of-the-art CRF tools such as CRF++ in text analysis tasks.
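
    The "divide and conquer" strategy described in the abstract maps naturally onto Spark's data-parallel model: training sequences are partitioned across the cluster as an RDD, each partition computes its local contribution to the CRF gradient, and the partial results are aggregated back on the driver before the shared weight vector is updated. The Scala sketch below illustrates this pattern only; it is not the authors' SparkCRF code, and the class names, the feature-vector size, and the stubbed gradientAndLogLik routine are illustrative assumptions.

        import org.apache.spark.sql.SparkSession

        // One labeled token and one training sequence (e.g. a segmented sentence).
        case class Token(word: String, tag: Int)
        case class Sequence(tokens: Array[Token])

        object SparkCrfSketch {

          // Stub: per-sequence gradient and log-likelihood, which a real CRF trainer
          // would compute with the forward-backward algorithm over feature functions.
          def gradientAndLogLik(seq: Sequence, w: Array[Double]): (Array[Double], Double) =
            (new Array[Double](w.length), 0.0)

          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder().appName("SparkCRF-sketch").getOrCreate()
            val sc = spark.sparkContext

            val numFeatures = 100000                // illustrative feature space size
            var weights = new Array[Double](numFeatures)
            val learningRate = 0.1

            // "Divide": the training corpus lives as a partitioned, cached RDD.
            val data = sc.parallelize(Seq.empty[Sequence]).cache()  // load the real corpus here

            for (iter <- 1 to 50) {
              val bw = sc.broadcast(weights)        // ship current weights to the executors

              // "Conquer": every partition sums its local gradients; treeAggregate
              // merges the partial sums back on the driver.
              val zero = (new Array[Double](numFeatures), 0.0)
              val (gradSum, logLik) = data.treeAggregate(zero)(
                (acc: (Array[Double], Double), seq: Sequence) => {
                  val (g, ll) = gradientAndLogLik(seq, bw.value)
                  var k = 0
                  while (k < acc._1.length) { acc._1(k) += g(k); k += 1 }
                  (acc._1, acc._2 + ll)
                },
                (a: (Array[Double], Double), b: (Array[Double], Double)) => {
                  var k = 0
                  while (k < a._1.length) { a._1(k) += b._1(k); k += 1 }
                  (a._1, a._2 + b._2)
                }
              )

              // Simple gradient-ascent step on the driver; a production trainer would
              // plug a proper optimizer (e.g. L-BFGS) in at this point.
              weights = weights.zip(gradSum).map { case (w, g) => w + learningRate * g }
              println(s"iteration $iter, log-likelihood = $logLik")
              bw.destroy()
            }

            spark.stop()
          }
        }

    In this layout only the broadcast weights and the aggregated gradient cross the network each iteration, while the expensive per-sequence work stays local to the executors, which is the property that lets the workload scale with the number of working cores.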

       
