    Hu Chaowen, Wu Changxing, Yang Yalian. Extended S-LSTM Based Textual Entailment Recognition[J]. Journal of Computer Research and Development, 2020, 57(7): 1481-1489. DOI: 10.7544/issn1000-1239.2020.20190522

    Extended S-LSTM Based Textual Entailment Recognition

Abstract: Textual entailment recognition aims to automatically determine whether an entailment relationship holds between a given premise and hypothesis (usually two sentences); it is a basic yet challenging task in natural language processing. Current dominant deep learning models usually encode the semantic representations of the two sentences separately rather than treating them as a whole. Moreover, most of them do not exploit both sentence-level global information and ngram-level local information when capturing the semantic relationship between the two sentences. The recently proposed S-LSTM can learn semantic representations of a sentence and its ngrams simultaneously, and has achieved promising results on tasks such as text classification. Motivated by these observations, a model based on an extended S-LSTM is proposed for textual entailment recognition. On the one hand, S-LSTM is extended to learn semantic representations of the premise and hypothesis simultaneously, treating them as a whole; on the other hand, both sentence-level and ngram-level information are used when modeling their semantic relationship, yielding better semantic representations. Experimental results on the English SNLI dataset and the Chinese CNLI dataset show that the proposed model outperforms the baseline models.
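The abstract describes the model only at a high level, so the following is a minimal, hypothetical sketch of the core idea in PyTorch, not the authors' implementation. It shows the two points the abstract makes: the premise and hypothesis are concatenated and encoded as a whole, and per-word (ngram-level) local states are updated in parallel with a shared sentence-level global state. The gating scheme, window size of 1, pooling choices, class count, and the `ExtendedSLSTMSketch` name are all simplifying assumptions introduced here.

```python
# A minimal sketch of the core idea, under the assumptions stated above.
import torch
import torch.nn as nn


class ExtendedSLSTMSketch(nn.Module):
    """S-LSTM-style joint encoder over a premise-hypothesis pair.

    The pair is one token sequence (e.g. "premise [SEP] hypothesis"),
    so both sentences are encoded as a whole: word states capture
    ngram-level local context, a global state captures sentence-level
    context, and both are updated for a fixed number of steps.
    """

    def __init__(self, vocab_size, dim=128, steps=4, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # word update: [left neighbor, self, right neighbor, global]
        self.word_gate = nn.Linear(4 * dim, dim)
        self.word_cell = nn.Linear(4 * dim, dim)
        # global update: [global, mean of all word states]
        self.glob_gate = nn.Linear(2 * dim, dim)
        self.glob_cell = nn.Linear(2 * dim, dim)
        self.steps = steps
        # classify entailment from [global, max-pooled word states]
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, pair_ids):
        # pair_ids: (batch, length) token ids of the concatenated pair
        h = self.embed(pair_ids)          # word states, (B, L, D)
        g = h.mean(dim=1)                 # initial global state, (B, D)
        for _ in range(self.steps):
            # ngram-level local context: left/right neighbors, window 1
            # (torch.roll wraps at the boundaries; a simplification)
            left = torch.roll(h, shifts=1, dims=1)
            right = torch.roll(h, shifts=-1, dims=1)
            glob = g.unsqueeze(1).expand_as(h)   # sentence-level context
            ctx = torch.cat([left, h, right, glob], dim=-1)
            gate = torch.sigmoid(self.word_gate(ctx))
            h = gate * torch.tanh(self.word_cell(ctx)) + (1 - gate) * h
            # the global state pools over every word of BOTH sentences,
            # so premise and hypothesis interact at each step instead of
            # being encoded separately
            gctx = torch.cat([g, h.mean(dim=1)], dim=-1)
            ggate = torch.sigmoid(self.glob_gate(gctx))
            g = ggate * torch.tanh(self.glob_cell(gctx)) + (1 - ggate) * g
        feats = torch.cat([g, h.max(dim=1).values], dim=-1)
        return self.classifier(feats)     # entailment logits, (B, classes)


# Example usage with random ids (vocabulary size is arbitrary here):
# logits = ExtendedSLSTMSketch(vocab_size=30000)(torch.randint(0, 30000, (2, 20)))
```

The shared global node is what makes this a joint encoding: unlike a siamese design that encodes each sentence independently and compares them afterwards, here cross-sentence information flows through the global state at every recurrent step.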

