    Su Jindian, Ouyang Zhifan, Yu Shanshan. Aspect-Level Sentiment Classification for Sentences Based on Dependency Tree and Distance Attention[J]. Journal of Computer Research and Development, 2019, 56(8): 1731-1745. DOI: 10.7544/issn1000-1239.2019.20190102

    Aspect-Level Sentiment Classification for Sentences Based on Dependency Tree and Distance Attention

    Abstract: Current attention-based approaches to aspect-level sentiment classification usually neglect the contexts of aspects and the distance features between words and aspects, which makes it difficult for the attention mechanism to learn suitable attention weights. To address this problem, a dependency tree and distance attention-based model, DTDA, is proposed for aspect-level sentiment classification. DTDA first extracts, from the dependency tree of the sentence, the dependency subtree (aspect sub-sentence) that contains the aspect's modification information, and uses bidirectional GRU networks to learn contextual representations of the sentence and the aspect. Position weights are then determined from the syntactic distance between each word and the aspect, i.e. the length of the shortest path between them in the dependency tree, and are combined with the relative distance to build sentence representations that carry both semantic and distance information. An attention mechanism generates the aspect-related sentiment feature representation, which is finally merged with the sentence-level context and fed to a softmax layer for classification. Experimental results show that DTDA achieves results comparable to current state-of-the-art methods on the two SemEval 2014 benchmark datasets, Laptop and Restaurant. With word vectors pre-trained on domain-related data, DTDA reaches a precision of 77.01% on Laptop and 81.68% on Restaurant.
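
    The abstract describes the syntactic distance as the length of the shortest path between a word and the aspect in the sentence's dependency tree, but does not give the exact position-weighting formula. The Python sketch below illustrates the general idea under stated assumptions: the parse is given as head indices, the aspect is represented by a single head token (aspect_idx), and the linear decay used to map distances to weights is a hypothetical choice, not necessarily the paper's formula.

from collections import deque

def syntactic_distances(heads, aspect_idx):
    # Length of the shortest path from every token to the aspect token,
    # treating the dependency tree as an undirected graph.
    # heads[i] is the index of token i's head word, or -1 for the root.
    n = len(heads)
    adj = [[] for _ in range(n)]
    for child, head in enumerate(heads):
        if head >= 0:
            adj[child].append(head)
            adj[head].append(child)
    dist = [None] * n
    dist[aspect_idx] = 0
    queue = deque([aspect_idx])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def position_weights(dist):
    # Map syntactic distances to weights in (0, 1]; words closer to the
    # aspect get larger weights. This linear decay is an illustrative
    # assumption, not the exact formula from the paper.
    max_d = max(d for d in dist if d is not None) + 1
    return [0.0 if d is None else 1.0 - d / max_d for d in dist]

# Example: "The battery life is good", aspect phrase "battery life"
# (head token "life", index 2), with a hand-written dependency parse.
heads = [2, 2, 4, 4, -1]           # The->life, battery->life, life->good, is->good, good = root
dist = syntactic_distances(heads, aspect_idx=2)
print(dist)                        # [1, 1, 0, 2, 1]
print(position_weights(dist))      # [0.67, 0.67, 1.0, 0.33, 0.67] (approximately)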

       
