    Wei Zhenkai, Cheng Meng, Zhou Xiabing, Li Zhifeng, Zou Bowei, Hong Yu, Yao Jianmin. Convolutional Interactive Attention Mechanism for Aspect Extraction[J]. Journal of Computer Research and Development, 2020, 57(11): 2456-2466. DOI: 10.7544/issn1000-1239.2020.20190748

    Convolutional Interactive Attention Mechanism for Aspect Extraction

    The attention mechanism is a common component in aspect extraction research, but it has two limitations for this task. First, existing attention mechanisms are mostly static attention or self-attention; self-attention is a global mechanism, so it brings irrelevant noise (words that are far from the target word and unrelated to it) into the attention vector. Second, existing attention mechanisms are mostly single-layer and lack interactivity. To address these two limitations, a convolutional interactive attention (CIA) mechanism is proposed in this paper. A bidirectional long short-term memory network (Bi-LSTM) is first used to obtain hidden representations of the words in a target sentence, and the convolutional interactive attention mechanism is then applied for representation learning. The mechanism has two layers: in the first layer, a window limits the number of context words considered for each target word, and these context words are used to compute the target word's attention vector; in the second layer, an interactive attention vector is computed from the attention distribution of the first layer together with all the words in the target sentence. The attention vectors of the two layers are then concatenated, and a conditional random field (CRF) is used to label the aspects. The effectiveness of the proposed method is demonstrated on the official evaluation datasets of the 2014-2016 Semantic Evaluation (SemEval) campaigns. Compared with the baseline, the proposed model improves the F1 score of aspect extraction by 2.21%, 1.35%, 2.22%, and 2.21% on the four datasets, respectively.
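
    The following is a minimal sketch of the two-layer attention idea described in the abstract, not the authors' implementation: layer 1 restricts each target word's attention to a local window, layer 2 lets the layer-1 result attend over the whole sentence, and the two vectors are concatenated before tagging. All names, dimensions, and the exact scoring functions are assumptions, and the CRF decoding layer is omitted for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConvInteractiveAttention(nn.Module):
        """Hypothetical sketch of a convolutional interactive attention tagger."""
        def __init__(self, emb_dim=100, hidden_dim=128, window=3, num_tags=3):
            super().__init__()
            self.window = window  # context words per target word (layer 1)
            self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
            d = 2 * hidden_dim
            self.score1 = nn.Linear(d, d, bias=False)   # layer-1 attention scorer
            self.score2 = nn.Linear(d, d, bias=False)   # layer-2 attention scorer
            self.tag_proj = nn.Linear(2 * d, num_tags)  # emission scores; a CRF would decode these

        def forward(self, emb):                      # emb: (batch, seq_len, emb_dim)
            h, _ = self.bilstm(emb)                  # (batch, seq_len, 2*hidden_dim)
            b, n, d = h.shape

            # Layer 1: windowed ("convolutional") attention -- each target word
            # attends only to neighbours within +/- window positions.
            scores1 = torch.einsum('bid,bjd->bij', self.score1(h), h)   # (b, n, n)
            pos = torch.arange(n, device=h.device)
            local = (pos[None, :] - pos[:, None]).abs() <= self.window  # (n, n) mask
            scores1 = scores1.masked_fill(~local[None], float('-inf'))
            alpha1 = F.softmax(scores1, dim=-1)      # layer-1 attention distribution
            ctx1 = alpha1 @ h                        # layer-1 attention vectors (b, n, d)

            # Layer 2: interactive attention -- the layer-1 result queries all
            # words in the sentence (no window), giving a global view.
            scores2 = torch.einsum('bid,bjd->bij', self.score2(ctx1), h)
            alpha2 = F.softmax(scores2, dim=-1)
            ctx2 = alpha2 @ h                        # layer-2 attention vectors (b, n, d)

            # Concatenate both attention vectors and project to tag emissions.
            return self.tag_proj(torch.cat([ctx1, ctx2], dim=-1))       # (b, n, num_tags)

    # Usage: random embeddings for a batch of 2 sentences, 12 tokens each.
    model = ConvInteractiveAttention()
    emissions = model(torch.randn(2, 12, 100))
    print(emissions.shape)  # torch.Size([2, 12, 3])

    In the paper the emissions would be decoded by a CRF into aspect labels; here a plain linear projection stands in for that step so the sketch stays self-contained.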