    Zhou Ru, Zhu Haoze, Guo Wenya, Yu Shenglong, Zhang Ying. A Unified Framework Based on Multimodal Aspect-Term Extraction and Aspect-Level Sentiment Classification[J]. Journal of Computer Research and Development, 2023, 60(12): 2877-2889. DOI: 10.7544/issn1000-1239.202220441

    A Unified Framework Based on Multimodal Aspect-Term Extraction and Aspect-Level Sentiment Classification

    • Aspect-term extraction (AE) and aspect-level sentiment classification (ALSC) extract aspect-sentiment pairs from a sentence, helping social media platforms such as Twitter and Facebook mine users’ sentiments toward different aspects, which is of great significance for personalized recommendation. In the multimodal setting, existing methods use two independent models to complete the two subtasks: aspect-term extraction identifies goods, important people, and other entities or entity aspects in the sentence, and aspect-level sentiment classification predicts the user’s sentiment orientation toward a given aspect term. This approach has two problems. First, using two independent models breaks the continuity of the underlying features between the two tasks and cannot model the latent semantic associations within sentences. Second, aspect-level sentiment classification predicts the sentiment of only one aspect at a time, which does not match the throughput of the aspect-term extraction model, which extracts multiple aspects simultaneously; the serial execution of the two models therefore makes aspect-sentiment pair extraction inefficient. To solve these problems, a unified framework based on multimodal aspect-term extraction and aspect-level sentiment classification, called UMAS, is proposed in this paper. First, a shared feature module is built to model the latent semantic associations between the tasks, so that each subtask needs to attend only to its own upper network, which reduces model complexity. Second, sequence tagging is used to output multiple aspects and their corresponding sentiment categories in a sentence simultaneously, which improves the efficiency of aspect-sentiment pair extraction.
In addition, part-of-speech information is introduced into both subtasks: the grammatical information improves the performance of aspect-term extraction, and opinion-word information obtained through part of speech improves the performance of aspect-level sentiment classification. Experimental results show that the unified model outperforms multiple baseline models on the two benchmark datasets Twitter2015 and Restaurant2014.
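The joint sequence-tagging idea described in the abstract can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the label scheme (`B-POS`, `I-NEG`, etc.) fuses BIO aspect-span positions with sentiment polarity, so a single pass over one tag sequence recovers every aspect-sentiment pair at once, rather than running a classifier once per aspect.

```python
def decode_pairs(tokens, tags):
    """Decode (aspect_term, sentiment) pairs from a joint
    BIO-polarity tag sequence, e.g. ["B-POS", "I-POS", "O"].
    Hypothetical decoder illustrating unified tagging."""
    pairs, current, polarity = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A new aspect span begins; flush any open span first.
            if current:
                pairs.append((" ".join(current), polarity))
            current, polarity = [token], tag[2:]
        elif tag.startswith("I-") and current:
            # Continue the current aspect span.
            current.append(token)
        else:
            # An "O" tag (or stray "I-") closes any open span.
            if current:
                pairs.append((" ".join(current), polarity))
            current, polarity = [], None
    if current:
        pairs.append((" ".join(current), polarity))
    return pairs

tokens = ["The", "sushi", "was", "great", "but", "service", "was", "slow"]
tags   = ["O", "B-POS", "O", "O", "O", "B-NEG", "O", "O"]
print(decode_pairs(tokens, tags))  # → [('sushi', 'POS'), ('service', 'NEG')]
```

Because both pairs come out of one tag sequence, the extraction throughput is no longer bounded by one-aspect-at-a-time sentiment prediction, which is the efficiency argument the abstract makes.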