    Citation: Zhou Ru, Zhu Haoze, Guo Wenya, Yu Shenglong, Zhang Ying. A Unified Framework Based on Multimodal Aspect-Term Extraction and Aspect-Level Sentiment Classification[J]. Journal of Computer Research and Development, 2023, 60(12): 2877-2889. DOI: 10.7544/issn1000-1239.202220441

    A Unified Framework Based on Multimodal Aspect-Term Extraction and Aspect-Level Sentiment Classification


      Abstract: Aspect-term extraction (AE) and aspect-level sentiment classification (ALSC) extract aspect-sentiment pairs from a sentence, which helps social media platforms such as Twitter and Facebook mine users' sentiments toward different aspects and is of great significance for personalized recommendation. In the multimodal setting, existing methods use two independent models to complete the two subtasks separately: aspect-term extraction identifies goods, important people, and other entities, or aspects of those entities, mentioned in the sentence, while aspect-level sentiment classification predicts the user's sentiment orientation toward a given aspect term. This pipeline has two problems. First, using two independent models loses the continuity of the underlying features between the two tasks and cannot model the latent semantic associations within sentences. Second, aspect-level sentiment classification predicts the sentiment of only one aspect at a time, which does not match the throughput of aspect-term extraction, which extracts multiple aspects simultaneously; moreover, the serial execution of the two models makes extracting aspect-sentiment pairs inefficient. To solve these problems, we propose UMAS, a unified framework based on multimodal aspect-term extraction and aspect-level sentiment classification. First, a shared feature module is built to model the latent semantic associations between the tasks; the shared representation layer also lets each subtask attend only to its own upper-layer network, which reduces the complexity of the model. Second, the model uses sequence tagging to output the multiple aspects contained in a sentence and their corresponding sentiment categories at the same time, which improves the efficiency of extracting aspect-sentiment pairs. In addition, part-of-speech information is introduced into both subtasks: its grammatical cues improve the performance of aspect-term extraction, and the opinion-word information obtained through part of speech improves the performance of aspect-level sentiment classification. Experimental results show that the unified framework outperforms multiple baseline models on the two benchmark datasets Twitter2015 and Restaurant2014.
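      The abstract describes a shared multimodal feature module feeding a single sequence-tagging output that emits aspects and their sentiments in one pass. The following is a minimal sketch of that general idea, not the authors' UMAS implementation: the class name UnifiedTagger, the joint BIO-plus-sentiment label set, the BiLSTM encoder, and the way part-of-speech and image features are fused are all illustrative assumptions.

import torch
import torch.nn as nn

# Joint labels: aspect boundary (B/I) fused with sentiment polarity, plus O.
LABELS = ["O",
          "B-POS", "I-POS",
          "B-NEG", "I-NEG",
          "B-NEU", "I-NEU"]

class UnifiedTagger(nn.Module):
    def __init__(self, vocab_size, pos_vocab_size, img_feat_dim=2048,
                 emb_dim=128, pos_dim=32, hidden=256):
        super().__init__()
        # Shared feature module: word, part-of-speech, and image features are
        # fused once and reused, so both subtasks build on the same representation.
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(pos_vocab_size, pos_dim)
        self.img_proj = nn.Linear(img_feat_dim, emb_dim)
        self.encoder = nn.LSTM(emb_dim + pos_dim + emb_dim, hidden,
                               batch_first=True, bidirectional=True)
        # A single head emits a joint aspect+sentiment tag per token.
        self.tag_head = nn.Linear(2 * hidden, len(LABELS))

    def forward(self, token_ids, pos_ids, img_feat):
        # token_ids, pos_ids: (batch, seq_len); img_feat: (batch, img_feat_dim)
        seq_len = token_ids.size(1)
        img = self.img_proj(img_feat).unsqueeze(1).expand(-1, seq_len, -1)
        x = torch.cat([self.word_emb(token_ids), self.pos_emb(pos_ids), img], dim=-1)
        h, _ = self.encoder(x)
        return self.tag_head(h)  # (batch, seq_len, num_labels)

      Decoding one predicted tag sequence then yields all aspect-sentiment pairs at once, e.g. ["O", "B-POS", "I-POS", "O", "B-NEG"] recovers a positive two-token aspect and a negative single-token aspect, which is how a unified tagging scheme avoids running sentiment classification once per aspect.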

       

