

    Multimodal Sentiment Analysis Method for Sentimental Semantic Inconsistency

Abstract: Multimodal sentiment analysis is a multimodal task that uses subjective information from multiple modalities to analyze and judge sentiment. Sentiment expression is subjective, and in some scenarios the sentiment expressed by different modalities is inconsistent, or even contradictory, which weakens multimodal collaborative decision-making. To address this inconsistency of sentimental semantics across modalities, a multimodal learning method is proposed that learns modal feature representations with consistent sentimental semantics. To strengthen the common feature expression of each modality and increase dynamic inter-modal interaction without disturbing the original modal information, the method first learns a common feature representation for each modality, and then applies cross attention so that a single modality can effectively obtain auxiliary information from the common feature representations of the other modalities. In the fusion module, a modal attention mechanism built on the soft attention mechanism performs a weighted concatenation of the sentimentally consistent modal feature representations, amplifying the expression of strong modalities and suppressing the influence of weak modalities on the task. The proposed model outperforms the comparison models on the sentiment analysis datasets MOSI, MOSEI, and CH-SIMS, demonstrating the necessity and rationality of accounting for sentimental semantic inconsistency in multimodal sentiment analysis.
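The pipeline described in the abstract — per-modality common feature representations, cross attention from one modality onto the others' common features, and a soft-attention-based weighted concatenation for fusion — could be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's actual architecture: the feature dimensions, the pooling, the random stand-in for a learned scoring layer, and the residual connection are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, keys, values):
    """Scaled dot-product attention: `query` attends over `keys`/`values`."""
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)       # (Tq, Tk) similarity scores
    return softmax(scores, axis=-1) @ values   # (Tq, d) auxiliary features

rng = np.random.default_rng(0)
d = 8  # shared feature dimension (hypothetical)
# Stand-ins for the learned common feature representations of three
# modalities (text, audio, vision), each a short sequence of d-dim vectors.
text, audio, vision = (rng.standard_normal((4, d)) for _ in range(3))

# Cross attention: the text modality queries the concatenated common
# features of the other modalities to obtain auxiliary information.
others = np.vstack([audio, vision])
aux_for_text = cross_attention(text, others, others)
text_enhanced = text + aux_for_text  # residual keeps the original information

# Modal attention for fusion: one scalar score per modality (here random,
# in practice produced by a learned layer) is softmax-normalized and used
# to weight each pooled modality vector before concatenation, so strong
# modalities are amplified and weak ones suppressed.
modal_feats = [text_enhanced.mean(0), audio.mean(0), vision.mean(0)]
scores = rng.standard_normal(3)
weights = softmax(scores)
fused = np.concatenate([w * f for w, f in zip(weights, modal_feats)])
print(fused.shape)  # → (24,)
```

The fused vector would then feed a small regression or classification head for the sentiment prediction; the residual addition above is one simple way to inject auxiliary information "without affecting the original modal information", though the paper's exact mechanism may differ.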

       
