Cai Guoyong, Lü Guangrui, Xu Zhi. A Hierarchical Deep Correlative Fusion Network for Sentiment Classification in Social Media[J]. Journal of Computer Research and Development, 2019, 56(6): 1312-1324. DOI: 10.7544/issn1000-1239.2019.20180341

A Hierarchical Deep Correlative Fusion Network for Sentiment Classification in Social Media

Funds: This work was supported by the National Natural Science Foundation of China (61763007, 66162014), the Natural Science Foundation of Guangxi Province of China (2017JJD160017), and the Project of the Guangxi Key Laboratory of Trusted Software (201503).
  • Published Date: May 31, 2019
  • Abstract: Most existing research on sentiment analysis relies on either textual or visual data alone and cannot achieve satisfactory results. Because multi-modal data provide richer information, multi-modal sentiment analysis is attracting increasing attention and has become a hot research topic. Owing to the strong semantic correlation between visual data and the co-occurring text in social media, mixed text-image data offer a new way to learn better classifiers for social media sentiment classification. A hierarchical deep correlative fusion network framework is proposed to jointly learn textual and visual sentiment representations from training samples. To alleviate the problem of fine-grained semantic matching between images and text, both mid-level semantic features of images and deep multi-modal discriminative correlation analysis are applied to learn the most relevant visual and semantic feature representations, while keeping both representations linearly discriminable. Motivated by the successful use of attention mechanisms, we further propose a multi-modal attention fusion network that incorporates the visual and semantic feature representations to train the sentiment classifier. Experiments on real-world datasets collected from social networks show that the proposed method yields more accurate predictions for multimedia sentiment analysis by hierarchically capturing the internal relations between text and images.
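The attention-based fusion of textual and visual representations described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, assuming both modalities have already been projected into a common feature space; the function name `attention_fuse` and the per-modality scoring weights are hypothetical placeholders, not the paper's exact architecture:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(text_feat, visual_feat, w_text, w_visual):
    """Score each modality, normalize the scores with softmax,
    and return the attention-weighted sum of the two representations."""
    scores = np.array([text_feat @ w_text, visual_feat @ w_visual])
    alpha = softmax(scores)           # attention weights, sum to 1
    fused = alpha[0] * text_feat + alpha[1] * visual_feat
    return fused, alpha

# Toy example: random 8-dimensional features for each modality.
rng = np.random.default_rng(0)
text_feat = rng.normal(size=8)
visual_feat = rng.normal(size=8)
w_text = rng.normal(size=8)
w_visual = rng.normal(size=8)
fused, alpha = attention_fuse(text_feat, visual_feat, w_text, w_visual)
```

In the paper's full pipeline the fused vector would feed a downstream sentiment classifier; here the scoring is a simple dot product, whereas learned attention networks would parameterize it with trainable layers.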
  • Cited by

    Periodical citations (10)

    1. 李梦云, 张景, 张换香, 张晓琳, 刘璐瑶. Multi-modal sentiment analysis based on cross-modal semantic information enhancement. Journal of Frontiers of Computer Science and Technology. 2024(09): 2476-2486.
    2. 仲兆满, 黄贤波, 熊玉龙. Multi-modal sentiment analysis of emergencies based on hybrid fusion. Journal of Jiangsu Ocean University (Natural Science Edition). 2023(01): 1-8.
    3. 高鑫月, 宋沛林, 薛润生. Spatio-temporal evolution analysis of public sentiment during the COVID-19 pandemic. Beijing Surveying and Mapping. 2022(03): 254-259.
    4. 刘颖, 王哲, 房杰, 朱婷鸽, 李琳娜, 刘继明. Multi-modal public opinion analysis based on image-text fusion. Journal of Frontiers of Computer Science and Technology. 2022(06): 1260-1278.
    5. 孟祥瑞, 杨文忠, 王婷. A survey of sentiment analysis research based on image-text fusion. Journal of Computer Applications. 2021(02): 307-317.
    6. 胡慧君, 冯梦媛, 曹梦丽, 刘茂福. Multi-modal social sentiment analysis based on semantic correlation. Journal of Beijing University of Aeronautics and Astronautics. 2021(03): 469-477.
    7. 章荪, 尹春勇. A temporal multi-modal sentiment analysis model based on multi-task learning. Journal of Computer Applications. 2021(06): 1631-1639.
    8. 蔡国永, 储阳阳. Visual sentiment analysis based on dual-attention multi-layer feature fusion. Computer Engineering. 2021(09): 227-234.
    9. 尹魁. Development trends and directions of computer multimedia network teaching. Satellite TV & IP Multimedia. 2020(03): 101-102.
    10. 范瑛. Hierarchical and multi-version methods for software development activity datasets. Information & Computer (Theoretical Edition). 2020(10): 73-74.

    Other citation types (21)
