ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2021, Vol. 58 ›› Issue (7): 1385-1394. doi: 10.7544/issn1000-1239.2021.20200817

Special Topic: 2021 Special Issue on Disinformation Detection

• Information Processing •




Fake Review Detection Based on Joint Topic and Sentiment Pre-Training Model

Zhang Dongjie1, Huang Longtao1, Zhang Rong1, Xue Hui1, Lin Junyu2, Lu Yao3   

  1. 1(Alibaba Group, Beijing 100102);2(Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093);3(Langfang Polytechnic Institute, Langfang, Hebei 065001)
  • Online: 2021-07-01
  • Supported by: 
    This work was supported by the Key Technology Research and Development Program of Langfang (2020011005).


Abstract: Product review information is an important basis for users' online decision-making. Driven by profit, however, businesses often hire professional writers to produce large numbers of fake reviews that mislead users, promoting the businesses themselves or denigrating their competitors. This results in unfair business competition and an extremely poor user experience. To address this problem, we improve existing fake review detection models through sentiment pre-training and propose a joint pre-training learning method that simultaneously integrates the semantic and sentiment information of product reviews. Given the strong semantic representation capability of pre-trained models, the joint learning framework applies two pre-trained encoders to extract the semantic and sentiment context features of reviews, respectively, and integrates the two kinds of features through joint training. In addition, we add a Center Loss function to optimize the model. We conduct verification experiments on multiple public datasets and multiple tasks. The experiments show that the proposed joint model achieves state-of-the-art results on both fake review detection and sentiment polarity analysis tasks and exhibits stronger generalization ability.

Key words: fake review detection, pre-training model, sentiment analysis, joint learning framework, Center Loss
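To make the training objective described in the abstract concrete, the following is a minimal numerical sketch of fusing the features of two encoders and scoring them with Center Loss. All function names, feature dimensions, and the concatenation-based fusion below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center Loss: half the mean squared distance between each sample's
    feature vector and the center of its class."""
    diffs = features - centers[labels]          # (N, D) per-sample offsets
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def joint_features(semantic_feat, sentiment_feat):
    """Fuse the two encoder outputs by concatenation (one simple
    integration strategy; the paper's exact fusion may differ)."""
    return np.concatenate([semantic_feat, sentiment_feat], axis=-1)

# Toy example: 4 reviews with 3-dim semantic and 2-dim sentiment features
# standing in for the outputs of the two pre-trained encoders.
rng = np.random.default_rng(0)
sem = rng.normal(size=(4, 3))
sen = rng.normal(size=(4, 2))
fused = joint_features(sem, sen)            # shape (4, 5)
labels = np.array([0, 1, 0, 1])             # 0 = genuine, 1 = fake
centers = np.zeros((2, 5))                  # learnable class centers
print(fused.shape, center_loss(fused, labels, centers))
```

In a full model this term would be combined with a classification loss (e.g. cross-entropy), with the class centers updated during training so that same-class review features cluster tightly.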