ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development, 2020, Vol. 57, Issue 11: 2328-2336. doi: 10.7544/issn1000-1239.2020.20200413

Special Issue: 2020 Special Issue on Cryptography and Data Privacy Protection Research


MSRD: Multi-Modal Web Rumor Detection Method

Liu Jinshuo1, Feng Kuo1, Jeff Z. Pan2, Deng Juan1, Wang Lina1   

  1 (Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072); 2 (University of Aberdeen, Aberdeen, Scotland AB24 3FX)
  • Online:2020-11-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (U1936107, 6187613, 61672393).

Abstract: Multi-modal web rumors that combine images and text are more confusing and inflammatory, and are therefore more harmful to national security and social stability. Current web rumor detection work considers the text content of the post but ignores the image content and the text embedded in the image. This paper therefore proposes MSRD, a deep-neural-network-based multi-modal web rumor detection method that jointly models the image, the text embedded in the image, and the text of the post. The method uses a VGG-19 network to extract image content features, DenseNet to extract the embedded text content, and an LSTM network to extract post text features. After the text features are concatenated with the image features, a fully connected layer produces the mean and variance vectors of a shared image-text representation; a random variable sampled from a Gaussian distribution is then used to form a reparameterized multi-modal feature, which serves as the input to the rumor detector. Experiments show that the method achieves 68.5% and 79.4% accuracy on the Twitter and Weibo datasets, respectively.
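The fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, the random projection weights standing in for trained fully connected layers, and the function name are all assumptions. It shows only the core idea of producing mean and variance vectors from the concatenated modality features and sampling a reparameterized multi-modal feature from a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterized_fusion(img_feat, ocr_feat, txt_feat, latent_dim=128):
    """Fuse per-modality features into one sampled shared representation.

    Illustrative sketch: random weights stand in for the trained fully
    connected layers that produce the mean and log-variance vectors.
    """
    # Concatenate image, embedded-text, and post-text features.
    h = np.concatenate([img_feat, ocr_feat, txt_feat], axis=-1)
    d = h.shape[-1]
    W_mu = rng.standard_normal((d, latent_dim)) * 0.01
    W_logvar = rng.standard_normal((d, latent_dim)) * 0.01
    mu = h @ W_mu            # mean vector of the shared representation
    logvar = h @ W_logvar    # log-variance vector
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps
    return z, mu, logvar

# Illustrative dimensions for a batch of 4 posts (not the paper's values):
z, mu, logvar = reparameterized_fusion(
    rng.standard_normal((4, 512)),   # VGG-19 image features
    rng.standard_normal((4, 256)),   # DenseNet embedded-text features
    rng.standard_normal((4, 256)),   # LSTM post-text features
)
print(z.shape)  # (4, 128)
```

Sampling through the reparameterization trick keeps the sampling step differentiable with respect to the mean and variance, so the shared representation can be trained end to end with the rumor classifier.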

Key words: multimodal, rumor detection, embedded text in image, natural language processing, deep neural network
