

    MSRD: Multi-Modal Web Rumor Detection Method


Abstract: Multi-modal web rumors that combine images and text are more deceptive and inflammatory, and are therefore more harmful to national security and social stability. Existing web rumor detection work makes full use of a rumor's accompanying text but ignores the image content and the text embedded in the image. This paper therefore proposes MSRD, a multi-modal web rumor detection method based on deep neural networks that jointly models the image, the text embedded in the image, and the accompanying text. The method uses a VGG-19 network to extract image content features, DenseNet to extract the text embedded in the image, and an LSTM network to extract text content features. After the text features are concatenated with the image features, fully connected layers produce the mean and variance vectors of a shared image-text representation; a random variable sampled from a Gaussian distribution is then used to form a reparameterized multi-modal feature, which serves as the input to the rumor detector. Experiments show that the method achieves 68.5% and 79.4% accuracy on the Twitter and Weibo datasets, respectively.
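The fusion step described in the abstract (concatenate the modality features, map them through fully connected layers to a mean and a variance vector, then sample a reparameterized multi-modal feature) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the feature dimensions, random weight matrices, and stand-in feature vectors (in place of real VGG-19, DenseNet, and LSTM outputs) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical feature dimensions (not from the paper).
d_img, d_ocr, d_txt, d_latent = 128, 64, 64, 32

# Stand-ins for the VGG-19 image features, DenseNet embedded-text
# features, and LSTM text features.
img_feat = rng.standard_normal(d_img)
ocr_feat = rng.standard_normal(d_ocr)
txt_feat = rng.standard_normal(d_txt)
fused = np.concatenate([img_feat, ocr_feat, txt_feat])

# Fully connected layers producing the mean and log-variance of the
# shared representation (random weights, for illustration only).
W_mu = rng.standard_normal((d_latent, fused.size)) * 0.01
W_lv = rng.standard_normal((d_latent, fused.size)) * 0.01
mu = W_mu @ fused
log_var = W_lv @ fused

# Reparameterized multi-modal feature: the input to the rumor detector.
z = reparameterize(mu, log_var, rng)
print(z.shape)  # (32,)
```

Sampling via `mu + sigma * eps`, rather than drawing directly from N(mu, sigma^2), keeps the sampling step differentiable with respect to `mu` and `log_var`, which is what allows the shared-representation layers to be trained end-to-end with the detector.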

       
