Abstract:
In recent years, social media has become a primary channel through which people access the latest news. However, the convenience and openness of social media have also facilitated the proliferation of fake news. With the development of multimedia technology, fake news on social media has evolved from text-only posts to multimedia posts containing images or videos. Therefore, multi-modal fake news detection is attracting increasing attention. Existing methods for multi-modal fake news detection mostly focus on capturing appearance-level features that are highly dependent on the dataset distribution, while exploiting semantic-level features insufficiently. Thus, these methods often fail to capture the deep semantics of textual and visual entities in fake news, which limits the generalizability of models in real applications. To tackle this problem, this paper proposes a semantics-enhanced multi-modal model for fake news detection, which better models the underlying semantics of multi-modal news by implicitly utilizing the factual knowledge embedded in a pre-trained language model and explicitly extracting visual entities. Furthermore, the proposed method extracts visual features at different semantic levels and models the semantic interaction between the textual and visual features through a text-guided attention mechanism, thereby better fusing the heterogeneous multi-modal features. Extensive experiments on the Weibo dataset demonstrate that our method significantly outperforms state-of-the-art approaches.
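
The abstract does not specify the fusion architecture in detail; the following is a minimal, hypothetical sketch of what a text-guided attention fusion step could look like, assuming text token embeddings (e.g., from a pre-trained language model) serve as queries over visual region or entity features. The class name, dimensions, and pooling choice are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TextGuidedFusion(nn.Module):
    """Illustrative sketch: text features guide attention over visual features."""

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        # Cross-attention: text tokens as queries, visual regions as keys/values.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 2)  # binary output: real vs. fake

    def forward(self, text_feats: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # text_feats:   (batch, n_tokens, dim), e.g., token embeddings from a pre-trained LM
        # visual_feats: (batch, n_regions, dim), e.g., projected visual/entity features
        fused, _ = self.attn(query=text_feats, key=visual_feats, value=visual_feats)
        pooled = fused.mean(dim=1)  # simple mean pooling over text positions (an assumption)
        return self.classifier(pooled)

# Toy usage with random features to show the expected tensor shapes.
model = TextGuidedFusion()
logits = model(torch.randn(2, 32, 768), torch.randn(2, 49, 768))
print(logits.shape)  # torch.Size([2, 2])
```

In this sketch, letting the text drive the attention weights means visual features are selected according to their relevance to the news content, which is one plausible reading of "text-guided" fusion described above.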