GS2F: Multimodal Fake News Detection Utilizing Graph Structure and Guided Semantic Fusion

Bibliographic Details
Published in: ACM Transactions on Asian and Low-Resource Language Information Processing, 2024-12
Authors: Zhou, Dong; Ouyang, Qiang; Lin, Nankai; Zhou, Yongmei; Yang, Aimin
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: The prevalence of fake news online has become a significant societal concern. To combat this, multimodal detection techniques based on images and text have shown promise. Yet these methods struggle to analyze complex relationships within and between modalities because of the diverse discriminative elements in news content. In addition, research on multimodal, multi-class fake news detection remains insufficient. To address these challenges, we propose a novel detection model, GS2F, which leverages graph structure and guided semantic fusion. Specifically, we construct a multimodal graph structure to align the two modalities and employ graph contrastive learning to obtain refined fusion representations. Furthermore, a guided semantic fusion module is introduced to maximize the utilization of single-modal information, and a dynamic contribution assignment layer is designed to weigh the importance of image, text, and multimodal features. Experimental results on the Fakeddit dataset demonstrate that our model outperforms existing methods, marking a step forward in multimodal, multi-class fake news detection.
ISSN: 2375-4699, 2375-4702
DOI: 10.1145/3708536
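
The abstract describes a dynamic contribution assignment layer that weighs image, text, and multimodal features before classification. The paper's exact formulation is not reproduced in this record, so the following is a minimal sketch assuming a simple softmax-gated weighted fusion in PyTorch; the class name DynamicContributionFusion, the feature dimension of 256, and the six-way output (e.g., Fakeddit's 6-class labels) are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn


class DynamicContributionFusion(nn.Module):
    """Hypothetical softmax-gated fusion of image, text, and multimodal features."""

    def __init__(self, dim: int = 256, num_classes: int = 6):
        super().__init__()
        # One scalar gate per modality, conditioned on all three representations.
        self.gate = nn.Linear(3 * dim, 3)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, img, txt, mm):
        # img, txt, mm: (batch, dim) image, text, and fused multimodal features.
        weights = torch.softmax(self.gate(torch.cat([img, txt, mm], dim=-1)), dim=-1)  # (batch, 3)
        stacked = torch.stack([img, txt, mm], dim=1)                                   # (batch, 3, dim)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)                           # weighted modality sum
        return self.classifier(fused)                                                  # class logits


# Usage with random features standing in for encoder outputs.
layer = DynamicContributionFusion(dim=256, num_classes=6)
img, txt, mm = (torch.randn(4, 256) for _ in range(3))
print(layer(img, txt, mm).shape)  # torch.Size([4, 6])

A softmax gate keeps the three contribution weights non-negative and summing to one, so the relative reliance on each modality can be inspected per sample; the paper's actual layer may use a different gating or normalization scheme.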