TMIF: transformer-based multi-modal interactive fusion for automatic rumor detection

Bibliographic Details
Published in: Multimedia Systems 2023-10, Vol. 29 (5), p. 2979-2989
Main Authors: Lv, Jiandong; Wang, Xingang; Shao, Cuiling
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: The rapid development of social media platforms has made them one of the most important news sources. While they provide people with convenient real-time communication channels, fake news and rumors also spread rapidly through social media, misleading the public and even causing negative social impact. Given the slow speed and poor consistency of manual rumor detection, we propose an end-to-end automatic rumor detection model named TMIF, which uses a Transformer to map multi-modal feature representations into the same data domain for fusion. It captures multi-level dependencies among multi-modal content while reducing the impact of heterogeneity differences between modalities. We validated the model on two multi-modal rumor detection datasets and demonstrated its superior overall performance and early detection performance.
ISSN: 0942-4962
1432-1882
DOI: 10.1007/s00530-022-00916-8
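
The abstract above describes projecting multi-modal features into a shared data domain and fusing them with a Transformer. The following is a minimal, hypothetical sketch of that general idea (cross-modal fusion via a Transformer encoder over a joint token sequence), not the authors' actual TMIF implementation; all module names, dimensions, and the pooling/classification head are illustrative assumptions.

```python
# Hypothetical sketch of transformer-based multi-modal fusion for rumor
# classification. NOT the paper's TMIF code; dimensions, layer counts, and
# the fusion layout are assumptions made for illustration only.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Projects text and image features into a shared space and fuses them
    with a standard Transformer encoder over the joint token sequence."""

    def __init__(self, text_dim=768, image_dim=2048, d_model=256,
                 n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        # Map each modality into the same d_model-dimensional space.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.image_proj = nn.Linear(image_dim, d_model)
        # Self-attention over the concatenated sequence lets text and image
        # tokens attend to each other, capturing cross-modal dependencies.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, text_feats, image_feats):
        # text_feats:  (batch, text_len, text_dim),   e.g. BERT token states
        # image_feats: (batch, n_regions, image_dim), e.g. CNN region features
        t = self.text_proj(text_feats)
        v = self.image_proj(image_feats)
        joint = torch.cat([t, v], dim=1)   # joint multi-modal sequence
        fused = self.encoder(joint)        # interactive fusion
        pooled = fused.mean(dim=1)         # simple mean pooling
        return self.classifier(pooled)     # rumor / non-rumor logits


if __name__ == "__main__":
    model = CrossModalFusion()
    text = torch.randn(2, 32, 768)     # dummy text features
    image = torch.randn(2, 49, 2048)   # dummy image region features
    print(model(text, image).shape)    # torch.Size([2, 2])
```

The sketch assumes pre-extracted text and image features and a mean-pooled classification head; the published model may differ in how features are extracted, aligned, and pooled.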