Aspect-level multimodal sentiment analysis based on co-attention fusion

Bibliographic Details
Published in: International Journal of Data Science and Analytics, 2024-01
Main authors: Wang, Shunjie; Cai, Guoyong; Lv, Guangrui
Format: Article
Language: English
Online access: Full text
Description
Abstract: Aspect-level multimodal sentiment analysis is the fine-grained sentiment analysis task of predicting the sentiment polarity of given aspects in multimodal data. Most existing multimodal sentiment analysis approaches focus on mining and fusing multimodal global features while overlooking correlations among more fine-grained multimodal local features, which considerably limits the semantic relevance captured between modalities. Therefore, a novel aspect-level multimodal sentiment analysis method based on global–local features fusion with co-attention (GLFFCA) is proposed to comprehensively explore multimodal associations from both global and local perspectives. Specifically, an aspect-guided global co-attention module is designed to capture aspect-guided intra-modality global correlations. Meanwhile, a gated local co-attention module is introduced to adaptively align multimodal local features. Following that, a global–local multimodal feature fusion module integrates the global and local multimodal features in a hierarchical manner. Extensive experiments on the Twitter-2015 and Twitter-2017 datasets validate the effectiveness of the proposed method, which achieves better aspect-level multimodal sentiment analysis performance than other related methods.
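As a rough illustration of the cross-modal co-attention with gated fusion described in the abstract, a minimal PyTorch sketch is given below. It is not the authors' GLFFCA implementation: the module name, projection layout, dimensions, and the sigmoid-gated residual fusion are illustrative assumptions about how text-token and image-region features could be associated and combined.

```python
import torch
import torch.nn as nn

class CoAttentionFusionSketch(nn.Module):
    """Hypothetical sketch: text tokens attend over image regions,
    and a learned gate controls how much visual context is fused in.
    Not the GLFFCA architecture from the paper."""

    def __init__(self, dim: int):
        super().__init__()
        self.scale = dim ** -0.5
        # cross-modal attention projections (text queries, image keys/values)
        self.q_text = nn.Linear(dim, dim)
        self.k_img = nn.Linear(dim, dim)
        self.v_img = nn.Linear(dim, dim)
        # gate deciding, per text token, how much visual context to admit
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # text:  (B, Lt, D) token features, e.g. from a text encoder
        # image: (B, Lv, D) region features, e.g. from a CNN feature map
        q = self.q_text(text)                                  # (B, Lt, D)
        k = self.k_img(image)                                  # (B, Lv, D)
        v = self.v_img(image)                                  # (B, Lv, D)
        # scaled dot-product co-attention over image regions
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, Lt, Lv)
        img_ctx = attn @ v                                     # (B, Lt, D)
        # gated residual fusion of textual and attended visual features
        g = torch.sigmoid(self.gate(torch.cat([text, img_ctx], dim=-1)))
        return text + g * img_ctx                              # (B, Lt, D)

# Usage with illustrative shapes (batch of 2, 32 tokens, 49 image regions):
fusion = CoAttentionFusionSketch(dim=768)
text = torch.randn(2, 32, 768)
image = torch.randn(2, 49, 768)
out = fusion(text, image)  # (2, 32, 768)
```

The gate plays the role the abstract ascribes to the gated local co-attention module: it lets the model suppress visual context for text tokens where the image regions are uninformative, rather than fusing modalities unconditionally.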
ISSN: 2364-415X; 2364-4168
DOI: 10.1007/s41060-023-00497-3