Sentiment-aware multimodal pre-training for multimodal sentiment analysis
Saved in:
Published in: Knowledge-Based Systems, 2022-12, Vol. 258, Article 110021
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Pre-trained models, together with fine-tuning on downstream labeled datasets, have demonstrated great success in various tasks, including multimodal sentiment analysis. However, most multimodal pre-trained models focus on learning general lexical and/or visual information while ignoring sentiment signals. To address this problem, we propose a sentiment-aware multimodal pre-training (SMP) framework for multimodal sentiment analysis. In particular, we design a cross-modal contrastive learning module based on the interactions between visual and textual information, and introduce additional sentiment-aware pre-training objectives (e.g., fine-grained sentiment labeling) to capture fine-grained sentiment information from sentiment-rich datasets. We adopt two objectives (i.e., masked language modeling and masked auto-encoders) to capture semantic information from text and images. We conduct a series of experiments on sentence-level and target-oriented multimodal sentiment classification tasks, where the results of our SMP model exceed the state of the art. Additionally, ablation studies and case studies verify the effectiveness of our SMP model.
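The abstract describes the cross-modal contrastive learning module only at a high level. As a rough illustration of the general technique (not the authors' actual implementation, which this record does not detail), the following PyTorch sketch computes a symmetric InfoNCE loss over paired image and text embeddings; the function name, temperature value, and embedding dimensions are assumptions made for the example.

```python
# Illustrative sketch of cross-modal (image-text) contrastive learning via
# symmetric InfoNCE. All names and hyperparameters here are assumptions;
# the paper's actual encoders, loss weighting, and objectives may differ.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(image_emb: torch.Tensor,
                                 text_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) tensors; row i of each is a matched pair.
    """
    # L2-normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are the true pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

if __name__ == "__main__":
    # Random embeddings stand in for visual/text encoder outputs.
    img = torch.randn(8, 256)
    txt = torch.randn(8, 256)
    print(cross_modal_contrastive_loss(img, txt).item())
```

Averaging the image-to-text and text-to-image terms is the common symmetric formulation: each modality is pushed toward its paired counterpart and away from the other in-batch samples, which matches the abstract's emphasis on interactions between visual and textual information.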
ISSN: 0950-7051, 1872-7409
DOI: 10.1016/j.knosys.2022.110021