Mutual Information of Crossmodal Utterance Representation for Multimodal Sentiment Analysis
Saved in:
Published in: IEEE Transactions on Affective Computing, 2024-10, p. 1-9
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: With the continuous progress of Internet technology and social networks, content sharing on social platforms that reflects personal feelings and emotions has proliferated. Consequently, the study of people's emotions has gained considerable popularity: the expanding use of social media provides enormous amounts of data for data-driven research and has drawn greater attention to people's psychological health. To deepen the analysis of people's emotions on the Internet, Multimodal Sentiment Analysis (MSA) combines multiple modalities such as text, images, and sound to comprehensively analyze and assess an individual's emotional state. However, most previous multimodal sentiment analysis models ignored the relationships between modalities and were limited to feature extraction from a single modality, which cannot precisely predict a subject's state of mind. In this article, we propose a framework called Mutual Infomax Utterance Representation (MIUR), which draws on the concept of mutual information from information theory and introduces an information exchange module between modalities that effectively filters out task-independent random noise while preserving as much shared information across modalities as possible. We conducted experiments on the publicly available sentiment datasets MOSI and MOSEI, and the results show that our model achieves significant improvements over existing state-of-the-art models.
ISSN: 1949-3045
DOI: 10.1109/TAFFC.2024.3466968
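The abstract above describes maximizing mutual information between modality representations so that shared, task-relevant content is kept while modality-specific noise is discarded. The record does not give MIUR's actual architecture or loss, so the following is only a minimal sketch of one common way to realize that idea: an InfoNCE-style contrastive lower bound on the mutual information between paired text and audio utterance embeddings. All module names, dimensions, and hyperparameters here are hypothetical.

```python
# Hypothetical sketch: an InfoNCE lower bound on cross-modal mutual information
# between paired text and audio utterance embeddings. Minimizing this loss
# (i.e., maximizing the bound) encourages the two projections to agree on the
# content shared across modalities, discouraging modality-specific noise.
# This is NOT the MIUR model from the paper, only an illustration of the idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalInfoNCE(nn.Module):
    """InfoNCE estimate of I(text; audio) over a batch of paired utterances."""

    def __init__(self, text_dim: int, audio_dim: int,
                 shared_dim: int = 128, temperature: float = 0.1):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)    # project each modality
        self.audio_proj = nn.Linear(audio_dim, shared_dim)  # into a shared space
        self.temperature = temperature

    def forward(self, text_emb: torch.Tensor, audio_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, text_dim); audio_emb: (batch, audio_dim);
        # row i of each tensor corresponds to the same utterance.
        z_t = F.normalize(self.text_proj(text_emb), dim=-1)
        z_a = F.normalize(self.audio_proj(audio_emb), dim=-1)
        logits = z_t @ z_a.t() / self.temperature            # (batch, batch) similarities
        targets = torch.arange(logits.size(0), device=logits.device)
        # Matched (text, audio) pairs are positives; every other pair in the
        # batch is a negative. The symmetric cross-entropy is the InfoNCE loss.
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Toy usage with random features standing in for utterance-level embeddings.
    mi_loss = CrossModalInfoNCE(text_dim=768, audio_dim=74)
    text = torch.randn(8, 768)
    audio = torch.randn(8, 74)
    print(mi_loss(text, audio))  # scalar term to add to the sentiment task loss
```

In practice such a term would be weighted and added to the supervised sentiment loss, so the fused representation is trained to be both predictive of the label and consistent across modalities.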