Adversarial Representation with Intra-Modal and Inter-Modal Graph Contrastive Learning for Multimodal Emotion Recognition
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: With the release of a growing number of open-source emotion recognition datasets on social media platforms and the rapid development of computing resources, the multimodal emotion recognition (MER) task has begun to receive widespread research attention. The MER task extracts and fuses complementary semantic information from different modalities in order to classify the speaker's emotions. However, existing feature fusion methods usually map the features of different modalities into the same feature space for fusion, which cannot eliminate the heterogeneity between modalities and therefore makes the subsequent learning of emotion class boundaries challenging. To tackle these problems, we propose a novel Adversarial Representation with Intra-Modal and Inter-Modal Graph Contrastive Learning for Multimodal Emotion Recognition (AR-IIGCN) method. Firstly, we feed video, audio, and text features into a multi-layer perceptron (MLP) to map them into separate feature spaces. Secondly, we build a generator and a discriminator for the three modal features through adversarial representation, which enables information interaction between modalities and eliminates inter-modal heterogeneity. Thirdly, we introduce contrastive graph representation learning to capture intra-modal and inter-modal complementary semantic information and to learn intra-class and inter-class boundary information of the emotion categories. Specifically, we construct a graph structure over the three modal features and perform contrastive representation learning on nodes with different emotions in the same modality and on nodes with the same emotion in different modalities, which improves the feature representation ability of the nodes. Extensive experiments show that the AR-IIGCN method significantly improves emotion recognition accuracy on the IEMOCAP and MELD datasets.
DOI: 10.48550/arxiv.2312.16778
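The following is a minimal sketch of the three-step pipeline the abstract describes (per-modality MLP mapping, adversarial modality alignment, and intra-/inter-modal graph contrastive learning). All module names, feature dimensions, the temperature, and the label setup are illustrative assumptions and are not taken from the AR-IIGCN paper or its code; the contrastive term is a simplified SupCon-style stand-in for the paper's graph contrastive objective.

```python
# Illustrative sketch only; hyperparameters and module names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityMLP(nn.Module):
    """Step 1: map one modality's raw features into its own feature space."""
    def __init__(self, in_dim, hid_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
    def forward(self, x):
        return self.net(x)

class ModalityDiscriminator(nn.Module):
    """Step 2: predicts which modality a feature came from; the encoders act
    as generators and would be trained adversarially to fool it (e.g. via a
    gradient-reversal layer, omitted here for brevity)."""
    def __init__(self, dim=128, n_modalities=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_modalities))
    def forward(self, z):
        return self.net(z)

def graph_contrastive_loss(z, labels, modality_ids, tau=0.5):
    """Step 3 (simplified): nodes sharing an emotion label, within or across
    modalities, are positives; nodes with a different label in the same
    modality serve as negatives."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                                   # pairwise similarities
    eye = torch.eye(z.size(0), dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    neg = (labels.unsqueeze(0) != labels.unsqueeze(1)) & \
          (modality_ids.unsqueeze(0) == modality_ids.unsqueeze(1))
    denom = sim.masked_fill(~(pos | neg), -1e9)             # restrict the contrast set
    log_prob = sim - torch.logsumexp(denom, dim=1, keepdim=True)
    return -(log_prob * pos.float()).sum(1).div(pos.sum(1).clamp(min=1)).mean()

# Usage sketch with random tensors standing in for real utterance features.
B = 8
video, audio, text = torch.randn(B, 512), torch.randn(B, 256), torch.randn(B, 300)
labels = torch.randint(0, 6, (B,)).repeat(3)                # same label across modalities
modality_ids = torch.arange(3).repeat_interleave(B)

enc_v, enc_a, enc_t = ModalityMLP(512), ModalityMLP(256), ModalityMLP(300)
disc = ModalityDiscriminator()

z = torch.cat([enc_v(video), enc_a(audio), enc_t(text)], dim=0)
adv_loss = F.cross_entropy(disc(z), modality_ids)           # discriminator objective
gcl_loss = graph_contrastive_loss(z, labels, modality_ids)
print(adv_loss.item(), gcl_loss.item())
```

In a full training loop one would alternate or combine the adversarial and contrastive terms with an emotion classification loss; the weighting between them is a design choice not specified in the abstract.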