Multimodal Relational Tensor Network for Sentiment and Emotion Classification
Saved in:

| Main authors: | , , , , |
| --- | --- |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Summary:

Understanding affect from video segments has brought researchers from the language, audio, and video domains together. Most current multimodal research in this area deals with various techniques for fusing the modalities and mostly treats the segments of a video independently. Motivated by the work of Zadeh et al. (2017) and Poria et al. (2017), we present our architecture, the Relational Tensor Network, which uses the inter-modal interactions within a segment (intra-segment) and also considers the sequence of segments in a video to model the inter-segment inter-modal interactions. We also generate rich representations of the text and audio modalities by leveraging richer audio and linguistic context, along with fusing fine-grained knowledge-based polarity scores from the text. We present the results of our model on the CMU-MOSEI dataset and show that it outperforms many baselines and state-of-the-art methods for sentiment classification and emotion recognition.
DOI: 10.48550/arxiv.1806.02923
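The abstract describes two ideas: fusing modalities within each segment through a tensor (outer-product) interaction, and then modeling the sequence of segments in a video to capture inter-segment context. The following is a minimal sketch of that general scheme, not the authors' implementation; the module names, dimensions, and the choice of a GRU for the segment sequence are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RelationalTensorSketch(nn.Module):
    """Illustrative sketch: intra-segment tensor fusion + inter-segment sequence model."""

    def __init__(self, d_text, d_audio, d_video, d_hidden, n_classes):
        super().__init__()
        # Project the flattened three-way tensor interaction of each segment.
        self.proj = nn.Linear(d_text * d_audio * d_video, d_hidden)
        # Model the ordered sequence of segments in a video (assumed GRU).
        self.seq = nn.GRU(d_hidden, d_hidden, batch_first=True)
        self.clf = nn.Linear(d_hidden, n_classes)

    def forward(self, text, audio, video):
        # Each modality tensor: (batch, n_segments, d_modality).
        b, s, _ = text.shape
        # Intra-segment inter-modal interaction: outer product of the three
        # modality vectors per segment -> (b, s, d_text, d_audio, d_video).
        fused = torch.einsum("bst,bsa,bsv->bstav", text, audio, video)
        fused = self.proj(fused.reshape(b, s, -1))
        # Inter-segment modelling over the video's segment sequence.
        out, _ = self.seq(fused)
        return self.clf(out)  # one sentiment/emotion prediction per segment


if __name__ == "__main__":
    model = RelationalTensorSketch(d_text=8, d_audio=6, d_video=5,
                                   d_hidden=32, n_classes=2)
    t, a, v = torch.randn(4, 10, 8), torch.randn(4, 10, 6), torch.randn(4, 10, 5)
    print(model(t, a, v).shape)  # torch.Size([4, 10, 2])
```

The outer-product fusion mirrors the tensor-style interaction the abstract attributes to Zadeh et al. (2017), while the recurrent layer over segments stands in for the inter-segment modeling; the paper's actual architecture and feature extraction (e.g., the knowledge-based polarity scores) are not reproduced here.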