Multitask Representation Learning for Multimodal Estimation of Depression Level

Detailed Description

Bibliographic Details
Published in: IEEE Intelligent Systems, 2019-09, Vol. 34 (5), p. 45-52
Authors: Qureshi, Syed Arbaaz; Saha, Sriparna; Hasanuzzaman, Mohammed; Dias, Gael; Cambria, Erik
Format: Article
Language: English
Online Access: Order full text
Description
Abstract: We propose a novel multitask-learning, attention-based deep neural network model that facilitates the fusion of various modalities. In particular, we use this network to both regress and classify the level of depression. Acoustic, textual, and visual modalities have been used to train our proposed network. Various experiments have been carried out on the benchmark dataset, namely, the Distress Analysis Interview Corpus - Wizard of Oz. From the results, we empirically justify that a) multitask learning networks co-trained over regression and classification perform better than single-task networks, and b) the fusion of all the modalities gives the most accurate estimation of depression with respect to regression.
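The abstract describes a shared network co-trained on two tasks: regressing a depression score and classifying depression status from fused multimodal features. The sketch below illustrates the general multitask idea only, a shared encoder feeding a regression head and a classification head, with the two losses combined into one objective. All names, dimensions, and the weighting scheme are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W):
    # Shared projection over fused features (ReLU nonlinearity).
    return np.maximum(0.0, x @ W)

def regression_head(h, w):
    # Predicts a continuous depression score (e.g., a PHQ-style scale).
    return h @ w

def classification_head(h, w):
    # Sigmoid probability of the binary "depressed" label.
    return 1.0 / (1.0 + np.exp(-(h @ w)))

def multitask_loss(y_score, y_label, pred_score, pred_prob, alpha=0.5):
    # Weighted sum of MSE (regression) and binary cross-entropy
    # (classification); alpha is an assumed trade-off hyperparameter.
    mse = np.mean((y_score - pred_score) ** 2)
    eps = 1e-9
    bce = -np.mean(y_label * np.log(pred_prob + eps)
                   + (1 - y_label) * np.log(1 - pred_prob + eps))
    return alpha * mse + (1 - alpha) * bce

# Toy fused feature vectors standing in for concatenated
# acoustic/textual/visual representations.
x = rng.normal(size=(4, 12))
W = rng.normal(size=(12, 8)) * 0.1
w_reg = rng.normal(size=8) * 0.1
w_cls = rng.normal(size=8) * 0.1

h = shared_encoder(x, W)
loss = multitask_loss(np.array([5.0, 12.0, 3.0, 20.0]),
                      np.array([0, 1, 0, 1]),
                      regression_head(h, w_reg),
                      classification_head(h, w_cls))
print(loss)
```

Because both heads backpropagate through the same encoder, gradients from each task regularize the shared representation, which is the usual rationale for the co-training benefit the abstract reports.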
ISSN: 1541-1672, 1941-1294
DOI: 10.1109/MIS.2019.2925204