Multitask Representation Learning for Multimodal Estimation of Depression Level
We propose a novel multitask learning attention-based deep neural network model, which facilitates the fusion of various modalities. In particular, we use this network to both regress and classify the level of depression. Acoustic, textual, and visual modalities have been used to train our proposed network. Various experiments have been carried out on the benchmark dataset, namely, Distress Analysis Interview Corpus - Wizard of Oz. From the results, we empirically justify that a) multitask learning networks cotrained over regression and classification perform better than single-task networks, and b) the fusion of all the modalities gives the most accurate estimation of depression with respect to regression.
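The abstract describes attention-based fusion of acoustic, textual, and visual features, cotrained through a regression head and a classification head. The sketch below illustrates that general shape in PyTorch; every dimension, layer choice, loss weight, and name (e.g., MultitaskDepressionNet, enc_acoustic) is an illustrative assumption, not the authors' published implementation.

```python
# Hypothetical sketch of a multitask, multimodal network of the kind the
# abstract describes. All dimensions and hyperparameters are assumptions.
import torch
import torch.nn as nn

class MultitaskDepressionNet(nn.Module):
    def __init__(self, acoustic_dim=128, text_dim=300, visual_dim=64,
                 hidden_dim=128, num_classes=4):
        super().__init__()
        # One encoder per modality, projecting into a shared space.
        self.enc_acoustic = nn.Sequential(nn.Linear(acoustic_dim, hidden_dim), nn.ReLU())
        self.enc_text = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.enc_visual = nn.Sequential(nn.Linear(visual_dim, hidden_dim), nn.ReLU())
        # Scalar attention score per modality, used for weighted fusion.
        self.attn = nn.Linear(hidden_dim, 1)
        # Two task heads sharing the fused representation.
        self.regressor = nn.Linear(hidden_dim, 1)              # depression score
        self.classifier = nn.Linear(hidden_dim, num_classes)   # severity class

    def forward(self, acoustic, text, visual):
        # Stack modality embeddings: (batch, 3, hidden_dim).
        feats = torch.stack([self.enc_acoustic(acoustic),
                             self.enc_text(text),
                             self.enc_visual(visual)], dim=1)
        # Softmax attention over the three modalities, then weighted sum.
        weights = torch.softmax(self.attn(feats), dim=1)
        fused = (weights * feats).sum(dim=1)
        return self.regressor(fused).squeeze(-1), self.classifier(fused)

# Cotraining: one backward pass through a weighted sum of both task losses.
model = MultitaskDepressionNet()
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()
a, t, v = torch.randn(8, 128), torch.randn(8, 300), torch.randn(8, 64)
score_target = torch.rand(8) * 24          # assumed 0-24 score range
class_target = torch.randint(0, 4, (8,))   # assumed 4 severity bins
score_pred, class_logits = model(a, t, v)
loss = mse(score_pred, score_target) + 0.5 * ce(class_logits, class_target)
loss.backward()
```

The single weighted loss is what makes the two heads cotrained: one optimizer step updates the shared encoders and the attention layer with gradients from both tasks, which is the setup the abstract compares against single-task networks.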
Saved in:
Published in: | IEEE intelligent systems 2019-09, Vol.34 (5), p.45-52 |
---|---|
Main authors: | Qureshi, Syed Arbaaz; Saha, Sriparna; Hasanuzzaman, Mohammed; Dias, Gael; Cambria, Erik |
Format: | Article |
Language: | eng |
Subjects: | Affective computing; Depression; Estimation; Learning systems; Medical conditions |
Online access: | Order full text |
container_end_page | 52 |
---|---|
container_issue | 5 |
container_start_page | 45 |
container_title | IEEE intelligent systems |
container_volume | 34 |
creator | Qureshi, Syed Arbaaz; Saha, Sriparna; Hasanuzzaman, Mohammed; Dias, Gael; Cambria, Erik |
description | We propose a novel multitask learning attention-based deep neural network model, which facilitates the fusion of various modalities. In particular, we use this network to both regress and classify the level of depression. Acoustic, textual, and visual modalities have been used to train our proposed network. Various experiments have been carried out on the benchmark dataset, namely, Distress Analysis Interview Corpus - Wizard of Oz. From the results, we empirically justify that a) multitask learning networks cotrained over regression and classification perform better than single-task networks, and b) the fusion of all the modalities gives the most accurate estimation of depression with respect to regression. |
doi_str_mv | 10.1109/MIS.2019.2925204 |
format | Article |
identifier | ISSN: 1541-1672 |
ispartof | IEEE intelligent systems, 2019-09, Vol.34 (5), p.45-52 |
issn | 1541-1672; 1941-1294 |
language | eng |
recordid | cdi_crossref_primary_10_1109_MIS_2019_2925204 |
source | IEEE Electronic Library (IEL) |
subjects | Affective computing; Depression; Estimation; Learning systems; Medical conditions |
title | Multitask Representation Learning for Multimodal Estimation of Depression Level |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-25T04%3A50%3A30IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multitask%20Representation%20Learning%20for%20Multimodal%20Estimation%20of%20Depression%20Level&rft.jtitle=IEEE%20intelligent%20systems&rft.au=Qureshi,%20Syed%20Arbaaz&rft.date=2019-09-01&rft.volume=34&rft.issue=5&rft.spage=45&rft.epage=52&rft.pages=45-52&rft.issn=1541-1672&rft.eissn=1941-1294&rft.coden=IISYF7&rft_id=info:doi/10.1109/MIS.2019.2925204&rft_dat=%3Cproquest_RIE%3E2317729674%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2317729674&rft_id=info:pmid/&rft_ieee_id=8910655&rfr_iscdi=true |