Autoencoder for Semisupervised Multiple Emotion Detection of Conversation Transcripts
Textual emotion detection is a challenge in computational linguistics and affective computing research, as it involves the discovery of all associated emotions expressed within a given piece of text. It becomes an even more difficult problem when applied to conversation transcripts, as we need to model...
Saved in:
Published in: | IEEE transactions on affective computing, 2021-07, Vol. 12 (3), p. 682-691 |
---|---|
Main authors: | Phan, Duc-Anh; Matsumoto, Yuji; Shindo, Hiroyuki |
Format: | Article |
Language: | eng |
Subjects: | Affective computing; autoencoder; semisupervised learning; multilabel; word2vec |
Online access: | Order full text |
Field | Value |
---|---|
container_end_page | 691 |
container_issue | 3 |
container_start_page | 682 |
container_title | IEEE transactions on affective computing |
container_volume | 12 |
creator | Phan, Duc-Anh; Matsumoto, Yuji; Shindo, Hiroyuki |
description | Textual emotion detection is a challenge in computational linguistics and affective computing research, as it involves the discovery of all associated emotions expressed within a given piece of text. It becomes an even more difficult problem when applied to conversation transcripts, as we need to model the spoken utterances between speakers, keeping in mind the context of the entire conversation. In this paper, we propose a semisupervised multilabel method of predicting emotions from conversation transcripts. The corpus contains conversational quotes extracted from movies. A small number of them are annotated, while the rest are used for unsupervised training. We use the word2vec word-embedding method to build an emotion lexicon from the corpus and to embed the utterances into vector representations. A deep-learning autoencoder is then used to discover the underlying structure of the unsupervised data. We fine-tune the learned model on labeled training data and measure its performance on a test set. The experimental results suggest that the method is effective and only slightly behind human annotators. (See the code sketch after the record fields below.) |
doi_str_mv | 10.1109/TAFFC.2018.2885304 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1949-3045 |
ispartof | IEEE transactions on affective computing, 2021-07, Vol.12 (3), p.682-691 |
issn | 1949-3045 (ISSN); 1949-3045 (EISSN) |
language | eng |
recordid | cdi_ieee_primary_8570758 |
source | IEEE Electronic Library (IEL) |
subjects | Affective computing; autoencoder; Context modeling; Correlation; Data models; Emotion recognition; Emotions; Linguistics; Motion pictures; multilabel; Neural networks; semisupervised learning; Social network services; Training; Training data; word2vec |
title | Autoencoder for Semisupervised Multiple Emotion Detection of Conversation Transcripts |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T07%3A27%3A39IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Autoencoder%20for%20Semisupervised%20Multiple%20Emotion%20Detection%20of%20Conversation%20Transcripts&rft.jtitle=IEEE%20transactions%20on%20affective%20computing&rft.au=Phan,%20Duc-Anh&rft.date=2021-07-01&rft.volume=12&rft.issue=3&rft.spage=682&rft.epage=691&rft.pages=682-691&rft.issn=1949-3045&rft.eissn=1949-3045&rft.coden=ITACBQ&rft_id=info:doi/10.1109/TAFFC.2018.2885304&rft_dat=%3Cproquest_RIE%3E2568777277%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2568777277&rft_id=info:pmid/&rft_ieee_id=8570758&rfr_iscdi=true |
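The description field above outlines a three-step pipeline: word2vec utterance embeddings, unsupervised autoencoder pretraining on the unlabeled quotes, and supervised fine-tuning for multilabel emotion prediction. The following is a minimal sketch of that pipeline, assuming mean-pooled word vectors, illustrative layer sizes, a toy two-emotion label set, and gensim/Keras as stand-ins for tooling the record does not specify; it is not the paper's actual implementation.

```python
# Hedged sketch of the semisupervised multilabel pipeline described in the
# abstract. All corpus data, layer sizes, and the mean-pooling step are
# illustrative assumptions, not the authors' exact setup.
import numpy as np
from gensim.models import Word2Vec
from tensorflow import keras
from tensorflow.keras import layers

# Toy corpus standing in for the movie-quote transcripts.
unlabeled = [["i", "am", "so", "happy", "today"],
             ["this", "is", "terrible", "news"],
             ["what", "a", "wonderful", "surprise"]]
labeled = [(["i", "am", "so", "happy", "today"], [1, 0]),   # joy, no anger
           (["this", "is", "terrible", "news"], [0, 1])]    # anger, no joy

# 1. Train word2vec on the whole corpus (labeled + unlabeled).
w2v = Word2Vec(sentences=unlabeled + [u for u, _ in labeled],
               vector_size=50, min_count=1, seed=0)

def embed(utterance):
    """Mean-pool word vectors into a fixed-size utterance vector."""
    return np.mean([w2v.wv[w] for w in utterance if w in w2v.wv], axis=0)

X_unlab = np.stack([embed(u) for u in unlabeled])
X_lab = np.stack([embed(u) for u, _ in labeled])
y_lab = np.array([y for _, y in labeled], dtype="float32")

# 2. Pretrain an autoencoder on the unlabeled utterance vectors to
#    discover the underlying structure of the unsupervised data.
inp = keras.Input(shape=(50,))
code = layers.Dense(16, activation="relu")(inp)
recon = layers.Dense(50)(code)
autoencoder = keras.Model(inp, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_unlab, X_unlab, epochs=50, verbose=0)

# 3. Fine-tune: reuse the pretrained encoder and add a sigmoid head so
#    each emotion is predicted independently (multilabel output).
out = layers.Dense(2, activation="sigmoid")(code)
clf = keras.Model(inp, out)
clf.compile(optimizer="adam", loss="binary_crossentropy")
clf.fit(X_lab, y_lab, epochs=50, verbose=0)

print(clf.predict(X_lab).round(2))  # per-emotion probabilities
```

Note that the paper also builds an emotion lexicon from the word2vec space; that step is omitted here for brevity, and the sigmoid head is one common choice for multilabel output rather than a detail confirmed by the record.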