Joint Turn and Dialogue level User Satisfaction Estimation on Multi-Domain Conversations

Dialogue level quality estimation is vital for optimizing data driven dialogue management. Current automated methods to estimate turn and dialogue level user satisfaction employ hand-crafted features and rely on complex annotation schemes, which reduce the generalizability of the trained models. We propose a novel user satisfaction estimation approach which minimizes an adaptive multi-task loss function in order to jointly predict turn-level Response Quality labels provided by experts and explicit dialogue-level ratings provided by end users. The proposed BiLSTM based deep neural net model automatically weighs each turn's contribution towards the estimated dialogue-level rating, implicitly encodes temporal dependencies, and removes the need to hand-craft features. On dialogues sampled from 28 Alexa domains, two dialogue systems and three user groups, the joint dialogue-level satisfaction estimation model achieved up to an absolute 27% (0.43->0.70) and 7% (0.63->0.70) improvement in linear correlation performance over baseline deep neural net and benchmark Gradient boosting regression models, respectively.

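The abstract describes the approach only at a high level. The sketch below is an illustrative reconstruction, not the authors' released code: it assumes PyTorch, arbitrary layer sizes, a softmax attention layer for the per-turn contribution weights, and uncertainty-based weighting for the adaptive multi-task loss. All class, variable, and parameter names are hypothetical.

```python
# Minimal sketch (assumptions noted above) of a joint turn- and dialogue-level
# satisfaction estimator: a BiLSTM over per-turn feature vectors, a turn-level
# Response Quality head, an attention layer that weighs each turn's contribution
# to the dialogue-level rating, and an adaptive two-task loss.
import torch
import torch.nn as nn


class JointSatisfactionEstimator(nn.Module):
    def __init__(self, turn_feat_dim=64, hidden_dim=128):
        super().__init__()
        self.bilstm = nn.LSTM(turn_feat_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.turn_head = nn.Linear(2 * hidden_dim, 1)      # turn-level Response Quality
        self.attn = nn.Linear(2 * hidden_dim, 1)           # per-turn contribution weights
        self.dialogue_head = nn.Linear(2 * hidden_dim, 1)  # dialogue-level rating
        # Learnable log-variances for adaptive task weighting (an assumption;
        # the abstract only states that the multi-task loss is adaptive).
        self.log_var_turn = nn.Parameter(torch.zeros(1))
        self.log_var_dial = nn.Parameter(torch.zeros(1))

    def forward(self, turn_feats):
        # turn_feats: (batch, num_turns, turn_feat_dim)
        states, _ = self.bilstm(turn_feats)                 # (batch, turns, 2*hidden)
        turn_scores = self.turn_head(states).squeeze(-1)    # (batch, turns)
        weights = torch.softmax(self.attn(states), dim=1)   # (batch, turns, 1)
        dialogue_repr = (weights * states).sum(dim=1)       # weighted turn summary
        dialogue_score = self.dialogue_head(dialogue_repr).squeeze(-1)
        return turn_scores, dialogue_score

    def loss(self, turn_scores, dialogue_score, turn_labels, dialogue_labels):
        mse = nn.functional.mse_loss
        l_turn = mse(turn_scores, turn_labels)
        l_dial = mse(dialogue_score, dialogue_labels)
        # Uncertainty-based adaptive weighting of the two tasks.
        return (torch.exp(-self.log_var_turn) * l_turn + self.log_var_turn
                + torch.exp(-self.log_var_dial) * l_dial + self.log_var_dial)


# Tiny usage example with random data.
model = JointSatisfactionEstimator()
x = torch.randn(4, 10, 64)   # 4 dialogues, 10 turns each
turn_y = torch.rand(4, 10)   # expert-provided turn-level Response Quality labels
dial_y = torch.rand(4)       # explicit user-provided dialogue-level ratings
t_pred, d_pred = model(x)
print(model.loss(t_pred, d_pred, turn_y, dial_y))
```

In this sketch the attention weights make each turn's contribution to the dialogue-level rating explicit, while the two learnable log-variance terms rebalance the turn-level and dialogue-level losses during training, mirroring the adaptive multi-task objective described in the abstract.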
Bibliographic Details
Published in: arXiv.org, 2020-10
Main authors: Bodigutla, Praveen Kumar; Tiwari, Aditya; Josep Valls Vargas; Polymenakos, Lazaros; Matsoukas, Spyros
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Annotations; Domains; End users; Regression models; User groups; User satisfaction
Online access: Full text