Generating More Interesting Responses in Neural Conversation Models with Distributional Constraints


Detailed description

Saved in:
Bibliographic details
Main authors: Baheti, Ashutosh; Ritter, Alan; Li, Jiwei; Dolan, Bill
Format: Article
Language: eng
Subjects:
Online access: Order full text
container_end_page
container_issue
container_start_page
container_title
container_volume
creator Baheti, Ashutosh
Ritter, Alan
Li, Jiwei
Dolan, Bill
description Neural conversation models tend to generate safe, generic responses for most inputs. This is due to the limitations of likelihood-based decoding objectives in generation tasks with diverse outputs, such as conversation. To address this challenge, we propose a simple yet effective approach for incorporating side information in the form of distributional constraints over the generated responses. We propose two constraints that help generate more content rich responses that are based on a model of syntax and topics (Griffiths et al., 2005) and semantic similarity (Arora et al., 2016). We evaluate our approach against a variety of competitive baselines, using both automatic metrics and human judgments, showing that our proposed approach generates responses that are much less generic without sacrificing plausibility. A working demo of our code can be found at https://github.com/abaheti95/DC-NeuralConversation.
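The abstract describes re-ranking generated responses under distributional constraints (a topic model and a semantic-similarity term) added to the likelihood objective. As a rough illustration only, and not the authors' implementation (their constraints are built on the Griffiths et al. syntax/topic model and Arora et al. sentence embeddings), the following sketch re-scores candidate responses with log-likelihood plus a similarity bonus; `rescore`, `alpha`, and the bag-of-words cosine are all illustrative stand-ins:

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    num = sum(a[w] * b[w] for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def rescore(candidates, source, alpha=1.0):
    """Re-rank (response, log_likelihood) pairs by likelihood plus a
    similarity-to-source bonus standing in for the paper's constraints.
    Higher alpha pushes the decoder away from generic responses."""
    src = Counter(source.lower().split())
    scored = []
    for text, loglik in candidates:
        bonus = cosine(Counter(text.lower().split()), src)
        scored.append((loglik + alpha * bonus, text))
    return [text for _, text in sorted(scored, reverse=True)]

candidates = [
    ("i don't know", -1.0),                             # generic, high likelihood
    ("the weather in seattle is rainy today", -2.0),    # content-rich, lower likelihood
]
print(rescore(candidates, "how is the weather in seattle", alpha=2.0))
```

With the bonus switched off (`alpha=0.0`) the generic response wins on likelihood alone; a large enough `alpha` lets the content-rich candidate overtake it, which is the qualitative effect the abstract reports.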
doi_str_mv 10.48550/arxiv.1809.01215
format Article
creationdate 2018-09-04
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1809.01215
ispartof
issn
language eng
recordid cdi_arxiv_primary_1809_01215
source arXiv.org
subjects Computer Science - Computation and Language
title Generating More Interesting Responses in Neural Conversation Models with Distributional Constraints
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T04%3A37%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Generating%20More%20Interesting%20Responses%20in%20Neural%20Conversation%20Models%20with%20Distributional%20Constraints&rft.au=Baheti,%20Ashutosh&rft.date=2018-09-04&rft_id=info:doi/10.48550/arxiv.1809.01215&rft_dat=%3Carxiv_GOX%3E1809_01215%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true