A Comparative Study on Language Models for Task-Oriented Dialogue Systems

2021 8th International Conference on Advanced Informatics: Concepts, Theory and Applications (ICAICTA) (pp. 1-5). IEEE.

The recent development of language models has shown promising results, achieving state-of-the-art performance on various natural language tasks by fine-tuning pretrained models. In task-oriented dialogue (ToD) systems, language models can be used for end-to-end training without relying on dialogue state tracking to track the dialogue history; instead, the language models generate responses according to the context given as input. This paper conducts a comparative study to show the effectiveness and strength of using recent pretrained models, such as BART and T5, for fine-tuning on end-to-end ToD systems. The experimental results show substantial performance improvements after language model fine-tuning. The models produce more fluent responses after adding knowledge to the context, which guides the model to avoid hallucination and to generate accurate entities in its responses. Furthermore, we found that BART and T5 outperform GPT-based models in BLEU and F1 scores and achieve state-of-the-art performance in a ToD system.
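
To make the end-to-end setup described in the abstract concrete, the sketch below shows how a pretrained sequence-to-sequence model such as T5 can generate a system response from a flattened dialogue context plus a knowledge snippet. This is a minimal illustration, not the paper's implementation: the sample utterance, the knowledge string, the input format, and the t5-small checkpoint are placeholders, and it assumes the Hugging Face transformers library. In practice the model would first be fine-tuned on (context + knowledge, response) pairs.

```python
# Illustrative sketch only: generate a task-oriented dialogue response with a
# pretrained seq2seq model, flattening dialogue context and knowledge into one
# input string. The context and knowledge strings are hypothetical examples.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # placeholder; the paper fine-tunes BART and T5 checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# No explicit dialogue state tracker: the dialogue history and retrieved
# knowledge are concatenated, and the model generates the next system turn
# end-to-end from that context.
context = "user: I need a cheap Italian restaurant in the centre."
knowledge = "name: Zizzi Cambridge | area: centre | pricerange: cheap"
input_text = f"context: {context} knowledge: {knowledge}"

# Before real use, the model would be fine-tuned on (input_text, response) pairs.
inputs = tokenizer(input_text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```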

Bibliographic Details
Main Authors: Andreas, Vinsen Marselino; Winata, Genta Indra; Purwarianti, Ayu
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online Access: https://arxiv.org/abs/2201.08687
DOI: 10.48550/arxiv.2201.08687
Source: arXiv.org