Joint Text and Label Generation for Spoken Language Understanding

Generalization is a central problem in machine learning, especially when data is limited. Using prior information to enforce constraints is a principled way of encouraging generalization. In this work, we propose to leverage the prior information embedded in pretrained language models (LMs) to improve generalization for intent classification and slot labeling tasks with limited training data. Specifically, we extract prior knowledge from a pretrained LM in the form of synthetic data, which encodes the prior implicitly. We fine-tune the LM to generate an augmented language that contains not only the text of an utterance but also encodings of its intent and slot labels. The generated synthetic data can then be used to train a classifier. Since the generated data may contain noise, we recast learning from generated data as learning with noisy labels. We then apply mixout regularization to the classifier and prove its effectiveness at resisting label noise in the generated data. Empirically, our method outperforms the baseline by a large margin.
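To make the "augmented language" idea concrete: each labeled utterance is flattened into a single string so the fine-tuned LM can generate text, intent label, and slot labels jointly, and generated strings can be parsed back into training examples. The sketch below is a minimal illustration under assumed conventions; the special markers (<intent>, <text>), the inline token:slot pairing, and the ATIS-style example are illustrative choices, not the paper's exact serialization scheme.

from typing import List, Tuple

def serialize(intent: str, tokens: List[str], slots: List[str]) -> str:
    """Flatten one labeled example into an augmented-language string."""
    # Attach each token's slot tag inline, e.g. "boston:B-toloc.city_name".
    tagged = " ".join(f"{tok}:{slot}" for tok, slot in zip(tokens, slots))
    return f"<intent> {intent} <text> {tagged}"

def deserialize(line: str) -> Tuple[str, List[str], List[str]]:
    """Parse a generated string back into (intent, tokens, slots)."""
    head, body = line.split("<text>")
    intent = head.replace("<intent>", "").strip()
    tokens, slots = [], []
    for pair in body.split():
        tok, _, slot = pair.rpartition(":")  # split on the last colon
        tokens.append(tok)
        slots.append(slot)
    return intent, tokens, slots

# Round trip on a toy ATIS-style example.
line = serialize("atis_flight", ["flights", "to", "boston"],
                 ["O", "O", "B-toloc.city_name"])
print(line)
print(deserialize(line))

Parsing generated samples back through such an inverse function also gives a natural filter: strings that fail to parse can simply be discarded before the classifier is trained.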

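Because the synthetic examples carry noisy labels, the classifier is trained with mixout regularization, introduced by Lee et al. (2020), which randomly reverts fine-tuned parameters to their pretrained values rather than zeroing them as dropout does. The function below is a minimal sketch of that operation in PyTorch; the function name and the toy tensors are illustrative assumptions, and the paper's actual training setup may differ.

import torch

def mixout(weight: torch.Tensor, pretrained: torch.Tensor,
           p: float = 0.5, training: bool = True) -> torch.Tensor:
    """With probability p, swap each parameter for its pretrained value,
    then rescale so the expectation equals the fine-tuned weight."""
    if not training or p == 0.0:
        return weight
    # mask == 1 keeps the fine-tuned parameter; mask == 0 reverts to pretrained.
    mask = torch.bernoulli(torch.full_like(weight, 1.0 - p))
    mixed = mask * weight + (1.0 - mask) * pretrained
    # Bias correction, analogous to inverted dropout: E[result] == weight.
    return (mixed - p * pretrained) / (1.0 - p)

# Toy usage: mix a fine-tuned weight matrix with its pretrained counterpart.
w_finetuned = torch.randn(4, 4, requires_grad=True)
w_pretrained = torch.randn(4, 4)
print(mixout(w_finetuned, w_pretrained, p=0.3))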

Bibliographic Details
Main authors: Li, Yang; Athiwaratkun, Ben; Santos, Cicero Nogueira dos; Xiang, Bing
Format: Article
Language: English
Subjects: Computer Science - Computation and Language; Computer Science - Learning
Published: 2021-05-11
DOI: 10.48550/arxiv.2105.05052
Source: arXiv.org
Online access: Full text at https://arxiv.org/abs/2105.05052