Improving Open-Ended Text Generation via Adaptive Decoding

Current language models decode text token by token according to a probability distribution, and determining appropriate candidates for the next token is crucial to generation quality. This study introduces adaptive decoding, a mechanism that enables language models to dynamically determine a sensible candidate set during generation. Specifically, we introduce an entropy-based metric called confidence and conceptualize determining the optimal candidate set as a confidence-increasing process. The rationality of including a token in the candidate set is assessed by the increment in confidence it yields. Experimental results show that our method balances diversity and coherence well, and human evaluation shows that it generates human-preferred text. Additionally, our method can potentially improve the reasoning ability of language models.
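
The confidence-increasing selection described in the abstract lends itself to a short sketch. The snippet below is a minimal, illustrative reading, not the paper's exact algorithm: it assumes the confidence of a candidate set is scored as the probability mass it covers minus the normalized entropy of the renormalized set, and it grows the set greedily while that score keeps rising. The function name `adaptive_candidate_set` and the `eps` stopping tolerance are hypothetical.

```python
import numpy as np

def adaptive_candidate_set(probs, eps=1e-3):
    """Grow a candidate set greedily while an assumed confidence score rises.

    probs: 1-D array, the model's next-token distribution.
    Confidence here = covered probability mass minus the normalized entropy
    of the renormalized set (an assumption for illustration; the paper's
    exact metric may differ).
    """
    order = np.argsort(probs)[::-1]  # consider tokens from most to least probable
    candidates, best_conf = [], -np.inf
    for idx in order:
        trial = candidates + [int(idx)]
        p = probs[trial]
        q = p / p.sum()  # renormalize over the trial set
        entropy = -(q * np.log(q + 1e-12)).sum()
        norm_entropy = entropy / np.log(len(trial)) if len(trial) > 1 else 0.0
        conf = p.sum() - norm_entropy
        if conf <= best_conf + eps:  # stop once adding a token no longer raises confidence
            break
        best_conf, candidates = conf, trial
    return np.array(candidates)

# Usage: sample the next token from the renormalized adaptive candidate set.
rng = np.random.default_rng(0)
logits = rng.normal(size=50)
probs = np.exp(logits) / np.exp(logits).sum()
cands = adaptive_candidate_set(probs)
next_token = rng.choice(cands, p=probs[cands] / probs[cands].sum())
```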

Bibliographic Details
Main authors: Zhu, Wenhong; Hao, Hongkun; He, Zhiwei; Ai, Yiming; Wang, Rui
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online access: https://arxiv.org/abs/2402.18223
DOI: 10.48550/arxiv.2402.18223
Source: arXiv.org