CLLMs: Consistency Large Language Models

Parallel decoding methods such as Jacobi decoding show promise for more efficient LLM inference, as they break the sequential nature of the LLM decoding process and transform it into parallelizable computation. However, in practice, Jacobi decoding achieves little speedup compared to traditional autoregressive (AR) decoding, primarily because it seldom accurately predicts more than one token in a single fixed-point iteration step. To address this, we develop a new approach aimed at realizing fast convergence from any state to the fixed point on a Jacobi trajectory. This is accomplished by refining the target LLM to consistently predict the fixed point given any state as input. Extensive experiments demonstrate the effectiveness of our method, showing 2.4$\times$ to 3.4$\times$ improvements in generation speed while preserving generation quality across both domain-specific and open-domain benchmarks.
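
For readers unfamiliar with the technique, Jacobi decoding treats the next block of tokens as the unknowns of a fixed-point iteration: every position in the block is re-predicted in parallel at each step until the block stops changing. The following is a minimal illustrative sketch in Python, assuming a Hugging Face-style causal LM whose forward pass returns per-position logits; the function name, n_tokens, and pad_id are hypothetical and not taken from the paper.

import torch

def jacobi_decode(model, prefix_ids, n_tokens, max_iters=50, pad_id=0):
    # Start from an arbitrary guess for the next n_tokens positions.
    guess = torch.full((n_tokens,), pad_id, dtype=torch.long)
    for _ in range(max_iters):
        seq = torch.cat([prefix_ids, guess])
        with torch.no_grad():
            logits = model(seq.unsqueeze(0)).logits[0]  # [len(seq), vocab]
        # Re-predict every block position in parallel: position i is conditioned
        # on the prefix plus the current (possibly wrong) guesses before it, so
        # the whole block costs one forward pass per iteration.
        new_guess = logits[len(prefix_ids) - 1 : -1].argmax(dim=-1)
        if torch.equal(new_guess, guess):  # fixed point reached
            return new_guess
        guess = new_guess
    return guess

In the worst case each iteration fixes only one additional token, which matches autoregressive decoding; the abstract's point is that vanilla Jacobi decoding rarely does much better than this, motivating the consistency refinement.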

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Kou, Siqi; Hu, Lanxiang; He, Zhezhi; Deng, Zhijie; Zhang, Hao
Format: Article
Language: eng
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language
Online Access: Order full text
creator Kou, Siqi
Hu, Lanxiang
He, Zhezhi
Deng, Zhijie
Zhang, Hao
description Parallel decoding methods such as Jacobi decoding show promise for more efficient LLM inference, as they break the sequential nature of the LLM decoding process and transform it into parallelizable computation. However, in practice, Jacobi decoding achieves little speedup compared to traditional autoregressive (AR) decoding, primarily because it seldom accurately predicts more than one token in a single fixed-point iteration step. To address this, we develop a new approach aimed at realizing fast convergence from any state to the fixed point on a Jacobi trajectory. This is accomplished by refining the target LLM to consistently predict the fixed point given any state as input. Extensive experiments demonstrate the effectiveness of our method, showing 2.4$\times$ to 3.4$\times$ improvements in generation speed while preserving generation quality across both domain-specific and open-domain benchmarks.
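
The refinement described above, training the target LLM to map any intermediate state on a Jacobi trajectory directly to its fixed point, can be pictured as a consistency-style objective. The sketch below is only an assumption about the shape of such a loss, not the paper's exact training recipe; consistency_loss and its arguments are hypothetical names.

import torch
import torch.nn.functional as F

def consistency_loss(model, prefix_ids, state_ids, fixed_point_ids):
    # prefix_ids:      [prefix_len] prompt tokens
    # state_ids:       [n] an intermediate (partly incorrect) Jacobi guess
    # fixed_point_ids: [n] the converged fixed point of that trajectory
    seq = torch.cat([prefix_ids, state_ids]).unsqueeze(0)
    logits = model(seq).logits[0]                    # [seq_len, vocab]
    # Predictions for the n block positions, each conditioned on the possibly
    # wrong tokens of the intermediate state that precede it.
    block_logits = logits[len(prefix_ids) - 1 : -1]  # [n, vocab]
    # Push every position toward its fixed-point token, so the model learns to
    # jump to the fixed point from any state on the trajectory.
    return F.cross_entropy(block_logits, fixed_point_ids)

In a full training setup one would presumably pair such a term with a standard next-token prediction loss, which is consistent with the abstract's claim that generation quality is preserved.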
doi_str_mv 10.48550/arxiv.2403.00835
format Article
date 2024-02-28
rights http://creativecommons.org/publicdomain/zero/1.0 (open access, free to read)
identifier DOI: 10.48550/arxiv.2403.00835
language eng
recordid cdi_arxiv_primary_2403_00835
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
title CLLMs: Consistency Large Language Models
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-20T05%3A39%3A13IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=CLLMs:%20Consistency%20Large%20Language%20Models&rft.au=Kou,%20Siqi&rft.date=2024-02-28&rft_id=info:doi/10.48550/arxiv.2403.00835&rft_dat=%3Carxiv_GOX%3E2403_00835%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true