Training Large Language Models to Reason in a Continuous Latent Space

Authors: Hao, Shibo; Sukhbaatar, Sainbayar; Su, DiJia; Li, Xian; Hu, Zhiting; Weston, Jason; Tian, Yuandong
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
DOI: 10.48550/arxiv.2412.06769
Source: arXiv.org
Online access: https://arxiv.org/abs/2412.06769

Abstract:
Large language models (LLMs) are restricted to reasoning in the "language space", where they typically express the reasoning process as a chain of thought (CoT) to solve a complex reasoning problem. However, we argue that language space may not always be optimal for reasoning. For example, most word tokens serve textual coherence rather than reasoning, while some critical tokens require complex planning and pose huge challenges to LLMs. To explore the potential of LLM reasoning in an unrestricted latent space instead of natural language, we introduce a new paradigm, Coconut (Chain of Continuous Thought). We utilize the last hidden state of the LLM as a representation of the reasoning state (termed a "continuous thought"). Rather than decoding this into a word token, we feed it back to the LLM as the subsequent input embedding, directly in the continuous space. Experiments show that Coconut can effectively augment the LLM on several reasoning tasks. This novel latent reasoning paradigm leads to emergent advanced reasoning patterns: a continuous thought can encode multiple alternative next reasoning steps, allowing the model to perform a breadth-first search (BFS) over the problem rather than prematurely committing to a single deterministic path as CoT does. Coconut outperforms CoT on certain logical reasoning tasks that require substantial backtracking during planning, while using fewer thinking tokens at inference. These findings demonstrate the promise of latent reasoning and offer valuable insights for future research.
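
The mechanism in the abstract, taking the last hidden state as a "continuous thought" and feeding it back as the next input embedding instead of decoding it to a token, can be sketched in a few lines. Below is a minimal toy in PyTorch; the tiny stand-in encoder, the dimensions, and all variable names are illustrative assumptions, not the authors' implementation (which operates on a pretrained causal LLM).

# Minimal sketch of the continuous-thought feedback loop. Assumptions
# throughout: a toy Transformer stands in for the LLM; sizes are arbitrary.
import torch
import torch.nn as nn

hidden, vocab = 64, 100                        # illustrative dimensions
embed = nn.Embedding(vocab, hidden)            # token id -> input embedding
body = nn.TransformerEncoder(                  # stand-in for the LLM body
    nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
    num_layers=2,
)
lm_head = nn.Linear(hidden, vocab)             # hidden state -> token logits

prompt = torch.randint(0, vocab, (1, 5))       # toy prompt token ids
inputs = embed(prompt)                         # (batch, seq, hidden)

for _ in range(3):                             # three latent reasoning steps
    states = body(inputs)                      # hidden states for the sequence
    thought = states[:, -1:, :]                # last hidden state = continuous thought
    # Key step: no decoding into a word token; the hidden state itself
    # becomes the next input embedding, staying in continuous space.
    inputs = torch.cat([inputs, thought], dim=1)

logits = lm_head(body(inputs)[:, -1, :])       # finally decode back to language space
print(logits.shape)                            # torch.Size([1, 100])

The sketch shows only the forward feedback loop; it omits causal masking, KV caching, and the training procedure that teaches the model to make use of these latent steps.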