Arrows of Time for Large Language Models

We study the probabilistic modeling performed by Autoregressive Large Language Models (LLMs) through the angle of time directionality, addressing a question first raised in (Shannon, 1951). For large enough models, we empirically find a time asymmetry in their ability to learn natural language: a difference in the average log-perplexity when trying to predict the next token versus when trying to predict the previous one. This difference is at the same time subtle and very consistent across various modalities (language, model size, training time, ...). Theoretically, this is surprising: from an information-theoretic point of view, there should be no such difference. We provide a theoretical framework to explain how such an asymmetry can appear from sparsity and computational complexity considerations, and outline a number of perspectives opened by our results.
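
The quantity compared in the abstract is the average log-perplexity (per-token cross-entropy) of a model predicting the next token versus one predicting the previous token. Below is a minimal sketch of how such a comparison could be set up, assuming PyTorch and the Hugging Face transformers package are available; the pretrained GPT-2 checkpoint and the reversed-sequence trick are illustrative stand-ins, since the paper trains separate forward and backward models rather than reusing one forward model.

```python
# Illustrative sketch (not the authors' code): comparing the average
# per-token cross-entropy of next-token vs. previous-token prediction.
# NOTE: GPT-2 was only trained forward; scoring a reversed sequence with
# it merely demonstrates the measurement, not the paper's protocol of
# training dedicated forward and backward models.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def avg_log_perplexity(token_ids: torch.Tensor) -> float:
    """Average per-token cross-entropy (in nats) on `token_ids`."""
    with torch.no_grad():
        out = model(token_ids, labels=token_ids)  # labels are shifted internally
    return out.loss.item()

text = "The arrow of time is visible in the statistics of natural language."
ids = tokenizer(text, return_tensors="pt").input_ids

forward_ce = avg_log_perplexity(ids)
backward_ce = avg_log_perplexity(ids.flip(dims=[1]))  # reversed token order

# The paper reports that the backward - forward gap is small but
# consistently positive for large enough models.
print(f"forward CE:    {forward_ce:.3f} nats/token")
print(f"backward CE:   {backward_ce:.3f} nats/token")
print(f"gap (bw - fw): {backward_ce - forward_ce:.3f}")
```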

Bibliographic Details

Main Authors: Papadopoulos, Vassilis; Wenger, Jérémie; Hongler, Clément
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning
Online Access: Order full text
creator Papadopoulos, Vassilis ; Wenger, Jérémie ; Hongler, Clément
doi_str_mv 10.48550/arxiv.2401.17505
format Article
creationdate 2024-01-30
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2401.17505
language eng
recordid cdi_arxiv_primary_2401_17505
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
title Arrows of Time for Large Language Models
url https://arxiv.org/abs/2401.17505