One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention
creator | Mahankali, Arvind ; Hashimoto, Tatsunori B ; Ma, Tengyu |
description | Recent works have empirically analyzed in-context learning and shown that transformers trained on synthetic linear regression tasks can learn to implement ridge regression, which is the Bayes-optimal predictor, given sufficient capacity [Akyürek et al., 2023], while one-layer transformers with linear self-attention and no MLP layer will learn to implement one step of gradient descent (GD) on a least-squares linear regression objective [von Oswald et al., 2022]. However, the theory behind these observations remains poorly understood. We theoretically study transformers with a single layer of linear self-attention, trained on synthetic noisy linear regression data. First, we mathematically show that when the covariates are drawn from a standard Gaussian distribution, the one-layer transformer that minimizes the pre-training loss will implement a single step of GD on the least-squares linear regression objective. Then, we find that changing the distribution of the covariates and weight vector to a non-isotropic Gaussian distribution has a strong impact on the learned algorithm: the global minimizer of the pre-training loss now implements a single step of pre-conditioned GD. However, if only the distribution of the responses is changed, this does not have a large effect on the learned algorithm: even when the response comes from a more general family of nonlinear functions, the global minimizer of the pre-training loss still implements a single step of GD on a least-squares linear regression objective. |
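The predictor at the center of the abstract, one step of gradient descent from zero initialization on the in-context least-squares objective, is easy to make concrete. The sketch below is not the paper's code: the dimensions, noise level, and step size `eta` are illustrative assumptions. It computes the one-step-GD prediction for a single synthetic noisy linear regression task of the kind described above.

```python
# Minimal NumPy sketch (illustrative, not from the paper): one step of GD from
# w_0 = 0 on the least-squares objective L(w) = (1/(2n)) * ||X w - y||^2.
# The gradient at w_0 = 0 is -(1/n) * X^T y, so the one-step iterate is
# w_1 = (eta / n) * X^T y, and the in-context prediction is x_query^T w_1.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20                                 # covariate dimension, number of in-context examples
w_star = rng.normal(size=d)                  # ground-truth weight vector for this task
X = rng.normal(size=(n, d))                  # covariates drawn from a standard Gaussian
y = X @ w_star + 0.1 * rng.normal(size=n)    # noisy responses
x_query = rng.normal(size=d)                 # query point to predict on

eta = 1.0                                    # illustrative step size; the paper derives an optimal choice
w_1 = (eta / n) * X.T @ y                    # one GD step from zero initialization
y_hat = x_query @ w_1                        # in-context prediction for the query
print(f"prediction: {y_hat:.3f}, noiseless target: {x_query @ w_star:.3f}")
```

Per the abstract, for standard Gaussian covariates the loss-minimizing one-layer linear self-attention transformer produces predictions of exactly this one-step-GD form; in the non-isotropic case the scalar step size is in effect replaced by a pre-conditioning matrix (the "pre-conditioned GD" the abstract refers to).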
doi_str_mv | 10.48550/arxiv.2307.03576 |
format | Article |
identifier | DOI: 10.48550/arxiv.2307.03576 |
language | eng |
recordid | cdi_arxiv_primary_2307_03576 |
source | arXiv.org |
subjects | Computer Science - Learning |
title | One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention |
url | https://arxiv.org/abs/2307.03576 |