A Finite Time Analysis of Temporal Difference Learning with Linear Function Approximation

Temporal difference learning (TD) is a simple iterative algorithm widely used for policy evaluation in Markov reward processes. Bhandari et al. prove finite time convergence rates for TD learning with linear function approximation. The analysis rests on a key insight that establishes a rigorous connection between TD updates and those of online gradient descent. In a model where observations are corrupted by i.i.d. noise, convergence results for TD follow by essentially mirroring the analysis for online gradient descent. Using an information-theoretic technique, the authors also provide results for the case when TD is applied to a single Markovian data stream, where the algorithm's updates can be severely biased. Their analysis extends seamlessly to the study of TD learning with eligibility traces and to Q-learning for high-dimensional optimal stopping problems.

Temporal difference learning (TD) is a simple iterative algorithm used to estimate the value function corresponding to a given policy in a Markov decision process. Although TD is one of the most widely used algorithms in reinforcement learning, its theoretical analysis has proved challenging, and few guarantees on its statistical efficiency are available. In this work, we provide a simple and explicit finite time analysis of temporal difference learning with linear function approximation. Except for a few key insights, our analysis mirrors standard techniques for analyzing stochastic gradient descent algorithms and therefore inherits the simplicity and elegance of that literature. The final sections of the paper show how all of our main results extend to the study of TD learning with eligibility traces, known as TD(λ), and to Q-learning applied in high-dimensional optimal stopping problems.
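To make the update rule in the abstract concrete, the following is a minimal illustrative sketch of TD(λ) with linear value-function approximation, V_theta(s) = theta · phi(s), run on a small synthetic Markov reward process. The environment, feature map, and step-size schedule are stand-in assumptions for illustration, not the paper's setup; setting lam = 0 recovers the plain TD(0) update.

import numpy as np

# Illustrative sketch only: a random synthetic Markov reward process,
# not the paper's experiments.
rng = np.random.default_rng(0)

n_states, d = 10, 4
phi = rng.normal(size=(n_states, d))                  # fixed feature vectors phi(s)
P = rng.dirichlet(np.ones(n_states), size=n_states)   # row-stochastic transition matrix
reward = rng.normal(size=n_states)                    # expected reward r(s)
gamma, lam = 0.9, 0.5                                 # discount factor, trace-decay parameter

theta = np.zeros(d)   # weight vector for the linear value function
z = np.zeros(d)       # eligibility trace
s = 0
for t in range(1, 50_001):
    s_next = rng.choice(n_states, p=P[s])
    # TD error: delta_t = r(s) + gamma * V(s') - V(s)
    delta = reward[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
    # accumulate the eligibility trace, then take a semi-gradient step
    z = gamma * lam * z + phi[s]
    alpha = 1.0 / np.sqrt(t)   # decaying step size, as in O(1/sqrt(T)) analyses
    theta += alpha * delta * z
    s = s_next

print("learned weight vector:", theta)

The semi-gradient step highlights the connection the paper exploits: the TD update has the same form as an online gradient-descent step, except that the "gradient" direction delta * z is not the gradient of any fixed loss.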

Bibliographic Details

Published in: Operations research, 2021-05, Vol. 69 (3), p. 950-973
Main Authors: Bhandari, Jalaj; Russo, Daniel; Singal, Raghav
Format: Article
Language: English
Publisher: INFORMS, Linthicum
Online Access: Full text
DOI: 10.1287/opre.2020.2024
ISSN: 0030-364X
EISSN: 1526-5463
Source: INFORMS PubsOnLine
Subjects:
Algorithms
Analysis
Approximation
decision analysis: sequential
dynamic programming/optimal control
finite time analysis
Iterative algorithms
Iterative methods
Linear functions
Machine learning
Machine Learning and Data Science
Markov analysis
Markov processes
Mathematical analysis
Operations research
reinforcement learning
stochastic gradient descent
Studies
temporal difference learning