Orchestrated Value Mapping for Reinforcement Learning
creator | Fatemi, Mehdi; Tavakoli, Arash
description | We present a general convergent class of reinforcement learning algorithms that is founded on two distinct principles: (1) mapping value estimates to a different space using arbitrary functions from a broad class, and (2) linearly decomposing the reward signal into multiple channels. The first principle enables incorporating specific properties into the value estimator that can enhance learning. The second principle, on the other hand, allows for the value function to be represented as a composition of multiple utility functions. This can be leveraged for various purposes, e.g. dealing with highly varying reward scales, incorporating a priori knowledge about the sources of reward, and ensemble learning. Combining the two principles yields a general blueprint for instantiating convergent algorithms by orchestrating diverse mapping functions over multiple reward channels. This blueprint generalizes and subsumes algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. In addition, our convergence proof for this general class relaxes certain required assumptions in some of these algorithms. Based on our theory, we discuss several interesting configurations as special cases. Finally, to illustrate the potential of the design space that our theory opens up, we instantiate a particular algorithm and evaluate its performance on the Atari suite.
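The abstract describes two mechanisms that can be combined: mapping per-channel value estimates into a different space, and linearly decomposing the reward across channels. The snippet below is a minimal, hypothetical tabular sketch of how such a combination might look. The specific maps, channel structure, and update rule (identity and shifted-log channels, greedy action taken with respect to the summed inverse-mapped values) are illustrative assumptions for this sketch, not the paper's exact algorithm, which is stated for a general class of mapping functions with its own convergence conditions.

```python
import numpy as np

# Hypothetical tabular sketch (not the paper's exact method): each reward channel j
# keeps a table of *mapped* values Q_tilde_j = f_j(Q_j), and the overall action value
# is recovered by summing the inverse-mapped channel estimates.

n_states, n_actions = 10, 4
gamma, alpha = 0.99, 0.1

# Two illustrative channels: an identity map (plain Q-Learning-style updates) and a
# shifted log map (in the spirit of Log Q-Learning). The shift c keeps the log defined
# at zero; this log channel further assumes its decomposed rewards are nonnegative.
c = 1e-2
channels = [
    {"f": lambda q: q,             "f_inv": lambda v: v},
    {"f": lambda q: np.log(q + c), "f_inv": lambda v: np.exp(v) - c},
]
Q_tilde = [np.zeros((n_states, n_actions)) for _ in channels]  # one mapped table per channel

def combined_q(s):
    """Overall action values at state s: sum of inverse-mapped channel estimates."""
    return sum(ch["f_inv"](Qt[s]) for ch, Qt in zip(channels, Q_tilde))

def update(s, a, channel_rewards, s_next):
    """One learning step; channel_rewards is the linearly decomposed reward, one entry per channel."""
    a_star = int(np.argmax(combined_q(s_next)))  # greedy action w.r.t. the combined value
    for ch, Qt, r_j in zip(channels, Q_tilde, channel_rewards):
        # Per-channel bootstrapped target, pushed through that channel's mapping.
        target = ch["f"](r_j + gamma * ch["f_inv"](Qt[s_next, a_star]))
        Qt[s, a] += alpha * (target - Qt[s, a])
```

For instance, calling `update(s, a, [r_env, r_shaping], s_next)` with the environment reward on the identity channel and a nonnegative shaping signal on the log channel would learn each component in its own mapped space while acting greedily with respect to their combined value.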
doi_str_mv | 10.48550/arxiv.2203.07171 |
format | Article |
fullrecord | <record><control><sourceid>arxiv_GOX</sourceid><recordid>TN_cdi_arxiv_primary_2203_07171</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2203_07171</sourcerecordid><originalsourceid>FETCH-LOGICAL-a671-2a0f27fc3deb281546f4a11d89565952f213c9a12216a8da0ac65255b07345263</originalsourceid><addsrcrecordid>eNotzsFKw0AQxvG9eJDaB_DkvkDizmxmNzlKUSukBErxGqabWQ20MWyjtG9vrT39Dx98_JS6B5MXJZF55HTsf3JEY3PjwcOtoiaFTzlMiSfp9DvvvkWveBz74UPHr6TX0g_nBtnLMOlaOA3n6U7dRN4dZH7tTG1enjeLZVY3r2-Lpzpj5yFDNhF9DLaTLZZAhYsFA3RlRY4qwohgQ8WACI7Ljg0HR0i0Nd4WhM7O1MP_7cXdjqnfczq1f_724re_jAo-vg</addsrcrecordid><sourcetype>Open Access Repository</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype></control><display><type>article</type><title>Orchestrated Value Mapping for Reinforcement Learning</title><source>arXiv.org</source><creator>Fatemi, Mehdi ; Tavakoli, Arash</creator><creatorcontrib>Fatemi, Mehdi ; Tavakoli, Arash</creatorcontrib><description>We present a general convergent class of reinforcement learning algorithms
that is founded on two distinct principles: (1) mapping value estimates to a
different space using arbitrary functions from a broad class, and (2) linearly
decomposing the reward signal into multiple channels. The first principle
enables incorporating specific properties into the value estimator that can
enhance learning. The second principle, on the other hand, allows for the value
function to be represented as a composition of multiple utility functions. This
can be leveraged for various purposes, e.g. dealing with highly varying reward
scales, incorporating a priori knowledge about the sources of reward, and
ensemble learning. Combining the two principles yields a general blueprint for
instantiating convergent algorithms by orchestrating diverse mapping functions
over multiple reward channels. This blueprint generalizes and subsumes
algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. In
addition, our convergence proof for this general class relaxes certain required
assumptions in some of these algorithms. Based on our theory, we discuss
several interesting configurations as special cases. Finally, to illustrate the
potential of the design space that our theory opens up, we instantiate a
particular algorithm and evaluate its performance on the Atari suite.</description><identifier>DOI: 10.48550/arxiv.2203.07171</identifier><language>eng</language><subject>Computer Science - Artificial Intelligence ; Computer Science - Learning</subject><creationdate>2022-03</creationdate><rights>http://creativecommons.org/licenses/by/4.0</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>228,230,776,881</link.rule.ids><linktorsrc>$$Uhttps://arxiv.org/abs/2203.07171$$EView_record_in_Cornell_University$$FView_record_in_$$GCornell_University$$Hfree_for_read</linktorsrc><backlink>$$Uhttps://doi.org/10.48550/arXiv.2203.07171$$DView paper in arXiv$$Hfree_for_read</backlink></links><search><creatorcontrib>Fatemi, Mehdi</creatorcontrib><creatorcontrib>Tavakoli, Arash</creatorcontrib><title>Orchestrated Value Mapping for Reinforcement Learning</title><description>We present a general convergent class of reinforcement learning algorithms
that is founded on two distinct principles: (1) mapping value estimates to a
different space using arbitrary functions from a broad class, and (2) linearly
decomposing the reward signal into multiple channels. The first principle
enables incorporating specific properties into the value estimator that can
enhance learning. The second principle, on the other hand, allows for the value
function to be represented as a composition of multiple utility functions. This
can be leveraged for various purposes, e.g. dealing with highly varying reward
scales, incorporating a priori knowledge about the sources of reward, and
ensemble learning. Combining the two principles yields a general blueprint for
instantiating convergent algorithms by orchestrating diverse mapping functions
over multiple reward channels. This blueprint generalizes and subsumes
algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. In
addition, our convergence proof for this general class relaxes certain required
assumptions in some of these algorithms. Based on our theory, we discuss
several interesting configurations as special cases. Finally, to illustrate the
potential of the design space that our theory opens up, we instantiate a
particular algorithm and evaluate its performance on the Atari suite.</description><subject>Computer Science - Artificial Intelligence</subject><subject>Computer Science - Learning</subject><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2022</creationdate><recordtype>article</recordtype><sourceid>GOX</sourceid><recordid>eNotzsFKw0AQxvG9eJDaB_DkvkDizmxmNzlKUSukBErxGqabWQ20MWyjtG9vrT39Dx98_JS6B5MXJZF55HTsf3JEY3PjwcOtoiaFTzlMiSfp9DvvvkWveBz74UPHr6TX0g_nBtnLMOlaOA3n6U7dRN4dZH7tTG1enjeLZVY3r2-Lpzpj5yFDNhF9DLaTLZZAhYsFA3RlRY4qwohgQ8WACI7Ljg0HR0i0Nd4WhM7O1MP_7cXdjqnfczq1f_724re_jAo-vg</recordid><startdate>20220314</startdate><enddate>20220314</enddate><creator>Fatemi, Mehdi</creator><creator>Tavakoli, Arash</creator><scope>AKY</scope><scope>GOX</scope></search><sort><creationdate>20220314</creationdate><title>Orchestrated Value Mapping for Reinforcement Learning</title><author>Fatemi, Mehdi ; Tavakoli, Arash</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-a671-2a0f27fc3deb281546f4a11d89565952f213c9a12216a8da0ac65255b07345263</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2022</creationdate><topic>Computer Science - Artificial Intelligence</topic><topic>Computer Science - Learning</topic><toplevel>online_resources</toplevel><creatorcontrib>Fatemi, Mehdi</creatorcontrib><creatorcontrib>Tavakoli, Arash</creatorcontrib><collection>arXiv Computer Science</collection><collection>arXiv.org</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Fatemi, Mehdi</au><au>Tavakoli, Arash</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Orchestrated Value Mapping for Reinforcement Learning</atitle><date>2022-03-14</date><risdate>2022</risdate><abstract>We present a general convergent class of reinforcement learning algorithms
that is founded on two distinct principles: (1) mapping value estimates to a
different space using arbitrary functions from a broad class, and (2) linearly
decomposing the reward signal into multiple channels. The first principle
enables incorporating specific properties into the value estimator that can
enhance learning. The second principle, on the other hand, allows for the value
function to be represented as a composition of multiple utility functions. This
can be leveraged for various purposes, e.g. dealing with highly varying reward
scales, incorporating a priori knowledge about the sources of reward, and
ensemble learning. Combining the two principles yields a general blueprint for
instantiating convergent algorithms by orchestrating diverse mapping functions
over multiple reward channels. This blueprint generalizes and subsumes
algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. In
addition, our convergence proof for this general class relaxes certain required
assumptions in some of these algorithms. Based on our theory, we discuss
several interesting configurations as special cases. Finally, to illustrate the
potential of the design space that our theory opens up, we instantiate a
particular algorithm and evaluate its performance on the Atari suite.</abstract><doi>10.48550/arxiv.2203.07171</doi><oa>free_for_read</oa></addata></record> |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2203.07171 |
language | eng |
recordid | cdi_arxiv_primary_2203_07171 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Learning
title | Orchestrated Value Mapping for Reinforcement Learning |