MetaCURL: Non-stationary Concave Utility Reinforcement Learning
We explore online learning in episodic loop-free Markov decision processes on non-stationary environments (changing losses and probability transitions). Our focus is on the Concave Utility Reinforcement Learning problem (CURL), an extension of classical RL for handling convex performance criteria in...
Saved in:
Main authors: | Moreno, Bianca Marin; Brégère, Margaux; Gaillard, Pierre; Oudjane, Nadia |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Mathematics - Probability; Mathematics - Statistics Theory; Statistics - Machine Learning; Statistics - Theory |
Online access: | Order full text |
creator | Moreno, Bianca Marin; Brégère, Margaux; Gaillard, Pierre; Oudjane, Nadia |
description | We explore online learning in episodic loop-free Markov decision processes on
non-stationary environments (changing losses and probability transitions). Our
focus is on the Concave Utility Reinforcement Learning problem (CURL), an
extension of classical RL for handling convex performance criteria in
state-action distributions induced by agent policies. While various machine
learning problems can be written as CURL, its non-linearity invalidates
traditional Bellman equations. Despite recent solutions to classical CURL, none
address non-stationary MDPs. This paper introduces MetaCURL, the first CURL
algorithm for non-stationary MDPs. It employs a meta-algorithm running multiple
black-box algorithm instances over different intervals, aggregating outputs
via a sleeping expert framework. The key hurdle is partial information due to
MDP uncertainty. Under partial information on the probability transitions
(uncertainty and non-stationarity coming only from external noise, independent
of agent state-action pairs), we achieve optimal dynamic regret without prior
knowledge of MDP changes. Unlike approaches for RL, MetaCURL handles full
adversarial losses, not just stochastic ones. We believe our approach for
managing non-stationarity with experts can be of interest to the RL community. |
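The abstract's aggregation step can be illustrated with a minimal sketch of the classic sleeping-experts (specialists) update, which is the general framework the paper builds on. This is an illustrative toy, not the paper's MetaCURL algorithm: the function name `sleeping_experts`, the learning rate `eta`, and the convention of charging asleep experts the mixture loss (which freezes their relative weight while inactive) are assumptions of this sketch.

```python
import math

def sleeping_experts(losses, awake, eta=0.5):
    """Exponentially weighted aggregation with 'sleeping' (specialist) experts.

    losses[t][i] is expert i's loss at round t; awake[t] is the set of
    experts active at round t. Asleep experts are charged the mixture
    loss, which leaves their relative weight unchanged while they sleep.
    (Illustrative sketch only; not the algorithm from the paper.)
    """
    n = len(losses[0])
    w = [1.0] * n                                # one weight per expert
    mixture = []
    for l_t, a_t in zip(losses, awake):
        z = sum(w[i] for i in a_t)
        p = {i: w[i] / z for i in a_t}           # distribution over awake experts
        m = sum(p[i] * l_t[i] for i in a_t)      # aggregated (mixture) loss
        mixture.append(m)
        for i in range(n):
            w[i] *= math.exp(-eta * (l_t[i] if i in a_t else m))
    return mixture
```

In a MetaCURL-like use, each "expert" would be a black-box instance started on its own interval, awake only on rounds inside that interval; the mixture then tracks whichever instance is currently best without prior knowledge of when the environment changes.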
doi_str_mv | 10.48550/arxiv.2405.19807 |
format | Article |
creationdate | 2024-05-30 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2405.19807 |
language | eng |
recordid | cdi_arxiv_primary_2405_19807 |
source | arXiv.org |
subjects | Computer Science - Learning; Mathematics - Probability; Mathematics - Statistics Theory; Statistics - Machine Learning; Statistics - Theory |
title | MetaCURL: Non-stationary Concave Utility Reinforcement Learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T15%3A27%3A33IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=MetaCURL:%20Non-stationary%20Concave%20Utility%20Reinforcement%20Learning&rft.au=Moreno,%20Bianca%20Marin&rft.date=2024-05-30&rft_id=info:doi/10.48550/arxiv.2405.19807&rft_dat=%3Carxiv_GOX%3E2405_19807%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |