MetaCURL: Non-stationary Concave Utility Reinforcement Learning
| Main authors: | , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Abstract: We explore online learning in episodic loop-free Markov decision processes in non-stationary environments (changing losses and probability transitions). Our focus is on the Concave Utility Reinforcement Learning problem (CURL), an extension of classical RL for handling convex performance criteria on state-action distributions induced by agent policies. While various machine learning problems can be written as CURL, its non-linearity invalidates traditional Bellman equations. Despite recent solutions to classical CURL, none address non-stationary MDPs. This paper introduces MetaCURL, the first CURL algorithm for non-stationary MDPs. It employs a meta-algorithm that runs multiple black-box algorithm instances over different intervals and aggregates their outputs via a sleeping-expert framework. The key hurdle is partial information due to MDP uncertainty. Under partial information on the probability transitions (uncertainty and non-stationarity coming only from external noise, independent of agent state-action pairs), we achieve optimal dynamic regret without prior knowledge of MDP changes. Unlike approaches for RL, MetaCURL handles fully adversarial losses, not just stochastic ones. We believe our approach for managing non-stationarity with experts can be of interest to the RL community.
DOI: 10.48550/arxiv.2405.19807
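
The abstract above describes MetaCURL's mechanism only at a high level: a meta-algorithm launches black-box learner instances on different intervals and aggregates the awake ones with a sleeping-expert scheme. Below is a minimal, hypothetical Python sketch of that aggregation pattern, assuming a generic black-box learner interface; it is not the paper's algorithm, and the names `BlackBoxLearner`, `meta_sleeping_experts`, `learning_rate`, and the uniform/random placeholders are illustrative assumptions only.

```python
import numpy as np


class BlackBoxLearner:
    """Stand-in for any online learner run on one interval (hypothetical API)."""

    def __init__(self, start_time, n_actions):
        self.start_time = start_time   # the expert is "asleep" before this episode
        self.n_actions = n_actions

    def predict(self):
        # A real instance would output a policy / state-action distribution;
        # a uniform distribution over actions is used here as a placeholder.
        return np.full(self.n_actions, 1.0 / self.n_actions)

    def update(self, loss_vector):
        # A real instance would perform its own no-regret update here.
        pass


def meta_sleeping_experts(T, n_actions, learning_rate=0.5, rng=None):
    """Exponentially weighted aggregation of experts that wake up over time."""
    rng = rng or np.random.default_rng(0)
    experts, weights = [], []
    for t in range(T):
        # Start a fresh black-box instance at every episode so that, after any
        # unknown change point, some awake expert has only seen recent data.
        experts.append(BlackBoxLearner(start_time=t, n_actions=n_actions))
        weights.append(1.0)

        w = np.array(weights)
        w = w / w.sum()                          # normalize over awake experts
        preds = np.stack([e.predict() for e in experts])
        aggregated = w @ preds                   # meta-prediction (mixture)

        # Loss revealed by the (possibly adversarial) environment; random here.
        loss_vector = rng.random(n_actions)
        expert_losses = preds @ loss_vector

        # Multiplicative-weights update on the awake experts only; an expert
        # that has not been created yet is asleep and enters later with weight 1.
        weights = list(np.array(weights) * np.exp(-learning_rate * expert_losses))
        for e in experts:
            e.update(loss_vector)

        yield t, aggregated


if __name__ == "__main__":
    for t, pred in meta_sleeping_experts(T=5, n_actions=3):
        print(t, np.round(pred, 3))
```

Starting one instance per episode loosely mirrors the interval-based restarts mentioned in the abstract, and the exponential weighting over awake experts is the standard sleeping-experts aggregation step; the paper's actual construction, regret analysis, and handling of unknown transitions are more involved than this sketch.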