Orchestrated Value Mapping for Reinforcement Learning
Format: Article
Language: English
Abstract: We present a general convergent class of reinforcement learning algorithms
that is founded on two distinct principles: (1) mapping value estimates to a
different space using arbitrary functions from a broad class, and (2) linearly
decomposing the reward signal into multiple channels. The first principle
enables incorporating specific properties into the value estimator that can
enhance learning. The second principle, on the other hand, allows for the value
function to be represented as a composition of multiple utility functions. This
can be leveraged for various purposes, e.g. dealing with highly varying reward
scales, incorporating a priori knowledge about the sources of reward, and
ensemble learning. Combining the two principles yields a general blueprint for
instantiating convergent algorithms by orchestrating diverse mapping functions
over multiple reward channels. This blueprint generalizes and subsumes
algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. In
addition, our convergence proof for this general class relaxes certain required
assumptions in some of these algorithms. Based on our theory, we discuss
several interesting configurations as special cases. Finally, to illustrate the
potential of the design space that our theory opens up, we instantiate a
particular algorithm and evaluate its performance on the Atari suite.
DOI: 10.48550/arxiv.2203.07171
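
To make the blueprint described in the abstract concrete, the following is a minimal tabular sketch of the general idea: the reward is linearly split into channels, each channel's value estimate is learned through its own mapping function, and the channels are recombined for action selection. The channel split, the specific mapping functions, and the update form shown here are illustrative assumptions, not the paper's exact algorithm or its convergence conditions.

```python
import numpy as np

n_states, n_actions = 10, 4
gamma, alpha = 0.99, 0.1

# Hypothetical per-channel mapping pairs (f, f_inverse): the identity map
# recovers ordinary Q-Learning on its channel, while the log-style map learns
# that channel's value in log space (assumes non-negative channel values).
maps = [
    (lambda q: q, lambda y: y),
    (lambda q: np.log(q + 1e-6), lambda y: np.exp(y)),
]

# One value table per reward channel, stored in its mapped space.
Q_tilde = [np.zeros((n_states, n_actions)) for _ in maps]

def combined_q(s):
    # Recompose overall action values by inverse-mapping and summing channels.
    return sum(f_inv(Qt[s]) for (_, f_inv), Qt in zip(maps, Q_tilde))

def update(s, a, channel_rewards, s_next):
    # channel_rewards: a linear split of the environment reward, one per channel.
    a_star = int(np.argmax(combined_q(s_next)))  # greedy action under the combined value
    for (f, f_inv), Qt, r in zip(maps, Q_tilde, channel_rewards):
        target = r + gamma * f_inv(Qt[s_next, a_star])  # channel TD target in the original space
        Qt[s, a] += alpha * (f(target) - Qt[s, a])      # tabular update in the mapped space

# Example step: split a reward of 1.5 into a shaping channel and a task channel.
update(s=0, a=1, channel_rewards=[0.5, 1.0], s_next=3)
```

Selecting the greedy action with respect to the summed, inverse-mapped channel values is what ties the otherwise independent channel updates together; with both channels set to the identity map and a single channel carrying the full reward, the sketch reduces to standard Q-Learning.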