MERL: Multi-Head Reinforcement Learning
Format: Article
Language: English
Abstract: A common challenge in reinforcement learning is converting the agent's interactions with an environment into fast and robust learning. For instance, earlier work uses domain knowledge to improve existing reinforcement learning algorithms on complex tasks. While promising, such previously acquired knowledge is often costly to obtain and difficult to scale. Instead, we consider problem knowledge in the form of signals from quantities relevant to solving any task, e.g., self-performance assessment and accurate expectations. $\mathcal{V}^{ex}$ is one such quantity: the fraction of variance explained by the value function $V$, which measures the discrepancy between $V$ and the observed returns. Taking advantage of $\mathcal{V}^{ex}$, we propose MERL, a general framework for structuring reinforcement learning by injecting problem knowledge into policy gradient updates. As a result, the agent is not only optimized for a reward but also learns from problem-focused quantities provided by MERL, applicable out of the box to any task. In this paper: (a) we introduce and define MERL, the multi-head reinforcement learning framework used throughout this work; (b) we conduct experiments across a variety of standard benchmark environments, including 9 continuous control tasks, where results show improved performance; (c) we demonstrate that MERL also improves transfer learning on a set of challenging pixel-based tasks; and (d) we discuss how MERL tackles reward sparsity and better conditions the feature space of reinforcement learning agents.
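The abstract describes $\mathcal{V}^{ex}$ only informally. As a point of reference, the standard fraction-of-variance-explained used in policy-gradient codebases is $1 - \mathrm{Var}(R - V)/\mathrm{Var}(R)$, where $R$ denotes the empirical returns; the sketch below assumes that definition and is not taken from the paper.

```python
import numpy as np

def explained_variance(returns, values):
    """Fraction of variance in the empirical returns explained by the value
    predictions: 1 - Var(returns - values) / Var(returns).
    1 means perfect prediction, 0 means no better than predicting the mean
    return, and negative values mean worse than the mean."""
    var_returns = np.var(returns)
    if var_returns == 0:
        return np.nan  # undefined when the returns are constant
    return 1.0 - np.var(returns - values) / var_returns

# Toy usage: value estimates close to the returns give V^ex near 1.
returns = np.array([1.0, 0.5, 2.0, 1.5])
values  = np.array([0.9, 0.6, 1.8, 1.4])
print(explained_variance(returns, values))  # ~0.96
```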
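The "multi-head" in MERL suggests an agent network with extra output heads for problem-focused quantities alongside the usual policy and value heads. The PyTorch sketch below is a minimal illustration of that structure, assuming a shared trunk and a single auxiliary head; layer sizes and the auxiliary target are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiHeadActorCritic(nn.Module):
    """Hypothetical multi-head agent: a shared trunk feeds a policy head,
    a value head, and an auxiliary head predicting a problem-focused
    quantity such as V^ex."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state value V(s)
        self.aux_head = nn.Linear(hidden, 1)             # auxiliary quantity

    def forward(self, obs):
        z = self.trunk(obs)
        return self.policy_head(z), self.value_head(z), self.aux_head(z)

# The auxiliary prediction can be regressed toward a problem-focused target
# and its loss added to the usual policy-gradient objective.
```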
DOI: 10.48550/arxiv.1909.11939