Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability
Main authors: Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine
Format: Article
Language: English
Online access: Order full text
Abstract: Generalization is a central challenge for the deployment of reinforcement
learning (RL) systems in the real world. In this paper, we show that the
sequential structure of the RL problem necessitates new approaches to
generalization beyond the well-studied techniques used in supervised learning.
While supervised learning methods can generalize effectively without explicitly
accounting for epistemic uncertainty, we show that, perhaps surprisingly, this
is not the case in RL. We show that generalization to unseen test conditions
from a limited number of training conditions induces implicit partial
observability, effectively turning even fully-observed MDPs into POMDPs.
Informed by this observation, we recast the problem of generalization in RL as
solving the induced partially observed Markov decision process, which we call
the epistemic POMDP. We demonstrate the failure modes of algorithms that do not
appropriately handle this partial observability, and suggest a simple
ensemble-based technique for approximately solving the partially observed
problem. Empirically, we demonstrate that our simple algorithm derived from the
epistemic POMDP achieves significant gains in generalization over current
methods on the Procgen benchmark suite.

DOI: 10.48550/arxiv.2107.06277
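
The abstract's central construction, the epistemic POMDP, can be written down compactly. The sketch below is a hedged reconstruction from the abstract alone; the symbols ($\mathcal{D}$ for the training conditions, $p(\mathcal{M} \mid \mathcal{D})$ for the posterior over MDPs, $b_t$ for the belief) are our notation, not necessarily the paper's. The point it illustrates is the one stated in the abstract: because the agent does not know which MDP consistent with the training data it is acting in, the MDP identity becomes a hidden state variable, which turns a fully observed problem into a POMDP.

```latex
% Hedged sketch of the epistemic POMDP induced by a posterior over MDPs.
% Notation (D, p(M|D), b_t) is ours, reconstructed from the abstract.
\[
  \text{Posterior over tasks:} \quad p(\mathcal{M} \mid \mathcal{D}),
  \qquad \mathcal{M} = (\mathcal{S}, \mathcal{A}, T_{\mathcal{M}}, r_{\mathcal{M}}).
\]
\[
  \text{Epistemic POMDP state:} \quad (s_t, \mathcal{M}),
  \qquad \text{observation:} \quad s_t \ \text{only, so } \mathcal{M} \text{ is hidden.}
\]
\[
  \text{Objective:} \quad
  \max_{\pi} \;
  \mathbb{E}_{\mathcal{M} \sim p(\mathcal{M} \mid \mathcal{D})}\,
  \mathbb{E}_{\pi, \mathcal{M}}
  \Big[ \textstyle\sum_{t=0}^{H} \gamma^{t}\, r_{\mathcal{M}}(s_t, a_t) \Big],
\]
where the optimal policy may need to depend on the belief
\( b_t = p(\mathcal{M} \mid \mathcal{D}, s_{0:t}, a_{0:t-1}) \)
rather than on \( s_t \) alone.
```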
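The abstract also mentions "a simple ensemble-based technique for approximately solving the partially observed problem." The sketch below is not the paper's algorithm; it is a minimal illustration, under our own assumptions, of the general idea: train several policies on separate subsets of the training conditions (each a sample-based stand-in for a draw from the posterior) and act at test time from a pooled action distribution, so the deployed policy hedges across the ensemble's disagreement. All names here (`PolicyMember`, `combine_policies`) are illustrative, not from the paper.

```python
"""Minimal sketch (not the paper's exact algorithm) of acting with a policy
ensemble as an approximation to the epistemic POMDP: each member stands in for
a posterior sample over MDPs, and the agent samples from a pooled distribution."""

import numpy as np


class PolicyMember:
    """Stand-in for one trained policy; returns action probabilities."""

    def __init__(self, n_actions: int, rng: np.random.Generator):
        # In practice this would be a network trained on one subset of the
        # training levels; here we fake fixed logits for illustration.
        self.logits = rng.normal(size=n_actions)

    def action_probs(self, observation: np.ndarray) -> np.ndarray:
        # A real member would condition on the observation.
        z = self.logits - self.logits.max()
        p = np.exp(z)
        return p / p.sum()


def combine_policies(members, observation, temperature: float = 1.0) -> np.ndarray:
    """Pool members with a normalized geometric mean of their action
    distributions (one common pooling rule; the paper's may differ)."""
    log_probs = np.stack(
        [np.log(m.action_probs(observation) + 1e-8) for m in members]
    )
    pooled = np.exp(log_probs.mean(axis=0) / temperature)
    return pooled / pooled.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_actions = 4
    # One member per split of the training conditions (splitting code omitted).
    ensemble = [PolicyMember(n_actions, rng) for _ in range(4)]
    obs = np.zeros(8)  # placeholder observation
    probs = combine_policies(ensemble, obs)
    action = rng.choice(n_actions, p=probs)
    print("pooled action distribution:", np.round(probs, 3), "sampled action:", action)
```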