Learning Domain Invariant Representations in Goal-conditioned Block MDPs
Format: Article
Language: English
Abstract: NeurIPS 2021. Deep Reinforcement Learning (RL) is successful in solving many complex Markov Decision Process (MDP) problems. However, agents often face unanticipated environmental changes after deployment in the real world. These changes are often spurious and unrelated to the underlying problem, such as background shifts for agents with visual input. Unfortunately, deep RL policies are usually sensitive to these changes and fail to act robustly against them. This resembles the problem of domain generalization in supervised learning. In this work, we study this problem for goal-conditioned RL agents. We propose a theoretical framework in the Block MDP setting that characterizes the generalizability of goal-conditioned policies to new environments. Under this framework, we develop a practical method, PA-SkewFit, that enhances domain generalization. The empirical evaluation shows that our goal-conditioned RL agent can perform well in various unseen test environments, improving by 50% over baselines.
DOI: 10.48550/arxiv.2110.14248