Soft-Robust Actor-Critic Policy-Gradient
| Main authors: | , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
| Summary: | Robust Reinforcement Learning aims to derive optimal behavior that accounts for model uncertainty in dynamical systems. However, previous studies have shown that by considering the worst-case scenario, robust policies can be overly conservative. Our soft-robust framework is an attempt to overcome this issue. In this paper, we present a novel Soft-Robust Actor-Critic algorithm (SR-AC). It learns an optimal policy with respect to a distribution over an uncertainty set, staying robust to model uncertainty while avoiding the conservativeness of robust strategies. We show the convergence of SR-AC and test the efficiency of our approach on different domains by comparing it against regular learning methods and their robust formulations. |
| DOI: | 10.48550/arxiv.1803.04848 |
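To illustrate the distinction the summary draws, here is a minimal, hypothetical sketch: the robust objective scores a policy by its worst-case return over an uncertainty set of models, while the soft-robust objective scores it by the expected return under a distribution over that set. Everything below (the toy 1-D dynamics, `sample_models`, `rollout_return`) is an illustrative assumption for this contrast, not the SR-AC algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_models(num_models: int) -> np.ndarray:
    """Draw model parameters from a distribution over the uncertainty set
    (here, a Gaussian centered on a nominal dynamics coefficient)."""
    return rng.normal(loc=1.0, scale=0.15, size=num_models)

def rollout_return(gain: float, model: float,
                   horizon: int = 50, gamma: float = 0.99) -> float:
    """Discounted return of a linear feedback policy u = -gain * x in the
    toy system x' = model * x + u, with reward -x**2."""
    x, ret, disc = 1.0, 0.0, 1.0
    for _ in range(horizon):
        u = -gain * x
        x = model * x + u
        ret += disc * -(x ** 2)
        disc *= gamma
    return ret

models = sample_models(20)
gain = 0.8

returns = np.array([rollout_return(gain, m) for m in models])
robust_value = returns.min()        # worst case over sampled models
soft_robust_value = returns.mean()  # expectation under the model distribution

print(f"robust (worst-case) value: {robust_value:.4f}")
print(f"soft-robust value:         {soft_robust_value:.4f}")
```

Because the soft-robust value averages over the model distribution rather than committing to the single worst model, it avoids the overly conservative behavior the summary attributes to worst-case robust policies.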