Safe Policy Improvement with an Estimated Baseline Policy
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Previous work has shown the unreliability of existing algorithms in the batch Reinforcement Learning setting, and proposed the theoretically-grounded Safe Policy Improvement with Baseline Bootstrapping (SPIBB) fix: reproduce the baseline policy in the uncertain state-action pairs, in order to control the variance of the trained policy's performance. However, in many real-world applications such as dialogue systems, pharmaceutical tests, or crop management, data is collected under human supervision and the baseline remains unknown. In this paper, we apply SPIBB algorithms with a baseline estimate built from the data. We formally show safe policy improvement guarantees over the true baseline even without direct access to it. Our empirical experiments on finite and continuous-state tasks support the theoretical findings: they show little loss of performance compared with SPIBB when the baseline policy is given and, more importantly, drastically and significantly outperform competing algorithms in both safe policy improvement and average performance.
DOI: 10.48550/arxiv.1909.05236
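The abstract describes two ingredients: estimating the unknown baseline policy from the batch data, and constraining the trained policy to follow that estimate on under-sampled state-action pairs. The following is a minimal sketch of that idea, not the paper's exact algorithm: the function names, the `n_wedge` count threshold, and the proportional redistribution of the remaining probability mass are illustrative assumptions.

```python
import numpy as np

def estimate_baseline(counts):
    """Maximum-likelihood estimate of the (unknown) baseline policy from
    state-action visit counts of shape (n_states, n_actions).
    States never visited in the batch fall back to a uniform distribution."""
    counts = np.asarray(counts, dtype=float)
    totals = counts.sum(axis=1, keepdims=True)
    uniform = np.full_like(counts, 1.0 / counts.shape[1])
    return np.where(totals > 0, counts / np.maximum(totals, 1.0), uniform)

def spibb_constrain(pi_target, pi_baseline_hat, counts, n_wedge):
    """SPIBB-style constraint with an estimated baseline: on state-action
    pairs with fewer than n_wedge samples (the 'bootstrapped' set), copy the
    estimated baseline probability; spread the remaining probability mass
    over the well-estimated actions in proportion to the target policy."""
    counts = np.asarray(counts, dtype=float)
    uncertain = counts < n_wedge
    pi = np.where(uncertain, pi_baseline_hat, 0.0)      # frozen baseline mass
    free_mass = 1.0 - pi.sum(axis=1, keepdims=True)     # mass left to optimise
    target_mass = np.where(~uncertain, pi_target, 0.0)  # target on safe pairs
    norm = target_mass.sum(axis=1, keepdims=True)
    pi = pi + free_mass * np.where(norm > 0, target_mass / np.maximum(norm, 1e-12), 0.0)
    return pi
```

In a full pipeline the target policy would come from a policy-improvement step on the MDP estimated from the same batch; the sketch only illustrates how the uncertain pairs are handled once the baseline itself has to be estimated from data rather than being given.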