Behavior Proximal Policy Optimization
Format: Article
Language: English
Abstract: Offline reinforcement learning (RL) is a challenging setting in which existing off-policy actor-critic methods perform poorly because they overestimate the values of out-of-distribution state-action pairs. Consequently, various augmentations have been proposed to keep the learned policy close to the offline dataset (or the behavior policy). In this work, starting from an analysis of offline monotonic policy improvement, we arrive at the surprising finding that some online on-policy algorithms are naturally able to solve offline RL: the inherent conservatism of these on-policy algorithms is exactly what an offline RL method needs to overcome overestimation. Based on this, we propose Behavior Proximal Policy Optimization (BPPO), which solves offline RL without introducing any extra constraint or regularization compared to PPO. Extensive experiments on the D4RL benchmark indicate that this extremely succinct method outperforms state-of-the-art offline RL algorithms. Our implementation is available at https://github.com/Dragon-Zhuang/BPPO.
DOI: 10.48550/arxiv.2302.11312
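
The abstract's core claim is that PPO's own clipping already supplies the conservatism offline RL needs, once the reference policy is an estimate of the behavior policy. Below is a minimal, illustrative PyTorch sketch of such a clipped surrogate; the names (`policy`, `behavior_policy`, `advantages`) and the assumption that both policies return `torch.distributions` objects are ours for illustration and are not taken from the official implementation linked above.

```python
import torch


def bppo_style_loss(policy, behavior_policy, states, actions, advantages, clip_eps=0.2):
    """PPO-style clipped surrogate evaluated on offline transitions.

    Assumptions (not from the official code): `policy(states)` and
    `behavior_policy(states)` each return a torch.distributions object whose
    `log_prob(actions)` yields one log-likelihood per transition, and
    `advantages` comes from a critic trained on the same offline dataset.
    """
    log_prob = policy(states).log_prob(actions)
    with torch.no_grad():
        behavior_log_prob = behavior_policy(states).log_prob(actions)

    # Importance ratio between the learned policy and the estimated behavior policy.
    ratio = torch.exp(log_prob - behavior_log_prob)

    # Standard PPO clipping: the clip keeps the updated policy close to the
    # behavior policy, which is the "inherent conservatism" the abstract refers to.
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    surrogate = torch.min(ratio * advantages, clipped * advantages)

    # Negate because optimizers minimize; maximizing the surrogate improves the policy.
    return -surrogate.mean()
```

One natural way to obtain `behavior_policy` in this sketch is behavior cloning on the offline dataset; the loss above can then be minimized with any standard optimizer. This only illustrates the clipped objective itself, not the full training procedure described in the paper.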