Mildly Constrained Evaluation Policy for Offline Reinforcement Learning
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Offline reinforcement learning (RL) methodologies enforce constraints on the
policy to adhere closely to the behavior policy, thereby stabilizing value
learning and mitigating the selection of out-of-distribution (OOD) actions
at test time. Conventional approaches apply identical constraints for both
value learning and test-time inference. However, our findings indicate that the
constraints suitable for value estimation may in fact be excessively
restrictive for action selection at test time. To address this issue, we
propose a \textit{Mildly Constrained Evaluation Policy (MCEP)} for test-time
inference alongside a more constrained \textit{target policy} for value estimation.
Since the \textit{target policy} has been adopted in various prior approaches,
MCEP can be seamlessly integrated with them as a plug-in. We instantiate MCEP
on the TD3BC (Fujimoto & Gu, 2021), AWAC (Nair et al., 2020) and DQL (Wang et
al., 2023) algorithms. Empirical results on D4RL MuJoCo locomotion,
the high-dimensional humanoid, and a set of 16 robotic manipulation tasks show that
MCEP brings significant performance improvements to classic offline RL
methods and can further improve SOTA methods. The code is open-sourced at
\url{https://github.com/egg-west/MCEP.git}.
DOI: 10.48550/arxiv.2306.03680
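To make the idea in the abstract concrete, the following PyTorch sketch shows how MCEP could look on top of a TD3+BC-style objective: a tightly constrained target policy is trained for value estimation, while a separate, mildly constrained evaluation policy is trained for test-time action selection. This is a minimal illustration under stated assumptions, not the authors' implementation (that is in the linked repository); the class and function names and the `alpha_target` / `alpha_eval` values are hypothetical.

```python
# Minimal sketch (not the authors' code) of the MCEP idea on a TD3+BC-style
# objective: two actors trained with different constraint strengths.
import torch
import torch.nn as nn


class Policy(nn.Module):
    """Deterministic TD3-style actor mapping observations to bounded actions."""

    def __init__(self, obs_dim, act_dim, max_action=1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim), nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, obs):
        return self.max_action * self.net(obs)


def actor_losses(target_pi, eval_pi, critic, obs, dataset_actions,
                 alpha_target=2.5, alpha_eval=10.0):
    """TD3+BC-style actor losses with two constraint strengths (illustrative values).

    In TD3+BC the actor loss is -lambda * Q(s, pi(s)) + ||pi(s) - a||^2 with
    lambda = alpha / mean|Q|, so a smaller alpha means a stronger behavior-cloning
    constraint. Using alpha_target < alpha_eval yields a tightly constrained
    target policy (whose actions would form the critic's TD target) and a mildly
    constrained evaluation policy (used only to select actions at test time).
    """

    def td3bc_loss(policy, alpha):
        actions = policy(obs)
        q = critic(obs, actions)
        lam = alpha / q.abs().mean().detach()           # adaptive Q scaling, as in TD3+BC
        bc = ((actions - dataset_actions) ** 2).mean()  # behavior-cloning term
        return -lam * q.mean() + bc

    loss_target = td3bc_loss(target_pi, alpha_target)   # strong constraint
    loss_eval = td3bc_loss(eval_pi, alpha_eval)         # mild constraint
    return loss_target, loss_eval
```

In this sketch only the target policy would interact with the critic update, while the evaluation policy is what the agent executes in the environment; the paper's actual coefficients and its AWAC and DQL instantiations differ from these placeholder choices.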