A Conservative Approach for Few-Shot Transfer in Off-Dynamics Reinforcement Learning
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024). Off-dynamics Reinforcement Learning (ODRL) seeks to transfer a policy from a source environment to a target environment characterized by distinct yet similar dynamics. In this context, traditional RL agents depend excessively on the dynamics of the source environment, resulting in the discovery of policies that excel in this environment but fail to provide reasonable performance in the target one. In the few-shot framework, a limited number of transitions from the target environment are introduced to facilitate a more effective transfer. Addressing this challenge, we propose an innovative approach inspired by recent advancements in Imitation Learning and conservative RL algorithms. The proposed method introduces a penalty to regulate the trajectories generated by the source-trained policy. We evaluate our method across various environments representing diverse off-dynamics conditions, where access to the target environment is extremely limited. These experiments include high-dimensional systems relevant to real-world applications. Across most tested scenarios, our proposed method demonstrates performance improvements compared to existing baselines. |
DOI: | 10.48550/arxiv.2312.15474 |
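The abstract describes the method only at a high level: a penalty regularizes the trajectories of the source-trained policy using the few available target transitions. As a minimal illustrative sketch, not the paper's actual algorithm, the snippet below shows one common way such a conservative, dynamics-aware penalty can enter training: subtracting an estimated source/target dynamics gap from the source reward. All names (`dynamics_gap`, `penalized_reward`, `beta`) and the penalty form are hypothetical assumptions for illustration.

```python
import numpy as np

def dynamics_gap(s_next_source: np.ndarray, s_next_target: np.ndarray) -> float:
    # Toy proxy for the source/target dynamics mismatch at one (state, action)
    # pair: distance between the next state observed in each environment.
    return float(np.linalg.norm(s_next_source - s_next_target))

def penalized_reward(r_source: float, gap: float, beta: float = 1.0) -> float:
    # Conservative training signal: the larger the estimated dynamics gap,
    # the more the source reward is discounted, steering the learned policy
    # away from regions where the two environments disagree.
    return r_source - beta * gap

# Example: a single transition where source and target next states diverge.
s_next_source = np.array([0.9, 0.1])
s_next_target = np.array([1.1, -0.2])
r = penalized_reward(r_source=1.0, gap=dynamics_gap(s_next_source, s_next_target))
print(f"penalized reward: {r:.3f}")  # lower than the raw source reward of 1.0
```

In practice, matched next states are rarely observed directly, so the gap would have to be estimated from the few target transitions, for instance with learned dynamics models or density-ratio classifiers in the style of DARC; the paper's precise penalty is not specified in this record.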