Explicit-Implicit Subgoal Planning for Long-Horizon Tasks with Sparse Reward
Saved in:
Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: The challenges inherent in long-horizon robotic tasks persist due to the inefficient exploration and sparse rewards typical of traditional reinforcement learning approaches. To address these challenges, we have developed a novel algorithm, termed Explicit-Implicit Subgoal Planning (EISP), designed to tackle long-horizon tasks through a divide-and-conquer approach. We use two primary criteria, feasibility and optimality, to ensure the quality of the generated subgoals. EISP consists of three components: a hybrid subgoal generator, a hindsight sampler, and a value selector. The hybrid subgoal generator uses an explicit model to infer subgoals and an implicit model to predict the final goal, inspired by the way humans think: inferring subgoals from the current state and the final goal, and reasoning about the final goal conditioned on the current state and the given subgoals. The hindsight sampler selects valid subgoals from an offline dataset to enhance the feasibility of the generated subgoals, while the value selector uses the reinforcement-learning value function to filter the optimal subgoals from the candidates. To validate our method, we conduct four long-horizon tasks in both simulation and the real world. The quantitative and qualitative results indicate that our approach achieves promising performance compared to baseline methods. The experimental results can be seen on the website \url{https://sites.google.com/view/vaesi}.
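The value-selector step described in the abstract can be illustrated with a minimal sketch: among candidate subgoals, keep the one the learned value function scores highest from the current state. All names below (`select_subgoal`, `value_fn`) are illustrative assumptions, not identifiers from the paper's implementation.

```python
def select_subgoal(state, candidates, value_fn):
    """Return the candidate subgoal with the highest estimated value.

    value_fn(state, goal) -> scalar estimate of expected return when
    pursuing `goal` from `state` (the RL value function mentioned above).
    """
    return max(candidates, key=lambda g: value_fn(state, g))

# Toy usage: a value function that prefers subgoals closer to a final goal.
final_goal = 10.0
value_fn = lambda state, g: -abs(final_goal - g)
best = select_subgoal(0.0, [2.0, 5.0, 8.0], value_fn)
print(best)  # 8.0
```

In the full method this selection is applied to candidates produced by the hybrid subgoal generator and filtered by the hindsight sampler, so only feasible subgoals reach the value selector.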
DOI: 10.48550/arxiv.2312.15578