Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators
Format: Article
Language: English
Abstract: Recent advances in reinforcement learning (RL) have led to growing interest in applying RL to classical planning domains, and classical planning methods to complex RL domains. However, the long-horizon, goal-based problems found in classical planning yield sparse rewards for RL, making direct application inefficient. In this paper, we propose to leverage domain-independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of RL. These classical heuristics act as dense reward generators that alleviate the sparse-reward issue and enable our RL agent to learn domain-specific value functions as residuals on these heuristics, making learning easier. Correctly applying this technique requires reconciling the discounted metric used in RL with the non-discounted metric used in heuristics. We implement the value functions using Neural Logic Machines, a neural network architecture designed for grounded first-order logic inputs. We demonstrate on several classical planning domains that using classical heuristics for RL yields good sample efficiency compared to sparse-reward RL. We further show that our learned value functions generalize to novel problem instances in the same domain.
DOI: 10.48550/arxiv.2109.14830
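The abstract describes turning a planning heuristic into dense rewards and learning the value function as a residual on that heuristic. The sketch below illustrates one standard way to realize this idea, potential-based reward shaping with potential Phi(s) = -h(s), together with a residual value estimate; it is a minimal, hypothetical example, not the paper's implementation, and its names (count_unsatisfied_goals, shaped_reward, residual_value) are illustrative assumptions rather than the authors' code or the Neural Logic Machine network.

```python
# Hypothetical sketch: a planning heuristic as a dense reward generator.
# Potential-based shaping with Phi(s) = -h(s) preserves optimal policies
# while giving the agent feedback on every transition, and the value
# function is modeled as a residual correction on top of -h(s).


def count_unsatisfied_goals(atoms, goal):
    """Stand-in for a domain-independent heuristic such as h_add or h_FF:
    here, simply the number of goal atoms not yet true in the state."""
    return len(goal - atoms)


def shaped_reward(atoms, next_atoms, goal, sparse_reward, gamma=0.99):
    """Dense reward via potential-based shaping with Phi(s) = -h(s):
    r'(s, a, s') = r(s, a, s') + gamma * Phi(s') - Phi(s).
    Using the RL discount gamma inside the shaping term is one place where
    the discounted return and the undiscounted heuristic must be reconciled."""
    phi = -count_unsatisfied_goals(atoms, goal)
    phi_next = -count_unsatisfied_goals(next_atoms, goal)
    return sparse_reward + gamma * phi_next - phi


def residual_value(atoms, goal, learned_correction):
    """V(s) = -h(s) + f_theta(s): the learned term only needs to correct the
    heuristic, which is typically easier than learning V(s) from scratch."""
    return -count_unsatisfied_goals(atoms, goal) + learned_correction


if __name__ == "__main__":
    goal = frozenset({"on(a,b)", "on(b,c)"})
    s = frozenset({"on(b,c)"})                   # one goal atom still unsatisfied
    s_next = frozenset({"on(a,b)", "on(b,c)"})   # goal reached
    # Dense positive signal even though only the final step carries sparse reward.
    print(shaped_reward(s, s_next, goal, sparse_reward=1.0))
```

In this toy run the shaping term contributes +1 on top of the sparse goal reward because the heuristic drops from 1 to 0; intermediate steps that reduce the heuristic would likewise receive positive dense feedback, which is the sample-efficiency benefit the abstract refers to.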