On the Sample Efficiency of Abstractions and Potential-Based Reward Shaping in Reinforcement Learning
Main Authors:
Format: Article
Language: English
Abstract: The use of Potential Based Reward Shaping (PBRS) has shown great promise in the ongoing research effort to tackle sample inefficiency in Reinforcement Learning (RL). However, the choice of the potential function is critical for this technique to be effective. Additionally, RL techniques are usually constrained to a finite horizon due to computational limitations. This introduces a bias when using PBRS, thus adding an additional layer of complexity. In this paper, we leverage abstractions to automatically produce a "good" potential function. We analyse the bias induced by finite horizons in the context of PBRS, producing novel insights. Finally, to assess sample efficiency and performance impact, we evaluate our approach on four environments, including a goal-oriented navigation task and three Arcade Learning Environments (ALE) games, demonstrating that we can reach the same level of performance as CNN-based solutions with a simple fully-connected network.
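For reference, PBRS replaces the environment reward r(s, a, s') with r(s, a, s') + gamma * Phi(s') - Phi(s) for some potential function Phi over states (Ng et al., 1999), a transformation that preserves the optimal policy in the infinite-horizon setting; the finite-horizon bias mentioned in the abstract arises when that assumption is violated. The Python sketch below is a minimal illustration using a hypothetical hand-crafted potential (`manhattan_potential`), not the abstraction-derived potential the paper proposes.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99, terminal=False):
    """Potential-based reward shaping (Ng et al., 1999).

    Adds F(s, s') = gamma * phi(s') - phi(s) to the environment reward r,
    which leaves the optimal policy of the original MDP unchanged.
    The potential phi here is a user-supplied heuristic, not the
    abstraction-derived potential from the paper.
    """
    next_potential = 0.0 if terminal else phi(s_next)  # convention: Phi = 0 at terminal states
    return r + gamma * next_potential - phi(s)


def manhattan_potential(state, goal=(4, 4)):
    """Hypothetical potential for a grid world: negative Manhattan distance
    to a goal cell, so states closer to the goal have higher potential."""
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))


# Example: shaping the reward observed on the transition (2, 3) -> (3, 3),
# which moves one step closer to the goal and thus receives a positive bonus.
r_shaped = shaped_reward(r=0.0, s=(2, 3), s_next=(3, 3), phi=manhattan_potential)
```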
DOI: 10.48550/arxiv.2404.07826