Episodic Novelty Through Temporal Distance
Format: Article
Language: English
Online Access: Order full text
Abstract: Exploration in sparse reward environments remains a significant challenge in
reinforcement learning, particularly in Contextual Markov Decision Processes
(CMDPs), where environments differ across episodes. Existing episodic intrinsic
motivation methods for CMDPs primarily rely on count-based approaches, which
are ineffective in large state spaces, or on similarity-based methods that lack
appropriate metrics for state comparison. To address these shortcomings, we
propose Episodic Novelty Through Temporal Distance (ETD), a novel approach that
introduces temporal distance as a robust metric for state similarity and
intrinsic reward computation. By employing contrastive learning, ETD accurately
estimates temporal distances and derives intrinsic rewards based on the novelty
of states within the current episode. Extensive experiments on various
benchmark tasks demonstrate that ETD significantly outperforms state-of-the-art
methods, highlighting its effectiveness in enhancing exploration in sparse
reward CMDPs.
DOI: 10.48550/arxiv.2501.15418
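The abstract only sketches the mechanism: a temporal-distance embedding is learned with contrastive learning, and the intrinsic reward reflects how temporally far a state is from what has already been visited in the current episode. The following minimal PyTorch sketch illustrates one way such a scheme could be wired up; the network architecture, the InfoNCE-style contrastive objective, and the nearest-neighbour aggregation over episodic memory are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' code): an embedding is trained so that latent
# distances track temporal separation along trajectories, and the episodic intrinsic
# reward is the distance from the current state to its nearest neighbour in the episode.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalEmbedding(nn.Module):
    """Maps observations to a latent space where L2 distance approximates temporal distance."""

    def __init__(self, obs_dim: int, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def contrastive_loss(phi: TemporalEmbedding, obs: torch.Tensor,
                     future_obs: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """InfoNCE-style objective: embeddings of states that occur close in time on the same
    trajectory are pulled together, with other states in the batch acting as negatives.
    (The exact objective used by ETD is not given in the abstract; this is one common choice.)"""
    anchors = phi(obs)            # (B, d) embeddings of states s_t
    positives = phi(future_obs)   # (B, d) embeddings of nearby future states s_{t+k}
    logits = -torch.cdist(anchors, positives) / temperature  # (B, B) negated distances
    labels = torch.arange(obs.shape[0], device=obs.device)   # matched pairs on the diagonal
    return F.cross_entropy(logits, labels)


def episodic_intrinsic_reward(phi: TemporalEmbedding, obs: torch.Tensor,
                              episode_memory: list[torch.Tensor]) -> float:
    """Reward states that are temporally far from everything visited earlier in the
    current episode (min-distance aggregation is an assumption)."""
    if not episode_memory:
        return 1.0
    with torch.no_grad():
        z = phi(obs.unsqueeze(0))                   # (1, d) current state embedding
        mem = phi(torch.stack(episode_memory))      # (N, d) embeddings of visited states
        dists = torch.cdist(z, mem).squeeze(0)      # (N,) approximate temporal distances
        return dists.min().item()
```

In this sketch the reward is recomputed at every step from the current episode's memory and the memory is cleared at episode boundaries, which matches the "novelty within the current episode" framing of the abstract; how ETD combines this signal with the extrinsic reward is not specified there.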