Why long model-based rollouts are no reason for bad Q-value estimates
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: This paper explores the use of model-based offline reinforcement learning with long model rollouts. While some literature criticizes this approach due to compounding errors, many practitioners have found success in real-world applications. The paper aims to demonstrate that long rollouts do not necessarily result in exponentially growing errors and can actually produce better Q-value estimates than model-free methods. These findings can potentially enhance reinforcement learning techniques.
DOI: 10.48550/arxiv.2407.11751
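
The abstract refers to estimating Q-values by rolling a learned dynamics model forward for many steps. As a rough illustration only, and not the paper's implementation, the sketch below shows the generic idea of an H-step model-based return with a bootstrapped tail; `dynamics_model`, `reward_model`, `policy`, and `value_fn` are hypothetical placeholders.

```python
# Minimal sketch (assumption: not taken from the paper) of a long model-based
# rollout used as a Q-value estimate: roll a learned dynamics model forward for
# `horizon` steps, sum the predicted discounted rewards, and bootstrap with a
# value estimate at the final state.
def rollout_q_estimate(s, a, dynamics_model, reward_model, policy, value_fn,
                       horizon=50, gamma=0.99):
    """Return an H-step model-based estimate of Q(s, a)."""
    q, discount = 0.0, 1.0
    for _ in range(horizon):
        r = reward_model(s, a)      # predicted one-step reward
        s = dynamics_model(s, a)    # predicted next state under the learned model
        q += discount * r
        discount *= gamma
        a = policy(s)               # action the rollout policy would choose next
    # Bootstrapped tail: value estimate at the state reached after `horizon` steps.
    return q + discount * value_fn(s)
```

Read this way, the abstract's claim is that the error of such long-horizon estimates need not grow exponentially with the rollout length.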