Reinforcement Learning with Action-Free Pre-Training from Videos
Format: Article
Language: English
Online Access: Order full text
Summary: Recent unsupervised pre-training methods have proven effective in language and vision domains by learning useful representations for multiple downstream tasks. In this paper, we investigate whether such unsupervised pre-training methods can also be effective for vision-based reinforcement learning (RL). To this end, we introduce a framework that learns representations useful for understanding the dynamics via generative pre-training on videos. Our framework consists of two phases: we pre-train an action-free latent video prediction model, and then utilize the pre-trained representations to efficiently learn action-conditional world models in unseen environments. To incorporate additional action inputs during fine-tuning, we introduce a new architecture that stacks an action-conditional latent prediction model on top of the pre-trained action-free prediction model. Moreover, for better exploration, we propose a video-based intrinsic bonus that leverages the pre-trained representations. We demonstrate that our framework significantly improves both the final performance and the sample efficiency of vision-based RL in a variety of manipulation and locomotion tasks. Code is available at https://github.com/younggyoseo/apv.
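The abstract describes two concrete mechanisms: an action-conditional latent prediction model stacked on top of the pre-trained action-free one, and a video-based intrinsic bonus computed from the pre-trained representations. Below is a minimal sketch of the stacked-architecture idea, not the authors' implementation (see the linked repository for that); the module names, the GRU-based latent dynamics, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ActionFreeLatentModel(nn.Module):
    """Pre-trained on videos: predicts the next latent state without actions."""
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent_dim)     # stand-in for a CNN encoder
        self.dynamics = nn.GRUCell(latent_dim, latent_dim)

    def forward(self, obs, h):
        z = self.encoder(obs)
        return self.dynamics(z, h)                        # action-free transition

class ActionConditionalModel(nn.Module):
    """Stacked on top during fine-tuning: conditions on both the
    action-free latent and the agent's action."""
    def __init__(self, latent_dim, action_dim):
        super().__init__()
        self.dynamics = nn.GRUCell(latent_dim + action_dim, latent_dim)

    def forward(self, h_free, action, s):
        return self.dynamics(torch.cat([h_free, action], dim=-1), s)

# One fine-tuning step: roll the pre-trained model, then the stacked model.
obs_dim, action_dim, latent_dim = 64, 4, 128
pretrained = ActionFreeLatentModel(obs_dim, latent_dim)   # weights from video pre-training
stacked = ActionConditionalModel(latent_dim, action_dim)  # trained on the target task
h = torch.zeros(1, latent_dim)                            # action-free latent state
s = torch.zeros(1, latent_dim)                            # action-conditional latent state
obs, action = torch.randn(1, obs_dim), torch.randn(1, action_dim)
h = pretrained(obs, h)
s = stacked(h, action, s)
```

For the exploration bonus, one plausible form (assumed here, not taken from the abstract) is a k-nearest-neighbor novelty score in the pre-trained representation space:

```python
def intrinsic_bonus(rep, buffer_reps, k=10):
    """Mean distance from `rep` to its k nearest neighbors among past
    representations; larger distances reward visiting novel states."""
    dists = torch.norm(buffer_reps - rep, dim=-1)         # (N,)
    knn = torch.topk(dists, k=min(k, dists.numel()), largest=False).values
    return knn.mean()
```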
DOI: 10.48550/arxiv.2203.13880