Video Prediction Models as Rewards for Reinforcement Learning
Format: Article
Language: English
Online Access: Order full text
Abstract: Specifying reward signals that allow agents to learn complex behaviors is a long-standing challenge in reinforcement learning. A promising approach is to extract preferences for behaviors from unlabeled videos, which are widely available on the internet. We present Video Prediction Rewards (VIPER), an algorithm that leverages pretrained video prediction models as action-free reward signals for reinforcement learning. Specifically, we first train an autoregressive transformer on expert videos and then use the video prediction likelihoods as reward signals for a reinforcement learning agent. VIPER enables expert-level control without programmatic task rewards across a wide range of DMC, Atari, and RLBench tasks. Moreover, generalization of the video prediction model allows us to derive rewards for an out-of-distribution environment where no expert data is available, enabling cross-embodiment generalization for tabletop manipulation. We see our work as a starting point for scalable reward specification from unlabeled videos that will benefit from the rapid advances in generative modeling. Source code and datasets are available on the project website: https://escontrela.me/viper
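
To make the mechanism described in the abstract concrete, the sketch below shows one way a likelihood-based reward could be wrapped around a pretrained autoregressive video model: the reward for each new observation is the model's conditional log-likelihood of that frame given recent history. This is a minimal illustration of the idea, not the released VIPER implementation; the `log_prob(frame, context)` interface, the context length, and the class name are all hypothetical placeholders (see the project website for the actual code).

```python
# Illustrative sketch (assumed interface, not the released VIPER code):
# reward each step is the log-likelihood the pretrained autoregressive
# video model assigns to the newly observed frame given recent frames.

class VideoPredictionReward:
    def __init__(self, video_model, context_len=16):
        self.model = video_model        # hypothetical model trained on expert videos
        self.context_len = context_len  # number of past frames to condition on
        self.history = []

    def reset(self, first_frame):
        # Start a new episode with the initial observation as context.
        self.history = [first_frame]

    def __call__(self, next_frame):
        # r_t = log p(x_t | x_{t-k:t-1}) under the pretrained video model.
        context = self.history[-self.context_len:]
        reward = float(self.model.log_prob(next_frame, context=context))
        self.history.append(next_frame)
        return reward
```

A standard reinforcement learning agent can then be trained to maximize this learned reward in place of a programmatic task reward, which is the setting the abstract evaluates on DMC, Atari, and RLBench tasks.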
DOI: 10.48550/arxiv.2305.14343