Representation Matters: Offline Pretraining for Sequential Decision Making
Format: Article
Language: English
Online access: Order full text
Abstract: The recent success of supervised learning methods on ever larger offline datasets has spurred interest in the reinforcement learning (RL) field to investigate whether the same paradigms can be translated to RL algorithms. This research area, known as offline RL, has largely focused on offline policy optimization, aiming to find a return-maximizing policy exclusively from offline data. In this paper, we consider a slightly different approach to incorporating offline data into sequential decision-making. We aim to answer the question: what unsupervised objectives applied to offline datasets are able to learn state representations that elevate performance on downstream tasks, whether those downstream tasks be online RL, imitation learning from expert demonstrations, or even offline policy optimization based on the same offline dataset? Through a variety of experiments utilizing standard offline RL datasets, we find that pretraining with unsupervised learning objectives can dramatically improve the performance of policy learning algorithms that otherwise yield mediocre performance on their own. Extensive ablations further provide insights into which components of these unsupervised objectives (e.g., reward prediction, continuous or discrete representations, pretraining or finetuning) are most important and in which settings.
DOI: 10.48550/arxiv.2102.05815
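
To make the two-stage setup described in the abstract concrete, the following is a minimal sketch, not the paper's implementation: a state encoder is pretrained on an offline dataset with an unsupervised objective (here, next-state prediction, chosen only for illustration), and the resulting representation is then reused for a downstream policy (here, simple behavioral cloning). The dimensions, network sizes, batch format, and function names are assumptions.

```python
# Illustrative sketch of offline representation pretraining followed by
# downstream policy learning. Objective choice, dimensions, and names are
# assumptions, not taken from the paper.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, REPR_DIM = 17, 6, 64  # assumed dimensions

# State encoder phi(s), an auxiliary forward-dynamics head used only during
# pretraining, and a downstream policy head that consumes phi(s).
encoder = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU(), nn.Linear(256, REPR_DIM))
dynamics = nn.Sequential(nn.Linear(REPR_DIM + ACTION_DIM, 256), nn.ReLU(),
                         nn.Linear(256, STATE_DIM))
policy = nn.Sequential(nn.Linear(REPR_DIM, 256), nn.ReLU(), nn.Linear(256, ACTION_DIM))


def pretrain_step(batch, opt):
    """Unsupervised pretraining on offline data: predict s' from (phi(s), a)."""
    s, a, s_next = batch["state"], batch["action"], batch["next_state"]
    pred = dynamics(torch.cat([encoder(s), a], dim=-1))
    loss = ((pred - s_next) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


def downstream_bc_step(batch, opt):
    """Downstream policy learning (behavioral cloning) on the frozen encoder."""
    s, a = batch["state"], batch["action"]
    with torch.no_grad():
        z = encoder(s)  # reuse the pretrained representation without finetuning
    loss = ((policy(z) - a) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    # Hypothetical offline batch; in practice this would come from a standard
    # offline RL dataset.
    batch = {"state": torch.randn(256, STATE_DIM),
             "action": torch.randn(256, ACTION_DIM),
             "next_state": torch.randn(256, STATE_DIM)}
    pre_opt = torch.optim.Adam(list(encoder.parameters()) + list(dynamics.parameters()), lr=1e-3)
    bc_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    print("pretrain loss:", pretrain_step(batch, pre_opt))
    print("bc loss:", downstream_bc_step(batch, bc_opt))
```

The paper's ablations vary exactly the choices this sketch fixes arbitrarily: the unsupervised objective (e.g., reward prediction), whether representations are continuous or discrete, and whether the pretrained encoder is kept frozen or finetuned during downstream learning.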