A Strong Baseline for Batch Imitation Learning
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract:
Imitation of expert behaviour is a highly desirable and safe approach to the problem of sequential decision making. We provide an easy-to-implement, novel algorithm for imitation learning under a strict data paradigm, in which the agent must learn solely from data collected a priori. This paradigm allows our algorithm to be used for environments in which safety or cost is of critical concern. Our algorithm requires no additional hyperparameter tuning beyond any standard batch reinforcement learning (RL) algorithm, making it an ideal baseline for such data-strict regimes. Furthermore, we provide formal sample complexity guarantees for the algorithm in finite Markov Decision Problems. In doing so, we formally demonstrate an unproven claim from Kearns & Singh (1998). On the empirical side, our contribution is twofold. First, we develop a practical, robust, and principled evaluation protocol for offline RL methods, making use of only the dataset provided for model selection. This stands in contrast to the vast majority of previous works in offline RL, which tune hyperparameters on the evaluation environment, limiting the practical applicability when deployed in new, cost-critical environments. As such, we establish precedent for the development and fair evaluation of offline RL algorithms. Second, we evaluate our own algorithm on challenging continuous control benchmarks, demonstrating its practical applicability and competitiveness with state-of-the-art performance, despite being a simpler algorithm.
DOI: 10.48550/arxiv.2302.02788
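
The abstract advertises a model-selection protocol that uses only the pre-collected dataset, but it does not spell out the mechanics here. The sketch below is purely illustrative of what such a dataset-only protocol can look like: the candidate grid, the ridge-regression stand-in for a batch learner (`train_offline_policy`), and the held-out action-error criterion are all assumptions for the example, not details from the paper. The point is only that every decision is made from the fixed batch, with no environment rollouts.

```python
# Illustrative sketch of dataset-only model selection for an offline method.
# All specifics (synthetic data, ridge-regression "policy", MSE criterion,
# candidate grid) are hypothetical and not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# A fixed, pre-collected batch of (state, action) pairs from the expert.
states = rng.normal(size=(1000, 4))
actions = states @ rng.normal(size=(4, 2))  # stand-in for expert actions

# Split the batch itself into fit / validation parts; no environment access.
split = int(0.8 * len(states))
fit_s, val_s = states[:split], states[split:]
fit_a, val_a = actions[:split], actions[split:]

def train_offline_policy(s, a, ridge):
    """Hypothetical stand-in for any batch learner: a ridge-regression policy."""
    d = s.shape[1]
    w = np.linalg.solve(s.T @ s + ridge * np.eye(d), s.T @ a)
    return lambda x: x @ w

# Candidate hyperparameters are compared purely on held-out data, never by
# rolling out in the (possibly safety- or cost-critical) environment.
candidates = [0.01, 0.1, 1.0, 10.0]
scores = []
for ridge in candidates:
    policy = train_offline_policy(fit_s, fit_a, ridge)
    val_mse = float(np.mean((policy(val_s) - val_a) ** 2))
    scores.append((val_mse, ridge))

best_mse, best_ridge = min(scores)
print(f"selected ridge={best_ridge} with held-out action MSE={best_mse:.4f}")
```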