Human-Timescale Adaptation in an Open-Ended Task Space
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent's capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.
DOI: 10.48550/arxiv.2301.07608
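
The abstract's third ingredient is an automated curriculum that prioritises tasks at the frontier of the agent's capabilities. The following minimal Python sketch illustrates one way such a sampler could work: tasks whose empirical success rate is intermediate (neither reliably solved nor reliably failed) are sampled more often. The class name, the success-rate bookkeeping, and the p·(1−p) scoring rule are illustrative assumptions for this record, not the mechanism used in the paper.

```python
# Illustrative sketch of a frontier-prioritising task curriculum.
# Assumption: "frontier" tasks are approximated as those with an
# empirical success rate near 0.5; the paper's actual criterion may differ.
import random
from collections import defaultdict


class FrontierCurriculum:
    """Samples tasks the agent sometimes solves and sometimes fails."""

    def __init__(self, task_ids, exploration_bonus=0.1):
        self.task_ids = list(task_ids)
        self.successes = defaultdict(int)   # per-task success counts
        self.attempts = defaultdict(int)    # per-task attempt counts
        self.exploration_bonus = exploration_bonus

    def _score(self, task_id):
        # Unseen tasks receive a flat exploration bonus; seen tasks are
        # scored by p * (1 - p), which peaks at a 50% success rate and
        # vanishes for tasks that are always solved or always failed.
        n = self.attempts[task_id]
        if n == 0:
            return self.exploration_bonus
        p = self.successes[task_id] / n
        return p * (1.0 - p)

    def sample(self):
        # Draw a task with probability proportional to its frontier score.
        weights = [self._score(t) for t in self.task_ids]
        total = sum(weights)
        if total == 0:
            return random.choice(self.task_ids)
        return random.choices(self.task_ids, weights=weights, k=1)[0]

    def update(self, task_id, solved):
        # Record the outcome of one episode on this task.
        self.attempts[task_id] += 1
        self.successes[task_id] += int(solved)


# Example usage: sample a task, run an episode, then report the outcome.
curriculum = FrontierCurriculum(task_ids=range(1000))
task = curriculum.sample()
curriculum.update(task, solved=True)
```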