Human-level Atari 200x faster
Format: Article
Language: English
Abstract: The task of building general agents that perform well over a wide range of
tasks has been an important goal in reinforcement learning since its inception.
The problem has been the subject of a large body of research, with
performance frequently measured by observing scores over the wide range of
environments contained in the Atari 57 benchmark. Agent57 was the first agent
to surpass the human benchmark on all 57 games, but this came at the cost of
poor data-efficiency, requiring nearly 80 billion frames of experience to
achieve. Taking Agent57 as a starting point, we employ a diverse set of
strategies to achieve a 200-fold reduction in the experience needed to outperform
the human baseline. We investigate a range of instabilities and bottlenecks we
encountered while reducing the data regime, and propose effective solutions to
build a more robust and efficient agent. We also demonstrate competitive
performance with high-performing methods such as Muesli and MuZero. The four
key components of our approach are (1) an approximate trust region method which
enables stable bootstrapping from the online network, (2) a normalisation
scheme for the loss and priorities which improves robustness when learning a
set of value functions with a wide range of scales, (3) an improved
architecture employing techniques from NFNets in order to leverage deeper
networks without the need for normalization layers, and (4) a policy
distillation method which serves to smooth out the instantaneous greedy policy
over time.
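The abstract only names component (1); as a rough illustration, the NumPy sketch below shows one plausible form of an approximate trust region for bootstrapping: the online network's estimate is used as the bootstrap value only where it stays within a tolerance of the target network's estimate. The function name, the `alpha` and `sigma` parameters, and the exact masking rule are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def trust_region_bootstrap(rewards, gamma, q_online_next, q_target_next,
                           sigma, alpha=1.0):
    # Use the online network's value as the bootstrap only where it lies
    # within alpha * sigma of the target network's value; elsewhere fall
    # back to the target network. sigma stands in for a running estimate
    # of value-error scale (an assumption, not the paper's exact rule).
    inside = np.abs(q_online_next - q_target_next) <= alpha * sigma
    bootstrap = np.where(inside, q_online_next, q_target_next)
    return rewards + gamma * bootstrap

# Example: two transitions, the second of which violates the trust region.
targets = trust_region_bootstrap(
    rewards=np.array([0.0, 1.0]), gamma=0.99,
    q_online_next=np.array([1.0, 5.0]),
    q_target_next=np.array([1.1, 2.0]), sigma=0.5)
```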
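Component (2) likewise gets only a hedged sketch here: a running scale estimate for TD errors, used to divide both the loss and the replay priorities, so that value functions with very different reward scales contribute comparably. The decay constant, the epsilon floor, and the RMS-based scale are illustrative choices rather than the paper's scheme.

```python
import numpy as np

class TDScaleNormalizer:
    # Tracks a running scale of TD errors and divides both the loss and
    # the replay priorities by it (decay and eps are illustrative).
    def __init__(self, decay=0.99, eps=1e-3):
        self.decay, self.eps = decay, eps
        self.scale = 1.0

    def __call__(self, td_errors):
        batch_scale = float(np.sqrt(np.mean(td_errors ** 2)))
        self.scale = self.decay * self.scale + (1.0 - self.decay) * batch_scale
        s = max(self.scale, self.eps)
        loss = 0.5 * float(np.mean((td_errors / s) ** 2))
        priorities = np.abs(td_errors) / s
        return loss, priorities

# Example: large-magnitude errors no longer dominate loss or priorities.
norm = TDScaleNormalizer()
loss, priorities = norm(np.array([0.5, -20.0, 3.0]))
```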
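For component (3), the record gives no architectural detail beyond the NFNets reference. The PyTorch block below is a minimal normalizer-free residual block in that spirit: the residual branch is downscaled on entry and added back with a small gain, keeping activations well-scaled without batch or layer normalization. Channel counts, kernel sizes, and the `alpha`/`beta` values are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NormFreeBlock(nn.Module):
    # Normalizer-free residual block: divide the branch input by the
    # expected signal std (beta), re-add the branch with a small gain
    # (alpha), so signal variance stays controlled with no norm layers.
    def __init__(self, channels, alpha=0.2, beta=1.0):
        super().__init__()
        self.alpha, self.beta = alpha, beta
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.conv1(x / self.beta))
        h = self.conv2(h)
        return x + self.alpha * h

# Example: an Atari-sized feature map passes through unchanged in shape.
y = NormFreeBlock(32)(torch.randn(1, 32, 84, 84))
```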
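Component (4) is described only as smoothing the instantaneous greedy policy over time. One way to read that is a separate policy head trained by cross-entropy toward lightly smoothed greedy targets, so the acting policy tracks a time-average of successive greedy policies instead of jumping with every change in argmax. The target construction below, including the `epsilon` smoothing, is an assumption for illustration.

```python
import numpy as np

def smoothed_policy_targets(q_values, epsilon=0.01):
    # One-hot targets on the greedy action, lightly smoothed. Training a
    # policy head toward these targets across many updates yields a
    # policy that averages over successive instantaneous greedy policies.
    batch, num_actions = q_values.shape
    greedy = np.argmax(q_values, axis=-1)
    targets = np.full((batch, num_actions), epsilon / num_actions)
    targets[np.arange(batch), greedy] += 1.0 - epsilon
    return targets

# Example: distillation targets for a batch of 4 states, 18 Atari actions.
targets = smoothed_policy_targets(np.random.randn(4, 18))
```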
DOI: 10.48550/arxiv.2209.07550