Unlocking Pixels for Reinforcement Learning via Implicit Attention
Main authors: | , , , , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Summary: | There has recently been significant interest in training reinforcement learning (RL) agents in vision-based environments. This poses many challenges, such as high dimensionality and the potential for observational overfitting through spurious correlations. A promising approach to both of these problems is an attention bottleneck, which provides a simple and effective framework for learning high-performing policies even in the presence of distractions. However, because attention architectures scale poorly, these methods have so far been limited to low-resolution visual inputs divided into large patches (and thus small attention matrices). In this paper we make use of new efficient attention algorithms, recently shown to be highly effective for Transformers, and demonstrate that these techniques can be successfully adopted in the RL setting. This allows our attention-based controllers to scale to larger visual inputs and facilitates the use of smaller patches, even individual pixels, improving generalization. We demonstrate this on a range of tasks, from the Distracting Control Suite to vision-based quadruped robot locomotion. We also provide a rigorous theoretical analysis of the proposed algorithm. |
DOI: | 10.48550/arxiv.2102.04353 |
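The record above carries only the abstract, not the method's details, but the core mechanism it describes, replacing quadratic softmax attention over image patches with a linear-time kernelized ("implicit") approximation so that even individual pixels can serve as tokens, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation: the feature map `phi`, all shapes, and the top-k bottleneck heuristic at the end are assumptions chosen for exposition. Efficient Transformer variants of this kind (e.g. Performer-style random-feature attention) use more sophisticated positive feature maps than the simple ReLU used here.

```python
import numpy as np

def phi(x):
    """A simple positive feature map, standing in for the random-feature
    maps used by efficient Transformers (assumption for this sketch)."""
    return np.maximum(x, 0.0) + 1e-6

def linear_attention(Q, K, V):
    """Kernelized ("implicit") attention. The L x L attention matrix is
    never materialized, so cost is O(L*d*dv) rather than O(L^2)."""
    Qp, Kp = phi(Q), phi(K)            # positive features, shape (L, d)
    KV = Kp.T @ V                      # (d, dv): one pass over keys/values
    normalizer = Qp @ Kp.sum(axis=0)   # (L,): row sums of the implicit matrix
    return (Qp @ KV) / normalizer[:, None]

# Toy usage: every pixel of a 64x64 frame is its own token (L = 4096).
rng = np.random.default_rng(0)
L, d, dv = 64 * 64, 16, 16
Q = 0.1 * rng.standard_normal((L, d))
K = 0.1 * rng.standard_normal((L, d))
V = rng.standard_normal((L, dv))
out = linear_attention(Q, K, V)        # (4096, 16); no 4096 x 4096 matrix

# Attention-bottleneck flavor (hypothetical heuristic): score each pixel by
# the unnormalized attention mass it receives, i.e. the implicit column sums
# of the attention matrix, and keep only the top-k pixels as the compact
# input to a downstream policy network.
importance = phi(K) @ phi(Q).sum(axis=0)   # (L,)
top_k = np.argsort(importance)[-32:]       # 32 most-attended pixel indices
```

The point of the sketch is the asymptotics: memory and compute grow linearly in the number of tokens, which is what allows the patch size to shrink all the way to a single pixel without the attention matrix becoming intractable.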