AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: In this paper we investigate transformer architectures designed for partially observable online reinforcement learning. The self-attention mechanism in the transformer architecture is capable of capturing long-range dependencies, and it is the main reason behind its effectiveness in processing sequential data. Nevertheless, despite their success, transformers have two significant drawbacks that still limit their applicability in online reinforcement learning: (1) in order to remember all past information, the self-attention mechanism requires the whole history to be provided as context; (2) inference in transformers is computationally expensive. In this paper, we introduce recurrent alternatives to the transformer self-attention mechanism that offer context-independent inference cost, leverage long-range dependencies effectively, and perform well in online reinforcement learning tasks. We quantify the impact of the different components of our architecture in a diagnostic environment and assess performance gains in 2D and 3D pixel-based partially observable environments (e.g., T-Maze, Mystery Path, Craftax, and Memory Maze). Compared with a state-of-the-art architecture, GTrXL, inference in our approach is at least 40% cheaper while reducing memory use by more than 50%. Our approach performs similarly to or better than GTrXL, improving on GTrXL's performance by more than 37% in harder tasks.
DOI: 10.48550/arxiv.2310.15719
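
The abstract's notion of context-independent inference cost comes from replacing full-history attention with a recurrent summary that is updated in place. The sketch below illustrates that general idea with a generic linear-attention recurrence; it is not the paper's AGaLiTe update, and the feature map, variable names, and dimensions are assumptions made for illustration.

```python
import numpy as np

def linear_attention_step(state, z, q, k, v):
    """One step of a generic linear-attention recurrence.

    Rather than attending over the whole history, a running summary
    S = sum_t phi(k_t) v_t^T and a normalizer z = sum_t phi(k_t) are
    updated in place, so the per-step cost depends only on the feature
    dimension, not on how much context has been seen.
    """
    phi_k = np.maximum(k, 0.0) + 1.0       # simple positive feature map (an assumption)
    state = state + np.outer(phi_k, v)     # accumulate key-value outer products
    z = z + phi_k                          # accumulate the normalizer
    phi_q = np.maximum(q, 0.0) + 1.0
    out = (phi_q @ state) / (phi_q @ z + 1e-6)
    return state, z, out

d = 8
rng = np.random.default_rng(0)
state, z = np.zeros((d, d)), np.zeros(d)
for _ in range(5):                         # a stream of per-step queries/keys/values
    q, k, v = rng.normal(size=(3, d))
    state, z, out = linear_attention_step(state, z, q, k, v)
```

Each step here costs O(d^2) time and memory regardless of how long the observation stream is, which is the property the paper's recurrent alternatives to self-attention target for online reinforcement learning.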