CoBERL: Contrastive BERT for Reinforcement Learning
| Field | Value |
|---|---|
| Main Authors | , , , , , |
| Format | Article |
| Language | English |
| Subjects | |
| Online Access | Order full text |
Abstract: Many reinforcement learning (RL) agents require a large amount of experience to solve tasks. We propose Contrastive BERT for RL (CoBERL), an agent that combines a new contrastive loss and a hybrid LSTM-transformer architecture to tackle the challenge of improving data efficiency. CoBERL enables efficient, robust learning from pixels across a wide range of domains. We use bidirectional masked prediction in combination with a generalization of recent contrastive methods to learn better representations for transformers in RL, without the need for hand-engineered data augmentations. We find that CoBERL consistently improves performance across the full Atari suite, a set of control tasks, and a challenging 3D environment.
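The abstract names two main components: a hybrid LSTM-transformer core and a contrastive representation loss. Below is a minimal sketch of the hybrid core in PyTorch, assuming a convolutional pixel encoder, a standard (ungated) transformer encoder over a window of recent timesteps, and an LSTM on top; the module sizes and this particular layering are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of a hybrid LSTM-transformer agent core. Module names,
# sizes, and the plain transformer variant are assumptions; the published
# architecture may differ (e.g. in its transformer gating).
import torch
import torch.nn as nn


class HybridCore(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8, n_layers: int = 4):
        super().__init__()
        # Convolutional encoder mapping 84x84 grayscale frames to d_model vectors.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, d_model),
        )
        # Transformer mixes information across the window of recent steps.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # LSTM consumes the transformer outputs step by step and carries
        # recurrent state across windows.
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)

    def forward(self, frames, state=None):
        # frames: (batch, time, 1, 84, 84)
        b, t = frames.shape[:2]
        z = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        h = self.transformer(z)
        out, state = self.lstm(h, state)
        return out, state  # feed `out` to policy/value heads
```

One plausible division of labor in such a hybrid: the transformer attends within each window of recent observations, while the LSTM integrates information across windows through its recurrent state.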
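The abstract also describes combining bidirectional masked prediction with a generalization of recent contrastive methods. The sketch below illustrates that general idea, assuming BERT-style masking of timestep embeddings and a symmetric InfoNCE objective; the mask rate, mask token, and scoring details are assumptions, not CoBERL's published loss.

```python
# A hedged sketch of a masked contrastive objective: mask some timestep
# embeddings, run the transformer over the masked sequence, and train the
# output at each masked position to pick out the true encoder embedding of
# that position among the others, in both directions. `core` is assumed to
# be a HybridCore from the sketch above.
import torch
import torch.nn.functional as F


def masked_contrastive_loss(core, z, mask_rate=0.15, temperature=0.1):
    # z: (batch, time, d) encoder embeddings of the observations.
    b, t, d = z.shape
    mask = torch.rand(b, t, device=z.device) < mask_rate
    mask_token = torch.zeros(d, device=z.device)  # a learned vector in practice
    z_masked = torch.where(mask.unsqueeze(-1), mask_token, z)
    h = core.transformer(z_masked)  # (batch, time, d)

    q = F.normalize(h[mask], dim=-1)  # predictions at masked positions
    k = F.normalize(z[mask], dim=-1)  # targets: true embeddings there
    logits = q @ k.t() / temperature  # pairwise similarities
    labels = torch.arange(q.shape[0], device=z.device)
    # Symmetric InfoNCE: each prediction must match its own target, and
    # each target its own prediction.
    return 0.5 * (
        F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)
    )
```

Because both the queries and the targets come from the model's own embeddings, an objective of this shape needs no hand-engineered data augmentations, consistent with the claim in the abstract.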
DOI: 10.48550/arxiv.2107.05431