Harnessing Discrete Representations For Continual Reinforcement Learning
Format: Article
Language: English
Abstract: Reinforcement learning (RL) agents make decisions using nothing but
observations from the environment, and consequently, heavily rely on the
representations of those observations. Though some recent breakthroughs have
used vector-based categorical representations of observations, often referred
to as discrete representations, there is little work explicitly assessing the
significance of such a choice. In this work, we provide a thorough empirical
investigation of the advantages of representing observations as vectors of
categorical values within the context of reinforcement learning. We perform
evaluations on world-model learning, model-free RL, and ultimately continual RL
problems, where the benefits best align with the needs of the problem setting.
We find that, when compared to traditional continuous representations, world
models learned over discrete representations accurately model more of the world
with less capacity, and that agents trained with discrete representations learn
better policies with less data. In the context of continual RL, these benefits
translate into faster adapting agents. Additionally, our analysis suggests that
the observed performance improvements can be attributed to the information
contained within the latent vectors and potentially the encoding of the
discrete representation itself.
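The abstract does not spell out how a "vector of categorical values" is produced, so the following is only a minimal sketch of one common way to build such a discrete representation: an encoder that outputs several categorical latent variables, sampled as one-hot vectors with straight-through gradients (in the style popularized by discrete latent world models). All names and sizes here (DiscreteEncoder, NUM_LATENTS, NUM_CLASSES, obs_dim) are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the paper's architecture): encode an observation
# as a vector of categorical latents with straight-through gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LATENTS = 32   # number of categorical variables in the latent vector (assumed)
NUM_CLASSES = 32   # number of classes per categorical variable (assumed)

class DiscreteEncoder(nn.Module):
    def __init__(self, obs_dim: int):
        super().__init__()
        # Map a flat observation to one set of logits per latent variable.
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, NUM_LATENTS * NUM_CLASSES),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        logits = self.net(obs).view(-1, NUM_LATENTS, NUM_CLASSES)
        probs = F.softmax(logits, dim=-1)
        # Sample one class per latent variable and one-hot encode it.
        sample = F.one_hot(
            torch.distributions.Categorical(probs=probs).sample(),
            num_classes=NUM_CLASSES,
        ).float()
        # Straight-through estimator: discrete values on the forward pass,
        # gradients flow through the softmax probabilities on the backward pass.
        sample = sample + probs - probs.detach()
        # Flatten the categorical variables into a single representation vector.
        return sample.view(-1, NUM_LATENTS * NUM_CLASSES)

# Usage: encode a batch of 8 flat observations of dimension 64.
enc = DiscreteEncoder(obs_dim=64)
z = enc(torch.randn(8, 64))
print(z.shape)  # torch.Size([8, 1024]); each group of 32 entries is one-hot
```

Downstream components (a world model or a policy) would then consume `z` in place of a continuous latent vector; the comparison of these two choices is the subject of the paper.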
DOI: 10.48550/arxiv.2312.01203