The Utility of Sparse Representations for Control in Reinforcement Learning
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | We investigate sparse representations for control in reinforcement learning. While these representations are widely used in computer vision, their prevalence in reinforcement learning is limited to sparse coding, where extracting representations for new data can be computationally intensive. Here, we begin by demonstrating that learning a control policy incrementally with a representation from a standard neural network fails in classic control domains, whereas learning with a representation obtained from a neural network with enforced sparsity properties is effective. We provide evidence that the reason for this is that the sparse representation provides locality, and so avoids catastrophic interference; in particular, it keeps consistent, stable values for bootstrapping. We then discuss how to learn such sparse representations. We explore the idea of Distributional Regularizers, where the activation of hidden nodes is encouraged to match a particular distribution that yields sparse activation across time. We identify a simple but effective way to obtain sparse representations, not afforded by previously proposed strategies, making further investigation into sparse representations for reinforcement learning more practical. |
DOI: | 10.48550/arxiv.1811.06626 |
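
To make the Distributional Regularizer idea concrete, here is a minimal PyTorch sketch of one common instance: a KL penalty that pushes each hidden unit's mean activation over a batch toward a small target level, in the spirit of classic sparse autoencoder regularizers. The network shape, the target level `rho=0.05`, the penalty weight `0.1`, and the name `SparseMLP` are illustrative assumptions; the paper's exact target distribution and training setup may differ.

```python
import torch
import torch.nn as nn


class SparseMLP(nn.Module):
    """Two-layer network whose hidden layer is pushed toward sparse activation.

    Hypothetical architecture for illustration only.
    """

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        h = torch.sigmoid(self.fc1(x))  # activations lie in (0, 1)
        return self.fc2(h), h


def kl_sparsity_penalty(h: torch.Tensor, rho: float = 0.05,
                        eps: float = 1e-8) -> torch.Tensor:
    """KL(Bernoulli(rho) || Bernoulli(rho_hat)), summed over hidden units.

    rho_hat is each unit's mean activation over the batch; driving it toward
    a small rho encourages each unit to fire on only a fraction of inputs.
    """
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()


# Usage sketch: regularize the hidden layer while fitting value estimates.
model = SparseMLP(in_dim=4, hidden_dim=64, out_dim=2)  # e.g. CartPole-sized
states = torch.randn(32, 4)
targets = torch.randn(32, 2)  # placeholder TD targets, not real returns
values, h = model(states)
loss = (values - targets).pow(2).mean() + 0.1 * kl_sparsity_penalty(h)
loss.backward()
```

Because the penalty acts on the average activation across the batch rather than on each individual response, a unit can still respond strongly to the inputs it specializes in while staying silent elsewhere, which is the locality property the abstract credits with avoiding catastrophic interference.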