Learning Sparse Representations in Reinforcement Learning with Sparse Coding
Main authors: , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: A variety of representation learning approaches have been investigated for reinforcement learning; much less attention, however, has been given to investigating the utility of sparse coding. Outside of reinforcement learning, sparse coding representations have been widely used, with non-convex objectives that result in discriminative representations. In this work, we develop a supervised sparse coding objective for policy evaluation. Despite the non-convexity of this objective, we prove that all local minima are global minima, making the approach amenable to simple optimization strategies. We empirically show that it is key to use a supervised objective, rather than the more straightforward unsupervised sparse coding approach. We compare the learned representations to a canonical fixed sparse representation, called tile-coding, demonstrating that the sparse coding representation outperforms a wide variety of tile-coding representations.
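The abstract refers to a supervised sparse coding objective for policy evaluation without stating it; the precise formulation is in the paper. As a loose illustration of the kind of objective it describes (the symbols X, B, H, w, y, and the λ weights below are illustrative assumptions, not the paper's notation), supervised sparse coding typically combines dictionary reconstruction, a sparsity penalty, and a supervised prediction term:

$$
\min_{B,\,H,\,w}\;\; \|X - HB\|_F^2 \;+\; \lambda_1 \|H\|_1 \;+\; \lambda_2 \|Hw - y\|_2^2
$$

Here the rows of X are observed states, H holds their sparse codes over the learned dictionary B, and y collects policy-evaluation targets (e.g., sampled or bootstrapped returns) predicted linearly from the codes via w. The abstract's claim is about the paper's particular objective: despite its non-convexity, all local minima are global minima, so simple optimization strategies suffice.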
DOI: 10.48550/arxiv.1707.08316
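For contrast with the learned representation, tile coding is a fixed (not learned) sparse representation: several offset grid tilings are laid over the state space, and each tiling activates exactly one binary feature. Below is a minimal, hedged sketch of a tile coder, assuming a simple uniform-offset construction; the function and parameter names are illustrative, and the paper's experiments may configure tilings differently.

```python
import numpy as np

def tile_code(state, low, high, num_tilings=8, tiles_per_dim=4):
    """Map a continuous state to a sparse binary feature vector using
    several uniformly offset grid tilings (illustrative sketch only)."""
    state = np.asarray(state, dtype=float)
    low = np.asarray(low, dtype=float)
    high = np.asarray(high, dtype=float)
    dims = state.shape[0]
    scaled = (state - low) / (high - low)            # normalize each dimension to [0, 1]
    tiles_per_tiling = tiles_per_dim ** dims
    features = np.zeros(num_tilings * tiles_per_tiling)
    for t in range(num_tilings):
        offset = t / (num_tilings * tiles_per_dim)   # shift each tiling by a fraction of a tile
        idx = np.floor((scaled + offset) * tiles_per_dim).astype(int)
        idx = np.clip(idx, 0, tiles_per_dim - 1)     # keep offset points on the grid
        flat = np.ravel_multi_index(tuple(idx), (tiles_per_dim,) * dims)
        features[t * tiles_per_tiling + flat] = 1.0  # exactly one active tile per tiling
    return features

# Example: a 2D state yields num_tilings active features out of 8 * 4**2 = 128.
phi = tile_code([0.3, -1.2], low=[0.0, -2.0], high=[1.0, 2.0])
print(int(phi.sum()))  # -> 8
```

Each tiling contributes exactly one active feature, so the representation is sparse by construction; the sparse coding approach compared against it in the paper instead learns which features should be active.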