Privileged Information Dropout in Reinforcement Learning
Format: Article
Language: English
Abstract: Using privileged information during training can improve the sample efficiency and performance of machine learning systems. This paradigm has been applied to reinforcement learning (RL), primarily in the form of distillation or auxiliary tasks, and less commonly in the form of augmenting the inputs of agents. In this work, we investigate Privileged Information Dropout (PID) for achieving the latter, which can be applied equally to value-based and policy-based RL algorithms. Within a simple partially-observed environment, we demonstrate that PID outperforms alternatives for leveraging privileged information, including distillation and auxiliary tasks, and can successfully utilise different types of privileged information. Finally, we analyse its effect on the learned representations.
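The abstract describes PID as a way to exploit a privileged signal during training while leaving the deployed agent dependent only on its regular observations. Below is a minimal sketch of one way this could look, assuming an information-dropout-style mechanism in which multiplicative noise is applied to the agent's observation encoding and the noise scale is predicted from the privileged input. The module and parameter names (`PIDEncoder`, `obs_dim`, `priv_dim`) are illustrative assumptions, not the authors' code.

```python
# A hedged sketch of privileged-information dropout, NOT the paper's
# reference implementation: we assume multiplicative log-normal noise on
# the observation encoding, with the noise scale predicted from the
# privileged input, in the spirit of information dropout.
from typing import Optional

import torch
import torch.nn as nn


class PIDEncoder(nn.Module):
    """Observation encoder with privileged-information-conditioned dropout."""

    def __init__(self, obs_dim: int, priv_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.obs_net = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        # Predicts a per-unit log noise variance from the privileged input.
        self.log_alpha_net = nn.Linear(priv_dim, hidden_dim)

    def forward(
        self, obs: torch.Tensor, priv: Optional[torch.Tensor] = None
    ) -> torch.Tensor:
        h = self.obs_net(obs)
        if self.training and priv is not None:
            # Multiplicative log-normal noise: units the privileged input
            # deems uninformative get a larger noise scale, acting as a
            # learned, input-dependent dropout rate.
            log_alpha = torch.clamp(self.log_alpha_net(priv), max=0.0)
            std = torch.exp(0.5 * log_alpha)
            eps = torch.exp(std * torch.randn_like(h))
            h = h * eps
        return h  # no privileged input is required once training is done


# Usage: train with privileged info, evaluate without it.
encoder = PIDEncoder(obs_dim=8, priv_dim=4)
obs, priv = torch.randn(32, 8), torch.randn(32, 4)
noisy = encoder(obs, priv)   # training-time encoding with PID noise
encoder.eval()
clean = encoder(obs)         # test-time encoding, observations only
```

At training time the encoder receives both the observation and the privileged signal; calling `eval()` (or passing `priv=None`) disables the noise, so inference never depends on privileged information.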
DOI: 10.48550/arxiv.2005.09220