Deep Reinforcement Learning With Part-Aware Exploration Bonus in Video Games


Bibliographic Details
Published in: IEEE Transactions on Games, December 2022, Vol. 14, No. 4, pp. 644-653
Authors: Xu, Pei; Yin, Qiyue; Zhang, Junge; Huang, Kaiqi
Format: Article
Language: English
Description
Abstract: Reinforcement learning algorithms rely on carefully engineered environment rewards that are extrinsic to agents. However, environments with dense rewards are rare, motivating the need to develop reward functions that are intrinsic to agents. Curiosity is one successful type of intrinsic reward function, which uses the prediction error as a reward signal. In prior work, the prediction problem used to generate intrinsic rewards is optimized in the pixel space rather than a learnable feature space, to avoid the randomness caused by feature changes. However, these methods ignore small but important elements of the states, often associated with the character's location, which makes it impossible to generate accurate intrinsic rewards for efficient exploration. In this article, we first demonstrate the effectiveness of introducing prior learned features into existing prediction-based exploration methods. Then, an attention map mechanism is designed to discretize the learned features, thereby updating the learned features while reducing the impact on intrinsic rewards of the randomness caused by the feature-learning process. We verify our method on video games from the standard Atari reinforcement learning benchmark, achieving clear improvements over random network distillation, one of the most advanced exploration methods, in almost all Atari games.
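The prediction-error curiosity bonus the abstract builds on (as in random network distillation) can be illustrated with a minimal sketch: a fixed, randomly initialized target network produces features, a trainable predictor tries to match them, and the squared prediction error serves as the intrinsic reward. This is a simplified linear stand-in for the paper's networks, not the authors' implementation; all names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, FEAT_DIM = 8, 4  # toy sizes, not from the paper

# Fixed, randomly initialized target network (never trained), as in RND.
W_target = rng.normal(size=(STATE_DIM, FEAT_DIM))

# Trainable predictor, reduced here to a single linear layer for brevity.
W_pred = np.zeros((STATE_DIM, FEAT_DIM))

def intrinsic_reward(state):
    """Mean squared error between predictor and fixed target features."""
    err = state @ W_pred - state @ W_target
    return float(np.mean(err ** 2))

def update_predictor(state, lr=0.05):
    """One gradient-descent step on the squared prediction error."""
    global W_pred
    err = state @ W_pred - state @ W_target      # (FEAT_DIM,)
    W_pred -= lr * 2.0 * np.outer(state, err) / FEAT_DIM

state = rng.normal(size=STATE_DIM)
before = intrinsic_reward(state)
for _ in range(200):                             # agent revisits this state
    update_predictor(state)
after = intrinsic_reward(state)
# Familiar states yield a shrinking bonus; unseen states remain "surprising",
# which is what drives the agent toward novel parts of the environment.
```

The paper's contribution sits on top of this scheme: computing the error in a learned (rather than pixel) feature space, with attention-based discretization to keep the bonus stable as those features change during training.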
ISSN: 2475-1502, 2475-1510
DOI: 10.1109/TG.2021.3134259