C-GRAIL: Autonomous Reinforcement Learning of Multiple and Context-Dependent Goals

Bibliographic Details
Published in: IEEE Transactions on Cognitive and Developmental Systems, 2023-03, Vol. 15 (1), pp. 210-222
Authors: Santucci, Vieri Giuliano; Montella, Davide; Baldassarre, Gianluca
Format: Article
Language: English
Description
Abstract: When facing the problem of autonomously learning to achieve multiple goals, researchers typically focus on problems where each goal can be solved with a single policy. However, in environments presenting different contexts, the same goal may require different skills to be solved. These situations pose two challenges: 1) recognizing which contexts require different policies to achieve the goals and 2) learning the policies to accomplish the same goal in the identified relevant contexts. These two challenges are even harder within an open-ended learning framework, where an agent potentially has no information about the environment, possibly not even about the goals it can pursue. We propose a novel robotic architecture, contextual GRAIL (C-GRAIL), that solves these challenges in an integrated fashion. The architecture autonomously detects new relevant contexts and ignores irrelevant ones on the basis of the decrease in the expected performance for a given goal. Moreover, C-GRAIL can quickly learn the policies for new contexts by leveraging transfer learning techniques. The architecture is tested in a simulated robotic environment in which a robot autonomously discovers and learns to reach relevant target objects in the presence of multiple obstacles that generate several different contexts.
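The detection criterion described in the abstract (flagging a candidate new context when the expected performance for a goal drops) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the class name GoalPerformanceMonitor, the two-timescale success estimates, and all rates and thresholds are hypothetical choices used only to show the idea of per-goal performance-drop detection.

    class GoalPerformanceMonitor:
        """Hypothetical sketch: keep a slow and a fast running estimate of a
        goal's success rate; a large gap between them is treated as a hint
        that the current policy no longer fits, i.e. a candidate new context.
        All names, rates, and thresholds are illustrative assumptions."""

        def __init__(self, slow_rate=0.01, fast_rate=0.2, drop_threshold=0.4):
            self.slow_rate = slow_rate            # long-term averaging rate
            self.fast_rate = fast_rate            # short-term averaging rate
            self.drop_threshold = drop_threshold  # gap that signals a new context
            self.slow = {}                        # goal id -> long-term success estimate
            self.fast = {}                        # goal id -> short-term success estimate

        def update(self, goal_id, success):
            """Record one attempt at a goal (success in {0, 1}) and return True
            when recent performance falls well below the long-term level."""
            s = self.slow.get(goal_id, float(success))
            f = self.fast.get(goal_id, float(success))
            s += self.slow_rate * (float(success) - s)
            f += self.fast_rate * (float(success) - f)
            self.slow[goal_id], self.fast[goal_id] = s, f
            return s - f > self.drop_threshold

    # Usage: the goal is reached reliably, then the context changes and the
    # old policy starts failing; the monitor flags the drop a few steps later.
    monitor = GoalPerformanceMonitor()
    outcomes = [1] * 20 + [0] * 5
    for step, outcome in enumerate(outcomes):
        if monitor.update(goal_id=0, success=outcome):
            print(f"step {step}: possible new context for goal 0")

Under these assumptions, detection is delayed by a few failed attempts, which mirrors the abstract's emphasis on ignoring irrelevant contexts: a single failure does not trigger a new context, only a sustained drop in expected performance for that goal does.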
ISSN: 2379-8920, 2379-8939
DOI: 10.1109/TCDS.2022.3152081